A delightful discovery

Generally, I think it’s kind of silly to spend time patting yourself on the back about what you did, especially when there is so much more yet to be done.  But I wanted to share an experience and an anecdotal measure of what a good process looks like.  (Apologies up front for the self-congratulatory nature of this post.)

Almost 10 years ago, before I learned a thing about LEAN or Six Sigma, I worked for a large company.  When I was there, I put into place a very simple multi-stream waterfall process.  I didn’t know anything about LEAN, but I knew at the time that I didn’t want a burdensome process, just an effective one.  Books like Steve McConnell’s Rapid Development and my past experiences were my only guides.

I had no idea at the time if the process was a good one.  I had no idea when I left if the people who stayed would still use the process or abandon it the minute I was out the door.

Recently, I had the need to recreate something like that process I put in place 10 years ago.  You’ll find from my writing that I have not jumped on the Agile bandwagon.  Based on my own research, the work of individuals like Boehm and Turner in Balancing Agility and Discipline and a healthy amount of valid criticism from the Internet community, I’m convinced turning to a craft-like approach to software is not the answer.  Thus, my need for a simple multi-stream waterfall process…

So, I called up a friend who still worked at my former employer and asked for the old process documentation I had created.  I was convinced that they were no longer following it, so I should be getting back exactly what I had left them with.  To my surprise, I was told that no, they did not have what I wanted.  Not because they had promptly filed it in the trash, but because it had evolved!

They sent along a package of templates and flows which, while strongly resembling what I had left, had been expanded upon and improved to meet changing situations.  The spirit of what I had created 10 years ago was still intact!

Two things are important about that experience.  One, I was with that company almost 5 years before I left.  That duration, the authors of Lean Thinking point out, is about the amount of time needed to make a change permanent.  Had I left a year or two into the process change, it may not have survived.  For five years I pushed hard on my team for change, and I had left thinking that it was all for naught.  Not so!  The type of persistence that Womack and Jones write about paid off.

The other was that processes will continue to evolve.  On one hand, I was kind of hoping to get back exactly what I had left them with.  I was a little disappointed that I did not because, selfishly, it was no longer quite mine.  But after a moment of reflection, I realized that the fact that it was living and evolving was necessary to its survival as a viable process.  It also meant that the team, many of whom are still there from my time, has taken ownership of it and made it their own.

It’s personally gratifying to see a process survive for so long, growing, changing, and being perfected rather than abandoned for the latest fad.  But what’s more important, the team has been able to steadily improve by working from a standard process and continuously, patiently modifying it in little ways.  It’s heartening to know that Kaizen works.  We talk about it a lot, but how often do we get to look back 10 years and see its effects?

An interesting way to level demand

This evening, we went to a restaurant in Providence, RI called Fire and Ice. It is a form of restaurant that I’ve seen called a Mongolian BBQ. Essentially, you fill a bowl with an array of starches, vegetables, meat and sauce and hand it to a cook who places it on a large round flat-top grill.

What is really interesting about this is the way they handle the complications of all these different ingredients that need to cook at different speeds. Towards the outer edge of the grill it is cooler, and it is hotter towards the middle. As a result, when the cook takes your bowl, he lays out its contents in a line from the edge towards the middle, placing the meats closest to the center and the vegetables to the outside. If you have shrimp or fish, it goes somewhere halfway between the edge and center.

What’s more, you can also have a burger made on this grill, and though I didn’t watch closely, it is easy to imagine that if you want a rare burger you place it closer to the outside, and for more well done, place it towards the center of the grill.

In this fashion, the chefs slowly walk in a clockwise circle around the grill, giving the various columns of food a few tosses. In two or three trips round, your food is fully cooked and they simply take off each column of food in a simple first on, first off manner and deliver it to the waiting customer.

The solution is so elegantly simple. You get served in the order you came to the grill, so you never wait unusually long. The food cooks in uniform time, despite not needing uniform treatment. And the routine for the cooks is incredibly standard, despite the potentially millions of unique combinations that can be created.
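The grill’s trick can be sketched as a toy scheduling model.  All the heat numbers, lap counts, and ingredient values below are invented for illustration; only the mechanism matches what I saw:

```python
# Toy model of the grill: each ingredient needs a different amount of heat,
# so it is placed at a radius where the local temperature makes every bowl
# finish in the same number of laps. Uniform cook time makes FIFO service
# trivial. All numbers here are made up.

LAPS_TO_COOK = 3  # every bowl comes off after the same number of passes

# Heat required per ingredient (arbitrary units)
HEAT_NEEDED = {"beef": 9.0, "shrimp": 6.0, "vegetables": 3.0}

def placement_radius(ingredient, center_heat=3.0, edge_heat=1.0):
    """Radius in [0, 1] (0 = center, 1 = edge) where the heat delivered
    over LAPS_TO_COOK laps equals what the ingredient needs, assuming a
    linear temperature gradient from center to edge."""
    per_lap = HEAT_NEEDED[ingredient] / LAPS_TO_COOK
    return (center_heat - per_lap) / (center_heat - edge_heat)

# Meat lands at the center, vegetables at the edge, shrimp halfway:
for ingredient in HEAT_NEEDED:
    print(ingredient, placement_radius(ingredient))

# Because cook time is uniform, service is simple first-on, first-off:
queue = ["bowl-1", "bowl-2", "bowl-3"]
done_order = [queue.pop(0) for _ in range(len(queue))]
print(done_order)  # bowls come off in arrival order
```

The point of the sketch is that the variation lives in *where* the food is placed, not in *how long* it cooks, which is what lets the serving step stay strictly first-on, first-off.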

What other solutions are out there to take dissimilar needs and devise ways to take the non-standard out? I think there is a lot to be learned from how this restaurant functions.

Perspective on Requirements

It’s probably not all that often in our lives that we get to see things from both sides of the requirements.  As developers, managers, testers, etc. we don’t author the requirements.  We don’t generate the idea.  We help the folks on the business side make requirements out of their ideas.

And, we constantly bemoan how horrible it is that business people can’t seem to put requirements on paper to save their lives.  They don’t know what they want.  They keep changing their minds.  They’re too detailed – they give us solutions instead of problems.  It seems like they can never do right by us.

In an interesting experiment, I happen to be working with a group of college students who are doing a Senior year project.  As part of that project, the college solicited assistance from a range of companies to act as customers and have the students solve a programming project for them.

Without getting into the detail of what I proposed, I gave it a bit of thought and ultimately settled on a simple tool to assist with generating data for testing.  It seemed straightforward enough.  As I sat there writing up a short requirements document for the students, I thought to myself how pleased I was to get a chance to do this.  The tool was so simple it didn’t need use cases.  In my mind, just a context diagram or two, a simple data model to illustrate the goal, and some text around the various types of output it had to generate was all that was needed.

In my head, having been a programmer many years, I knew exactly how I would solve it, but I tried very hard to not impose my idea of a solution on the students.  After all, I had very little strong opinion on how most of the solution should work – certainly no opinion on a user interface, how it would connect to the database to create the data (could be ETL, could be ODBC, could be something else), etc.

On the day of the first meeting with the students, I printed out 5 copies of my requirements and off I went to meet with them, as well as their advisers, on campus.  Even after explaining the problem several times, the students seemed confused.  The advisers seemed confused.  In the whole room of people, it seemed like I was the only one who wasn’t confused.

What had gone wrong?  Were my requirements that bad?  How was it that someone who had so much experience on the development side of the business couldn’t write a requirements document to save my life? 

It was a day or two later when I had a serendipitous moment.  In working with another totally unrelated team on process work, we were discussing the need for the team to have autonomy to make their own decisions.  This is not normally something I’d support since autonomy = not standard.  When we finally rolled out a necessarily vague process to enable their “autonomy” the users of the process freaked out.  They didn’t know how to use it.  They wanted much more detail.  The autonomy they desired was only after they had seen a complete answer.  Then they could pick it apart and tweak it.

That’s when it struck me.  I wrote my requirements giving the students enormous autonomy.  As a developer who had been through lots and lots of projects, I too desired autonomy to make design decisions to help solve the problem.  So, when I wrote my requirements, I wrote them the way I would like to receive them.  With 15+ years of development experience, I don’t need someone to spoon-feed me requirements anymore.  But these were college students.  Having solved very few real-world programming problems (perhaps none, for some of them), they found the problem I gave them vague, hand-wavy, and generally dissatisfying.  They freaked out because they didn’t have the experience to derive an answer themselves; they needed the requirements to be more detailed and perhaps even to give them a (partial) solution to the problem.

Had I done that, the students could riff on the solution, but not create from nothing.  I’m not sure how to incorporate this discovery into a methodology yet.  If you encounter experienced developers, they don’t need that kind of detail, but if you encounter less experienced developers, they do.  You could err one way or the other, but why write lots of detail to be ignored by developers who don’t need it?  Or worse, why generate so much job dissatisfaction by treating them like college students?

It’s not a useful philosophy if it doesn’t work

I suppose I’m opening a can of worms with the title of this post, as LEAN could be viewed as “a philosophy that doesn’t work”, but I’m going to venture down this path anyway.

In this case, the philosophies I’m referring to are immutable design tenets which fail to meet the needs of users.  We see this in the way Service Oriented Architecture (SOA) is implemented, and in many other places.  For example, there’s a frequent debate about whether services ought to be coarse grained or fine grained.  Do you create a web service that, when asked a question, returns everything we know, or one that just answers the question?  Take eBay’s API, for example: if you request information about an item, you get everything under the sun about that item, whether you want to use 1 field or all of them (of which there are hundreds).  The problem eventually becomes one of performance.  Pulling together all that data costs time, so if you have a time-sensitive application then a speedy, targeted response is important.
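The trade-off can be sketched with a toy item record.  The field names and values below are made up for illustration (a real catalog, like eBay’s, carries hundreds of fields per item):

```python
# Sketch of the coarse- vs fine-grained service trade-off.
# The record and its fields are hypothetical.

ITEM = {
    "id": 42, "title": "Vintage radio", "price": 31.50,
    "seller": "radiofan", "shipping": 4.99, "description": "...",
    # ...imagine hundreds more fields here
}

def get_item_coarse(item_id):
    """Coarse-grained: one call returns everything we know about the item,
    whether the caller wants 1 field or all of them."""
    return dict(ITEM)

def get_item_fine(item_id, fields):
    """Fine-grained: answer only the question that was asked, so the
    server never assembles data the caller will throw away."""
    return {f: ITEM[f] for f in fields}

# A price-watching client only pays for the two fields it needs:
print(get_item_fine(42, ["id", "price"]))
```

The fine-grained call keeps response assembly proportional to what was asked, which is exactly where the performance problem in the coarse-grained design comes from.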

A recent example that came up was the decision to have a reporting dashboard store no data.  The theory was that the dashboard was not a data mart so all the data needed would have to come from the source systems at the time of request.  The issue was that one of the things the users wanted was a running history of how the system looked at certain points in time, not just how the system looked at this moment.  The source system didn’t keep a history, and as a vendor product was unlikely to be modified to do so.  Unless the dashboard stored at least some data, we would be unable to meet the customer’s needs.

One can understand the philosophy “we’re not a data mart” and try to adhere to it in most cases.  Indeed, it’s a decent philosophy, since it avoids data update errors, space considerations, and probably a host of other issues.  But if you can’t meet your customer’s needs, then you can’t cling to that attitude.  As Benjamin Franklin said, the only certainties in life are death and taxes; your philosophy meeting all needs is not among them.  You cannot assume that it will, and then refuse to meet the customer’s needs when that ideal is violated.

Software process design falls into this category as well.  I’m sure you will have noted that I am a proponent of simple, one-size-fits-all decisions about how processes should work.  That is, I start with the assumption that I can design a process which will work uniformly for everything from the smallest bug fix to the largest project.  I still believe that should be your starting point, and I still believe that you shouldn’t create an exception to the process if you can show that the single process can be adjusted to meet that situation.  We often add exceptions to the process because it’s easier to band-aid something on than to look at the process holistically to see how it could be adjusted.  At some point, however, you may discover a situation that cannot be handled, in which case figure out a way to make the decision point extremely clear in the process and diverge where you must.

Start by saying “no”

There’s a fine line between giving people a sense of ownership of the process and turning a good process into a bad compromise solution.  As we start down the path of designing a process, we often begin to ask questions about how the basic process will work in outlier situations.  As we do this, we attempt to modify the process to handle the special case.

I encourage you not to do this.  In most cases, though the special circumstance may exist, it’s difficult (especially in software development, with so many confounding factors) to tell if the special circumstance really justifies special treatment.  Take, for example, a measurement system for determining software quality.  A typical approach I see is that we begin with a simple measurement such as “number of defects found per unit of work tested (function point, line of code, etc.).”  And then someone will say something like “but this is a legacy system, so we should measure new systems this way, but we shouldn’t use it for old systems since you might be finding a latent bug.”  Sure, that’s true, you might, but time and time again we’ve studied this and found strong evidence that the best predictor of production defects is recent changes to production.  Bugs do remain latent in the system, but the larger effect comes from the changes we make today.  We didn’t need to provide special treatment in our measurement system for legacy systems, because the evidence pointed contrary to such a need.
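The uniform measurement above is just a ratio, which is part of its appeal: the same formula applies to every system, with no legacy-system special case.  A sketch with hypothetical numbers:

```python
# The single, uniform quality measurement: defects found per unit of work
# tested (function points, KLOC, etc.). The figures below are invented.

def defect_density(defects_found, units_tested):
    """Defects per unit of work tested; the unit is whatever the
    organization has standardized on."""
    return defects_found / units_tested

# Same formula for both; no special case for the legacy system.
print(defect_density(12, 4.0))   # a new system: 12 defects in 4 KLOC
print(defect_density(30, 15.0))  # a legacy system: 30 defects in 15 KLOC
```

Keeping the formula identical everywhere is what makes the resulting numbers comparable across systems, which is the whole point of measuring in the first place.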

Start simple.  Start generic.  Start with a broad brush.  When you get the data back, check to see if the process deals with some situation poorly and adjust.  But don’t start up front envisioning all the ways your process won’t work before you ever get a chance to try it out.

Standard deviation matters

I was recently reviewing two health plans which I could join.  One was a typical HMO-type plan and the other a high deductible plan.  High deductible plans have been pushed pretty heavily lately, what with the focus on rising health care costs, the recent legislation, etc.  I don’t want to jump into the fray on the political aspects, but I do want to share with you an observation on being risk averse.

The way that a high deductible plan keeps premiums down is by requiring the policy holder to pay everything out of pocket until some limit is reached.  For that reason, high deductible plans are often called catastrophic coverage plans, since they only become helpful if you incur massive costs.  In exchange, you pay very low premiums.  And that’s because you get very little benefit most of the time.  These plans, from everything I’ve read online, are great for young and healthy people, but not so good (obviously) for people with chronic conditions or families.

I am married and have two kids, but I was willing to run the numbers and look at the odds to see whether I should go with a traditional plan or a high deductible plan.  First, you have the premiums to pay.  If you use no care, then a high deductible plan beats a traditional plan hands down on premiums.  The rest is an odds game.  How many times will you have to visit the doctor?  How many prescriptions will you need?  What are your odds of having a catastrophic illness?  For a traditional plan, each one of these events costs you just a copay, typically $20.  For a high deductible plan, you have to pay the full amount until you reach the yearly limit.  (Never mind the fact that if you have a chronic condition you’ll reach that limit year after year after year.)

Because I’m a data person, I happen to have tons of data on how often we visit the doctor and my current insurer was kind enough to send me a complete history of all charges so I had a reasonable idea of what various things would cost me.  Then, I built two models, one using a traditional plan and the other using a high deductible plan.

Guess what: in the end, the odds were that even with my family (and we make our fair share of doctor visits), I’d likely pay less with the high deductible plan than with a traditional plan.  It wasn’t true in every situation, since the range of potential costs of the high deductible plan overlaps with that of the traditional plan, but the premiums for a traditional plan are so much higher that it was difficult to overcome them.

But there was something else I observed.  My models calculated not only the average result but also the range of possible results.  Not surprisingly, the high deductible plan had a much, much higher standard deviation, since each medical event in a high deductible plan costs a lot more, and so the variability of the result is potentially much greater.
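A minimal sketch of the two models, using invented premiums, copays, and visit patterns rather than my real data, shows the same shape of result:

```python
# Toy comparison of a traditional vs. high deductible plan: the same
# random stream of medical events, costed two ways. All dollar figures
# and probabilities are hypothetical.
import random
import statistics

random.seed(1)

TRAD_PREMIUM, COPAY = 6000, 20        # traditional: high premium, $20/visit
HD_PREMIUM, DEDUCTIBLE = 2400, 5000   # high deductible: low premium, pay
                                      # full charges up to the yearly limit

def yearly_costs(n_years=10000, mean_visits=12, mean_charge=150):
    trad, hd = [], []
    for _ in range(n_years):
        visits = random.randint(0, 2 * mean_visits)
        charges = sum(random.uniform(0.5, 1.5) * mean_charge
                      for _ in range(visits))
        trad.append(TRAD_PREMIUM + COPAY * visits)
        hd.append(HD_PREMIUM + min(charges, DEDUCTIBLE))
    return trad, hd

trad, hd = yearly_costs()
print("traditional:", statistics.mean(trad), statistics.stdev(trad))
print("high deductible:", statistics.mean(hd), statistics.stdev(hd))
```

With these toy inputs, the high deductible plan comes out cheaper on average, but its standard deviation is several times larger, which is the observation that drove my decision.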

Ultimately, that’s why I stuck with a traditional plan.  Yes, I’m very likely to pay more, but the result is predictable, and I like predictable outcomes.  (I also like not having to concern myself with whether it’s worthwhile to go to the doctor.)  If you’re a risk averse person, and if studies about investing are to be believed you are more risk averse than you think you are, then a traditional plan produces a more comfortable financial result.  It’s like buying bonds instead of stocks.  You may not come out way ahead, but it’s much less likely that you’ll come out far behind.

Consider that when designing a process as well.  Consistency is important.  You can end up with a software development process that sometimes produces perfect code, but sometimes produces awful code, or you could have a process which produces consistently average code.  Which one is easier to deal with for continuous improvement?

More information isn’t always better

I’ve been reading a few things at the same time, one being Systems Thinking and the other being a Wired article about an unorthodox approach to improving safety.  In a bit of serendipity, Systems Thinking talks about organizations which are paternalistic, or as they put it, uniminded.  These organizations rely heavily on communication to direct what is essentially a complex but compliant machine.  In the same way that I direct my hands to type this blog entry, it would be chaos if my hands decided they had a mind of their own and wanted to do something different.

What’s interesting to me is how I observe uniminded organizations manifesting themselves in problem solving.  Whenever something goes wrong with a uniminded organization, it’s common to hear someone say “we just need to communicate more” or “we just need to communicate better.”  Essentially, if the hands and arms and legs of the organization knew in advance what the mind was thinking they could have executed its will better.  Thus, the solution becomes “more communication.”

What’s interesting about the Wired article is that it appears to be advocating for LESS communication and MORE vagueness, ultimately resulting in what Systems Thinking might call purposeful organizations.  That is, each car/person/bicycle in the system has a purpose and, in attempting to achieve that purpose, wants to cause no harm to the others in the system.  That doesn’t mean the system is filled with communication and direction on how to behave.  Quite the opposite.  By taking away the information, it forces everyone in the system to think about what they are doing, rather than just boldly assuming that every party in the system is going to behave by some set of rules to achieve a common goal.

Could less communication turn out to be a good thing – forcing every person to think about the job they’re doing and how to do it best without causing harm to everyone else?

Empty Boxes

How many times have you gone to draw a process flow and found that someone can articulate it perfectly to you?  It doesn’t happen that often, but when it does, my first reaction is typically to be impressed.  “Hey, if these guys have a process down, they must be in better shape than lots of others.”

So, then I’ll say, “can I see the artifacts for process step 2?”  Suddenly there’s a blank stare, a long pause, another blank stare and perhaps an excuse as to why there aren’t any artifacts to look at.  What’s a process step that can show no proof that it has transformed the customer’s request into something else?  An artifact doesn’t necessarily mean a long document, but it does mean there’s some evidence that the process step does something and that it is clear what that is.  Moreover, there should be other things inside that process box, like standard work instructions.

If you go to open up a process box and there’s nothing to look at inside, do you really have a process?  If you have a bunch of empty boxes in a warehouse do you have products inside?  Closing your eyes and praying won’t make product appear in a box any more than it will make meaningful work occur inside a shell of a process.  Just because you have a process map doesn’t mean you have a process.

Threading the Needle

Back when I was a teenager I had a video game that was a fighter pilot simulator.  I want to say it was called “f-17” or something.  I have no idea if that’s a real fighter jet or not and I have no idea if that was really the name of the game or not.  It’s not particularly relevant to the story anyway.

The game, like many games, involved a series of scenarios that you had to complete, usually involving blowing up enemy jets and/or enemy ground-based targets like bunkers, sonar arrays, etc.  A ways into the game, you came to a scenario that I will never forget for being so very frustrating.  The mission was called “Threading the Needle.”

It was called this because there were 3 or 4 enemy radar stations lined up in a zig-zag pattern.  The range of each station was such that there was a very narrow winding path you could fly that would avoid detection by all the stations.  If you got detected, the mission was over.

The thing was, it was really hard to fly such an exacting path.  I was probably a bit impatient as well, so I don’t recall that I ever made it past that mission.  My frustration with how delicate an operation it was has stuck with me for 20+ years.

And today, while driving home it came to mind for some reason.  But the difficulty of this silly video game has relevance to process work.  There are solutions we devise to process problems which are like threading the needle.  They’re delicate operations which avoid a series of hazards, but just narrowly.  If the process is done perfectly you get great results.  But if you don’t do the process perfectly, you get some ridiculous failure.

My viewpoint is, when you design a process which is essentially threading the needle, you’re asking for trouble.  It’s just too exacting and too subtle to be able to follow it faithfully and get good results all the time.  Look for answers which don’t require a delicate balancing act to make a successful outcome.  Threading the needle on process is going to be just as frustrating as me trying to fly a video game fighter jet through a narrow, winding corridor.

When doing more may be leaner

We often think that the route to being a leaner organization is to do less, but we sometimes forget that the very tangible “less” we’ve created may not be the right solution for the organization.  Take, for example, requirements documentation.

I was recently reviewing a requirements document that was a whopping 488 pages long.  For those of you with the Agile bent that I don’t have, your immediate reaction might be to replace it with user stories and iterations.  In fact, this project, by Boehm’s standards laid out in Balancing Agility and Discipline, would not be well suited for Agile at all.  Despite the document’s unwieldy size, the requirements were extremely well known and unlikely to mutate much over the course of the project.  The team size was relatively large.  The business would not be in a position to make decisions rapidly because the requirements were driven by an external client.  The team skill tended towards average performers.  Everything about this project said plan driven was the right choice.

So the team had done what it thought was the leanest thing possible.  Rather than producing many documents up front, it produced one monolithic monster.  The argument was, this is leaner because this document serves all readers.  As we explored the document with the various customers we discovered something interesting.  The developers hated the document.  It didn’t suit their needs.  The customers hated the document.  They didn’t feel comfortable signing off on all the technical detail that had gotten into it.  We couldn’t find anyone who liked the format of the document.

In some senses, a single document appeared to be the leanest thing possible to do.  The tangible “we’ll write less if we just do one document” was undone by the fact that the product now served nobody particularly well.  In essence, we lost sight of the fact that translating what the customer says to us into usable requirements is value added work and therefore shouldn’t be what we’re targeting to lean out at first.  Sure, there may be optimizations to the value added work, but there was much more non-value added work going on as a result of this choice.

Since the document suited nobody well, everyone was creating interim documents to re-translate the parts they needed into a structure that would be usable for them.  That’s clearly unnecessary processing: redoing the work that someone else already did.

In this case, more documents would have been better.  Documents which separated out what the developers need to know to create the code and what operations folks need to know to support the client and more.  With those in hand, good translations of the requirements would lead to good code and avoid the significant rework the team was experiencing just trying to figure out which parts of the document applied to them.

Whether less is really more depends on what you’re getting less of.  Less clarity in exchange for less documentation?  That’s not the direction we want to head in.