Following fast

Some time ago (unfortunately I can't cite the source), I read that one of Toyota's major strategies was not necessarily to innovate, but to follow quickly in the footsteps of innovators. Whether or not the attribution is correct, the point stands: innovation is overrated if you can copy someone else's innovation quickly enough.

Well, today I saw an article on Huffington Post: a sort of rant about how 2048 is a rip-off of Threes!, and how you should go out and buy Threes! because its creators put real effort into their game while the knockoff was thrown together in a couple of weeks.

I won't comment on the fairness argument, but I do think the article is an excellent illustration of following fast. 2048 is indeed very similar to its predecessor. I've played both, ever since my father-in-law first challenged me to beat his top score on Threes! and subsequently pointed me to the equally addictive, and free, 2048. However, 2048 does have some differences, and for the casual player it's far less frustrating. So, should every company invest 14 months inventing a game, or is a two-week knockoff, built by a single person and given away, adequate?

Particularly in gaming, the marketplace is highly commoditized. If you need an entertainment fix, almost any game will do. So while each game is unique, you're not just competing against other games like yours; you're competing against every game that satisfies the same need.

For better or worse, the free market is pretty indifferent to fairness, so recognizing that and following fast may be the way to go. Following fast has another advantage too, one articulated by Tom DeMarco in his novel The Deadline: if you have an existing product to copy, the specifications for how your product should (at least mostly) work are right there, in the form of the existing software and its user manuals.

Satisfying the majority of needs may mean you satisfy nobody

How could my title be true, you ask?  After all, if you satisfy the majority of needs that people express, mustn't you necessarily satisfy the majority of people?  I think not.  Take, for example, eliciting requirements for a new application.  Imagine that each of the 100 people you ask provides 20 requirements, and that 18 of those are identical across everyone.  Looking at the unique requirements, you have 218: the 18 everyone shares, plus 2 unique ones per person.  You set out to do the 18, figuring you're covering 90% of what everyone says they want, right?

And when you put it into production, nobody is happy with it.  Why not?  Because each person had 2 more requirements that they wanted and didn't get.  So while you satisfied 90% of everyone's requirements, you satisfied exactly 0% of anyone's complete set of requirements.
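
If it helps to see that arithmetic spelled out, here it is as a few lines of Python (the numbers are the hypothetical ones from above):

```python
people = 100
shared, unique_each = 18, 2          # 18 common requirements, 2 unique per person
per_person_total = shared + unique_each              # 20 on each person's list

distinct_requirements = shared + people * unique_each    # 218 distinct in all

coverage_per_list = shared / per_person_total        # 0.9 -> 90% of each list met
fully_satisfied = 0                  # nobody got their 2 extras, so zero people
print(distinct_requirements, coverage_per_list, fully_satisfied)
```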

Yes, my example is extreme, but you get the idea.  You can certainly try to prioritize requirements based on how many people ask for something, but unless you also consider whether someone's unique requirements are deal-breakers, it may not matter how quickly and elegantly you complete the common stuff.  In today's economy people have lots and lots of choices, and while you can't satisfy everyone all the time, you can certainly not-quite-satisfy everyone all the time.

Richness versus Recall

Alistair Cockburn offers an interesting insight in his presentation "I come to bury Agile, not praise it."  On slide 12, he presents the richness of the communication channel as an important factor in getting information across.  Surely you've experienced this yourself: a never-ending chain of back-and-forth emails, resolved quickly by a single one-minute phone call.

Therefore, it makes enormous sense to replace communication of low richness with communication of high richness, right?  Well, I'm not sure it's that black and white.  To use information effectively, you not only have to communicate it, you also have to recall it when you need it again.

For example, you sit down and have a conversation with the user, then turn around to write some code.  Your ability to translate what the user asked for into code depends not only on having the conversation, but on remembering all its details correctly.

So, do you have an eidetic memory?  Probably not.  How long can you accurately recall a conversation?  Long enough to turn it into code faithfully?  Probably not that long either.  You can probably remember the nominal case, but what about all the exception handling you discussed?

Now, I'm not saying you should communicate via email or paper only; that's clearly silly.  But at the other extreme, you probably shouldn't communicate orally only, either.  Pairing the face-to-face conversation with documentation gives you both the completeness of the conversation and the ability to recollect the details when you need them.

Operator?

Does anyone remember the kids' game "Operator" (not Operation, the silly surgery game)?  In Operator, which for us was usually played while sitting at the lunch table, the first kid would pick a simple phrase like "there is a cat in my house" and whisper it into the ear of the person next to them.  Of course, since it was a loud lunchroom, the phrase would get slightly misheard, and by the time it had been passed along 20 kids or so, the last kid would hear "there is a flat on my grouse" (which doesn't make a lot of sense, of course).  The final kid would say the phrase they heard aloud and everyone would laugh, particularly when the first kid announced what they had originally started with.  All in all, it was a very silly game, but kids liked it.

Today, we apparently still play that game, only with more serious consequences.  When your testers write test cases directly from the system design or the technical design, you're playing Operator.  Each step down the line, you're counting on the translation from the prior step to this one being faithful.  Like making a copy of a copy of a copy of an original document, it just keeps getting blurrier and blurrier.

If you want your testers to produce the most faithful interpretation of the requirements they can, they need to work from the requirements document.  If they assume that the analyst or technical lead has done the translation faithfully, then they are working from something that is potentially suspect.

Now, I'm all for using tools to do requirements traceability, but even with tools like ReqPro, or others offering what's often called "ReqPro Lite" functionality, it still makes sense to ensure that your final check of the system links directly back to the original document you started with.

Don’t play Operator with your customer’s requirements.  Everyone should hear them first-hand, especially those who are responsible for checking they were implemented properly.

Why root cause confuses

The term "root cause" seems to confuse software developers, and it has been that way for a long time.  When a developer talks about "root cause," they tend to mean the place where the system started to go wrong.

For example, if you call a web service and it returns bad results that the caller might have detected, there are two things you could do.  One, you could fix the caller to handle the error gracefully (often called defensive coding); or two, you could fix the web service to not return bad results.  Most developers I've run into will tell you that fixing the web service is "root cause," the idea being that if the web service didn't do something bad, there'd be no need for defensive coding in the caller.
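
To make the two options concrete, here's a minimal sketch in Python. The service and function names are made up for illustration; imagine the real web service occasionally misbehaves:

```python
import math

def get_price_from_service(item_id):
    # Hypothetical stand-in for the real web service call; imagine it
    # misbehaves and hands back a negative price for some items.
    return -1.0

# Option 1: fix the caller (defensive coding). The caller validates the
# response instead of trusting the service blindly.
def fetch_price(item_id):
    price = get_price_from_service(item_id)
    if price is None or math.isnan(price) or price < 0:
        raise ValueError(f"bad price from service for {item_id!r}: {price!r}")
    return price

# Option 2 is to fix get_price_from_service itself so it can never
# return a bad value in the first place.
```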

Fair enough, but this isn't root cause.  Root cause has to go further, a lot further.  If "root cause" only goes so far as fixing the issue at its point of origination, then all you've done is fix one issue.  Instead, you have to ask the question: why did we make that mistake in the first place?  Was the web service wrong because of a coding issue?  Requirements?  Design?

And further, why did you make a requirements, design, or coding error?  What can be done to catch issues of this type in the future?  The bug you fixed is fixed; it isn't going to be the next bug you deal with.  But it is going to be one of a pattern of issues that you are not handling.

When you think about root cause, think beyond fixing the bug at the source.  That’s helpful, but it isn’t exactly the root cause of why the problem was introduced in the first place.  If you intend to stop future issues, you have to go further than just fixing the issue you have now.

What’s an implied requirement?

I think we all know what an implied requirement is: it's the unspoken expectation behind the requirement we do specify.  As I was driving today, I was trying to find a simple example that would illustrate just how many decisions a software developer has to make when you leave requirements implied.  Not that you should expect everything to be specified, just that you don't always realize how much you are leaving to chance.

So, here’s my requirement: the user enters two numbers and the program adds them together and spits out the result.

That seems simple enough, and let's say it's a command-line application, so there's no UI to deal with.  It should be all of about 4 lines of code, right?
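
Something like this naive sketch, here in Python for illustration:

```python
# The "4 lines of code" version. Every decision below is left implied:
# float() happens to accept "2.024e5" but rejects "pi", "1/3", and
# "1,234.56" with an ugly traceback -- choices we never consciously made.
a = float(input("First number: "))
b = float(input("Second number: "))
print(a + b)
```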

Ok, but what if the user enters something that's not a number?  Simple, right?  We'll just block non-numbers.  Well, except that some "non-numbers" are numbers.  Pi (3.1415926…)?  e (2.71828…)?  Should we allow those?  If this were a scientific app, we'd probably have to.  And perhaps scientific notation as well (2.024 x 10^5)?

Ok, but what about fractions, that is, rational numbers?  Can the user specify 1/3 + 2/3 and get 1, or will your software return 0.999999999?  Some languages (like Lisp) understand exact rationals natively.  Others, like C, don't.  I think most typical users would expect fractions to be added exactly, but who's to say for sure?
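
Python, to take one example, ships a fractions module for exact rational arithmetic, so this becomes a one-import decision:

```python
from fractions import Fraction

print(1/3 + 2/3)                        # floats: happens to print 1.0 here...
print(0.1 + 0.2)                        # ...but 0.30000000000000004 here
print(Fraction(1, 3) + Fraction(2, 3))  # exact rational arithmetic: prints 1
```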

And what about i?  Imaginary numbers?  Do we support those?

How about the use of commas?  Is 1,234.56 considered valid?

What if the user enters something invalid?  Should we treat it as zero or return an error?  If we use C's conversion functions, they'll just convert invalid input into 0, which, though an obvious implementation, is probably not quite what you'd expect when adding "2 + cat", right?
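
The same policy decision exists in any language. Here's a sketch of both choices in Python (the first function only roughly mimics C's atoi behavior):

```python
def parse_atoi_style(text):
    # Roughly mimics C's atoi: unparseable input silently becomes 0.
    try:
        return float(text)
    except ValueError:
        return 0.0

def parse_strict(text):
    # The opposite policy: invalid input is an error the user hears about.
    try:
        return float(text)
    except ValueError:
        raise ValueError(f"{text!r} is not a number") from None

print(parse_atoi_style("cat"))  # 0.0 -- so "2 + cat" quietly yields 2.0
```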

Why bring this up?  Because the other day I saw code that checked a field for a valid number using a regular expression that looked like this: [0-9\.]*.  Unfortunately, that allows "0.0.2" or "..." or "0...9" but does not allow "1,234.56".  Clearly, what appears simple is really not so simple.  A better regular expression (pardon that I didn't test this out) would be: [0-9]*\.?[0-9]+.  Which is to say: optional digits, followed by an optional "." (but at most one), and then at least one mandatory digit.  That's an improvement, but it still doesn't support commas, which are a bit more complicated yet.
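
Here's a quick sanity check of both patterns using Python's re module (fullmatch, so the pattern must cover the entire string):

```python
import re

flawed   = re.compile(r"[0-9\.]*")         # the pattern from the code review
improved = re.compile(r"[0-9]*\.?[0-9]+")  # digits, at most one dot, >= 1 digit

for text in ["0.0.2", "...", "0...9", "1234.56", "1,234.56", ".5"]:
    print(f"{text!r:>12}  flawed={bool(flawed.fullmatch(text))}  "
          f"improved={bool(improved.fullmatch(text))}")
```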

People are good at hearing "add two numbers" and doing the right thing.  Computers are not.  And thus the job of translating all those implied requirements falls to the developer, who may or may not do a good job of it.  It depends somewhat on what your developer thinks about, and it can be mitigated by some form of review process to ensure thoroughness.  But the point is simple, and at least as of this writing, bears repeating:

When someone says "I want a programming language in which I need only say what I wish done," give him a lollipop. (Alan Perlis)

Assume nothing.  Or at the very least, assume that your developer will make the assumption for you, and that it may not be the one you want.  It's a trivial example, but you can quickly see how simple requests hide potentially significant issues.

Perspective on Requirements

It's probably not all that often that we get to see things from both sides of the requirements process.  As developers, managers, testers, etc., we don't author the requirements.  We don't generate the idea.  We help the folks on the business side turn their ideas into requirements.

And, we constantly bemoan how horrible it is that business people can’t seem to put requirements on paper to save their lives.  They don’t know what they want.  They keep changing their minds.  They’re too detailed – they give us solutions instead of problems.  It seems like they can never do right by us.

In an interesting experiment, I happen to be working with a group of college students on their senior-year project.  As part of that project, the college solicited assistance from a range of companies to act as customers and give the students a programming problem to solve.

Without getting into the details of what I proposed, I gave it a bit of thought and ultimately settled on a simple tool to assist with generating test data.  It seemed straightforward enough.  As I sat there writing up a short requirements document for the students, I thought to myself how pleased I was to get the chance.  The tool was so simple it didn't need use cases.  In my mind, a context diagram or two, a simple data model to illustrate the goal, and some text about the various types of output it had to generate were all that was needed.

In my head, having been a programmer for many years, I knew exactly how I would solve it, but I tried very hard not to impose my idea of a solution on the students.  After all, I had few strong opinions on how most of the solution should work: certainly no opinion on the user interface, or on how it would connect to the database to create the data (could be ETL, could be ODBC, could be something else), and so on.

On the day of the first meeting with the students, I printed out 5 copies of my requirements and off I went to meet with them, as well as their advisers, on campus.  Even after explaining the problem several times, the students seemed confused.  The advisers seemed confused.  In the whole room of people, it seemed like I was the only one who wasn’t confused.

What had gone wrong?  Were my requirements that bad?  How was it that I, with so much experience on the development side of the business, couldn't write a requirements document to save my life?

It was a day or two later that I had a serendipitous moment.  Working with another, totally unrelated team on process work, we were discussing the team's need for autonomy to make its own decisions.  This is not normally something I'd support, since autonomy means non-standard.  When we finally rolled out a necessarily vague process to enable their "autonomy," the users of the process freaked out.  They didn't know how to use it.  They wanted much more detail.  The autonomy they desired came only after they had seen a complete answer.  Then they could pick it apart and tweak it.

That's when it struck me.  I had written my requirements giving the students enormous autonomy.  As a developer who has been through lots and lots of projects, I too desire autonomy to make the design decisions that help solve the problem.  So when I wrote my requirements, I wrote them the way I would like to receive them.  With 15+ years of development experience, I don't need someone to spoon-feed me requirements anymore.  But these were college students who had solved very few real-world programming problems (perhaps none, for some of them).  To them, being handed an open-ended problem felt vague, hand-wavy, and generally dissatisfying.  They freaked out because they didn't have the experience to derive an answer themselves; they needed more detail, perhaps even a (partial) solution to the problem.

Had I done that, the students could have riffed on the solution rather than creating from nothing.  I'm not sure how to incorporate this discovery into a methodology yet.  Experienced developers don't need that kind of detail, but less experienced developers do.  You could err one way or the other, but why write lots of detail only to have it ignored by developers who don't need it?  Or worse, why generate so much job dissatisfaction by treating them like college students?

When doing more may be leaner

We often think that the route to being a leaner organization is to do less, but we sometimes forget that the very tangible “less” we’ve created may not be the right solution for the organization.  Take, for example, requirements documentation.

I was recently reviewing a requirements document that was a whopping 488 pages long.  For those of you with the Agile bent that I don't have, your immediate reaction might be to replace it with user stories and iterations.  In fact, by the standards Boehm lays out in Balancing Agility and Discipline, this project would not be well suited to Agile at all.  Despite the document's unwieldy size, the requirements were extremely well known and unlikely to mutate much over the course of the project.  The team was relatively large.  The business would not be in a position to make decisions rapidly, because the requirements were driven by an external client.  The team's skills tended toward average performers.  Everything about this project said plan-driven was the right choice.

So the team had done what it thought was the leanest thing possible.  Rather than producing many documents up front, it produced one monolithic monster.  The argument was that one document is leaner because it serves all readers.  As we explored the document with its various customers, we discovered something interesting.  The developers hated the document; it didn't suit their needs.  The customers hated the document; they didn't feel comfortable signing off on all the technical detail that had crept into it.  We couldn't find anyone who liked its format.

In one sense, a single document appeared to be the leanest possible thing to do.  But the tangible "we'll write less if we just do one document" was undone by the fact that the product served nobody particularly well.  In essence, we lost sight of the fact that translating what the customer tells us into usable requirements is value-added work, and therefore shouldn't be the first thing we target for leaning out.  Sure, there may be optimizations to the value-added work, but there was much more non-value-added work going on as a result of this choice.

Since the document suited nobody well, everyone was creating interim documents to re-translate the parts they needed into a structure they could use.  That's plain unnecessary processing: redoing work that someone else already did.

In this case, more documents would have been better: documents that separate out what the developers need to know to write the code, what the operations folks need to know to support the client, and so on.  With those in hand, good translations of the requirements would lead to good code, avoiding the significant rework the team was experiencing just trying to figure out which parts of the document applied to them.

Whether less is really more depends on what you’re getting less of.  Less clarity in exchange for less documentation?  That’s not the direction we want to head in.

Changing requirements: why and when to say "no"

One of the biggest challenges I hear about in developing software is requirements stability.  Change has a nasty effect on code quality, and that effect is not necessarily overcome simply by adopting Agile software development.  Change in itself is not a bad thing, but how we deal with change can be.

If we simply march forward leaving the delivery date as-is, we've effectively played a game of chicken with a well-known limitation.  In any stable software development methodology you may set two of these three parameters: features, time, or cost.  Pick any two; the third you cannot control.  So when change comes along and you don't modify the resources or the delivery date AND you add more features, an implicit feature gets dropped.  That implicit feature is quality.

We've gone so far as to study this, and found the effect to be both statistically and practically significant.  Projects which accept change controls have defect densities which are, at the median, 85% higher than projects which do not experience change.

So what can we do?  The first reaction I've heard from development teams is to tell the business to produce better requirements.  Fair enough.  If the business knew exactly what it wanted and specified it, the issue would be solved.  Alas, even though LEAN thinking would prefer we solve the problem holistically, we often find ourselves unable to change the inputs we get.  We have to do the best we can with less-than-perfect requirements, and we need to know where to push back for more clarity and when to refuse change.

Here are the rules of thumb that I believe could guide these decisions:

  1. Differentiate between "I didn't know I had that requirement" and "I didn't bother to figure it out."  There is a place for new requirements that nobody realized we had.  That's distinctly different from a lazy business partner who thinks you can make do with a napkin spec and is unwilling to sit with you and stay involved.  For the former, c'est la vie: accept that they exist and decide what will change to accommodate them, whether late delivery, more resources, or letting quality slide.  For the latter, spend some time bugging the user up front to help them figure out their requirements.  Even though they're unwilling to do it themselves, prototype something and show them.  Spend a little money to learn a lot, rather than facing an expensive rework situation later.
  2. Differentiate between "I'd like this different" and "this matters."  If, upon seeing a piece of the system, the user says "wouldn't it be neat if…", put it on the to-do list.  If the user can articulate a competitive disadvantage if the feature isn't changed, change it.  As an organization, we all have lots of ideas about how software should work, but you can drown in gold-plated features that have negligible impact on value in the eyes of the customer.  As a development organization, it is not wrong to ask the business to articulate how a change will improve the customer experience, and to politely suggest that maybe the change wait until later, or perhaps never happen at all.
  3. Tom and Mary Poppendieck suggest "delay commitment."  They're drawing from LEAN's philosophy of pull: if nobody is asking for it, why are you building it?  Generally, I agree with this concept; don't gold-plate your software with developer ideas that nobody asked for.  However, if a user knows they require a new report, you aren't delaying commitment by asking them to specify what it is.  What you can delay commitment on is exactly how it looks.  The critical piece of architecting a system isn't what the user interface looks like (and yes, a report is a form of user interface); it's knowing what data you'll need, when, and where.  You don't need commitment on the smallest detail, but you do need commitment on the key data elements you'll have to spit out.  If you can't get the key bits, maybe it's time to put on the brakes.

It's never my goal not to serve our customers, but there are techniques we can leverage to make change less likely to occur.  We will never avoid all change, because there are things nobody could have anticipated, such as shifts in the marketplace.  But we can create a sense of stability by proactively extracting knowledge instead of attempting to read the tea leaves of vague requirements.  It doesn't need to be all or nothing.  Then combine that with a LEAN development approach that creates the software efficiently, so requirements don't go stale just because you're moving slowly.