Changing requirements: why and when to say “no”

One of the biggest challenges I hear about in developing software is requirements stability.  Change has a nasty effect on code quality, and that effect is not necessarily overcome simply by adopting Agile software development.  Change in itself is not a bad thing, but how we deal with change can be.

If we simply march forward, leaving the delivery date as-is, we’ve effectively played a game of chicken with a well-known limitation – in any stable software development methodology you may set two of these three parameters: features, time, or cost.  Pick any two; the third you cannot control.  So when change comes along and you add more features without modifying the resources or the delivery date, an implicit feature gets dropped.  That implicit feature is quality.

We’ve gone so far as to study this, and found the effect to be both statistically and practically significant.  Projects which accept change controls have defect densities that are, at the median, 85% higher than projects which do not experience change.

So what can we do?  The first reaction I’ve heard from development teams is to tell the business to produce better requirements.  Fair enough.  If the business knew exactly what they wanted and specified it, the issue would be solved.  Alas, even though LEAN thinking would prefer we solve the problem holistically, we often find ourselves in situations where we lack the ability to change the inputs we get.  We have to do the best we can with less-than-perfect requirements, and we need to know where to push back for more clarity and when to refuse change.

Here are the rules of thumb that I believe could guide these decisions:

  1. Differentiate between “I didn’t know I had that requirement” and “I didn’t bother to figure it out.”  There is a place for new requirements that nobody realized we had.  That’s distinctly different from a lazy business partner who thinks you can make do with a napkin spec and is unwilling to sit with you and stay involved.  For the former, c’est la vie – accept that they exist and determine what is going to change to accommodate them: late delivery, more resources, or letting the quality slide.  For the latter, spend some time bugging the user up front to help them figure out their requirement(s).  Even if they’re unwilling to do it themselves, prototype something and show them.  Spend a little bit of money to learn a lot rather than facing an expensive rework situation later.
  2. Differentiate between “I’d like this different” and “this matters.”  If, upon seeing a piece of the system, the user says “wouldn’t it be neat if…”, put it on the to-do list.  If the user can articulate a competitive disadvantage if the feature isn’t changed, change it.  As an organization, we all have lots of ideas about how software should work, but you can drown in gold-plated features that have negligible impact on value in the eyes of the customer.  As a development organization, it is not wrong to ask the business to articulate how a change will improve the customer experience, and to politely suggest that maybe the change wait until later – or perhaps never happen at all.
  3. Tom and Mary Poppendieck suggest “delay commitment.”  They’re drawing from LEAN’s philosophy of pull.  If nobody is asking for it, then why are you building it?  Generally, I agree with this concept – don’t gold-plate your software with developer ideas that nobody asked for.  However, if a user knows they require a new report, you aren’t delaying commitment by asking them to specify what that is.  What you can delay commitment on, however, is exactly how it looks.  The critical piece of architecting a system isn’t what the user interface looks like (and yes, a report is a form of user interface); it’s knowing what data you’ll need, when, and where.  You don’t need commitment on the smallest detail, but you do need commitment on the key data elements you’ll have to spit out.  If you can’t get the key bits, maybe it’s time to put on the brakes.
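The last rule of thumb can be sketched in code: commit early to the key data elements a report must emit, while leaving the presentation open to late change. Here is a minimal Python illustration of that separation; the report fields and function names are hypothetical examples, not from the original post.

```python
from dataclasses import dataclass
from datetime import date
from typing import Callable, List

# Commit early: these are the key data elements the report must spit out.
# (Field names are hypothetical, chosen for illustration.)
@dataclass
class SalesLine:
    region: str
    period: date
    units_sold: int
    revenue: float

# Delay commitment on presentation: rendering is just a function of the
# committed data, so it can be swapped or reworked late without touching
# the architecture that gathers the data.
Renderer = Callable[[List[SalesLine]], str]

def plain_text_renderer(lines: List[SalesLine]) -> str:
    return "\n".join(
        f"{l.region}\t{l.period.isoformat()}\t{l.units_sold}\t{l.revenue:.2f}"
        for l in lines
    )

def render_report(lines: List[SalesLine], renderer: Renderer) -> str:
    return renderer(lines)

if __name__ == "__main__":
    data = [SalesLine("East", date(2009, 1, 31), 120, 4800.00)]
    print(render_report(data, plain_text_renderer))
```

The point of the sketch is that if the user later wants the report as HTML instead of plain text, only a new `Renderer` is needed; the committed data contract, the part that shapes the architecture, never moves.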

It’s never my goal not to serve our customers, but there are techniques we can leverage to make change less likely to occur.  We will never avoid all change, because there are things nobody could have anticipated, such as shifts in the marketplace.  But we can create a sense of stability by proactively extracting knowledge instead of attempting to read the tea leaves of vague requirements.  It doesn’t need to be all or nothing.  Then combine that with a LEAN development approach to create the software efficiently, so requirements don’t become stale just because you’re moving slowly.

Not one of the critical few

Ingrained in the Six Sigma school of thought is the critical few – the 80/20 rule. It is an important rule. In practice, there are a handful of things which often allow you to make big leaps from an incapable process to a capable one. There are more subtle characteristics of the process which can be refined to continually improve performance, but that isn’t step change; it is refinement. And then there’s a class of things that just don’t matter.

Recently, while attempting to facilitate a process design effort, I spent a lot of time thinking about the things that don’t matter. That may have been because that’s all anyone spent their time talking about. And as facilitators, we enabled the dragging on. Having been instructed to drive toward a single standard process and toolset, we discussed every little one-off thing that people wanted the process to allow for, to see if we could squeeze it out. A day’s worth of 25 people’s time, meant for designing a process, spent talking about the equivalent of the carpet color.

We wanted perfect compliance to the standard, and that meant a standard which was not necessarily all-inclusive (because some of these one-off requests were truly ridiculous by any standard). This is where I believe we got off track with process work. Process design is about controlling the critical few things which will make the difference in process performance.

But that is not what we were discussing. We were discussing nuances, oddball cases, odd uses of the process, and data elements that some teams wanted and others didn’t. We talked about the 1% and largely ignored the 99%. We talked about things that weren’t going to make the difference, regardless of which way they went.

To begin with, we didn’t know what was going to make the difference. We hadn’t studied the existing processes to understand what made them work – what really mattered and what didn’t. This created unnecessary room for debate because we were unable to bring adequate materials to the table to help the team work through their differences. We had little to no information on what mattered and what didn’t.

Instead of define-measure-analyze-improve-control we just went right into improve. And there we got bogged down discussing every little quirk, because we didn’t know what else we ought to be talking about. Or more importantly, what we shouldn’t be talking about.

Instead of a conversation that was “do we really need that? How many of our teams use that process step?” we could have said “sure, it doesn’t matter to me if you allow for that.” And we’d be saying that not because we didn’t care but because we actually knew what did matter. Everything else, the little things that we debated with the teams, could have instead been bargaining chips that we could dole out in heaps while giving up basically nothing that really mattered. We could have had a strong position, not because we won all the arguments but because we knew which battles were worth fighting and which were worth conceding.

Had we known what things were not one of the critical few things, we could have appeared very agreeable and allowed the teams as much “leeway” in the process as they claimed they needed. All along we’d be giving up nothing. Nothing that really mattered anyway.

It’s a reminder of why a thorough measurement and analysis of a process is important. It isn’t just discovering what the current state is (measurement); it’s also understanding why it works (analysis). And from there, narrowing down the bits of process that really do matter, and just letting the rest go. Some things just don’t matter.

It’s a net, people

Ah, nets – possibly the greatest safety device ever invented. And also very handy for catching fish. Or are they? In software development, we always talk about nets. Testing is a net to catch all the bugs that developers write. Peer review is a net to catch all the design flaws that analysts create (and possibly a net to catch code bugs as well). But a net it truly is, and nets, by design, have holes in them.

Nets catch big things, like big fish, and let the little ones through. That’s fine if you don’t mind the little fish escaping, but what if the little things getting by are defects instead of fish? Sure, the really enormous fish – er, defect – gets caught, but the little ones, the hundreds of thousands of little ones, slip right through.

Nets serve a purpose, but they are imperfect and will always let through things small enough to fit between the strands. Some nets are finer than others. General-purpose nets, like peer review, tend to catch more than special-purpose nets like regression testing. After all, peer review (in theory) looks for a broad range of things while regression testing only looks for what it already knows about.

But the purpose of my post is not to talk about the fineness of any net; it is to remind you that no matter how fine your net is, it is still a NET. And you really don’t want nets when it comes to software development, you want walls – barriers that let NOTHING undesirable through. The barrier we most often overlook in software development is error-proofing the process.

Sure, for new code that’s hard to do, but many systems are configuration driven and require changes in many parts of the system to enable new functionality. We need entries in one table to satisfy foreign keys, and entries in other tables to join values together. Multiple rows of configuration working in concert produce the right behavior. But what do we do? We write instructions on how to configure the system. Instead, if there’s a limited number of results you want to support (say, a set of client features), either write the system with a single item of configuration (so you don’t have to worry about updating the right thing in every place) OR write a tool that ensures all those settings are configured with a single click. Error-proof. Make it easy to do the right thing.
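One way to build that kind of wall is a small tool that derives every required configuration row from a single input, and applies them atomically so a half-configured system is impossible. Here is a minimal Python sketch using SQLite; the table and feature names are hypothetical, invented for illustration.

```python
import sqlite3

# Hypothetical schema: enabling one feature requires rows in three tables,
# exactly the multi-table situation that instructions-on-a-wiki gets wrong.
SCHEMA = """
CREATE TABLE features (name TEXT PRIMARY KEY);
CREATE TABLE client_features (
    client_id INTEGER,
    feature_name TEXT REFERENCES features(name)
);
CREATE TABLE feature_settings (
    feature_name TEXT REFERENCES features(name),
    key TEXT,
    value TEXT
);
"""

def enable_feature(conn: sqlite3.Connection, feature: str, client_id: int) -> None:
    # The error-proofed "single click": one call creates every row the
    # feature needs.  The transaction commits only if all inserts succeed,
    # so you can never end up with a partial, inconsistent configuration.
    with conn:
        conn.execute(
            "INSERT OR IGNORE INTO features(name) VALUES (?)", (feature,))
        conn.execute(
            "INSERT INTO client_features(client_id, feature_name) VALUES (?, ?)",
            (client_id, feature))
        conn.execute(
            "INSERT INTO feature_settings(feature_name, key, value) "
            "VALUES (?, 'enabled', 'true')", (feature,))

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)
    enable_feature(conn, "fancy_reports", 42)
```

The design choice is the point, not the particular tables: the person doing the configuration supplies one fact (which feature, which client) and the tool fans it out to every place it must live. Doing the right thing becomes the only thing you can do.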

Walls, not nets, are where the answer lies.

Don’t unnecessarily break cultural norms

I was recently involved in a post-mortem of a project we’d been working on.  One part of that session was to perform plus-deltas. Plus-deltas are essentially a coarse way of getting feedback from the team about what they liked, what they didn’t like, what they want more of, and so on.

We were doing plus-deltas by writing up sticky notes and posting them on whiteboards under key themes. Themes might be “the process”, “the tools we have”, “the people on the team”, and so on. As a little twist, the facilitator chooses three colors of sticky notes and gives each color a role. There are plusses (things we want to continue doing), deltas (things we want to stop doing or do differently), and a third category he calls “wannas”: things we think we should start doing.

So if you were going to choose a color for each type of sticky-note based on what type of thing was going to be written on the sticky-note, what would you choose?

I’ll wait while you think about it…

[ insert Jeopardy music here ]

… ok. Let me guess, you chose the following:

Good things (plusses) go on green sticky-notes.

Bad things (deltas) go on red sticky-notes.

“Wanna” things go on some neutral color (maybe blue) or yellow.

Was I close? Did you ever consider putting good things on the red sticky-notes or bad things on the green sticky notes? No? Why not?

Because we have cultural norms about these colors. Green in the US means good while red means bad. That’s not true in China and Japan, where red is a good color. I’m not sure how they feel about green, but red isn’t a bad thing there. Alas, for some reason, this is what the facilitator chose:

Plusses are to be put on yellow sticky-notes.

Deltas are to be put on green sticky-notes.

“Wannas” are to be put on red sticky-notes.

Watching the teams work, you can imagine the confusion. The facilitator had to put up a cheat sheet about what color meant what. When the overhead projector went to sleep and our cheat sheet wasn’t visible, people were lost as to which color to use. Work stopped while people tried to remember which sticky color meant what. It was ridiculous, and completely avoidable.

In designing a process, there are things that you want to change, like the way teams work or even the corporate culture around reporting errors. And then there are things you should never change: cultural norms that would be extremely hard for people not to follow the way they always have. Regardless of how you feel about it, it probably isn’t sound advice to make everyone walk on the left side of the hallway as a process improvement, because it’s just going to trip people up unnecessarily. We have a norm about that which extends beyond the company, so don’t mess with it.

Simpler can be better

We overcomplicate.  Almost everything.  We reach for advanced strategies even when we haven’t been particularly good at the basics.  But I didn’t learn today’s lesson about simpler being better from work.  After all, it’s Sunday and I’m not at work.  I learned that simpler is better from playing a video game.

Our friends have a Nintendo Wii.  We also have one, though we don’t own many games for it.  Our daughter never plays the Wii at home.  And furthermore, she’s just under 3 years old.  Her skill with a Wii controller is, from what we can tell, fairly limited.  But when she sees it being played at our friends’ house, she really wants to try it.

And she asks very politely to play.  In this case our friends’ son was playing Wii Playground.  It’s a series of mini-games like dodgeball, tetherball, paper airplanes, and slot cars.  My daughter has no capability to do anything advanced with a Wii Remote.  She can wave it about, and she can press a button and hold it.  She has no idea about pressing a button several times, or pressing the B button.  All she can do is press the A button and wave the remote about.

So our friends’ son set our daughter up with slot cars.  There are all kinds of cool things you can do with slot cars.  You can press “A” twice to make the car turbo boost, you can move the remote left and right to switch lanes, you can press “B” to activate special powers.  And doing all this, you can hop between lanes, catching speed boosts on the track and blasting in front of your opponents.

Now my little one doesn’t know about winning or losing, she just likes to try.  I’m happy for her to do that.  After all, she’s not even 3.  Anyway, once she had the remote in hand, the race started and she held down the “A” button to make the car go.

Sure enough, the car took off down the track.  Our little one didn’t move the car between lanes, she didn’t use the turbo boost, she didn’t activate any special powers.  She didn’t ram her opponents off the track.  Despite all the capabilities that this game had, she used none of them.  She just put the pedal to the metal (so to speak) by holding onto the “A” button.

She won!  She won 3 races in a row… against computer opponents, who had no idea a 2-year-old was playing against them!  Computer opponents don’t take it easy on you.  I’ve played the slot cars game – I have LOST at the game.  I tried the advanced strategies.  I tried to jockey for the best slot, use the speed boosts, and knock my opponents off the track.

Simpler was better.  My daughter, doing nothing but holding “A”, beat the computer consistently.  We tend to overthink, to try to use advanced techniques to get a huge edge.  We don’t need those things!  Sure, my daughter didn’t beat the opponents by a hundred car lengths, or even ten.  She just beat them.

Think simple.  Try basic.  Try boring.  Try consistent, but unexciting.  The results might surprise you.  They sure surprised the heck out of me.

Good to be unskilled?

Have you ever wanted someone to say this about your company: “wow, we are really horrible at doing X”?  And I don’t mean in a thank-god-they’re-admitting-they-have-issues kind of way, but in an “I’m really proud we can’t do that well” kind of way.  Up until recently, I thought the answer to my question was “are you crazy!?  Of course not!  I want us to do everything really well.”

Like many large companies, the ones I’ve been working with are going through a tough time due to the economy.  And of course, that means layoffs.  People I liked were not so lucky.  Eventually you get around to talking with these folks about what the experience of being laid off was like.  I mean, it can’t be fun, but you want to know if it went relatively well.

Of course, it doesn’t go well.  I realize there’s no good way to lay someone off, but there are less bad ways.  I’ve heard horrid rumors of other companies laying people off via email and simply locking the doors to the building.  This experience was nowhere near that far down the scale.

But there are always things you can do better.  For example, the process of laying people off starts first thing in the morning and continues until everyone has been told.  But since you have no warning as to whether you are going to be laid off, those of us who kept our jobs sat around in our offices panicking that we were next.  At some point during the day it was over, and wouldn’t you want to know that?  Employees heard nothing until hours after the last layoff had been done.

Hours, in my opinion, that nobody should have had to spend worrying needlessly.  I know, I know – think of how the people who were laid off felt.  Was it really that bad to leave employees wondering?  No, not really, but it could have been done better.

Later that evening, I considered how lucky we were that the company is terrible at communicating during layoffs.  Why are they so terrible?  Well, it’s a rare occurrence.  If you don’t practice something, then even if you learn from a prior experience you never get to apply those learnings.  If a company were expertly prepared to do layoffs, I’d be a little worried.

Sure, isn’t it great that they are super-capable?  No!  It’s awful!  It’s something they shouldn’t be doing, something that they have rarely had to do.  They should be god-awful at it.  Frankly, it hurt at the time, but now I’m downright pleased.

And this doesn’t apply just to communicating layoffs.  Disaster recovery of all forms might be fair game.  I mean, if you’ve gotten your system, process, product, whatever it is, so reliable that you’re unprepared for when it fails, that might actually be a good thing.

If you fail all the time, you’ll have the people and processes in place to deal with the failure.  You’ll be expert fire fighters, and that’s just not the place you want to be.

I’m not offering companies a free pass for being incapable of dealing with a disaster that is likely to occur – like your servers or network going down – but at some point, if you’ve really gotten good at something, I’d expect you to be bad at dealing with the outlier.

Could it be good to be unskilled at something you shouldn’t be doing in the first place?  I think so.