Heroism works… Sometimes.

One of the big challenges with statistics is that they aren’t a guarantee of anything. For example, projects that don’t do unit testing typically have 35 to 50 percent higher defect densities than projects that do. When that “bad” project gets into integration testing, things are pretty much certain not to go well.

But there’s something worse than ignoring the statistics in the first place. Sometimes, when bad things come to pass, we reach for clichés like “when the going gets tough, the tough get going.”

It was just the other day that I was discussing one of these projects that had gone south. The QA team had successfully used the data they had to convince the project manager that things weren’t going to play out well. The project manager took that to his manager, who said, effectively, “what have you got against teamwork?” He said this the way a football coach tries to pep up his losing team.

We love the pep talk. How many sports movies focus on an underdog team that pulls together after a rousing speech from the coach? How many movies show the rousing speech followed by the team losing? None that I can think of. Why? Because nobody writes stories about teamwork failing. That’d be a depressing story. But, in every game, at halftime, you can bet that both coaches are busily trying to pep up their teams. And that means that, fifty percent of the time, more or less, teamwork (and heroism) DOESN’T WORK!

We only see the times it works out and we only remember the times it works out. The losers don’t write history.

So, getting back to this project. There is some chance that the team will pull together and that they’ll rescue the project. Then, the manager will forget the data that predicted that they’d be in a mess in the first place, and remember that heroism seemed to solve it. Heroism is a dangerous thing, because either you’re a hero, or you’re the dragon’s dinner. Only one comes home to tell the story; the cautionary tale isn’t alive to tell it.

The Wise Man

It is the wise man who knows that he knows nothing, to paraphrase Socrates. Our knowledge of things is meager, certainly compared to all the things we as a species might ever come to know. Everything we learn seems to uncover new questions that need answers. It’s an ever-expanding universe of possibilities, with things we never considered before cropping up all the time.

Yet, when most of us start our job each morning, how many of us remember this? How many of us realize that the way we do whatever it is we do is based upon (hopefully, at minimum) the best that we know? More importantly, how many of us realize that the best we know may not be the best that the world knows? And how many of us realize that what we collectively know is very likely not the best that the world has yet to discover?

If we sit down and do something, even in a standard way, but never consider that we don’t yet (and likely never will) know the best way, then how do we ever expect to improve?  Improvement comes from recognizing a gap between the result we are getting now and the result we desire to get.

If we have KPIs and are meeting them, should we equate that to knowing the best way to do something? Why is it that change is so hard when things are going well, and yet we’re desperate for change when doing the same thing we’ve always done suddenly stops making customers happy?

The goal isn’t “good enough.” The goal is “perfect,” and the first step you can take towards it is realizing that we don’t know how to achieve it. We need to discover it. We need to have a curiosity about the world – about what our competitors are doing, about what academia is learning – and a recognition that all those things we could learn might get us closer to perfection, but never all the way to perfect.

It’s the wise man who knows he knows nothing, but it’s the recognition of the gap that should drive us to constantly learn and therefore improve.

Centralization doesn’t work with specialists

There’s always an ongoing debate about whether resources should be centralized or distributed. Do development teams belong to the business unit they work for, or to a shared IT function that the business effectively pays for software development services?

I’m going to argue that it really doesn’t matter. The theoretical advantages of centralization are (among others, I’m sure) shared standards and efficiency. Shared standards come from having a chain of command that can enforce them. Efficiency comes from creating resource flexibility. Too little demand for Product X? Shift the resources to Product Y, right?

Well, not so fast. First of all, standards are only useful if people follow them. A central organization that won’t enforce standards might as well not exist at all. Secondly, unless you break down silos within the central organization, you don’t create any resource flexibility.

This reminds me of a (now defunct) company I worked for very early in my career. It was a small but successful consulting company that couldn’t get some of the major contracts because large potential clients wouldn’t do business with a tiny company. So, to combat this, the company joined forces with 3-4 other companies and formed what was effectively a holding company. That holding company created the appearance of a larger company, which could now bid on bigger projects. But, underlying it all, it was still 4-5 companies that acted entirely independently… with the exception that they now collectively paid the salary of a figurehead President and CEO.

Because the company never really integrated anything, we gained no flexibility in resources or in the physical locations we could work from, no larger talent pool, and so on…

Centralization doesn’t add value unless you actually change the way the teams are organized. As long as individuals continue to specialize in a limited set of projects, resource slack in one area can’t be used by another. Any efficiency to be gained stays trapped behind artificial barriers.

Instead, when making the decision to centralize, determine whether the resources will be able to generalize their skills so that they can be used more efficiently.  If not, you can centralize if you want, but it won’t make a bit of difference to the company.

Odds are not guarantees

Ever bet on a horse race? Me neither. But I’m sure you’re aware of odds. Odds-makers describe the chances of everything: what are the chances a horse will win, place, show, etc.? Horses with long-shot odds pay out better than horses with good odds. Why? Because in order to get a big win, you have to bet on a rare event, like a lame old nag winning the race, something nobody predicts will happen.

Even when odds-makers give a horse good odds of winning, there is no guarantee that the horse will win. If there were, betting on horse races would be silly and pointless. If the outcomes were always known, there would be no big wins to be had.

This is generally true of all statistics. Statistics refer to the odds of an event. When some report says “people who smoke have a 150% increased risk of developing cancer” (I’m making those numbers up), it is talking about the odds. I’m sure you know someone who smoked their entire life and never got cancer. The fact is, nobody debates that the odds were against them… except the smoker, who says “look at me, I smoked my entire life and nothing bad ever happened to me!”

It may very well be true. But that person only experienced one lifetime. If they had to live their life 100 times over, the odds are they’d develop cancer more often than not. The same holds true for software projects. If we understand the factors that make project failure more likely (like not having a risk management plan), there is still no guarantee that the project will fail. In fact, many times it will succeed, and remembering that, people won’t change their behavior.

But (assuming your data is good), over the long run, projects that don’t do risk management will fail more often. Nobody may notice a project here or there, but over the long run, ignoring statistical realities means doing harm to your company that you could clearly avoid.
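
To make the long run visible, here’s a minimal simulation sketch in Python. The failure rates are invented (just like my smoking numbers above); the point is only that a gap in the odds that’s invisible in any single project becomes unmistakable over a thousand of them.

```python
import random

# Hypothetical per-project failure odds. Illustrative numbers only,
# not real industry data.
P_FAIL_WITH_RISK_MGMT = 0.20
P_FAIL_WITHOUT_RISK_MGMT = 0.35

def count_failures(p_fail: float, n_projects: int) -> int:
    """Simulate n_projects independent projects; return how many fail."""
    return sum(random.random() < p_fail for _ in range(n_projects))

random.seed(42)

# One project proves nothing: either approach can succeed or fail.
print("Single project without risk mgmt failed:",
      count_failures(P_FAIL_WITHOUT_RISK_MGMT, 1) == 1)

# Over many projects, the odds assert themselves.
n = 1000
print("Failures per 1000 with risk mgmt:   ",
      count_failures(P_FAIL_WITH_RISK_MGMT, n))
print("Failures per 1000 without risk mgmt:",
      count_failures(P_FAIL_WITHOUT_RISK_MGMT, n))
```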

Unfortunately, odds also mean that sometimes you will press someone to take action based on the odds, they won’t, and things will turn out OK. They’ll use that data point to say “hah! See, you were wrong, I didn’t do my risk management plan and the project turned out fine.”

To that you should respond, “odds are not guarantees; we do these things because they improve the odds of success.” Managing the odds is what good management is about. Nobody can ever guarantee you a good outcome or a bad one. There are miraculous results, like people surviving plane crashes, that nobody can explain, when the odds are clearly against survival. There are also miraculous results, like projects going well, that nobody can explain, when the odds of the project going well are clearly against it. Don’t let an unexpected success dissuade you from taking the odds into account.

In the long run, and that’s what really matters, the odds will catch up to you.

If the action isn’t different…

Our family has two pets – a cat and a dog. Both are getting older, and like humans, pets face rising health care costs as they age. End-of-life care, or attempting to prolong life, is particularly expensive. Atul Gawande tells some of that story in his article “Letting Go.” In addition, I happened to be listening to NPR the other day, where a guest pointed out that 66% of Medicaid costs were incurred by just 25% of the people – mostly elderly in end-of-life care situations. For pets, we often have to make hard decisions about what care is worth giving and what is not.

Please don’t take this as cold-hearted about my pets, because we do love them and have enjoyed them in our lives for 11 years, but I’m a data person and I listen hard to the facts. I accept, as much as we want to think otherwise, that we’re unlikely to escape the statistical realities. But, I digress…

Our cat, Lily, had stopped eating. We took her to the vet for an exam, X-rays, blood work, fluid injections and more to try and get a clue as to what was going on. After the tests came back, the vet called me and gave us two possible diagnoses. One was some sort of gastrointestinal issue; the other was cancer. Her opinion (I’m not sure it was actually backed up by true statistics) was that it was a 50/50 chance to be one or the other. She proposed that we bring Lily back in for an ultrasound, which is more sensitive to soft tissues than an X-ray.

I asked the vet, “once we know the outcome, what are the treatments?” In reply, she told me that if it was cancer, she would give Lily steroids to improve her energy level and appetite (plus, apparently, steroids make cats thirsty, so she’d also drink more and avoid dehydration). But, ultimately, that would only help for a short while; she would decline again, and there’d be little we could do beyond that. If, however, it was an intestinal issue, the steroids would improve her eating and energy level (as well as her drinking) and she’d be on them for the rest of her life, however long that might be.

What value would the ultrasound provide, I wondered. If it’s cancer, the treatment is steroids. If it’s not cancer, the treatment is still steroids. The only difference is whether it would work for a short time or a long time. So I declined the proposed ultrasound and told her to prescribe the steroids.
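
That’s the textbook “value of information” test, and it’s worth stating plainly: a diagnostic step only has value if at least one of its possible results would change your next action. Here’s a minimal sketch of that check; the names and structure are mine, purely for illustration (this is obviously not veterinary software):

```python
# A test is only worth running if at least one possible result would
# change the action you take. Diagnoses and treatments are from the
# story above; the code structure is illustrative.

def best_action(diagnosis: str) -> str:
    """The treatment we'd choose given each possible diagnosis."""
    return {
        "cancer": "steroids",      # helps briefly, then little more to do
        "intestinal": "steroids",  # helps long-term, continued for life
    }[diagnosis]

possible_results = ["cancer", "intestinal"]
distinct_actions = {best_action(result) for result in possible_results}

if len(distinct_actions) == 1:
    print("Every result leads to the same action: skip the test.")
else:
    print("Some result changes the action: the test has value.")
```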

The reality is, although it might be interesting to have a diagnosis, either outcome would put us on the same path of treatment. The avoided cost of the ultrasound, which would run several hundred dollars, could then be spent directly on steroids and specialized food. Being the optimist, I picked up the steroids and popped off to the pet store for a whole case of special cat food. My wife thought I was crazy coming home with a whole case.

Of course, just 2 days ago we used up the last of that case, and I bought another case of the food at the store. I’m glad that the issue appears to be intestinal and not cancer. But the story applies to all kinds of decisions. If you take a step to analyze, collect data, etc., but regardless of the outcome it leads to the same next action, skip the analysis. It’s just a waste of time.

Time, in my case, that could be better spent not traumatizing our cat and instead cuddling with her for however many more months or years of time we get.

“Escalate” is not risk management

I was reading an old risk mitigation plan that someone (it may have been me, but I hope not) wrote. I don’t know why I was reading it, but that’s pretty irrelevant. In it, there was a line item for a potential risk that data wouldn’t be available for testing. Under the mitigation plan it read “escalate to management.”

This is not a good mitigation strategy, yet as I mused over it, I realized that it is probably a strategy that gets written more often than it should be. There are so many issues; where to begin:

  1. It’s reactive. In the event that data is not available, we will escalate to management, it says. That’s not mitigating a risk; that’s reacting when a risk becomes an issue.
  2. Management isn’t going to be able to help you.  Management doesn’t have a magic wand to make the data appear.  Management could go ask/yell for some data, but it probably isn’t going to happen all that much faster.
  3. Even if you escalated the risk to management BEFORE it became an issue, it’s still not a great strategy.  It’s a punt.  “Management will fix this for me” is what it says.  Again, unless you’re willing to ascribe magical powers to management that they just don’t have, it’s not going to work.  To manage a risk, it has to be more than proactive, it has to be effective.

Yes, there will always be risks that become issues that nobody ever imagined – the “unknown unknowns,” as it were. Risk management is about managing the known unknowns. For example, in Nassim Taleb’s book The Black Swan, he writes about the casinos. They manage all kinds of known unknown risks – like who is going to try to count cards, or steal, etc. But they didn’t have a plan for the unknown unknowns, like the fact that one of Siegfried and Roy’s tigers was going to maul one of them.

As good as Mr. Taleb’s point is (there are extreme events that could happen), if you don’t at least manage the known risks, you’ll become a victim of one of those long before you get the chance to be wiped out by a huge unknown risk. Admittedly, it doesn’t sound like much of a reason to manage risk at all, but like all risks, the unknown unknowns might happen but aren’t guaranteed to happen, and thus improving your chances of success by really managing risks is still worth it.

All that said, reacting to risks becoming issues is not risk management.  Mitigation plans must provide an alternate path which may be less ideal, more expensive or even somewhat slower than your original plan, but not as bad as not managing the risk at all.
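
To make the contrast concrete, here’s a sketch of what a risk-register entry might look like with a real mitigation alongside a contingency. The fields and wording are hypothetical, not pulled from any particular tool or from the plan I was reading:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    mitigation: str   # proactive: lowers the odds before anything goes wrong
    contingency: str  # the alternate path if the risk becomes an issue anyway

# The entry I found, restated: reactive, and no alternate path at all.
escalation_only = Risk(
    description="Data may not be available for testing",
    mitigation="(none)",
    contingency="Escalate to management",
)

# What a real entry might look like (the specifics are invented):
real_plan = Risk(
    description="Data may not be available for testing",
    mitigation="Request data extracts four weeks early; verify receipt weekly",
    contingency="Generate synthetic test data: slower, less ideal, but workable",
)
```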

What aren’t they telling you?

For sure, many companies fail to even listen to what customers are telling them, but what about what they aren’t telling you?  Is the absence of complaints adequate to infer that the software is good?  I would say no.

Have you ever downloaded a little application hoping it would do something for you, only to find out that it does close to what you want, but not quite? You play with it for a bit, and then you’re back out onto the web to search for another solution. In the meantime, you quietly uninstall the software that didn’t meet your needs.

Did a bug get reported to the company? No, you simply stopped using it and said nothing. If you’re a company that only listens to what customers report to you as bugs, you’re missing a huge piece of the picture. Customers won’t necessarily go out of their way to tell you what they don’t like about your software if it isn’t meeting their needs. They’ll simply move on. They report bugs when they (at least, in theory) like what your software can do and want it to work better. When it’s a non-starter, you’re not likely to hear much at all.

And then you spend time wondering why you aren’t getting rich off your great product. Maybe it just barely (or greatly) misses the mark on what your target market wants. How will you ever know if you never go out there and ask?

I run into, from time to time, software packages that don’t meet my needs, and most of the time they quite agreeably uninstall themselves and are gone. Every once in a while, the software takes a moment upon uninstall to ask me for feedback, but it’s a vague, open-ended comment box, if anything. Now, not that you want to annoy your potential customers on uninstall as well, but it might be worth it to at least narrow down why they uninstalled and then ask for some comments to home in on the issues at hand.
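
It doesn’t take much. A short, categorized exit survey with the free text second yields feedback you can actually aggregate. A minimal sketch, with categories I’ve invented purely for illustration:

```python
# A minimal uninstall survey: categorized reasons first so responses can
# be aggregated, free text second. The categories here are invented.
UNINSTALL_REASONS = [
    "Didn't do what I needed",
    "Too hard to use",
    "Too slow or too buggy",
    "Found a better alternative",
    "Only needed it once",
]

def exit_survey() -> dict:
    """Ask one multiple-choice question, then an optional comment."""
    for number, reason in enumerate(UNINSTALL_REASONS, start=1):
        print(f"{number}. {reason}")
    choice = int(input("Why are you uninstalling? (1-5): "))
    comment = input("Anything else you'd like us to know? (optional): ")
    return {"reason": UNINSTALL_REASONS[choice - 1], "comment": comment}
```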

Regardless of how you get your feedback, bug reports are an inadequate way to discern what you need to do.  Listen to what customers are telling you, but then go out and find out what they aren’t telling you as well.

King Philip Came Over From Great spaces.

If you remember high school biology as fondly (or maybe not as fondly) as I do, then the mnemonic “King Philip Came Over From Great spaces” has a certain familiarity to it – Kingdom, Phylum, Class, Order, Family, Genus, species.

All the way back in high school, despite enjoying science greatly, I thought this classification system was a whole bunch of nonsense. After all, you and I can both point to a dog and know exactly what we’re talking about, right? In many cases, one might argue that such a rigorous classification system seems silly. Indeed, if you say K-9 or pooch and I say dog, we still know what we’re talking about.

So what would you make of me saying “I had dolphin for dinner last night”? Some of you might be horrified, for indeed a dolphin is an ocean-going mammal that we’re largely quite enamoured of. However, dolphin is also the name of a fish, which people eat. Suddenly, it might be useful for me to distinguish between Tursiops truncatus (adorable, but not so delicious) and Coryphaena hippurus (delicious, but not so adorable).

Science is exacting for a reason.  It minimizes confusion.  And while the names are somewhat esoteric, their purpose even to outsiders is clear… make clear, unambiguous distinctions between two or more things where familiar names can confuse.

Silly as it may seem, adopting something like this within your business makes sense as well.  Take projects, for example.  Typically we manage projects through some tracking system, often so we can bill our time appropriately.  But, then we also have project artifacts like requirements, design documents, etc.  Wouldn’t it be nice to be able to locate everything for a project easily somehow?  We so often want to go back and find artifacts to look at lessons learned, to compare certain behaviors to outcomes, etc.  And yet, we fail to name our projects consistently.  Some people use the full formal name of the project that’s in the billing system, others use a shortened version and still others abbreviate it to just a few letters.  With no consistency, suddenly it becomes quite hard to find everything.

That’s what a scientific nomenclature is designed to solve. Everyone refers to the same thing the same way. It eliminates confusion, and it’s a simple step you can take in your processes and projects to minimize confusion and the time wasted looking for something that’s right in front of your nose but that you don’t see because it’s inconsistently labeled.
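
A lightweight version of that nomenclature for projects is a single canonical identifier format, used everywhere and enforced by a trivial check. A sketch, where the format itself is something I made up for illustration:

```python
import re

# One canonical project identifier, used in the billing system, document
# titles, and folder names alike. The format below is hypothetical; the
# point is that there is exactly one format, and it's checkable.
PROJECT_ID = re.compile(r"^[A-Z]{2,4}-\d{4}-[a-z0-9-]+$")

def is_canonical(name: str) -> bool:
    """True if the project name follows the agreed convention."""
    return PROJECT_ID.match(name) is not None

assert is_canonical("ACME-2012-billing-rewrite")
assert not is_canonical("Billing Rewrite (Phase 2)")  # informal alias, rejected
```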

A list of reasons not to do performance reviews

Dr. W. Edwards Deming was not a fan of performance reviews. And not until today, when a few of us got into a heated debate over them, did I understand why. At its simplest, the debate surrounding performance management fell into two camps that you’ll probably recognize as somewhat aligning with political philosophies. One camp was “I work hard, and it makes me mad when someone who doesn’t work as hard as me gets the same reward.” The other camp was essentially the opposite – “it doesn’t matter how hard you work, the company succeeds or fails as a whole. If the company goes out of business, it doesn’t matter if you were great; the company still failed.”

Obviously, I’m in the second camp. Having shared goals and shared success or failure motivates people to do things for the greater good, even if doing something different might personally benefit them. Case in point: rewarding heroism. Oftentimes, the best employees are the ones who quietly work without ever starting a single fire. The “heroes” are the ones we see, racing in to put out the fire that they probably started. Sure, in a company that values individual accomplishment, it’s the hero who looks great, but the benefit they got came at the cost of the fire in the first place. Quietly but effectively working away doesn’t look like much. It’s a non-event, and it’s hard to reward non-events.

Bottom line: we succeed or fail together, so that’s my number one reason not to do performance reviews. But, as we argued this through, I’d like to give you another list of reasons, this time more related to human relations, that also makes a case against performance reviews.

  1. To cite Daniel Pink’s Drive, people aren’t motivated by money. They’re motivated by having purpose, autonomy and the ability to master something. Performance reviews focus on how much money (either bonus or raise) I get for my work. It’s a crummy motivator.
  2. Avoid the stigma of being placed in a middle bucket or lower. Every time you rate performance on any scale, there’s a middle group (and a lower one). You could use a 0-100 scale, and being rated 50 or less would be a problem for people. Likely, being rated 90 or less would be an issue, since it reminds us of grades in school. You could use an under-performs, performs, exceeds, outstanding set of categories (more or less), and people will be unhappy not to be in the exceeds or outstanding buckets. Nobody wants to be “average,” even though statistically speaking performance is distributed around the average. Chances are, in many if not most ways, people are average.
  3. Avoid the disconnect of “I think I exceed expectations, but you don’t agree.” Why even go down that path? What matters is that the company is meeting its goals, and putting people in buckets creates opportunities to disagree on where a person sits. It’s an easy conversation to tell an outstanding person who thinks they’re just average that they’re better than that, but the other conversations aren’t so easy. If you’ve got a real under-performer, you probably shouldn’t be waiting until a formal performance review to deal with that issue.
  4. Avoid getting forced into “raising the bar.” Over time, tenured employees will become more capable, but the needs of the company, due to competition, will also increase. What once was an exceptional performance is now expected. Just as in Kano analysis, what once was a delighter is now a must-be.
  5. Avoid unequal comparisons.  Do you compare people’s performance against company goals or against their own prior performance?  If the person exceeds those goals, they probably always will.  If the person makes great strides, but isn’t up to the company’s goals, should they be punished for making major progress?
  6. Avoid manager-to-manager inconsistency. This is one of my own issues. I set very high standards, and in the past the feedback has been that I expect too much. And yet, my employees have always risen to meet those standards. But other managers don’t set standards as high as mine, and so depending on the team you are in, the standards change. However, most teams get the same amount of money to give out for bonuses or raises, so why hold my team to a higher standard if no extra reward comes to the team for meeting it?

I’m sure I could go on. The HR issues are limitless, and the simplest solution may be to just stop doing performance reviews. There’s evidence that money doesn’t motivate, though a lack of money does demotivate. Pay enough to take money off the table, which probably means giving raises that maintain purchasing power. Reward through promotions when the company has a need for a greater role, and differentiate that way. For bonuses, we succeed or fail together.

Good leaders don’t just provide the direction

This headline is probably old news to almost everyone, but it couldn’t hurt to reiterate it one more time. We tend to think of leaders as visionaries, not necessarily do-ers. It comes as no surprise that many who are anointed as leaders provide the direction but lack the ability to see whether that direction has been followed.

Organizational inertia is strong, so when a new manager arrives at the organization, they may forget that there are two ways employees can make them happy. The first way is to do what the leader asks: to follow the leader’s vision and try to make it a reality. If that happens, and the vision is a good one, the organization can thrive through a change of leadership.

However, there’s another choice.  Instead of actually implementing the vision, you could simply tell elaborate stories about how things are changing without ever doing anything at all.  That’s an organization’s inertia.  It’s easier to tell stories about how doing the exact same thing as we were before is now the new thing than it is to actually implement the new thing, whatever it may be.

If, as you lead the organization, you think you can sit high on your throne and expect things to happen, ask yourself: why would anyone do anything differently if I have no ability to observe that it’s different?

As the head of a software organization, you can’t readily visit the factory and see parts being made.  You’d think this precludes you from “going to the gemba.”  Not so.  You can see the effects of your change through new artifacts being created by the process, via audits and changed measurement systems.  You’ve got to get out there and ask for those things, for people to provide proof that things are being done differently than before.  

It’s too easy to stay in your comfort zone – both the employee who can pretend to change without changing and the manager who can be content to be told a story.