Why “spend it like it is your own” is a bad mantra

Recently I’ve been hearing people say, fairly frequently, “spend it like it’s your own.” The intention behind this is to not waste the company’s money on things the company doesn’t need. In other words, treat the money as if it were yours: if someone gave you a million dollars, do what you would do with it. The foregone (but wrong) conclusion is that you would spend it wisely.

Seriously? If someone gave you an extra million dollars, are you the person who would set it aside for your kids’ college, fill up an IRA, and make sure you had an adequate rainy day fund? Or would you finally buy that beach house you’ve been dreaming of? The new BMW? The huge TV?

OK, maybe you would, but what about everyone you know? Is everyone you know that responsible? I doubt it. In fact, I’m willing to bet you know a spendthrift or two, or more. People who have to go on vacation every summer but haven’t fully funded their retirement. People with two new cars but $30k in credit card debt. People with the latest gadget, no matter what it is. People who stand in line for the newest iPhone. In short, people who don’t exhibit any self-control. People who can’t distinguish between ‘need’ and ‘want.’

Even if this person isn’t you, these people exist. They’re not bad people, but they have different priorities. Perhaps you recall the cookie study done on a bunch of children (Walter Mischel’s famous delayed-gratification experiments at Stanford)? Essentially, the researchers offered kids the choice of one cookie now or, if they could wait a bit, two cookies later. Guess what: most children couldn’t wait. More surprisingly, as the researchers followed these kids through their lives, the ones who could not delay gratification did less well in life overall (however you measure that). It’s built into many of us to desire what we can have now, whether it is the best choice or not. So with that in mind, the mantra “spend it like it is your own” probably deserves reconsideration.

Two kinds of inaction

I was recently thinking about inaction in response to some event. It was brought to mind by some research I was reading on giving managers the choice to not act (I wish I could remember where I was reading it). It turns out that given an A-or-B choice, managers are more likely to make a decision than if you give them an A, B, or “no decision” choice. In the latter case, they are more likely to not act. I suppose that’s not entirely surprising, since we likely frame our worlds around the options posed to us and don’t think much beyond them. Want someone to take action? Offer them only two paths of action.

As I kept thinking on it, I started to wonder what drove inaction among managers. One possible reason for inaction, and probably the more common one I would guess, is not knowing what to do in response to a situation. You see a project risk, but can’t think of a way to mitigate it, so you don’t.

Then there’s another kind of inaction: you don’t act because you know not to do anything. That is, you observe something happening that might normally drive someone to take action, but because of additional knowledge you know to do nothing. What you’re seeing might be a statistical blip, or real but already mitigated by existing processes, and so on. I don’t think this type of inaction happens often enough. Going back to the original research, we don’t offer the choice of inaction often enough, and given how little we seem to know about software development within our organizations, even when given the option of inaction, we don’t know when we should choose it.

Data is exciting to me in two ways: it shows when something is abnormal, and it also shows when something isn’t. Knowing that something is typical is a great way to know not to take action. We see it in our own lives in little ways. When you have a newborn, any little cough or sneeze is likely to send you racing off to the doctor. After a while you begin to learn what is normal vs. abnormal with your kid. Suddenly, if they get the sniffles, you just tuck them up in a blanket, turn on the TV, and let them rest. Knowledge allows you to choose inaction (and saves you the cost of a visit to the doctor).
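To make that concrete in a software setting, here’s a minimal sketch of letting data justify inaction, assuming simple 3-sigma control limits; the metric, the numbers, and the threshold are all hypothetical, not from any real project.

```python
# Is this week's measurement outside normal variation,
# or just noise we can safely not act on?
from statistics import mean, stdev

def is_abnormal(history, new_value, sigmas=3.0):
    """True if new_value falls outside mean +/- sigmas standard deviations."""
    mu, sd = mean(history), stdev(history)
    return abs(new_value - mu) > sigmas * sd

# Hypothetical weekly defect counts from past iterations.
past_defects = [12, 15, 11, 14, 13, 16, 12, 14]

if is_abnormal(past_defects, new_value=17):
    print("Outside normal variation -- investigate.")
else:
    print("Within normal variation -- inaction is a legitimate choice.")
```

Here 17 defects looks alarming next to the recent average, but it sits well inside normal variation – exactly the kind of knowledge that lets you choose inaction.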

Inaction is free, presuming you know when to choose it. Using data to help you figure out when not to act is a smart investment.

The worse the result, the more you might enjoy it?

A friend recently shared a TEDx talk by Dan Ariely on motivation at work. Now, you’ve probably already read Daniel Pink’s Drive; if not, the basic concept is that money is not a good motivator. Much to the consternation of companies that continue to focus on a strategy of “pay for performance.”

What was interesting about Ariely’s talk was that he discussed the IKEA effect in people’s endeavors. That is, if you do something yourself, you tend to feel better about it (assign it more monetary value, and assume others will too) than if you came by it easily. In fact, in one of Ariely’s experiments, the harder the task, the more value people ascribed to the outcome, even though an external observer gave the results lower value because the outcome was naturally worse (a harder task is harder to do well).

In my mind, this presents a challenging problem for companies. Employees are rewarded by challenging tasks, and their ability to see the quality of the outcome is obscured by how difficult the task was to do. So, in theory, if we assign you a project and you make it difficult for yourself through self-inflicted means, have a miserable time at it, but eventually deliver, you will see that as a better result than if you had gotten there more easily. However, the recipients of your work experience no such effect – your struggles don’t influence how outsiders see the outcome of your work. If it is poor quality, that will be evident.

This sets up a unique challenge for organizations. Things like software processes make projects easier and more predictable. Good for the company but, if Ariely’s research is to be believed, bad for individual motivation. The individual would prefer a more challenging task, perhaps even if the difficulty is self-inflicted – by not following process, for example. After all, in many cases the software we develop is hardly novel, and if approached in a structured, careful manner we arrive at a good solution without a whole lot of drama. But if that doesn’t provide an emotional reward… well, who’s going to want to work that way?

Personally, I think Ariely is on to something. It is potentially an explanation for why people are so turned off by processes, and moreover it suggests that rather than letting people make tasks artificially difficult for themselves, we ought to introduce some sort of challenge to the work that not only motivates but also adds value – for example, specifying much higher performance targets than required. Food for thought.

Where you work may determine how you approach process improvements

Recently I was looking at the process work in a number of teams within an organization. There were process groups within the Project Management Office, Business Analysts, Quality Assurance, and Developers. It wasn’t until I got to see a bunch of different teams all at work at the same time that I realized each team approached process improvement completely differently.

You might say that “if you give a man a hammer, the world looks like a nail” applies here. In the PMO, process improvements had clear charters, project plans, and a sense of urgency around execution – but little to no analysis. After all, project managers are all about execution and ROI, so they focused on the things they knew well and gave little thought to the things they didn’t.

In the Business Analyst group, they developed a taxonomy to refer to each part of the work they already do, and then proceeded, in excruciating detail, to write down everything they knew about the topic. They documented the current process, the desired process, details about how to write use cases (even though that information exists freely out on the Internet). No stone was left unturned in the documentation process – but there was no planning, nor any mechanism to roll out or monitor the changes.

In Development, all process improvements involved acquiring some sort of tool. Whether it was a structural analysis tool, a code review tool, or a log analyzer, there had to be a tool. For developers, the manifestation of process is to implement a tool. After all, that’s what most of software development is – implementing technology to automate some business process.

For QA, there was lots of analysis (after all, that’s what testing really is – analysis of the functionality) but little planning, and usually awkward solutions that created work and relied on more inspection rather than taking work away.

The issue here is that each team did what they were good at, and by doing so failed to produce a complete result. Just like the development projects themselves, a complete set of activities must occur to make process change work. You need to understand the problem and lay out a plan for delivering on it. You must analyze the problem and design the solution. And you must implement the change in a way that makes following the new process easier and better than the old one.

But the key here is that process improvements involve a complete set of activities, and you can’t simply approach process improvement in the same way that you’d approach your siloed job in software development. We all do what we’re comfortable with, but that is a big piece of why we need process improvements in the first place. After all, if you give a man a hammer…

Why does expertise lead to arrogance?

It seems far too often in my recent experience that expertise is producing experts who are arrogant about their capabilities. We encounter it all the time in the workplace… the best programmer who alienates their peers, the project manager who ignores others’ warnings, and the people whose job it is to teach us their skills defending their stance rather than helping those seeking knowledge learn.

It isn’t a topic that I would typically cover, and like many of the more sensitive topics, I don’t wish to call out the individual or interaction that inspired this post. Years ago a friend of mine told me that I “did not suffer fools lightly.” At the time, I took it as a significant compliment. Now I’m not sure why I did. That’s not a compliment for someone who seeks to teach others; it is a substantial insult.

The expert on experts is Philip Tetlock. He spent nearly two decades, if I recall correctly, studying the predictions of experts in politics. It turns out that most experts aren’t much better than non-experts at making predictions. That alone should tell you something about the value of your expertise. But more importantly, Tetlock found that one kind of expert did better than the rest: experts who did not cling to a single unifying theory about the world, and who could take in a diversity of evidence, made considerably better predictions. In other words, experts who lacked arrogance about their current body of knowledge were better experts.

So this is a post directed at those of you who are experts in your chosen field. When someone asks you a question, no matter how it is worded, assume their intent is good and try to not only answer the question as given but to answer the underlying intent. It isn’t your job to defend why you did what you did, but to help others understand your thinking so that they can internalize it. And in the meantime, by taking in these questions and considering the subtleties they create, you become a better expert. So, suffer ‘fools’ lightly because their questions benefit everyone involved.

“But I know the business…”

Don’t get me wrong, I think you should strive to understand the business you’re in, but that can’t be the only thing you do.  I was talking to a recent college graduate about her career choices not that long ago.  She asked me “should I stay in one industry or move around?”  Mind you, her degree was in computer science and she was asking that question from the perspective of an IT employee.  My answer was “move around.”

And that leads me to the title of my post.  A number of friends and coworkers have sung the praises of “knowing the business” as their major strength.  As in, “I’ve been in IT in financial services for umpteen years… I know the business.”  Knowing the business won’t help you in the slightest if you don’t know how to do your job well.  IT is a highly transferable skill, as evidenced by how readily companies offshore work to India, China, Brazil… you name it.  I hate to tell you this, but if any business were so complex that it took you that long to “know the business,” you wouldn’t be in business.  Who has 20 years just to understand what they’re doing?  When did you learn 80% of the business?  Probably in about the first 20% of the time you’ve been there.

But more importantly than that, knowing the business doesn’t help you when the business is presented with a novel situation.  Typically knowing the business means you’ve seen what normally goes on and are familiar with it and its subtleties.  That’s all well and good, as long as the business doesn’t change.  But, when business changes, what you know may be more of an anchor than an advantage.  Novel situations often require novel responses, and someone who knows the business isn’t necessarily going to see what could be because they’ll be mired in the conventional wisdom of how the business used to do things.

So, getting back to my advice to that recent graduate – move around.  And that means if you are going to specialize, specialize in IT, not “the business.”  Pragmatically, if you pigeonhole yourself into a single business, but don’t develop the skills to work elsewhere, what happens if your job is eliminated or the entire industry disappears?  But more importantly, seeing other industries will give you insights that you can transfer to your next job.  Having the view of an outsider means you don’t know that something is “impossible” and might suddenly make it happen.

Richness versus Recall

Alistair Cockburn presents an interesting insight in his presentation “I come to bury Agile, not praise it.”  On slide 12, he presents the richness of the communication channel as an important part of getting information across.  Surely you’ve experienced this yourself: a never-ending chain of back-and-forth emails that was quickly resolved with a single one-minute phone call to clarify.

Therefore, it makes enormous sense to replace communication of low richness with communication of high richness, right?  Well, I’m not sure it’s that black and white.  In order to use information effectively, you not only have to be able to communicate it, but also to recall it when you need to use it again.

For example, you sit down and have a conversation with the user and then turn around to write some code.  The ability to translate what the user asked for into code depends not only on having the conversation but also on remembering all the details of the conversation correctly.

So, do you have an eidetic memory?  Probably not.  How long can you accurately recall a conversation?  Long enough to turn it into code faithfully?  Probably not, either.  You can probably remember the nominal case, but what about all the exception handling you discussed?

Now, I’m not saying you should communicate via email or paper only, since that’s clearly silly, but at the other extreme, you probably shouldn’t communicate only orally, either.  Indeed, merging face-to-face conversation with documentation gives you both the completeness of the conversation and the ability to recall details when you need them.

Your change of mind doesn’t excuse my error

Who doesn’t appreciate dodging a bullet, right?  You know that your project is going wrong – resource issues, a big requirements misunderstanding, a major design flaw… maybe even despite your best project management, proper risk analysis, etc.  Sometimes things go wrong.  In fact, the Standish Group, which publishes the CHAOS report, indicates that as an industry we’ve approached about 70-75% of projects coming in within +/- 10% of budget and schedule, and we don’t seem to be getting much better than that.

That means that 25-30% of the time, we’re going to miss by more than that.  At any rate… let’s say your project is going south and there’s nothing you can do to recover.  You’re going to miss your promised date or budget.

Suddenly, the business changes their mind, maybe in a way that’s unrelated to the issues you’re having.  Phew!  You breathe a sigh of relief, since you can now use the business’ change to reset dates, including enough time to fix the issue(s) you’re dealing with, and come out smelling like roses.

Not so fast, I say.  Dodging a bullet is great, but if you fail to learn from what would otherwise have been a failure, you’re doing yourself a disservice.

It’s like watching a movie where the otherwise doomed hero escapes only through some serendipity.  Sure, it makes for a great movie when a hapless bird flies into the power lines, taking out the power, plunging the enemy into darkness, and allowing the hero to sneak off largely unscathed.  But if that played out in the real world, most of the time the bird would never come along and the hero would be dead.

You can’t count on a random event to save you nearly as often as it happens in the movies.  So, when your potential failure is averted only by a lucky turn of events, still take the time to reflect on the failure that could have been, and learn from it rather than rejoicing that it never came to be.

Heroism works… Sometimes.

One of the big challenges with statistics is that they aren’t a guarantee of anything. For example, projects that don’t do unit testing typically have 35 to 50 percent higher defect densities than projects that do. When one of those “bad” projects gets into integration testing, things are pretty sure not to go well.
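To put a rough scale on that figure, here’s a back-of-the-envelope sketch; the baseline defect density and project size are made-up assumptions for illustration, and only the 35-50% range comes from the statistic above.

```python
# Back-of-the-envelope: what a 35-50% defect-density penalty looks like.
# The baseline density and project size below are assumed, not measured.
baseline_density = 2.0   # defects per KLOC with unit testing (assumed)
kloc = 50                # hypothetical project size

with_ut = baseline_density * kloc
for penalty in (0.35, 0.50):
    without_ut = with_ut * (1 + penalty)
    print(f"+{penalty:.0%} density: {without_ut:.0f} defects "
          f"vs {with_ut:.0f} with unit testing")
```

At this hypothetical scale, that’s 35 to 50 extra defects arriving in integration testing.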

But there’s something worse than ignoring the statistics in the first place. Sometimes, when bad things come to pass, we reach for clichés like “when the going gets tough, the tough get going.”

Just the other day I was discussing one of these projects that had gone south. The QA team had successfully used the data they had to convince the project manager that things weren’t going to play out well. The project manager took that to his manager, who said, effectively, “what have you got against teamwork?” He said this the way a football coach tries to pep up his losing team.

We love the pep talk. How many sports movies focus on an underdog team that pulls together after a rousing speech from the coach? How many movies show the rousing speech followed by the team losing? None that I can think of. Why? Because nobody writes stories about teamwork failing. That’d be a depressing story. But in every game, at halftime, you can bet that both coaches are busily trying to pep up their teams. And that means that fifty percent of the time, more or less, teamwork (and heroism) DOESN’T WORK!

We only see the times it works out and we only remember the times it works out. The losers don’t write history.

So, getting back to this project. There is some chance that the team will pull together and rescue the project. Then the manager will forget the data that predicted they’d be in a mess in the first place, and remember that heroism seemed to solve it. Heroism is a dangerous thing, because either you’re a hero or you’re the dragon’s dinner. Only one comes home to tell the story; the cautionary tale isn’t alive to tell it.

The Wise Man

It is the wise man who knows that he knows nothing, to paraphrase Socrates.  Our knowledge of things is meager, certainly compared to all the things we as a species might ever come to know.  Everything we learn seems to uncover new questions that need answers.  It’s an ever-expanding universe of possibilities, with things we never considered before cropping up all the time.

Yet when most of us start our job each morning, how many of us remember this?  How many of us realize that the way we do whatever it is we do is based only upon (hopefully, at a minimum) the best that we know?  More importantly, how many of us realize that the best we know may not be the best that the world knows?  And how many of us realize that what we collectively know is very likely not the best that the world has yet to discover?

If we sit down and do something, even in a standard way, but never consider that we don’t yet (and likely never will) know the best way, then how do we ever expect to improve?  Improvement comes from recognizing a gap between the result we are getting now and the result we desire to get.

If we have KPIs and are meeting them, should we equate that with knowing the best way to do something?  Why is it that change is so hard when things are going well, and yet we’re desperate for change when doing the same thing we’ve always done suddenly stops making customers happy?

The goal isn’t “good enough.”  The goal is perfect, and the first step you can take towards it is realizing that we don’t know how to achieve it.  We need to discover it.  We need to have a curiosity about the world – about what our competitors are doing, about what academia is learning – and a recognition that all those things we could learn might get us closer to perfection, but never to perfect.

It’s the wise man who knows he knows nothing, but it’s the recognition of the gap that should drive us to constantly learn and therefore improve.