The courage to be honest

Some months ago, I was working with a client who wanted to set up a website with social commerce elements. (That’s Web 2.0 in fancy words.) They only seemed to have a very rough idea of what they wanted, so I asked them right at the start of the meeting: "Why do you want social commerce?"

Their answer was interesting, and one that I had not expected. They said, "We want to project the image of an honest and open organisation."

Hmm. Fair enough.

So we went on with the meeting, discussing what they could do with blogs, how commenting would work, and so on. The main thing was to open up the site for the bank’s customers to talk freely.

At some point, one of the client’s team indicated that profanity and abusive comments would need to be filtered out, so moderation would be important. "We don’t want to become liable for content that is on our site."

Fair point. While discussing that, another chipped in, saying "True. We will also need to monitor negative comments. We don’t want our site to have negative comments about our products."

A brief silence.

Many nods.

And the conversation continued.

I was too stunned to butt in immediately. But after a few minutes, I raised the point about negative comments. "You want to project the image of an honest and open organisation. If you filter out comments that say anything bad about you, how are you going to achieve that?"

They thought for a short while, and someone said: "Yes, the users will probably find out about it."

You can’t project an honest image unless you are honest. That means being honest about the good as well as the bad. Honesty is irrelevant for good news — no one lies about good news. Are you honest when delivering bad news? That requires courage.

This is worrisome. Someone saying "We don’t want negative comments" is a bit of an issue. But their reason for avoiding them is just as worrying. I had hoped that it would be "That’s not what an open and honest organisation does". Instead, it was "The users will probably find out about it."

This sort of behaviour stems from insecurity. It’s what keeps us late at work, not wanting to be the first to leave. It’s what makes us say "Yes" to things we would really rather say "No" to.

I remember a time when we were making slides late in the night. I finished mine quickly, and took printouts for the project leader to review. He word-smithed it on paper, I typed it back in, and took a printout again. (Yeah, he could’ve edited it himself. But…) And when all of that’s done, I’m still waiting, not wanting to "leave the team behind". I’m a team player after all.

It’s like drugs. You want to fit in. Be a team player. If the team’s doing it, you do it too.

These days, I’m the first to leave from work. Sometimes, it’s late when I leave, but I’m always the first to leave. And it hasn’t made any difference. At least not that I can tell.

A lot of the fear is in the mind, frankly.

There also was a time when I couldn’t say "No". When I left BCG, I spoke to a partner during an exit interview about how I wanted to work fewer hours. He thought the problem was more fundamental.

"Anand, knowing you, you’re the kind of person that will end up working hard no matter where you are. So will the move really make a difference?"

"The difference, James, is that here I’ve set up an expectation of saying ‘Yes’. I’ve gotten into the habit, and I’ve gotten others into the habit. At least in a new place, I’ll have a fresh start and set new expectations."

That’s happened, fortunately. These days, I consistently say ‘No’. With some folks, it’s easy.

"Anand, would you be able to help out with this?"

"No, sorry." (with a "please excuse me" smile on my face.)

Some people still scare me, though. (These are the aggressive Type A personalities that it is my occasional misfortune to come in contact with.) And when that happens, I lie.

"Anand, we need this proposal out by Monday. Could you help out over the weekend?"

"Sorry, have plans for the weekend. I’m visiting a cousin at Brighton."

No, I don’t have a cousin at Brighton. I’m just scared to say, "Sorry". It didn’t matter, though. The fear is real, but it still is only in your mind.

It’s the same with businesses. We’re collectively scared to admit something’s wrong with us, or that we can’t do something. I went for a meeting with a partner recently. The client mistook us for operations consultants. (We’re IT consultants.) So he asked us what operations experience we have. Our response should’ve been "None. We’re IT consultants."

Instead, our response was "Oh, we have several years of experience in the organisation. We’re this, we’re that, we’re great."

I don’t think we’d have been thought less of if we’d said "None". And we were found out in the next meeting anyway.

It’s the same about opening up to negative comments. If somebody makes a negative comment, it’s okay! Not that many people care. Hushing it up makes it worse (like for instance with BA and Virgin recently). Lying about it might work, but only for a while. The real problem with lying is not that you might get caught out — it’s that you’ll get into the habit and, in the long run, will get caught out.

For me, the best cure for this sort of fear is the firm belief that the world cares a lot less about us than we think. It’s okay to be a loser. No one cares but us. But it’s less okay to be a liar.

Resolving the Prisoner’s Dilemma

If you’ve ever taken a course in Economics, and it discussed Game Theory, you may be familiar with The Prisoner’s Dilemma. Roughly, this is the problem.

Assume you possess copious quantities of some item (money, for example), and wish to obtain some amount of another item (perhaps stamps, groceries, diamonds). You arrange a mutually agreeable trade with the only dealer of that item known to you. You are both satisfied with the amounts you will be giving and getting. For some reason, though, your trade must take place in secret. Each of you agrees to leave a bag at a designated place in the forest, and to pick up the other’s bag at the other’s designated place. Suppose it is clear to both of you that the two of you will never meet or have further dealings with each other again.

Clearly, there is something for each of you to fear: namely, that the other one will leave an empty bag. Obviously, if you both leave full bags, you will both be satisfied; but equally obviously, getting something for nothing is even more satisfying. So you are tempted to leave an empty bag. In fact, you can even reason it through quite rigorously this way: "If the dealer brings a full bag, I’ll be better off having left an empty bag, because I’ll have gotten all that I wanted and given away nothing. If the dealer brings an empty bag, I’ll be better off having left an empty bag, because I’ll not have been cheated: I’ll have gained nothing but lost nothing either. Thus it seems that no matter what the dealer chooses to do, I’m better off leaving an empty bag. So I’ll leave an empty bag."

The dealer, meanwhile, being in more or less the same boat (though at the other end of it), thinks analogous thoughts and comes to the parallel conclusion that it is best to leave an empty bag. And so both of you, with your impeccable (or impeccable-seeming) logic, leave empty bags, and go away empty-handed. How sad, for if you had both just cooperated, you could have each gained something you wanted to have. Does logic prevent cooperation? This is the issue of the Prisoner’s Dilemma.

There’s nothing wrong in the logic, actually. The key assumption is that it is clear to both of you that the two of you will never meet or have further dealings with each other again. If you’re never going to deal with someone again (and hence there is no question of retribution or any fallout), you really should cheat.

An aside. During my first few days at IIT, two third-years were ragging me about my stance on pre-marital affairs. After trying my best at defending the moral standpoint, I finally confessed that it was only the fear of the after-effects that worried me.

"There’s this beautiful naked girl in your room," they said. "You are guaranteed no repercussions. What will you do?"

"No repercussions?"

"None whatever."

I thought for a while. "I’ll flip a coin." 🙂

Anyway, the aside aside, the solution to the one-off Prisoner’s Dilemma is for both people not to cooperate. If contracts are not enforceable, we’re all better off not trading. If there are no cops, we’re individually better off stealing.

But, of course, that’s not true in the real world. Most situations are repeatable, and you do tend to meet people again. Those you cheat may even have a motivation to meet you again.

This is the iterated Prisoner’s Dilemma. Douglas Hofstadter wrote about this in the May 1983 issue of Scientific American in his Metamagical Themas column (which, by the way, is brilliant). While the Prisoner’s Dilemma has a simple solution (don’t cooperate), the iterated Prisoner’s Dilemma does not have a predetermined solution. At best, you can have a strategy. (That can be proven. If there was a predetermined solution, your opponent would know it, and could beat you. One of those cases in mathematics where you are not guaranteed a solution.)

But is it possible for cooperation to emerge in an iterated Prisoner’s Dilemma? Can it beat competitiveness?

Robert Axelrod of U.Mich conducted a computer tournament to find out. He invited strategies from game theorists and wrote them as BASIC programs. Each program would be pitted against another. Every time, it could respond with either C (cooperate) or D (defect). Cooperation gets both programs 3 points. If one defects, the defector gets 5 points and the cooperator gets nothing. Both defecting gets 1 point each. Axelrod ran each program against every other many times, and added up the scores.

The program that won was called TIT FOR TAT. It was the shortest program (4 lines of BASIC code). Here’s its strategy:

Cooperate the first time.

Do what your opponent does thereafter.

Think about it. TIT FOR TAT starts by being nice, and stays that way, unless you defect. Then TIT FOR TAT punishes. If you repent, TIT FOR TAT forgets and forgives. Interestingly, TIT FOR TAT can never win a game. It can, at best, draw a game, but never score more points than its opponent. It goes for winning the war by losing battles.

That’s four traits:

  1. Being nice
  2. Punishing immediately
  3. Forgiving immediately
  4. Willing to lose battles
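Those four traits are easy to see in code. Here’s a minimal sketch in Python (not Axelrod’s BASIC; the 200-round match length and the ALWAYS DEFECT opponent are my own choices), using the payoffs from the tournament: 3 each for mutual cooperation, 5 against 0 when only one defects, 1 each for mutual defection.

```python
# Hypothetical re-creation of the tournament scoring. Payoffs as in the
# article: C/C -> 3 each, D against C -> 5 vs 0, D/D -> 1 each.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_moves, their_moves):
    # Cooperate the first time; thereafter do what the opponent just did.
    return 'C' if not their_moves else their_moves[-1]

def always_defect(my_moves, their_moves):
    return 'D'

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (199, 204): loses the match, narrowly
print(play(tit_for_tat, tit_for_tat))    # (600, 600): mutual cooperation
```

Against a pure defector, TIT FOR TAT scores 199 to 204: it loses the battle by a single sucker’s payoff. Against a fellow cooperator, both score the full 600. Across a whole round-robin, those steady rounds of 3 are what win the war.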

After publishing these results, and having learnt a lot about different strategies, Axelrod repeated the tournament. Four times as many entries poured in, this time from world experts on game theory. The entries included TIT FOR TWO TATS, an improved TIT FOR TAT that does not fall into a C – D – C – D cycle when playing against TIT FOR TAT.

TIT FOR TAT won again.

To date, TIT FOR TAT remains an unbeaten individual strategy, and people believe it may be optimal.

(PS: I say individual strategy because multiple programs can collude, deliberately losing to one another to make sure one of them wins. That sort of team can beat TIT FOR TAT. But no individual program does.)


In our first term at IIMB, we played a game in our organisational behaviour class, intended to help us understand inter-departmental cooperation (or rivalry). The class was split into two ‘companies’. Each company had four divisions.

The game had 10 rounds. In each round, every division could choose to cooperate or defect. If everyone cooperated, each division made 3 points. If any division defected, it would make 5 points, while all cooperating divisions made 0 points. If all divisions defected, they would all make 1 point. The divisions were not allowed to talk to each other.

The aim was to beat the other company. (Not other divisions, within or outside the company.)

Our company started off with 3 Cs and a D, which quickly deteriorated to 1 C with 3 Ds by round 6. At round 7, it was 4 Ds.

Before round 8, we were all given a chance to have a huddle. A representative from each division would come together and talk things through. We promised to cooperate, and thereafter, it was 4 Cs to the end.

We lost the game. The other ‘company’ had started off with 1C and 3Ds, but had learned to cooperate pretty quickly, aided, in Prof N M Agrawal‘s words, by "… Aparajita threatening the other divisions with her glares."


The reason there’s a Prisoner’s Dilemma is the inability to reliably communicate or enforce behaviour. Having a chat helps. Having laws that punish you helps. Having a bully threaten you helps. The thing is, you need a signal of some kind. And it needs to be an early signal, or you end up waiting till round 8 and lose the game.

If you’re ever in a situation where cooperation helps everyone, but it’s not in your interest to cooperate, here’s what seems to work:

  • See if you can agree to cooperate beforehand
    1. Have a chat
    2. Find policies that punish defectors
    3. Threaten if required
  • If not, then try to force cooperation by signalling.
    1. Be nice
    2. Punish defection immediately
    3. Forgive repentance immediately
    4. Lose battles to win the war

Less is more

The hours in consulting are pretty long. 65 hours a week used to be my norm, and that’s ignoring the travel time to and from work. So there wasn’t too much life outside of work. (I’ve come to realise, though, that what you do outside of work doesn’t change that much with more free time. What does change is that you just enjoy it more — both in and out of work.)

We have a day, once every month or two, where you take time off from whatever project and head back to the office. One such day featured a session with the managers telling the consultants how to succeed. Pretty good advice, actually… but that’s not what I’m going to talk about. It’s something about the nature of that advice.

The advice had a lot of TO-DOs and suggestions. Do this. Do that. Focus more on this. Focus a lot on that. Great. Now we know what to do more of.

My question, towards the middle of the session, was: OK, so what do we do less of, then?

You can’t do more of something unless you do less of something else. In most places, it’s easy to answer this with: “Oh, you need to be more efficient.” or “Cut the idle gossip”. For us, none of these were applicable.

The question pretty much remained unanswered. And with good reason. It’s a tough question.

Later, I got involved with a proposal. I wrote a few bits of it. (One page, actually.) Others wrote a few bits of it. And then some standard appendices were added to it. Finally, it ended up as a 180-page document.

The interesting thing is, I can bet no human ever read those 180 pages end-to-end.

I know no one at our end did, because we turned it around in 1 week, and I was the last to assemble the document before sending it out.

I’m guessing no one at the client end did, because they’d have gotten 5 such documents, and had a week to shortlist down to 3.

So if we didn’t read it and they didn’t read it, why did we put it in?

I think I know why. In my IBM days, I had to make a presentation to the management on productivity. I knew nothing of management or productivity. So I put in a report that had a lot of high-sounding words (you know… value-add, leverage, etc.) that looked reasonably impressive and had no basis in fact.

I did that mostly because I was scared. Of seeming to know less. Of being wrong. You know.

(Funnily enough, the presentation was pretty well received. I don’t know if it was because they were polite or had become numb to bullshit.)

This fear is pretty common. I know how that 180-page document ended up as a 180-page document, and I’m sure you’ve seen this happening before. First, here’s a sample conversation at the client end, when they’re writing up a request for information.

Martin: So, what do I put in the RFI?

Clive: Here’s a template we used. You can use some of that. Ask Nick for the one he used last month, and Natalie for hers. Maybe you should get something from our procurement team and information security group to be on the safe side.

Martin: And how do I make the RFI out of this? (BTW, this is a “bold” question that’s rarely asked.)

Clive: Well, make sure you cover everything from all of these documents.

So the RFI asks:

  • if any of your 80,000 employees are members of any of the following 340 organisations that are considered disruptive,
  • how many employees you have in each geography, function and vertical — where the break-down provided is as per their definitions (we cook up numbers which, if you add them up, total over 200,000)
  • how much you spent on paper-clips last fortnight, and other such intimate corporate P&L secrets

And we answer these. The answers to the above 3 questions were “No”, a table of numbers, and “We are not at liberty to divulge this information…”

Now, looking at the answers above, it still doesn’t add up to 180 pages. It’s hardly half a page. But you’ve got to take the following conversation at our end into account.

Steve: You know, we’ve got to put in some details about our methodologies in this section.

Me: I have.

Steve: Yeah, but maybe we should add more, you know, like supply chain methodologies and change management.

Me: But they’re irrelevant!

Steve: Well, can’t say that. Change management is always relevant. SCM… well, no harm putting it in. They can skip it if they don’t want to read about it.

That’s it, isn’t it? There’s no harm in doing more. I’ll just toss it in. If you don’t want to read it, skip it. I’ll just ask you to do more of these. If you can’t, skip the useless stuff.

An innocuous sounding statement: do more. I tremble whenever anyone suggests it. There’s no defence.

There’s a fundamental belief at work here. That more is better.

This is fuelled by a lack of confidence. Put in high-sounding words. They look impressive. What’s missed is that experts use jargon because they understand what it means, and it conveys a lot in few words. Everyone else is just practising cargo cult science.

What we lose, though, is subtle.

Firstly, it wastes time. It wastes my time. It wastes your time. But hey, time is not all that important. (I’m not saying this sarcastically. I believe that wasting time is quite OK, really, and it’s not such a big deal.)

What’s more important is that it destroys focus. Some things in the document are important. Most others are not. In a 180-page document, I can’t find the important stuff! It actually does harm to put it in if it’s irrelevant.

That’s the tough tradeoff, really. A tangible incremental value against an intangible loss of focus. The value looks attractive when you’re less confident. The document seems completely unfocused anyway.

So what the heck, put it in.

Do more of this. And that too.


So what can you do? Quite a bit, surprisingly.

Firstly, you’ve got to believe that less is more. The response to “What’s the harm in adding…?” is “It dilutes the message”. There are two things here. Believing it. And having the courage to say it. Trust me, you really believe it only when you say it.

Next, you’ve got to understand — really understand — before you write or speak. That requires not fooling yourself. And it requires a lot of practice. I’ve had nearly 20 years of training in fooling myself, so it’s an uphill task. Many people are worse off, never having tasted true understanding.

Third, you’ve got to be brave enough to shut up, or say “I don’t know”. Initially, this was tough for me, but I learnt from a friend. I always thought him not-so-smart, but honest. He’d ask, “But why?” and when I’d explain, he’d say, “I don’t understand it.” After two hours of trying to get him to understand, I’d realise that I was the one who never got it in the first place. After a while, I got into the habit of being very prepared before I explained anything to him.

Saying “I don’t know” doesn’t make people think less of you, I’ve found. I know a lot of people disagree with me. One of the most consistent pieces of feedback I’ve received in the first half of any project or firm I’ve been in is, “He should speak up.” Dammit, I don’t have anything to say! If I know something, I’ll say it. If not, I’ll shut up. Now, despite this feedback, no one’s quite objected to me. And in the second half, they’re always amazed at how much I’ve improved based on the feedback.

The feedback had nothing to do with it, of course. I just happen to know more in the second half of a project.

There’s a reason why your boss wants you to talk. It makes you appear knowledgeable. In the short term, that’s good. You talk about “value” and “leverage” and people nod wisely.

In the long term, it makes you less able to say “I don’t know.” (What? This brilliant chap who knew all about value and leverage doesn’t understand our way of calculating ROI?)

It makes you less likely to ask questions.

It makes you learn less.

It makes you dumb.

On the other hand, I’ve learnt to plead ignorance up front. “Do you understand ROI?” “No.” Not even an excuse for it. Frankly, it saves time.

Sometimes, a meeting’s running late, I’m hungry, and I just nod at whatever’s said, and I lose the window of opportunity to ask. Except, I’ve learnt, there’s no such thing as a window of opportunity. If you don’t get it, ask. If they’ve said it thrice, and you still don’t get it, ask. More likely they’re not clear about it.


Postscript: This morning, I had to convert a document into a standard template. My document was 3 pages long. The template (just the headings) was 14 pages long.

Why? Because someone wants all documents in that format. Does it help them? Maybe not. But it has to be done. Standards.

Sometimes, it’s easier to give up. The smart thing is to minimise the effort on pointless work. I took 15 minutes. Beyond a point, I protect myself rather than the poor reader.

Return on effort

If you have a bunch of projects you could do, and want to decide which ones to take up, here’s the rule I was taught: if a project has positive net present value, do it.

That is, find out how much money you have to put in (& when), and how much you’ll get out (& when). Adjust for money today being worth more than money tomorrow. If it makes a profit, just do it.
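As a sketch, here’s that calculation in a few lines of Python (the cash flows and discount rates below are made-up numbers, purely for illustration):

```python
# A minimal NPV sketch with invented numbers.
def npv(rate, cash_flows):
    # cash_flows[0] happens today; each later flow is discounted one more year.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

project = [-100, 40, 40, 40]         # put in 100 now, get 40 back yearly
print(round(npv(0.10, project), 2))  # -0.53: just unprofitable at 10%
print(round(npv(0.05, project), 2))  # 8.93: profitable at 5%
```

The same project flips from unprofitable to profitable as the discount rate drops, which is all “adjusting for money today being worth more than money tomorrow” amounts to.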

There are 3 aspects to this calculation, of which two are usually ignored.

  1. Time value of money. Money today is worth more than money tomorrow. People usually don’t adjust for this — either because they don’t know they should, or because they’re not sure how much to adjust by. It’s usually OK to ignore this. The difference is not often much. Your estimations of cash flow are likely to be off by more than this adjustment anyway.
  2. Cash flow projection. This is tough too. People batch these together into two groups: what you put in and what you get out.
  3. Investment and return. This is the part people actually use. You put in money (over time), and you get money out (over time). Do you get more than you put in? How much more?

In other words, I’ve seen Return on Investment (RoI) used far more than Net Present Value (NPV).

NPV vs ROI

In my MBA classes, I was taught that this is wrong. That you need to worry about RoI only if you’re budget-constrained. If you have enough money (and organisations can always borrow), you should do all profitable projects.

I can’t tell for sure if organisations are budget constrained or not. Departments do have budgets. But whether they stick to it or not depends on the department head’s risk aversion and political power. It often has nothing to do with projects.

But I’ve seen a bigger complaint cited more often: people don’t have time. Time is a bigger constraint than money.

This works in two ways. You don’t have staff to execute more projects. Or you don’t have management time to pay attention to new projects.

If you’re constrained by money, it makes sense to maximise return on investment. But if you’re constrained by time, maximise return on effort.

BTW, effort is not the same as time. Outsourcing, for example, increases return on effort, but probably not return on investment. Vendors take money without taking up staff time (except a bit of management time). If you’re manpower constrained, and not money constrained, use them as much as possible. Similarly, investing in assets rather than in hiring improves return on effort.

When at BCG, there was a whole theme around this called Workonomics. Like Economics is about maximising return for money, Workonomics is about maximising return from your workforce. Powerful concept. It’s a pity I’ve never seen it applied where it’s really needed.

Economics vs Workonomics

The most important thing is: at any point, you have only one constraint. Maximise return on that constraint. If it’s money, maximise RoI. If it’s staff, maximise productivity. If it’s customers, maximise share of wallet. And so on.

Filtering vs weighting

I am selecting a CRM package for a bank. I asked my colleagues how they’d gone about it, and got 8 responses. Every single one of them had the same weighting approach: Take a huge list of criteria, assign weights, score each package, calculate a weighted-average score, pick the highest one.

As I mentioned earlier, I think weighting is a lousy method. (See Errors in multicriteria decision making.) You can’t say “I picked this package because it has X, Y and Z features, which the others don’t.” You can only say, “Oh, overall, it has the highest score…”

The scores and the weights are subjective. You spend ages arguing between a 3 and a 4. You can manipulate them very easily. And you end up having to revise the scores many times to get to the answer you want.

Since I now had an opinion, I put my foot down, and said, “Here’s what we’ll do. Let’s make a list of essential criteria. They will all be YES / NO questions. Any package that doesn’t meet any criteria is knocked off. That’s it.”

This may appear simplistic, but it isn’t. You see, at the end of the day, only a few criteria really matter. Ideally, you just pick these, and compare packages against these. Since you don’t know which these are, you make a bigger list, evaluate them all, and then realise the truth.

Sometimes, you have too many criteria. Then none of the packages make it, and you have to sacrifice some of your criteria.

Sometimes, all of them make it. Then you can choose to enforce more criteria. Or maybe not. If all of them meet your criteria, just pick the cheapest one.
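The whole approach fits in a few lines. A sketch in Python, with entirely hypothetical packages and criteria:

```python
# A sketch of the knockout filter. Packages, features and costs are invented.
packages = {
    'Package A': {'multi_currency': True, 'call_centre': True,  'cost': 90},
    'Package B': {'multi_currency': True, 'call_centre': False, 'cost': 60},
    'Package C': {'multi_currency': True, 'call_centre': True,  'cost': 70},
}

# Essential criteria: each is a YES / NO question about a package.
criteria = [
    lambda p: p['multi_currency'],   # "Does it handle multiple currencies?"
    lambda p: p['call_centre'],      # "Does it integrate with our call centre?"
]

# Any package that fails any criterion is knocked off.
survivors = {name: p for name, p in packages.items()
             if all(check(p) for check in criteria)}

# If more than one survives, just pick the cheapest.
choice = min(survivors, key=lambda name: survivors[name]['cost'])
print(choice)  # Package C
```

Note what you can now say: “Package B was knocked off because it has no call-centre integration; of the rest, C was cheapest.” No weights to argue over.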

Internally, we were convinced of this approach, and took it to the client. Things were fine for a week. Then, complaints started trickling in.

  1. Fear: “I’ve already used weighting in earlier package evaluations. Now, if you use filtering, it’ll make my earlier evaluations look bad…”
  2. Uncertainty: “I don’t know… what if there’s no YES / NO answer? What if we need shades of grey?”
  3. Doubt: What if it lets everything through? What if it rejects everything?

Uncertainty is the most popular objection.

“What if we need shades of grey?”

I always ask: “Any example?”

“Well, you know… it can come up.”

So I give them an example, and explain how it can be broken into sub-questions.

“Well, yeah… but just to be on the safe side, could we have a score?”

The exercise is still going on. I haven’t seen a valid concern yet. What’s interesting is, everyone is hesitant about filtering, but no one can defend their objection.

Errors in multicriteria decision making

I talked about my approach for multicriteria decision-making, and mentioned that it was fundamentally flawed. Here’s why.

[Spidergraphs: Industry 1 and Industry 2]

The charts above compared two industries. The bigger the area, the more favourable the industry. The underlying assumptions being:

  • The criteria are comparable. (Points at the same level are of comparable importance. Twice as large is twice as important.)
  • All (and only) relevant criteria have been included.

In this particular example, I know for a fact that both these assumptions are invalid. And in every case I used this methodology, the assumptions fail.

You won’t draw the criteria to scale. We used revenues and growth as two parameters, and marked each industry as high, medium or low. The scale for revenue was Rs 100 cr, Rs 500 cr and over Rs 500 cr. The scale for growth was <5%, 5-10%, >10%. We picked this scale in order to fit the range well on these graphs. Not because Rs 100 cr of revenue was worth about the same as 5% of growth. And yet, that’s the implicit trade-off this graph is asking us to make.

We also had very qualitative criteria, like “Capability” (KSF), and they were compared head-on with growth and revenue. Using qualitative criteria is not a bad thing. But when the visual makes you trade-off capability against Rs 500 cr of revenue, I feel queasy.

You will miss important criteria. Usually, the process for identifying criteria is bad. “Think of every criteria you can” was our standard approach. In this instance, in our first iteration, we had a dozen parameters. We showed it to the client. They said, “Look, our Chairman likes these industries a lot. He doesn’t like that bunch. We’re much more likely to focus on the ones he likes.” And that’s absolutely important! We ended up adding a “Passion / vision” criterion based on the fit with the company’s existing businesses, and that proved the deciding factor.

Another time, I built an entire model on which project to outsource based on 10 parameters. (It was everything I could think of at the time.) The one that I missed was, “When is the project starting?”. It turned out that this was the most important criterion. In fact, it was the only important one. If I’d simply said, all projects starting after 1-June-2006 can be outsourced, I would’ve been 90% right.

You’ll keep the irrelevant parameters. This is the worst of all. Even after we learnt which criteria were important, we didn’t throw away the useless parameters. We never throw away hours of work, even if it’s useless. So the model keeps bloating, and the irrelevant criteria influence the shape of the graph more than the relevant ones.

Another problem is that this methodology cannot answer questions concisely.

“Why did you knock off Industry X?”

“Oh, because on a cumulative score against revenue, growth, lifecycle, capability, passion and 10 others, it scores less than 45 points.”

A good answer should be short. For GE, it would be “You’ll never be number one.” For HP, once, it would’ve been “It’s not where we can excel technically.” For Warren Buffett, it may be “I don’t understand the business.”

After these experiences, and based on hindsight, I’ve come to believe the following about MCDM (multi-criteria decision making):

  1. Remove irrelevant criteria. Usually a few criteria make the decision. The rest don’t matter.
  2. Filtering works. You don’t optimise in MCDM. You’re trying to do well against criteria that are often not comparable. You’re better off if you filter out unacceptable levels of criteria, and treat what’s left as acceptable.
  3. Use a decision tree. Don’t compare options. This is the basis of fast and frugal heuristics. It keeps focus on the important criteria, and makes the process easy to implement / explain.
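Point 3 can be sketched as a tiny decision tree, using the outsourcing story from earlier (the start-date cutoff is from that story; the spec-frozen check is invented for illustration):

```python
# A fast and frugal decision tree, hypothetical beyond the start-date rule.
from datetime import date

def can_outsource(project):
    # Ask the most important question first. Each question either decides
    # the matter outright or falls through to the next, less important one.
    if project['start'] <= date(2006, 6, 1):
        return False          # starting too soon: decided, nothing else matters
    if not project['spec_frozen']:
        return False          # hypothetical second criterion
    return True

print(can_outsource({'start': date(2006, 7, 15), 'spec_frozen': True}))  # True
print(can_outsource({'start': date(2006, 5, 1),  'spec_frozen': True}))  # False
```

Each answer is explainable on its own: a project is rejected because it starts too soon, not because of a weighted score.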

I’ll let you read up on fast and frugal heuristics. I’m convinced it’s the best way to make decisions based on multiple criteria in the scenarios I’ve worked on.

Multicriteria decision making

Decisions are usually based on multiple criteria. You have to trade off between criteria. I’ve been involved in many such decisions over the last 5 years.

Example 1: A conglomerate wanted to identify industries for growth. We shortlisted 19 industries, identified 12 criteria for the attractiveness of an industry, researched each one and plotted them on spidergraphs like below.

[Spidergraphs: Industry 1 and Industry 2]

The intention was that, to identify the most favourable industries, you’d just pick the ones with the largest filled area.

Example 2: Another time, we had to decide among BPO vendors. Again, we picked a bunch of criteria and compared vendors against these criteria.

[Spidergraphs: BPO Vendor 1 and BPO Vendor 2]

Example 3: Once, we had to identify stakeholders’ position on a project.

[Change readiness profiles: Dave and Uli]

Those who were big on the right of the graph were for, and those who were big on the left were against.


In all the above cases, the same process was used for decision making.

  1. List criteria exhaustively
  2. Evaluate options against each criterion
  3. Assign weights to criteria (equal weights implicitly assigned above)
  4. Compare options

Having applied this methodology several times, I am convinced the process is fundamentally flawed. See how in this post: Errors in multicriteria decision making.
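The four steps above amount to a weighted-score comparison. Here is a minimal sketch of it; the criteria, scores and weights are invented for illustration:

```python
# The weighted-scoring process, step by step, with invented numbers.

# 1. List criteria exhaustively (trimmed to three here).
criteria = ["revenue", "growth", "capability"]

# 2. Evaluate options against each criterion (say, on a 1-10 scale).
scores = {
    "Industry 1": {"revenue": 8, "growth": 4, "capability": 6},
    "Industry 2": {"revenue": 5, "growth": 9, "capability": 7},
}

# 3. Assign weights (equal weights, as the spidergraphs implicitly did).
weights = {c: 1 / len(criteria) for c in criteria}

# 4. Compare options by total weighted score.
totals = {
    name: sum(weights[c] * s[c] for c in criteria)
    for name, s in scores.items()
}
best = max(totals, key=totals.get)
print(best, totals)
```

Picking the spidergraph with the largest filled area is, roughly, this computation done by eye.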

Normalising non-normal distributions is bad

I was working with the treasury of a bank. They were trying to estimate how much money could flow out of their savings account in a day, worst case.

I took their total savings account balance at the end of each day and found the standard deviation. I took thrice the standard deviation, and said, “You can be 99.7% sure that your daily loss won’t be more than 1.5% of the balance.”

That would be right if it were a normal distribution. But it’s not.

Banks have millions of savings accounts, each of which is like a random variable. But unless those variables are independent and each has a finite standard deviation, the central limit theorem doesn’t apply.

Firstly, savings account transactions are not independent. If there’s a run on the bank, everyone would pull out their money. Whenever a company declares a dividend, a large number of savings accounts are credited. Salary accounts are credited at the end of the month. As a rule of thumb, you could say that if one savings account goes up, the others are likely to as well.

Secondly, savings account transactions are not normally distributed. If you take a single savings account, you won’t find a bunch of debits and credits. Every month, you’ll find one large credit for the salary, one mid-sized debit for monthly expenses, and several small debits for individual transactions (bills, ATM, etc.) Once in several years, you’ll find a gigantic debit (purchase of car or house, wedding, etc.) or a gigantic credit (retirement / pension fund, sale of property, etc.)

As a result, the savings account is likely to fluctuate a LOT more than if it were a normal distribution.

If I had just looked at the data, I’d have found several fluctuations greater than 1.5%. The normal distribution predicts fewer than 0.3% of such cases: about one a year. Instead, I’d have been able to visually spot nearly one a month. I’d also have spotted the huge 4% swings that do happen once in a few years.

People wiser than me have made the same mistake. I was interning at Lehman Brothers when they were planning to launch a new electronic bond-trading product. My task was to trace the bond price movement.

The data we had was bad. Many bonds jumped as much as 40% in a single day, due to data errors. The bulk of my task was to clean out these errors.

After cleaning up, there were still two jumps that couldn’t be explained. I went to my boss, who recognised them on sight. One was a sudden drop in the price of all Government bonds in December 1998. The other was a 32% drop in the price of Hikari Tsushin — a mobile phone retailer — on the day it went bankrupt.

We concluded that the daily price drop wouldn’t be more than 9%, to a 95% confidence level. If that was right, a 32% drop in one day would happen once in a million years. Yet, we had Hikari Tsushin just the previous year.

We didn’t bother about it. In fact, we didn’t even think about it. If we’d checked, we’d have found that the daily price drop was closer to 12% or something, to a 95% confidence level.

Summary: Force-fitting a normal distribution on non-normal data can understate the worst-case scenario. Often you’re better off inferring confidence levels from the raw data rather than from a fitted distribution.
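You can see the understatement in a simulation. The data below is invented, not the bank’s: mostly quiet days, with an occasional large shock standing in for dividend credits, salary days, or a scare. The point is the shape, not the numbers:

```python
import random
import statistics

random.seed(0)

# Simulated daily balance changes: 99% quiet days, 1% shock days with
# ten times the volatility. Purely invented numbers.
changes = [
    random.gauss(0, 1) if random.random() > 0.01 else random.gauss(0, 10)
    for _ in range(100_000)
]

mu = statistics.fmean(changes)
sigma = statistics.pstdev(changes)

# The "normalised" worst case: 3 standard deviations, 99.7% confidence.
bound = 3 * sigma
exceedances = sum(1 for x in changes if abs(x - mu) > bound)
rate = exceedances / len(changes)

# A normal distribution predicts a rate of about 0.003 (0.3%).
# The fat-tailed data breaks the bound noticeably more often.
print(f"3-sigma bound: {bound:.2f}, exceedance rate: {rate:.4%}")
```

Counting the exceedances directly in the raw data, instead of trusting the fitted 3-sigma bound, is exactly the “just look at the data” check described above.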

Not all distributions are normal

14 years ago, I was introduced to the process of normalising grades. Professors “fit” students’ marks into a normal distribution and assign grades based on that. (I still don’t know how they do it).

Since then, I’ve encountered normalising a lot. My performance at work is normalised. I normalise my song ratings and movie ratings. I’ve normalised all kinds of things at work: lead-time of delivery of fans, movements in savings account balances, calls to a call centre, demand for a resource… you name it.

(What I mean by normalising is, I find the mean and standard deviation, and assume that it’s a normal distribution with that mean and standard deviation. For things under my control, like movie ratings, I revise the ratings to fit a normal distribution.)
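That habit fits in a few lines of code. The marks and the grade thresholds below are invented, and the grading rule is just one plausible way a professor might do it:

```python
import statistics

# "Normalising" as described above: compute mean and standard deviation,
# then express each value as a z-score, i.e. assume N(mean, std).
def z_scores(values):
    mu = statistics.fmean(values)
    sigma = statistics.pstdev(values)
    return [(v - mu) / sigma for v in values]

marks = [35, 48, 52, 60, 75]  # hypothetical marks for one section

zs = z_scores(marks)

# Assigning grades off the assumed normal: e.g. more than one standard
# deviation above the mean gets an A, more than one below gets an F.
grades = ["A" if z > 1 else "F" if z < -1 else "B" for z in zs]
print(list(zip(marks, grades)))
```

Everything that follows in this post is about when that `z_scores` assumption, that the data really is N(mean, std), goes wrong.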

In fact, I normalise everything I encounter by default.

A few years ago, I started feeling uncomfortable about this. I’ve now figured out why normalising is bad — at least when done blindly like I do.

First, let’s explore why normalising is good. Normalising eliminates biases. If the Prof in Section A grades higher than the Prof in Section B, normalising takes care of it. If a Prof is extremist (more A’s as well as F’s), normalising takes care of it. If a Prof is skewed (lots below average, few extremely high above average), normalising takes care of it.

Eliminating biases makes sense if Section A is fundamentally like Section B. It’s not better, nor more extremist, nor more skewed. If the sections are large enough and picked randomly, this assumption is correct. If Section A represents the smarter half, or people born in the second half of the year, or people from the Western states, or any other non-random selection, this need not be correct.

An aside: You may wonder why people born in the second half of the year form a non-random selection. If school admissions start in September, and children join at age 3, kids born in September will be nearly 4 years old when they join, while kids born in August will be just over 3. That one-year difference, to a three-year-old, is HUGE. You will find the same birth-date bias in football, with most Premiership players born between September and November.

Normalising goes a step further than eliminating bias, however. Normalising forces a normal distribution. This would be right if the underlying data is normally distributed. But if not, we may be making a mistake by force-fitting.

The Central Limit Theorem says that if you add up random variables, you get a normal distribution, provided the sample is large, the variables are independent, and each has a finite standard deviation.

This means that many things you get by adding random variables are normally distributed. For example:

  • Number of heads when you toss a coin (add up each coin toss)
  • Average age of an army platoon (add up each soldier’s age)
  • Terminus-to-terminus time for a bus (add up the time between each stop)
  • Price movement of a stock exchange index (add up each stock’s price movement)
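The first of these is easy to check by simulation. A quick sketch with fair coins:

```python
import random
import statistics

random.seed(42)

# Sum of 100 fair coin tosses, repeated 10,000 times. By the central
# limit theorem the sums should look normal: mean ~50, std ~5.
sums = [
    sum(random.randint(0, 1) for _ in range(100))
    for _ in range(10_000)
]

mu = statistics.fmean(sums)
sigma = statistics.pstdev(sums)
within = sum(1 for s in sums if abs(s - mu) <= 2 * sigma) / len(sums)

# A normal distribution puts roughly 95% of values within 2 standard
# deviations, and the simulated sums come out close to that.
print(f"mean={mu:.1f}, std={sigma:.2f}, within 2 sigma: {within:.1%}")
```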

But a lot of real-life data is NOT normally distributed. The usual reasons are:

  1. It’s not the sum of random variables
  2. It doesn’t satisfy the central limit theorem (independence, large sample, finite standard deviations)

Here are some non-normal distributions that are NOT the sum of random variables:

  • Soldier’s age within an army platoon. What random variables could you add up? You’ll probably find a lot of people at age 18, because that’s the minimum age. A little fewer at age 19 — last year’s recruits. Far fewer at age 20 — 2 years minimum service completed. Certainly not a normal distribution.
  • Price movement of a single stock. What random variables could you add up? You’ll find that there are far larger price movements than a normal distribution predicts.

Here are some non-normal distributions that don’t satisfy the central limit theorem. (These are, in fact, things I said were normally distributed earlier. You see? It’s easy to think things are normal, but in reality they’re not.)

  • The terminus-to-terminus time for a bus. The number of bus stops is quite small. More importantly, the times between stops aren’t independent. If there’s a traffic jam, an entire section of the route will take more time. If there’s a delay between points 2 and 3, it’s likely that there’ll be a delay between points 1-2 and 3-4 as well.
  • The price movement of a stock exchange index. The price movement of individual stocks follows a power-law distribution, which does not have a finite standard deviation. Also, the price movements are not independent.
  • See more non-normal distributions.
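The infinite-standard-deviation point can be seen by simulation: draw from a power law and watch the sample standard deviation refuse to settle down. The shape parameter here is invented; this is a sketch of the behaviour, not market data:

```python
import random
import statistics

random.seed(7)

# Pareto-distributed "price moves" with shape alpha = 1.5: a power law
# whose theoretical variance is infinite. Invented parameters; the
# point is the behaviour, not the numbers.
moves = [random.paretovariate(1.5) for _ in range(100_000)]

# The sample standard deviation keeps jumping as ever-larger
# observations arrive, instead of converging to a stable value.
for n in (1_000, 10_000, 100_000):
    print(n, round(statistics.pstdev(moves[:n]), 2))
```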

Summary: Don’t assume that anything you see is a normal distribution. It usually isn’t.

I’ll shortly talk about what happens when you assume something’s a normal distribution, when it really is not.

Normalising non-random samples is bad

I rate movies on a scale of 1 (bad) to 5 (good). This is an absolute scale. Initially, I assumed that I would watch as many good movies as bad ones. So I’d have about as many 1s as 5s, and 2s as 4s. But, when I looked at my ratings for movies over the last year, I had far more 4s than 2s. My movie ratings were not normal.

Rating   Frequency
1        8
2        31
3        98
4        81
5        18

The reason is clear. I pick good movies rather than bad ones, based on reviews. If I rated every movie there was, the ratings may be normally distributed (or they may not). But when I pick movies, I consciously reject those I know would have a low rating (based on reviews), so my ratings would be more clustered around the top.

Even if I redefined my scale, I’d still have more than 50% above the average. This is not a contradiction. I watch a LOT of good movies with very similar ratings, and a few disastrously bad movies. The good movies will have a higher-than-average rating, and there’ll be more of them than the bad movies. This is a skewed or asymmetric distribution.

So, selective picking can wreck the normal curve.
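Here’s a toy example of that skew, with made-up numbers rather than my actual ratings: many good movies with similar ratings, plus a few disasters. The handful of disasters drags the mean down, so well over half the ratings sit above it:

```python
import statistics

# A toy skewed sample (invented): many good movies with similar
# ratings, a few disastrously bad ones.
ratings = [4.0] * 40 + [4.5] * 30 + [1.0] * 10

mean = statistics.fmean(ratings)
above = sum(1 for r in ratings if r > mean) / len(ratings)

# With this left skew, far more than half the ratings exceed the mean,
# which a symmetric normal distribution would never allow.
print(f"mean={mean:.2f}, fraction above mean={above:.0%}")
```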

Yet, almost everything is selectively picked. Colleges try to pick the best students. Organisations try to pick the best employees. If they rate performance, they’re likely to find a bias towards the higher side — at least, the good colleges and organisations will. Force-fitting a normal distribution pushes down genuinely good people. (In bad colleges and organisations, it pushes up genuinely bad people.)