MC Exclusive: Daniel Kahneman on why optimism is as bad as pessimism, when to call it quits and other insights into decision-making

Daniel Kahneman said this method helps a person avoid the "halo effect" and achieve more independent judgement.

Daniel Kahneman, a psychologist, redefined economics and greatly influenced investors and business leaders through his book Thinking, Fast and Slow. He was awarded the Nobel Prize in Economics in 2002 for his pioneering work, with Amos Tversky, on decision-making under uncertainty. Kahneman died on March 27, aged 90.

In his last exclusive conversation with N Mahalakshmi of Moneycontrol, Kahneman spoke about lessons from his career, decision-making, and his pioneering work as a psychologist for the Israeli Defence Forces, work that remains applicable across professions even today. Edited excerpts:

Does Daniel Kahneman think fast or slow?

Both, like everybody else. But mostly, fast like everybody else.

How have your studies and insights influenced your own behaviour?

I don't believe reading Thinking, Fast and Slow is going to help people individually think better because writing it didn't help me all that much. I've remained pretty much the same person I was and I think in the same ways. Changing one's patterns of thinking is very difficult and the line that we've been taking is that changing the behaviour of organisations, structured thinking, is much easier than changing the thinking of individuals.

Are there any biases that you have been able to overcome?

I'm a good example of all the flaws of reasoning that we've been writing about, and it's not an accident. That's because the original work that Amos Tversky and I did always started from introspection. It started from thinking about situations in which our intuitive thinking would be wrong in some way. Those were the situations we looked for, and those were the mistakes we were both trying to predict and explain. But intuitive thinking is very difficult to control, and my intuitive thinking really hasn't changed much. Occasionally, of course, I recognise an error that I'm making. Usually, when I recognise an error in my own thinking, it's never on a very important topic, because if it is an important topic, I'm too busy making the mistake to recognise that I'm making it.

Some people are more sceptical and have a natural tendency to stand out and not follow the crowd; they sound independent. The majority of us are easily persuaded and follow the crowd. Do sceptics score better on intuitive decision-making?

People who are capable of reflective thinking clearly do better in many decisions. Intelligence, broadly speaking, predicts good outcomes in almost every area of people's lives. The control of impulsive thinking is an important characteristic. There is a test called the Cognitive Reflection Test, which has a famous riddle.

A bat and a ball together cost $1.10, and the bat costs $1 more than the ball. How much does the ball cost? The number that comes to everybody's mind is 10 cents. That's the way it's designed. But 10 cents is wrong: if the ball costs 10 cents, the bat costs $1.10, and together they come to $1.20, not $1.10. That's very simple, and yet about 50 percent of students at Harvard fail that test. It is not that they cannot figure it out; it's that they are overly confident and do not check their own thinking. Cognitive reflection is clearly associated with higher education, but not everybody with higher education thinks reflectively. On that score, I would probably be called a reflective thinker, but it's not because I have written Thinking, Fast and Slow.
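
A minimal worked check of the riddle's two constraints (illustrative only, not part of the original test):

```python
# Bat-and-ball riddle: bat + ball = 1.10 and bat = ball + 1.00.
# Substituting the second equation into the first:
#   (ball + 1.00) + ball = 1.10  =>  2 * ball = 0.10  =>  ball = 0.05.
ball = (1.10 - 1.00) / 2
bat = ball + 1.00

assert abs((bat + ball) - 1.10) < 1e-9  # together they cost $1.10
assert abs((bat - ball) - 1.00) < 1e-9  # the bat costs $1 more than the ball

print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05

# The intuitive answer fails the check: a 10-cent ball implies a $1.10 bat,
# and 0.10 + 1.10 = 1.20, not 1.10.
```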

Now, cognitive reflection and scepticism are pretty different. It's clear that the ability to stand apart from the crowd and to think differently from others is a necessary condition for success, but it is not a sufficient condition, and in that sense scepticism is very much like optimism.

When you look at extremely successful people, you always find that they're optimistic and that they believe they could do things that other people consider impossible. That gives people the idea that being an optimist and thinking that you can do anything is actually good for your decision making. It's not.

For most people, the benefits of optimism in terms of taking on risky ventures are small; in fact, they tend to be negative on average. But it's true that only people who are optimists, who exaggerate their odds of success, are going to achieve great success. If you look only at those people, you will get the impression that optimism is a wonderful thing. In fact, the benefits of optimism are mixed, and the same is true of the benefits of scepticism.

I had the opportunity to meet Richard Thaler a few years ago and he said you claim that if you're a pessimist, life never disappoints you. Is this true and has it served you well?

That is true. I am a pessimist. I have been called a cheerful pessimist because I'm usually in a fairly good mood. When I lose my keys, my first impulse is that I will never find them again, although I should know better; this is how I think. So I'm prepared: I'm already figuring out what I will do if I never find my keys.

Can you describe the whole experience of writing Thinking, Fast and Slow?

Writing Thinking, Fast and Slow was a terrible experience. Truly, I never thought that I could write a book. When I started out, I thought I would be collaborating with somebody and that it would be a joint book. Then it turned out that I couldn't collaborate on writing, that writing is very personal and that I have to do my own stuff. But Thinking, Fast and Slow is a bit of an unusual book. I was trying to speak to the public while trying to keep the respect of my colleagues, and I really felt that I was failing at both. So I gave everybody around me a very hard time. I think the book was saved by an editor, who became a friend, and who put more work into it than an editor is supposed to. He pushed me across the finish line. But it was quite painful.

Tell us some stories from the time you were in the psychology department of the Israeli Defence Forces, and about the interview protocol you set up for them, which is the stuff of legend and is followed to this day.

I was in the Israeli Army in 1954, when I was 20 years old; I served for one year in the infantry and then became a psychologist. Even with my very inferior BA, I was just about the best-trained psychologist in the Israeli Army; my boss was a chemist! And everybody was improvising all the time. When they assigned me the task of setting up an interviewing system for the Israeli Army, I thought I could do it, which, of course, was completely unreasonable. But I was given a book, about clinical as against statistical thinking, which turns out to be a very important book in the history of psychology. I was very impressed by it, it eventually became a classic, and it had a big influence on me.

I told the young interviewers not to worry about validity and to just follow the protocol, follow the rules. That is all I wanted them to do. I defined six traits and some questions that went along with each trait. It wasn't a completely structured interview, but it was quite structured. They interviewed for one trait at a time: sociability, responsibility… Those are very straightforward concepts. Then they would ask objective questions about the daily life of the individual. I instructed the interviewers to rate each trait and to think about it independently of everything that had gone on before. My plan was to just take those six ratings and average them, but the interviewers rebelled. So I compromised: I told them to do things my way but, when they were done, to close their eyes and think, how good a soldier will that person be? That's a completely intuitive rating. What is special about it is that it is produced at the end of the process: you have accumulated all the information, and then you let your intuition go free.

Several months later, we did a study to validate the interviews against how well soldiers were doing in the army. It turned out that the interview was quite valid, and certainly much better than the unstructured interview that had been used before. But the thing that came as a complete surprise to me was that the intuitive rating at the end of the interview was just as valid as the average of the six ratings; it just added content.

Sixty-five years later, we wrote Noise. And the theme of our recommendation about how to make decisions is really to treat options like candidates. It goes back to the ideas of that interview and tries to generalise them to how a decision-maker should think about options.
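
A rough sketch of the scoring scheme described above. The 1-5 scale, the four unnamed traits and the way the final intuitive rating is combined with the average are all assumptions; Kahneman says only that the intuitive rating was as valid as the average of the six.

```python
from statistics import mean

# The interview names only two of the six traits (sociability,
# responsibility); the remaining labels are hypothetical placeholders.
TRAITS = ["sociability", "responsibility",
          "trait_3", "trait_4", "trait_5", "trait_6"]

def interview_score(trait_ratings: dict[str, float],
                    intuitive_rating: float) -> float:
    """Rate each trait independently, average the six ratings, then fold in
    one final 'eyes closed' intuitive rating made only after all the
    evidence is collected. Equal weighting of the two parts is an assumption."""
    assert set(trait_ratings) == set(TRAITS), "rate every trait, one at a time"
    return mean([mean(trait_ratings.values()), intuitive_rating])

# Hypothetical candidate on a 1-5 scale:
ratings = dict(zip(TRAITS, [4, 5, 3, 4, 2, 4]))
print(interview_score(ratings, intuitive_rating=4.0))  # ~3.83
```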

That assessment actually lays out a framework for decision-making in general, across many different fields. Are there any exceptions to it?

I think in many fields, the kind of thinking that we apply here, which is to take the attributes of the case you're evaluating and think about them one at a time, helps you avoid what we call the halo effect and achieve independent judgment, which is clearly an advantage statistically.

The more independent your judgments are, the more reliable they're likely to be. But I don't think the decision-making of a surgeon deciding whether or not to operate would have the same character, because for a surgeon any contraindication to surgery could be decisive, so you would not average the ratings. There are clearly exceptions to the idea: cases in which one attribute is a deal-breaker, or falls below some minimal value. Also, averaging is not always the best way of doing it. There are attributes that interact, so that you need both, and one alone is simply not enough.
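
A small sketch of the distinction drawn here, with hypothetical attribute names and thresholds: averaging independent ratings versus treating a single contraindication as a veto.

```python
from statistics import mean

def average_rating(ratings: dict[str, float]) -> float:
    # Works when attributes contribute independently (the interview case).
    return mean(ratings.values())

def veto_then_average(ratings: dict[str, float],
                      floors: dict[str, float]) -> float | None:
    # The surgeon's case: any attribute below its floor is a deal-breaker,
    # so the option is rejected outright rather than averaged away.
    for attr, floor in floors.items():
        if ratings.get(attr, 0.0) < floor:
            return None  # decisive contraindication
    return mean(ratings.values())

# Hypothetical evaluation on a 0-10 scale:
ratings = {"expected_benefit": 8, "fitness_for_anaesthesia": 2, "urgency": 7}
print(average_rating(ratings))                                     # ~5.67
print(veto_then_average(ratings, {"fitness_for_anaesthesia": 5}))  # None
```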

Can you take us through the story of how the idea of the inside view and the outside view came about?

At a certain stage in my career, I had the idea of building a curriculum for high schools on judgment and decision-making, without mathematics. I enlisted a group of people to work on it, including teachers, a statistician and the chairman of the education department of the School of Education at Hebrew University. We worked for about a year, meeting every Friday, and were making decent progress.

On a particular Friday, I asked the group a question: when do you think we will deliver the book? I asked everybody to make an independent judgment and write it on a slip of paper, and then I collected the slips. All of us, including me, had estimates between one and a half and two and a half years.

I then turned to the chairman of the School of Education, who was an expert on curriculum development. I asked him if he could think of other cases in which people did what we were trying to do, and to imagine those teams when they were at the same stage of progress that we had achieved… He thought for a long time and he was quite embarrassed. And he said, "When I think about it, in the first place, not all of them finished the book. I would say about 40 percent of them never finished it. And of those who did finish, I can't think of any that took less than eight years."

Now, that is a contrast to the inside view, where we were only looking at our own problem: how long will it take us? We knew how well we were doing, and we extrapolated, took a margin and so on. The outside view is to forget about us, look at problems like ours and ask: what are the statistics of problems like ours? It turns out that the outside view leads to a completely different conclusion. I had also asked the chairman how we compared to those other teams. He again thought for a while, and then said, we're actually below average, but not by much.

I gained a lot from that day, because it introduced the concepts of the inside view and the outside view. And it turns out that this is a very general phenomenon. We call it the planning fallacy.

There is a plan, and you make a mistake. But when I thought about it while writing Thinking, Fast and Slow, I discovered that this was one of the most idiotic things I ever did, not for asking the question, but for what I did after that. Obviously, we should have quit (abandoned the curriculum project). None of us was prepared to spend seven years with a 40 percent chance of failing. And we had no reason to believe that we were above average. We should have quit and we didn't, even though we had both the outside view and the inside view. I didn't have the courage to quit.
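
A back-of-the-envelope rendering of the two views using the numbers from the story; reading "at least eight years in total, with about one year already spent" as seven remaining years is an inference from the text.

```python
# Inside view: the team's own estimates of time to finish the book.
inside_view_years = (1.5, 2.5)

# Outside view: the chairman's reference class of similar teams.
p_never_finish = 0.40    # about 40% of such teams never finished
min_total_years = 8      # no finisher took less than eight years
years_already_spent = 1  # the team had been working for about a year

remaining_if_finished = min_total_years - years_already_spent
print(f"Inside view: {inside_view_years[0]}-{inside_view_years[1]} more years")
print(f"Outside view: at least {remaining_if_finished} more years, "
      f"with a {p_never_finish:.0%} chance of never finishing")
# The gap between these two estimates is the planning fallacy in action.
```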

So that corroborates the sunk-cost fallacy, in the sense that when so much has been put in, so much staked, you don't dare to pull out...

That's a problem especially after a successful year, because the feeling of progress overcame the statistical knowledge we had just been given that we were going to fail. That happens a great deal: the feeling one has about the particular case overwhelms the statistical, probabilistic knowledge that tells you this is not going to work. Optimists are quite prone to this, but even a pessimist such as I can fall for the same thing.

Can you talk a little bit about Gary Klein, your academic adversary at one point in time, who was a big believer in the power of intuition? And about your idea of adversarial collaboration: how and under what circumstances did it develop?

Yeah, it took us six years to write a paper. But the idea of adversarial collaboration occurred to me because I hate being angry. Some people actually enjoy being angry, but I really don't like it, and it depresses me to get angry. Controversies sometimes get unpleasant and nasty, and people become sarcastic. It's a complete waste of time: you're just trying to defeat the other character, not trying to accomplish anything. So I invented the idea of adversarial collaboration, in which two people decide on a procedure to try to work on the problem together. When you begin an adversarial collaboration, you do not expect to completely share the same religion at the end, but you're making a genuine effort to see how far you can go to agree.

Now, Gary Klein is a very interesting figure and an excellent psychologist. He has been studying experts all his life and is very impressed by experts and by expertise. He has wonderful examples. He has a book, Sources of Power, that I recommend highly. In it, he has beautiful examples, like the captain of a firefighting company whose crew is in the kitchen of a burning house when he suddenly has the impulse to tell his people, let's get the hell out of here. When they do, the house explodes. It turns out that his ears were warm; that gave him the hint, though he was not fully aware of it, that the fire was directly underneath them.

Gary also talks about intuition in physicians and nurses who diagnose somebody without knowing exactly how they did it. So intuition is often defined as knowing without knowing why you know. It's not a good definition, because 'knowing' implies that what you know is the truth, and intuition doesn't mean that you know. It's a feeling that you know.

I believe Gary. Intuition is sometimes marvellous and sometimes flawed. He had spent his career on the marvels of intuition; I had spent my career on its flaws. But it was clear that there must be a middle ground. The question is when. So I invited him to work with me on the question of when intuition can be trusted. We started from opposing points of view, because he trusted intuitions. He certainly did not like structured interviews or formal process or bureaucracy or the kinds of things that I think are necessary. I find experts who make stupid mistakes funny; he doesn't find that funny. What he finds funny is artificial intelligence making stupid mistakes. So we started out at different places and we worked for six years. At the end of six years, we had a paper, which is quite well known by now: A Failure to Disagree. That is because, eventually, we didn't disagree on the boundary. Interestingly enough, we also did not change our basic positions. He still likes expertise and I still find it fun to find flaws in expertise, but we did agree on the conditions under which you can trust your intuition. And incidentally, we became friends. We became quite close.

Are there any rules for adversarial collaboration?

Adversarial collaboration, as I conceived of it, is an academic exercise for people who are at least nominally interested in finding the truth. They both at least pretend to be looking for the truth, and then the obvious way is to look for it together. It's not quite the same within an organisation, or in conflicts, or when the issue is which of two decisions to make. What gives an adversarial collaboration a chance of success is the possibility that we're both right; let's find out the boundaries. I'm engaged in an adversarial collaboration now with somebody on why some phenomena in the field of happiness show up when you ask one question but not when you ask another. It's a very nice way of doing things; you become friends with your adversaries. But sometimes you need a mediator. When the situation is tense initially, it's good to have a mediator.

But the decision to come together has to be voluntary, right? And you should also be willing to take a draw.

Yeah, that is quite likely to be the outcome: it will be a draw in some sense. I have done several of them, and there was never a victory. It's more nuanced than that. Each side has to concede at the end that they didn't get what they wanted, that they didn't win. That is very valuable.

So would you advocate that corporates and financial institutions look at this actively? Will it lead to better decisions?

In situations where there is debate, it's a good idea. You have A and B, and they are debating, and you have a boss who is listening to that debate. I think it would be a good idea for the boss to ask A to summarise B's case, and then to ask B whether it is a fair summary, whether A is capable of producing a fair summary of his position, and then to do the same the other way around. I think that's a useful procedure, though people do not use it often. The psychological mechanism is quite interesting: when you present somebody's ideas, you cannot help it, you are going to try to make a good case for them. You're trying to make sense of them. Trying to make sense of your opponent's position puts you in a better frame of mind for compromise. So, yes, I would recommend that.

I've heard Charlie Munger say the same thing: that he never allows himself to hold an opinion on anything unless he knows the other side's argument better than they do…

Charlie Munger is a very wise man.

One of the biggest behavioural factors that leads corporate decisions to go wrong is overconfidence and unbridled optimism. Could you explain Gary Klein's concept of the premortem and how it can be used to guard against overconfidence?

Gary had that idea, which in a way is not typical of his general position, but it's a splendid idea. He calls it the premortem. When a decision is about to be made but hasn't been finalised yet, you stop the group and have a discussion, either within the same group or with an outside facilitator in some cases, where the premise is the following: suppose we made the decision that we're considering; it is now a year later, and it was a disaster. Take a piece of paper and write the history of that disaster. That's the premortem. It turns out that people come up with all sorts of flaws in the decision that they were about to make. It's a very good procedure. I guess it doesn't bring many people to reconsider the decision completely, but it will cause them to improve the decision, because they will see failures that they had not anticipated and ways of counteracting or preventing those failures. So, it's a very good idea. I once presented it at Davos, describing it as Gary's idea, and I heard somebody say it was worth coming to Davos just for that idea! That was the CEO of Alcor, a very large company. It's an excellent idea. Not mine, but really good.

How do we understand the halo effect? When a company is failing, the CEO is seen as rigid, we say he doesn't have vision, and so on, while when a company is successful, we say the opposite. The cause and effect in many cases is very hard to understand… it's often reversed.

It's very hard to control that kind of halo effect. Phil Rosenzweig wrote an excellent book on that topic called The Halo Effect. The idea is basically that you infer traits from success or failure. There are at least two mechanisms. You mentioned one of them: it's harder to appear intelligent when things are going badly, and much easier to appear very smart when things are going well. That's the halo effect: the mere fact that things are going badly makes your decision look stupid, and vice versa. It's very powerful, and I think it would be important to control it in some contexts. Before a board decides to fire a CEO, it should consider the halo effect, because the decision may be unjust.

Is there a framework to decide on cause-effect? Are there questions to ask to figure out what caused what? 

The problem is that hindsight is very difficult to control. You cited a sentence from Thinking, Fast and Slow: anything appears predictable after the fact. So we have that profound misunderstanding of the world; the world appears simpler than it really is. That is certainly true and is very difficult to control. But possibly, if you know about it, there is a really obvious question to ask: could you really have predicted this, or do you seem smarter just because you know the outcome? It's a good question to ask oneself.

Your view on uncertainty in Thinking, Fast and Slow is very pertinent: people taking decisions when stakes are high, with pretended knowledge... This is pronounced in stock markets and the money-management business. Is there a way to fix this at the institutional level?

Critical thinking is good; that is one basic way of counteracting it. There are two ideas that can help. One is to listen to your pessimists and to protect them, because pessimists in a group raise doubt when the group is trying to work itself up enthusiastically to support an idea. Pessimists are likely to be unpopular, and people with doubts are likely to feel that they're better off not sharing them. That is very dangerous. So one thing that can be done is to reassure and protect your sceptics and pessimists. The other idea is that you're better off choosing between two things than evaluating each in isolation. Thinking is generally better when it is comparative than when it is absolute. Being able to compare across options will highlight when there is uncertainty in one but much more in the other; that will come up in the comparison. The absolute amount of uncertainty is difficult to assess; comparative uncertainty is much more transparent.

Can you explain the interplay of the theory of loss aversion and the emotion of regret, and how that affects people's decision-making? I've heard you say that the more regrets you harbour, the less likely you are to do well as an investor.

The reason that regret-proneness is deadly for investors is that just about the worst thing investors can do is buy high and sell low. When things are going badly, you're inclined to sell and you don't stick with your decisions. People who are prone to regret are going to do exactly that: when things are going badly, they're going to blame their advisors, blame themselves, change their minds, and sell low. People who are prone to regret should not take many risks. That's the general idea.

Do you harbor a lot of regrets?

No!

What has been your money experience? Where do you invest?

Behavioural economists have generally agreed on the decision to buy index funds, and I just follow the advice of my friends. On the decision of how much to invest in equity and how much in bonds, I have always been too conservative. That comes with my pessimism.

Have you been able to conquer your own sunk-cost fallacy? 

I have never struggled very hard to conquer it. I've been fortunate enough that I've been operating in a range where I don't feel any threat of ending up ruined. If I feel like being a pessimist, I can afford it.

There is a general belief in the market that in the short term, stock prices are unpredictable, but in the long term, they are more predictable. Would you agree with this point of view? It seems natural to believe that there is greater uncertainty in the longer term than in the shorter term.

The idea that stocks rise in the long term is history: in the last 150 years, the market clearly has risen. Now, sometimes there are depressions, recessions and regressions, but by and large, there is a reward for risk that is completely obvious. There's no guarantee that the world will not end, but if the structure of the world remains more or less as it is, then over the long term, stocks will do better. But if you feel that when stocks dip you will have the urge to sell everything, then you shouldn't be in stocks.

When it comes to stock market decisions, can you rely on intuition? Does it work better for traders or long-term investors?

The stock market is the prime example that Gary and I both agreed on where intuition is useless. That is because the first condition for intuition is regularity and predictability in the world. Almost by definition, the stock market is unpredictable. So, this is not the place for intuition.

How do you view Jim Simons's quantitative approach versus Warren Buffett's qualitative approach to investing?

Well, I don't know enough about the individuals. I'm convinced that luck played a role in Buffett's success; there's no question about that. But beyond a certain point, in the first place, Buffett doesn't really operate in the stock market: he buys companies, and when he buys them and invests in them, he affects the market. So Buffett is in the fortunate position that when he makes decisions, the mere fact that he made them has a tendency to make them correct. I would feel more confident in the performance of a quantitative hedge fund than in Buffett's performance; if I were to bet on next year, I would say the quantitative fund is more likely to be successful. But Buffett has been very successful, and he's clearly a very good judge of management and a good judge of the prospects of different ideas.

Buffett seems to be an outlier in his investment approach. He started investing at the age of six and had read all the books in the Omaha Public Library by the age of 11. He has an investing track record of more than 50 years, and rationality has been considered the cornerstone of his approach. Even the principles that you talk about in the book are tenets of Buffett's style of investing. Does that mean there is a large component of luck?

There is no question that there is a large component of luck. When you are operating in a very uncertain environment, consistent success certainly involves an element of luck. There is an element of skill, but you need both. And when you look retrospectively, it all looks like skill, but you should know that more luck was involved than you are inclined to think. Generally, we don't give enough weight to luck.

What aspects of human beings do you think are irreplaceable by machines? You said in your book that machines will make better decisions than humans because we have both cognitive bias and noise to deal with...

You're asking me to make a long-term prediction and I don't believe in forecasting. I really don't know the answer. The interaction between people and artificial intelligence is going to be very complicated because artificial intelligence is improving rapidly and people are not.

What would you like your legacy to be?

I haven't thought about that, oddly enough. I don't think of my legacy. Adversarial collaboration is certainly something I would like to be remembered for. And I would like people to remember me kindly and to like me. But as for the rest, the work that we have done will survive so long as it survives, and then it could be superseded by something else. That's already happening to some extent, and it's inevitable, and it's fine. That's the way things go. So I don't have strong preferences about my legacy. I don't think I take myself so seriously as to worry about my legacy. That's the truth.
