The Trouble With Tunnel Vision – An Error That Plagues Insights
By Andrew Grenville, Chief Research Officer | August 11, 2020
“It is that which we do know which is the great hindrance to our learning, not that which we do not know.”
~ Claude Bernard
“I knew it, right from the beginning.” Ever said that? I have, and I regret it now. When we jump to a conclusion early in a research exercise, it can feel so right that we end up ignoring other evidence—information that might lead us to a different conclusion. We’re not the only professionals who face this challenge. In this article we dive into tunnel vision and look at some examples of how it bedevils detectives and lawyers—too often leading to wrongful convictions. By better understanding tunnel vision and similar biases, we can be on guard for them in our own insight process.
Into the tunnel
Tunnel vision is a pernicious bias. It leads us to conclusions that feel right but are wrong. Tunnel vision gives us answers we expect, while obscuring the truth.
Tunnel vision is linked to confirmation bias, hindsight bias and outcome bias. WYSIATI (“what you see is all there is”) can also be involved, as people grapple with scant data. And belief persistence bias—where people cling to their original conclusions despite contradictory evidence—is also a factor.
Hindsight bias is the tendency to see events that have already occurred as more predictable than they were before they happened. It’s the “knew-it-all-along” effect. Hindsight bias oversimplifies the past and ignores the reality of uncertainty, doubt, and complication. But you already knew that, right?
Outcome bias comes in when we judge a result by what happened rather than by how we got there, ignoring randomness and other factors along the way. Let’s say your friend does well in real estate, so you decide to invest in it too. It just seems like an obviously good idea. But have you thought about how the economy is doing, how much housing stock is being built, and what the demographic trends are? Are they the same as when your friend made her money? Outcome bias leads us to oversimplify problems and make poor choices as a result.
Beware the tunnel
Tunnel vision is dangerous because it can direct us to find what we expect, rather than what is real. And it can cause us to oversimplify and be certain of our incorrect conclusions. When, after the presentation of research results, the product manager says, “that’s what I expected,” you have to wonder whether you or they—or indeed both of you—were the victims of tunnel vision. It is an important question to ask, because tunnel vision can have powerfully misleading effects. It shows up tragically in the world of policing and justice.
Many wrongful convictions can be traced to tunnel vision—with police and prosecutors assuming they have identified the perpetrator. They then focus on evidence that supports that belief, while ignoring findings that would reveal they have come to the wrong conclusion.
Law professors Keith Findley and Michael Scott wrote a classic paper entitled “The Multiple Dimensions of Tunnel Vision in Criminal Cases.” Keith Findley is a former public defender who teaches at the University of Wisconsin Law School. He was co-founder of the Wisconsin Innocence Project. Michael Scott is a former police chief with a law degree from Harvard who teaches at Arizona State University. He is the director of the Center for Problem-Oriented Policing.
They define tunnel vision as a “‘compendium of common heuristics and logical fallacies,’ to which we are all susceptible, that lead actors in the criminal justice system to ‘focus on a suspect, select and filter the evidence that will “build a case” for conviction, while ignoring or suppressing evidence that points away from guilt.’ This process leads investigators, prosecutors, judges, and defense lawyers alike to focus on a particular conclusion and then filter all evidence in a case through the lens provided by that conclusion. Through that filter, all information supporting the adopted conclusion is elevated in significance, viewed as consistent with the other evidence, and deemed relevant and probative. Evidence inconsistent with the chosen theory is easily overlooked or dismissed as irrelevant, incredible, or unreliable.”
In this fascinating paper they review the now notorious wrongful conviction of Steven Avery, made famous by the Netflix series Making a Murderer.
Round up the usual suspects
I spoke with a New York City detective, who has been granted anonymity because he was not authorized to speak publicly for this book. He is all too aware of the limitations of identification and just how easy it is for confirmation bias to creep in. He said, “Say you got robbed on the corner of 58th Street and Lexington Avenue. I know a guy who does robberies over there all the time. I’m going to have the victim look at a lineup that’s got Mickey in it, because Mickey did a robbery over there three weeks ago, and I bet he did this one, too.
“So, I put Mickey in there. Mickey looks kind of like it. And the victim says, ‘Well, yeah. Number four looks kind of good.’ Then it’s like, all right. Number four it is. It’s Mickey. So, that was the standard, and obviously it’s a terrible standard. That’s why you get a lot of these bad IDs. You show people mugshots. I’ve had people pick out people from mugshots who were in jail at the time of the incident. They’re positive that’s the guy, and you’ll go ‘this guy was on Rikers Island when this happened.’ I’ve just seen so many people picking people who were in jail, out of the country, in a different state. It’s just such an unreliable metric.” What he prefers, and is finding increasingly accessible, is video footage—an important new data source for detectives.
It sounds a bit ridiculous to be using unreliable techniques that are vulnerable to confirmation bias. You might think it’s something we’d never do in the world of insights. But consider this example.
Compute this
A computer manufacturer has a new laptop feature that their R&D department has invested heavily in. The product manager wants to get feedback on how important this new feature is, and she approaches the insights team to conduct some focus groups with tech enthusiasts.
At the groups, people enthuse about the new feature, and the product manager and R&D team sit happily in the backroom, sipping lite beer and red wine and contentedly popping M&Ms.
The insights team know that a focus group is not reliable all on its own, so they do a quick concept test on the new feature with a similar cohort of people interested in technology. The results are pretty good, with very high likeability scores. Compared to concepts they have tested before, this new feature does slightly better than average. The product manager uses the results of the focus group and concept test as a rationale for heavily promoting the new feature. Sound familiar?
Because of the amount of money and time invested in the new feature, the investigation is practically dripping in confirmation bias right from the start, not to mention sunk cost bias. Focus groups are well understood to suffer from groupthink and social desirability bias, and they provide very small and generally unrepresentative samples of people willing to give up their evening to talk about your product.
Stock the groups with tech enthusiasts and ask how important the feature is and, guess what, tech enthusiasts are enthusiastic. By focusing on how important the feature is, the team has asked a System 2 logical question about a System 1 emotional reaction, thereby preordaining the answer.
People don’t really know what is important, because that reaction is unconscious and emotional. But ask them whether the feature is important and they say “of course,” because when people are asked what is important, they say everything is important.
When we ask “why” they feel it is important, their minds instantly generate a socially acceptable and logical rationale for their answers—even though we know people cannot accurately report why, because they are not conscious of their own thought processes.
The concept test has people rate the new feature in isolation and then compares that reaction to a database consisting mostly of concepts that were weeded out. The fact that it rates slightly above average does not mean much, but the product manager happily latches on to it because it confirms what she knew all along: this new feature is going to be great!
The product is launched with tremendous fanfare about the new feature. The market yawns, and telemetry data shows the feature isn’t used very much. The feature gets quietly dropped when the next model is rolled out.
This scenario, and ones like it, are all too familiar. They are great examples of tunnel vision in action. The tunnel vision is identifiable to an outsider, even though the product manager and the R&D team would probably not be conscious of it. And techniques that are known to be problematic get used anyway. But that’s okay, because we’re really used to doing focus groups and asking what’s important—just like police are really used to using mugshots.
Seeing the light
When we in the insights world are affected by tunnel vision, the impact is not as visible or dramatic as a wrongful conviction. It might lead to a failed product or a missed opportunity.
But it’s hard to quantify the economic impact of tunnel vision—mainly because it is opportunity lost. And nobody is fighting to expose the error of our ways, so it goes undetected. But that’s not really an acceptable situation. We need to grapple with our biases. We need to fight back. The question is how?
The good news is we are not the first group to realize that biases—cognitive and otherwise—are preventing us from coming to full and factual insights. Other sense-makers have led the way. More on that in our next article.
This article has been adapted from Eureka! the science and art of insights, due out this Fall.