The Gell-Mann amnesia effect may seem like an interesting behavioral novelty for cocktail conversation, yet it holds an important lesson for doing research in financial markets.
It is a good story, but you will not find it in any behavioral finance discussion; there is no actual research on this effect. Crichton wrote about it years ago in an essay, “Why Speculate?”, where he stated, “I refer to it by this name because I once discussed it with Murray Gell-Mann, and by dropping a famous name I imply greater importance to myself, and to the effect, than it would otherwise have.”
I defined stupidity as overlooking or dismissing conspicuously crucial information. – Adam Robinson
That definition seems obvious, but there has been deeper research on how to define stupidity. Fittingly, this research was published in an academic journal called Intelligence.
Nonetheless, it seems that one of the key ways to generate success in investment management is simply to avoid doing stupid things. Cut the stupidity and you will be more likely to succeed. Unfortunately, that is easier said than done. Stupidity is all around us. We are not talking only about behavioral biases but about the broader problem of a lack of good sense or judgment. Of course, behavioral biases and stupidity do intersect: relying on mental shortcuts can lead to stupidity.
Most data are confirming. New economic data arrive constantly, but these announcements largely reinforce what we already know. First, much economic data move together, so there is limited added or marginal information. Second, investors are biased to look for, and to see, information that confirms their existing views.
“On the road from the City of Skepticism, I had to pass through the Valley of Ambiguity.” – Adam Smith
Most of finance focuses on measurable risk. Less attention is paid to uncertainty, events that cannot be given an objective probability of occurring. Yet those hard-to-measure events provide the greatest opportunity.
Each situation requires a balancing derived from judgment and arising from experience, skills acquired by learning from the past and training for the future.
I have been a close follower of behavioral economics research. This broad research is insightful and has caused me to think more deeply about how to make better decisions. It has certainly reinforced my belief that using algorithms to make decisions is better than discretionary judgment. However, I have read a series of recent papers that have caused me to take a closer look at some of the core behavioral beliefs that have been established in this area. See the work of David Gal and Derek Rucker in the Journal of Consumer Psychology and the recent article in the Observation Section of https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3049660
Ben Bernanke, former chair of the Federal Reserve. “In 2020, Wile E. Coyote is going to go off the cliff and look down.”
Alan Greenspan, also former head of the Fed. “There are two bubbles: a stock market bubble and a bond market bubble.”
Scott Minerd, Guggenheim Partners chief investment officer. The market “is on a collision course with disaster” and the catastrophe will hit in late 2019, with stocks losing 40%.
Jim Rogers, founder of the Quantum Fund. “When we have a bear market, and we are going to have a bear market, it will be the worst in our lifetime.”
From Forbes, “4 Financial Savants Warn About The Great Crash Of 2020” by Larry Light
These four experts are telling us doom is ahead. Call it a Wile E. Coyote moment, a double bubble, the bear of bears, or a collision course with disaster: the prediction is the same, wealth destruction is coming. These are the usual doomsday stories. They may be right, but there seems to be a natural bias toward the dark side. We seem to like it, and pundits keep feeding us these narratives.
“We need to embrace the fact that we don’t know what the next bad outcome is. We need to think outside the box.”
“The world is continuing to change, and we need to constantly reinvent ourselves in this revolving world.”
– John Williams, President of the Federal Reserve Bank of New York
Allow your intuition to guide you to a conclusion, no matter how imperfect — this is the “strong opinion” part. Then – and this is the “weakly held” part – prove yourself wrong. Engage in creative doubt. Look for information that doesn’t fit, or indicators that point in an entirely different direction. Eventually your intuition will kick in and a new hypothesis will emerge out of the rubble, ready to be ruthlessly torn apart once again. You will be surprised by how quickly the sequence of faulty forecasts will deliver you to a useful result. – Paul Saffo
Mistakes were made. Mistakes will be made. Sorry, I made a mistake. Many think that an apology for mistakes is all-forgiving, but the definition or type of mistake makes all the difference. Don’t make them on purpose.
I have written about Sherman Kent for years as someone who grappled with uncertainty and the language we use to discuss it. There is imprecision in the words we use, such as “likely” and “probable”. Kent was a professor at Yale University who was called in to co-head the CIA’s Office of National Estimates and improve its forecasting skill. Getting the chance of a bad event wrong has real effects. During his tenure, Kent wrote an important piece on the use of ambiguous words to describe probability estimates. He worked to end the squishy, vague language that provided political cover for assessment authors.
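Kent’s remedy was to pin each estimative phrase to a numeric range, so “probable” meant roughly the same odds to every reader. A minimal sketch of that idea in Python follows; the ranges approximate Kent’s published chart, while the lookup helper is our own illustration, not anything Kent or the CIA produced:

```python
# Approximate ranges from Sherman Kent's "words of estimative probability"
# chart (percent chance). The helper below is an illustrative sketch only.
KENT_SCALE = {
    "almost certain":       (87, 99),
    "probable":             (63, 87),
    "chances about even":   (40, 63),
    "probably not":         (20, 40),
    "almost certainly not": (1, 20),
}

def words_for(prob_pct: float) -> str:
    """Translate a numeric probability estimate into Kent's language."""
    for phrase, (low, high) in KENT_SCALE.items():
        if low <= prob_pct <= high:
            return phrase
    return "certain" if prob_pct > 99 else "impossible"

print(words_for(75))  # an analyst estimating 75% writes "probable"
```

The point of the mapping is the discipline, not the exact cutoffs: an author who must commit to a number can no longer hide behind squishy language.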
“There’s a lot of noise when making a decision. Not in the decision itself, but in the making of the decision. It is possible that an algorithm, and even an unsophisticated algorithm, will do better because the main characteristic of algorithms is they’re noise-free. You give them the same problem twice, you get the same result. People don’t.”
– Daniel Kahneman keynote at the Morningstar Investment Conference in Chicago 2018
“An algorithm could really do better than humans, because it filters out noise. If you present an algorithm the same problem twice, you’ll get the same output. That’s just not true of people.”
“But humans are not very good at integrating information in a reliable and robust way. And that’s what algorithms are designed to do.”
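Kahneman’s claim, that the same problem presented twice yields the same answer from an algorithm but not from a person, can be made concrete. The sketch below is our own illustration, not from the talk: a fixed scoring rule scores an identical case twice, while a simulated “judge” applies the same rule plus random noise standing in for mood, fatigue, and order effects:

```python
import random

def algorithm_score(income: float, debt: float) -> float:
    """A fixed scoring rule: identical inputs always give identical output."""
    return income * 0.7 - debt * 0.3

def human_score(income: float, debt: float, rng: random.Random) -> float:
    """The same rule plus Gaussian noise, simulating an inconsistent judge."""
    return income * 0.7 - debt * 0.3 + rng.gauss(0, 5)

rng = random.Random()
case = (80.0, 40.0)  # the identical problem, presented twice

# The algorithm is noise-free: both presentations match exactly.
print(algorithm_score(*case) == algorithm_score(*case))  # True

# The simulated human almost surely gives two different answers.
print(human_score(*case, rng), human_score(*case, rng))
```

The noise term is the whole story here: it is not that the human rule is worse, it is that it is not reliably the same rule twice.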