I’ve Seen The Future And It Will Be.

    You can’t predict the future. But you have to.

Business would be easy (and boring) if it weren’t for The Future. Everybody frets about It, and the more senior you are, the further ahead you fret. What else is running a company but trying to harness, control, or transform Its inescapable countdown to the Present? From a friendly clash of ideas at a small conference in Bruges, we examine the fundamentals of two opposing schools of thought. Who’s right; who’s wrong? Or is there some unifying theory?

In 2011, the U.S. Intelligence Advanced Research Projects Activity (IARPA) launched a forecasting tournament to improve methods for predicting geopolitical events. This initiative followed a series of high-profile intelligence failures, including the 9/11 attacks and the collapse of the Soviet Union, which had caught analysts off guard.

Among the participants were University of Pennsylvania professors Phil Tetlock and Barb Mellers. Their research project, “Good Judgment,” enlisted 3,800 volunteer forecasters to make predictions on a wide array of global topics—from election outcomes to the thickness of Arctic sea ice to the timing of international trade agreements.

After the first year, the top 2% of forecasters—those with the most accurate predictions—were invited to join small teams of about ten members. These “superforecasters” continued submitting individual forecasts but now had the opportunity to collaborate and debate within their groups. The research team studied whether this teamwork improved accuracy and experimented with statistical aggregation methods to refine collective forecasts.
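
To illustrate what such aggregation can look like: one technique reported in the academic literature around the tournament is to average forecasts in log-odds space and then “extremize” the result, pushing the consensus away from 50%. The sketch below is a minimal, hypothetical Python version of that idea, not the project’s actual production method; the exponent a is an assumed tuning parameter.

```python
import math

def extremized_mean(probs, a=2.5):
    """Average probability forecasts in log-odds space, then 'extremize' by
    exponent a: pooling several independent signals justifies more confidence
    than the average individual expresses. a is an assumed tuning knob."""
    logits = [math.log(p / (1 - p)) for p in probs]
    mean_logit = sum(logits) / len(logits)
    return 1 / (1 + math.exp(-a * mean_logit))

# Five forecasters each lean "yes" at 60-70%...
print(extremized_mean([0.60, 0.65, 0.70, 0.60, 0.62]))  # ~0.80 vs a simple mean of ~0.63
```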

Good Judgment’s performance was extraordinary. The superforecasters—everyday individuals with no specialized training—consistently outperformed domain experts and even professional intelligence analysts with access to classified information. In some cases, their forecasts were up to 30% more accurate.
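
“Accuracy” here has a precise meaning: probabilistic forecasts in tournaments of this kind are scored with Brier scores, the mean squared error between stated probabilities and what actually happened. A minimal sketch, with made-up numbers:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and what happened
    (1 = event occurred, 0 = it did not). 0.0 is perfect; an always-50%
    forecaster scores 0.25. (The tournament used a two-sided variant.)"""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Three hypothetical questions: forecaster said 90%, 20%, 70%;
# the outcomes were yes, no, yes.
print(brier_score([0.9, 0.2, 0.7], [1, 0, 1]))  # (0.01 + 0.04 + 0.09) / 3 ≈ 0.047
```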

Their dominance was so clear that by 2013, IARPA chose to end its partnerships with other teams and continue exclusively with Good Judgment and its 250 superforecasters.

I was one of them.

“Precogs?”   //   The Future is UNKNOWABLE!

The forecasting tournament eventually wound down, and its findings became Phil Tetlock’s 2015 book Superforecasting. I participated in a few other similar research projects, and was invited to present on superforecasting techniques at a CFO conference in Bruges, where I argued that the systematic approaches used by top performers (probability calibration, evidence evaluation, and belief updating) could be valuable for corporate financial planning. My presentation emphasized that effective forecasting relies on learnable skills rather than innate talent.
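
To make “belief updating” concrete: at its core it is Bayes’ rule, revising a probability estimate as each new piece of evidence arrives. A minimal sketch with hypothetical numbers (both likelihoods are assumptions for illustration):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """One step of Bayes' rule: revise a probability after a piece of evidence."""
    num = prior * p_evidence_if_true
    return num / (num + (1 - prior) * p_evidence_if_false)

# Hypothetical: start at 30% that a trade deal closes this year, then observe
# a signal twice as likely in worlds where it closes (80%) as where it doesn't (40%).
p = bayes_update(0.30, 0.80, 0.40)
print(round(p, 3))  # 0.462 -- a measured update, not a jump to certainty
```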

During the conference panel discussion, another speaker, a corporate finance professor, advocated a perspective fundamentally opposed to mine. He argued that the future is inherently unpredictable and that efforts to forecast specific outcomes are a waste of time. Instead, he promoted scenario planning as the only approach worth pursuing for finance professionals, a more productive alternative to prediction-based methods.

While I disagreed with his position—particularly its absolutist framing—the debate sparked my curiosity about this fundamental conflict between prediction and planning approaches. Rather than dismissing his arguments, I decided to investigate scenario planning more thoroughly. Following Ken Wilber’s principle that no person is capable of being 100% incorrect, I set out to steelman the case against prediction, seeking to understand what valuable insights might emerge from genuinely engaging with the opposing viewpoint.

I don’t believe that any human mind is capable of 100 percent error… Nobody is smart enough to be wrong all the time.

― Ken Wilber

To address a crucial point immediately: I agree with the professor that the future is not merely uncertain but inherently unknowable. However, some situations are more unknowable than others, and prediction need not achieve perfect accuracy. Being somewhat less wrong can still provide value. Even so, the potential pitfalls of prediction are numerous and serious.

These problems fall into three distinct categories: limitations in the forecasting methods and models themselves, human behavioral biases interfering with the prediction process, and communication breakdowns between forecasters and decision-makers.

Category 1: Limitations of Methods and Models

To start with the methods: most predictions rely heavily on past data and historical patterns, assuming that previous relationships will persist into the future. This assumption frequently fails precisely when accurate forecasting becomes most critical: during periods of rapid change, structural breaks, or unprecedented circumstances. Forecasting exercises also typically concentrate on specific, preselected variables while neglecting the broader environment. This myopia blinds forecasters to crucial interconnected factors, emerging developments, and systemic changes that can render specific predictions meaningless. When the consequences of these relatively unlikely events are disproportionate, the prediction failure is enormous. A generational example is the Great Financial Crisis of 2008, whose dynamics Nassim Nicholas Taleb famously anticipated in his landmark book “The Black Swan”.
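
A toy illustration of that failure mode, with entirely hypothetical numbers: fit a trend line to a decade of smooth history, extrapolate, and watch a structural break make the forecast useless.

```python
# Hypothetical toy series: ten "years" of steady ~5% growth.
history = [100 * 1.05 ** t for t in range(10)]

# Fit a straight trend line to the past (ordinary least squares by hand),
# which is what naive extrapolation implicitly does.
n = len(history)
xs = list(range(n))
x_mean, y_mean = sum(xs) / n, sum(history) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
         / sum((x - x_mean) ** 2 for x in xs))
intercept = y_mean - slope * x_mean

forecast_y12 = intercept + slope * 12  # extrapolated "year 12"
actual_y12 = history[-1] * 0.7         # hypothetical structural break: a 30% collapse

print(f"extrapolated: {forecast_y12:.0f}, actual after the break: {actual_y12:.0f}")
```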

Whether it’s cause or effect is unclear, but because of this dynamic most forecasting efforts focus on relatively short time horizons, failing to account for longer-term structural changes or rare but high-impact events that can fundamentally alter the predictive landscape. And even well-built models can rest on incorrect assumptions about causal relationships and key variables. Reality tends to be far more complex than the models, and complex systems are rife with non-linear effects, emergent properties, and all kinds of other curveballs.

Category 2: Human Behavioral Biases

As if the models weren’t bad enough, they are typically wielded by human beings, opening up an entire second category of trouble. Human judgment suffers from predictable cognitive limitations, including confirmation bias, overconfidence, groupthink, reliance on mental shortcuts, hasty generalizations from small samples, and emotional interference with rational analysis. Expert overconfidence proves particularly problematic, as domain knowledge can increase certainty without improving accuracy.

Accountability Evasion: Many forecasters employ deliberately vague language that makes post-hoc accuracy assessment difficult or impossible. This ambiguity allows poor forecasters to claim vindication regardless of outcomes—a problem especially acute among television pundits and public commentators who benefit from maintaining credibility without accepting responsibility.

Institutional Gaming: When forecasting becomes institutionalized, perverse incentives emerge. Forecasters may cherry-pick favorable data, adjust methodologies to produce desired results, or engage in statistical manipulation. Organizations often prefer conventional failures over original mistakes, creating pressure to conform rather than innovate.

Anchoring and Path Dependence: Initial forecasts unduly influence subsequent predictions and decisions, creating path dependence that prevents adaptive responses to changing circumstances.

Category 3: Communication and Decision-Making Breakdowns

Technical Translation Failures: The handoff between forecasters and decision-makers introduces critical communication gaps. Leaders often receive predictions without understanding underlying reasoning, methodology, or confidence intervals, leading to misplaced confidence in uncertain projections.

Leadership-Forecasting Cultural Clash: Qualities valued in organizational leaders—confidence, decisiveness, clear vision—directly conflict with intellectual habits that produce accurate forecasts: probabilistic thinking, intellectual humility, willingness to change positions, and comfort with admitting error.

Inadequate Uncertainty Communication: Few forecasters excel at communicating the inherent uncertainty in their predictions. Decision-makers rarely conduct sensitivity analyses or Monte Carlo simulations to understand the range of possible outcomes, instead treating point estimates as gospel (a minimal sketch of the Monte Carlo idea follows this list).

Oversimplification Pressure: Organizational and political pressures favor simple, definitive answers over nuanced, probabilistic assessments. This dynamic incentivizes forecasters to present false precision and decision-makers to ignore uncertainty.

Question Framing Problems: Both forecasters and decision-makers may fail to ask the right questions initially, focusing on easily quantifiable metrics while ignoring more fundamental but harder-to-measure factors.
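
As promised above, here is a minimal, hypothetical Monte Carlo sketch of the kind of analysis decision-makers rarely see: instead of one point-estimate NPV, sample the uncertain inputs and report a range. All inputs (growth, margin, discount rate) are invented for illustration.

```python
import random

def npv_simulation(n=10_000):
    """Monte Carlo sketch: sample uncertain inputs instead of fixing them,
    and return the 5th/50th/95th percentiles of the resulting 5-year NPVs."""
    results = []
    for _ in range(n):
        growth = random.gauss(0.05, 0.03)   # hypothetical: 5% +/- 3% revenue growth
        margin = random.gauss(0.12, 0.02)   # hypothetical: 12% +/- 2% profit margin
        revenue, npv = 100.0, 0.0
        for year in range(1, 6):
            revenue *= 1 + growth
            npv += revenue * margin / 1.08 ** year  # assumed 8% discount rate
        results.append(npv)
    results.sort()
    return results[n // 20], results[n // 2], results[-n // 20]

p5, p50, p95 = npv_simulation()
print(f"5-year NPV: 5th pct {p5:.1f}, median {p50:.1f}, 95th pct {p95:.1f}")
```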

Additional Arguments Against Prediction

Resource Opportunity Cost: Extensive forecasting efforts consume significant time, money, and cognitive resources that might generate greater value when deployed elsewhere, particularly when accuracy improvements are marginal.

False Security: Predictions can create dangerous illusions of control and certainty, leading to inadequate contingency planning and reduced organizational adaptability.

Complexity Underestimation: Many systems exhibit emergent properties, non-linear behaviors, and feedback loops that make them fundamentally unpredictable regardless of analytical sophistication.
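
A classic demonstration of that fundamental unpredictability is the logistic map: a one-line, fully deterministic system in which two starting points differing only in the ninth decimal place diverge completely within a few dozen steps. A minimal sketch:

```python
def logistic(x, r=4.0, steps=40):
    """Iterate the logistic map x -> r*x*(1-x); deterministic but chaotic at r=4."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Two starting points differing by roughly one part in a billion...
print(logistic(0.400000000))
print(logistic(0.400000001))  # ...land somewhere entirely different after 40 steps
```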

This framework suggests that prediction failures occur not merely due to inherent uncertainty, but through systematic breakdowns across methodology, human psychology, and organizational communication—making the case for scenario planning’s emphasis on preparation over prediction increasingly compelling.

If a man is considered guilty for what goes on in his mind
Then give me the electric chair for all my future crimes

― Prince, “Electric Chair”

Integrating both.

I’ve Seen The Future And It Will Be.

Credits:

Words: SV

Photos: Minority Report

Video: “Mad Men”, pilot episode, AMC

Ken Wilber quote: https://markmanson.net/ken-wilber