Predicting Trouble

Lance Forbes


Originally posted at Forbes.com

It’s been just over three weeks since an oversized rodent named Phil was hoisted aloft by men in top hats and deemed to have predicted six more weeks of winter. Surprisingly, Phil’s not the only groundhog making a name for himself in the seasonal weather forecasting game. Ohioans have their own “Buckeye Chuck” … and he, too, predicted six more weeks of winter. (Evidently that’s the verdict when Chuck simply refuses to show his face to the camera-wielding world.)

So, how have their predictions fared? We're now halfway through that six-week period, and … well … judge for yourself:

There are many things I would call a stretch of days like last week's, when the low temperatures averaged about 10 degrees higher than the average high temperatures, but "winter" isn't one of them.

Whether you love 75-degree days in February or hate them (yes, people like that exist), the month is about to end, and that means ushering in the time of year when predicting the future takes over the American cultural landscape: March Madness is only two weeks away. Two weeks until office productivity nosedives as copy machines turn into bracket-production machines and everyone becomes convinced that their bracket full of predictions will be the one that wins it all.

Of course, everyone knows that most people will get most of their predictions wrong. Forget the perfect bracket: it's nearly impossible just to get all four Final Four teams right.

These failures of forecasting aren’t surprising. In fact, they are fully expected. Whether in weather or in sports, everyone understands that a divergence between the predicted outcome and the actual outcome points to a problem with the prediction, not the outcome.

Obvious as that is, there is one arena where things are often viewed quite differently: the world of corporate performance and financial analysis. There, the predictions of professional analysts are couched as "expectations," "ratings" and "outlooks." When a company arrives at the quarterly earnings reporting period with results that fall short of those predictions, the shortfall is viewed as a sign of a problem with the company rather than with the predictions themselves. The comparison of the two goes something like this: the professional analyst has studied the data inside and out, and by applying his expertise and the wizardry of math, he has charted where the company's performance should have landed. On the other side of the ledger is the company, which simply failed to do what the data said should have been possible.

What seems laughable in nearly every other context, that the entity producing the outcome is at fault for failing to fulfill the prediction, is commonplace in the world of financial media. Imagine how different the modern corporate landscape would be if financial predictions were subject to the same expectation of failure as predictions in other walks of life. Instead, the prediction tail wags the performance dog, and we end up with outcomes like this:

One recent survey of 400 corporate finance officers found that a full 80 percent reported they would cut expenses like marketing and product development to make their quarterly earnings targets, even if they knew the likely result was to hurt long-term corporate performance. – Lynn Stout’s The Shareholder Value Myth

Think about that: Four out of five finance executives would take actions they knew were destructive to the company's health in order to hit a target that was, at the time it was set, merely a prediction of future performance. Astonishing and yet, somehow, not surprising. When performance is judged by whether it aligned with the predictor's forecasts (whether those predictions come from without or from within), it is utterly predictable that measures will be taken to achieve that "success" … even unhealthy ones.

Even a lying groundhog can see the trouble with that.