Standard Deviation Challenged
Thursday 24th May 2018

Synopsis: Standard deviation (SD) is used to set safety stocks, while forecasting is used to estimate lead-time usage. The two combine to determine the stock and capacity commitment an organisation must make to achieve its service-level goals. While forecast error is widely understood, standard deviation error is not. Nor, crucially, is the interaction between the two errors. Sometimes a low-side forecast is compensated by a high-side estimate of SD; at other times a low-side forecast is made worse by a low-side SD estimate (and so on).
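The combination described in the synopsis is usually implemented with the textbook normal-approximation formula: cycle stock covers forecast usage over the lead time, and safety stock scales the SD by a service-level factor and the square root of the lead time. A minimal sketch of that standard form (the function and parameter names are illustrative, not the paper's own code):

```python
from statistics import NormalDist
from math import sqrt

def stock_target(forecast_per_period, sd_per_period, lead_time, service_level):
    """Textbook cycle + safety stock target (illustrative, not the paper's
    implementation). Cycle stock covers expected lead-time usage; safety
    stock buffers demand variability at the chosen service level."""
    z = NormalDist().inv_cdf(service_level)       # safety factor for the service level
    cycle = forecast_per_period * lead_time       # expected lead-time usage
    safety = z * sd_per_period * sqrt(lead_time)  # variability buffer
    return cycle + safety

# e.g. forecast 100/month, SD 20/month, 2-month lead time, 95% service level
target = stock_target(100, 20, 2, 0.95)
```

Both an erroneous forecast and an erroneous SD feed this target, which is why the interaction between the two errors matters.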

Originally published by Cranfield


In this paper, first published in 2002, we proposed that:

  1. The errors compound more than they cancel or partly cancel
  2. Other methods, though not field-tested, give better results

One such method is proposed while the research continues.

A note on computerised stock algorithms, both explicit and implicit.

A human forecasting system has an inbuilt gatekeeper. The forecaster decides whether to accept or reject changes suggested by their investigations. By this and other such methods the forecast is made explicit.

By contrast, many system forecasts are implicit and never exposed to critical scrutiny. For example, where the systemised outcome is a suggested purchase order, the forecast and standard deviation calculations are implicit. Where the results mislead, it is common to blame the forecast. But perhaps the forecast is as good as it can be, and the problem lies with the SD estimate.

The origins of this research.

When one company's figures show a $7m-a-year gap between actual and ideal stockholding, with no discernible pattern in the product-by-product differences, the problem must have multiple causes. We needed to investigate and challenge every component (human or system) of the calculation. In this case, standard deviation was just one (the second largest) of more than five contributory causes. We estimate SD was causing $750,000 of extra cost a year in the UK alone.

The size of this prize was a spur to delve into an area we felt had been under-researched. Indeed, the study was in part a legacy of a deep-seated unease with both the theoretical SD calculation and its computer implementation. We therefore set out to answer two questions:

  1. Is past SD a good predictor of future SD?
  2. Even if it is a bad predictor, do the forecast and SD errors cancel? In other words, does it matter? Are we often right for the wrong reason because the errors have cancelled or partly cancelled?
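The second question can be illustrated with a small worked example under the textbook target formula (all figures and names here are hypothetical, chosen only to show the mechanism): a low-side forecast paired with a low-side SD estimate compounds into a large shortfall, while a low-side forecast paired with a high-side SD estimate largely cancels.

```python
from statistics import NormalDist
from math import sqrt

def target(fcst, sd, lead_time=2, service=0.95):
    """Textbook cycle + safety stock target (illustrative form)."""
    z = NormalDist().inv_cdf(service)
    return fcst * lead_time + z * sd * sqrt(lead_time)

true_mean, true_sd = 100, 20
ideal = target(true_mean, true_sd)

# Errors compounding: both the forecast and the SD are estimated low
low_low = target(90, 15)
# Errors partly cancelling: a low forecast but a high SD estimate
low_high = target(90, 26)

print(ideal - low_low)   # a sizeable shortfall against the ideal target
print(ideal - low_high)  # a much smaller gap: right for the wrong reason
```

In the compounding case the target falls well short of the ideal; in the cancelling case the two errors nearly offset each other, masking both.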

Along the way we found just how much SD has been overlooked; it had become the 'forgotten partner' in the whole forecasting-for-stock arena.

Some light-hearted comparisons are helpful for the way they illuminate the difference in mindshare.

  • Demand forecasting has 2.18 million web references, versus 7,500 for standard deviation.
  • Forecasting has 57 books on Amazon; SD has no books and just one paper.
  • Forecasting has an institute (the [American] Institute of Business Forecasting) which, at the last conference I attended, had absolutely nothing on standard deviation.
  • Several professional societies have forecasting SIGs (Special Interest Groups); the ORS (Operational Research Society) is just one. None has an SD SIG.


The research method.

The method simulates a common (though not recommended) computerised safety stock algorithm. Monthly 'sales' for a single product are randomly generated about a mean from a Poisson or similar skewed distribution. In other words, the sales have the demand variability one would expect of a rational market with no acquisition cost or history of shortage.

The sales populate 12 months in each of 1,000 notionally different years. In fact the underlying demand is the same; each month's or year's sales are just samples from the population and therefore vary purely through sampling error.
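The generation step above can be sketched as follows (a hedged illustration using NumPy; the mean, seed, and variable names are assumptions, not the study's actual values):

```python
import numpy as np

rng = np.random.default_rng(42)   # fixed seed for a reproducible sketch
TRUE_MEAN = 100                   # stationary underlying monthly demand (assumed)
YEARS, MONTHS = 1000, 12

# 1,000 notionally different years of monthly sales. Every cell is drawn
# from the same Poisson population, so all month-to-month and year-to-year
# variation is pure sampling error.
sales = rng.poisson(TRUE_MEAN, size=(YEARS, MONTHS))
```

Because the population is known exactly, the 'correct' mean and SD are known in advance, which is what makes the later comparison possible.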

Using 8 different forecast methods, we calculate for each year a one-month-ahead forecast and a historic SD. These are used to calculate a cycle-plus-safety stock target over an array of lead times and service levels, always using the correct transforms. The cycle and safety components are kept separate so we can later determine which component is causing what part of the total error.
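One plausible reading of this step, sketched for a single simulated year with one of the eight methods (a simple 12-month average; the names, the √lead-time scaling, and the lead-time/service-level grid are illustrative assumptions, not the paper's own choices):

```python
import numpy as np
from statistics import NormalDist
from math import sqrt

rng = np.random.default_rng(0)
year = rng.poisson(100, size=12)   # one simulated year of monthly sales

# One candidate forecast method: the 12-month average; the historic SD
# is the sample SD of the same 12 months.
forecast = year.mean()
hist_sd = year.std(ddof=1)

def components(fcst, sd, lead_time, service):
    """Return cycle and safety stock separately, so each error source
    can later be attributed to its component."""
    z = NormalDist().inv_cdf(service)
    cycle = fcst * lead_time
    safety = z * sd * sqrt(lead_time)   # SD scaled to the lead-time horizon
    return cycle, safety

# Evaluate over an array of lead times and service levels
targets = {(lt, sl): components(forecast, hist_sd, lt, sl)
           for lt in (1, 2, 3)
           for sl in (0.90, 0.95, 0.99)}
```

Keeping the two components separate is the design choice that later lets the forecast error and the SD error be blamed (or exonerated) independently.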

Since the 'correct' cycle and safety stock are already known from the base data, we can now compare the computerised predictions with this base and classify the errors.
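The comparison can be sketched like this, exploiting the fact that for a Poisson population the 'correct' SD is the square root of the mean (the 95% service level, 2-month lead time, and all names are illustrative assumptions):

```python
import numpy as np
from statistics import NormalDist
from math import sqrt

TRUE_MEAN = 100
TRUE_SD = sqrt(TRUE_MEAN)          # Poisson: variance equals the mean
z = NormalDist().inv_cdf(0.95)
lead_time = 2

# The 'correct' targets, known from the base population
true_cycle = TRUE_MEAN * lead_time
true_safety = z * TRUE_SD * sqrt(lead_time)

rng = np.random.default_rng(1)
sales = rng.poisson(TRUE_MEAN, size=(1000, 12))
forecasts = sales.mean(axis=1)            # per-year forecast
sds = sales.std(axis=1, ddof=1)           # per-year historic SD

cycle_err = forecasts * lead_time - true_cycle
safety_err = z * sds * sqrt(lead_time) - true_safety

# Classify each simulated year: same-signed errors compound,
# opposite-signed errors cancel or partly cancel.
compound = np.sign(cycle_err) == np.sign(safety_err)
print(compound.mean())   # fraction of years in which the errors compound
```

Counting the compounding years against the cancelling ones is one simple way to answer the paper's second question.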

© 2002 - 2018 Supply Chain Tools Ltd.