the cost of procrastination

This article is part of the series decisions

Discussions of decisions and problem solving.

  1. decisions, decisions
  2. perfect decisions, or timely decisions?
  3. picking opportunities
  4. the cost of procrastination
  5. solving the right problem
  6. plan A and plan B
  7. before you decide

In my last post, I looked at strategies for picking opportunities. I left off with a cautionary message about cost and procrastination. And that is the subject of this post.

People justify procrastination on the grounds that “something better will turn up”. Viewed through the lens of statistics and probability, this is a fraught strategy. The inferior outcomes it leads to are quite frightening, and can be very expensive. Greed sometimes plays a role and produces a similar result: holding out for a higher price may not deliver the best outcome, as we will see in this article.

This article posits a real and substantial cost to procrastination. Beyond those direct costs and the opportunity costs, there is also the cost of the suboptimal position you end up in because you delayed.

In looking at the opportunity selection strategies in the previous article, I carefully ignored the attendant costs involved.

In the “secretary problem”, there are two obvious costs. The first cost is the time spent interviewing – known as the observation cost. This is the opportunity cost of taking time to perform the interview, when you could be doing something more productive. The second cost is another opportunity cost, and is the cost of not having a secretary. Until you choose to hire, you have no secretary, and consequently your productivity is lower because you are doing all those things that otherwise the secretary would do for you.

Factoring in these costs transforms the problem into something a little different. You should be looking for the candidate who optimises your cost/benefit position, rather than the absolute best candidate. A better, but later, candidate might not be a better solution once you account for the costs.
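To get a feel for the effect, here is a minimal simulation sketch under assumptions of my own – candidate quality scored uniformly between 0 and 1, and a fixed cost per interview – so it is an illustration, not the formal treatment from the literature. It compares the net outcome of the classic “observe, then leap” rule for different lengths of observation phase once interviewing costs something.

```python
import random

def average_net_outcome(n=50, cutoff_frac=0.37, interview_cost=0.01, trials=20_000):
    """Average net outcome of the 'observe, then leap' rule when every
    interview costs something. Candidate quality is assumed uniform on [0, 1]
    (an assumption for illustration), and the outcome is the hired candidate's
    quality minus the total interviewing cost."""
    cutoff = max(1, int(n * cutoff_frac))
    total = 0.0
    for _ in range(trials):
        candidates = [random.random() for _ in range(n)]
        benchmark = max(candidates[:cutoff])       # observe only, hire no one
        hired, interviews = candidates[-1], n      # default: settle for the last
        for i in range(cutoff, n):
            if candidates[i] > benchmark:          # first one to beat the benchmark
                hired, interviews = candidates[i], i + 1
                break
        total += hired - interview_cost * interviews
    return total / trials

# The classic 1/e (~37%) observation phase maximises the chance of landing the
# single best candidate; once interviews cost money, a shorter observation
# phase gives a better net result.
for frac in (0.10, 0.20, 0.37, 0.50):
    print(f"observe first {frac:.0%}: net outcome {average_net_outcome(cutoff_frac=frac):.3f}")
```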

In the statistics literature, this is known as the “house-selling” problem.

As described in the literature, this problem is what game theorists call a “partial-information” game, rather than a “zero-information” game, of which the secretary problem is the classic example.

In this model, you know the range of acceptable offers. You do not know the order in which these offers present, but you do know your costs.

Your acceptable range will have come from your market research: you know the most you are likely to get and the least you will accept. You also know your costs – agent’s fees, advertising costs, holding costs and the like. These costs are likely to vary directly with elapsed time, an important point I will return to.

You are going to reject any offer below the range, as it is less than your acceptable minimum. The chances of successfully revisiting previous offers are so small as to be inconsequential.

The question you face is this: given an offer, what is the probability that you will get a better one? Not just a slightly higher price, but one high enough to overcome the costs you will incur by waiting.

The mathematics informing your strategy is over fifty years old, and it tells us that the biggest influence on your decision should be the ratio of your costs to the selling price range.

While your costs are small compared to the range, you can afford to be choosy. This makes sense, because a change in your cost base at that level doesn’t have a big impact on the probability of a better offer.

But once your costs start to grow as a proportion of your price range, the statistics say you are heading towards a problem. With every day that passes, your costs increase and your options diminish.
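To make that concrete, here is a minimal Monte Carlo sketch. The model is a deliberately simple one of my own – one offer per period, offers spread uniformly across your acceptable range – so it will not reproduce the exact figures quoted below, but it shows how quickly the cost-to-range ratio turns the odds against waiting.

```python
import random

def p_better_net_offer(current_frac, cost_ratio, periods=10, trials=100_000):
    """Probability that a later offer beats the offer in hand once the extra
    holding cost of waiting for it is subtracted.

    current_frac -- where the offer in hand sits within the acceptable range (0..1)
    cost_ratio   -- holding cost per period, as a fraction of the range
    Assumes one offer per period, uniformly distributed across the range;
    both assumptions are illustrative, not the article's exact model.
    """
    wins = 0
    for _ in range(trials):
        for m in range(1, periods + 1):
            offer = random.random()      # next offer, as a fraction of the range
            if offer - m * cost_ratio > current_frac:
                wins += 1
                break
    return wins / trials

# For an offer sitting in the middle of the range: the higher your costs are
# relative to the range, the faster the chance of doing better (net of the
# cost of waiting) collapses.
for ratio in (0.01, 0.05, 0.10, 0.20):
    print(f"cost ratio {ratio:.0%}: P(better net offer) = {p_better_net_offer(0.5, ratio):.2f}")
```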

So, what do you do?

The following example may help.

Imagine you want to sell your house. You have a minimum price of $950,000, and your trusted adviser, the real-estate agent, has led you to set an upper bound of $1,050,000. This gives a price range of $100,000.

Your costs are $10,000 per month, being mortgage, advertising fees and other attendant costs.

Early in the process, you receive an offer of $1,000,000. Clearly something that must be considered, but you will be asking yourself whether there is a better offer out there. The statistics help you here – there is a better than even chance that there is. Not guaranteed, but certainly significant at around 68%.

You wait. In fact you wait three months, and an offer of $951,000 arrives. It is within your band, so you are going to consider it, but what is the probability that something better may arrive? You will be horrified to learn that the mathematics tells us it is now less than an even chance. This may very well be the best offer you are going to get. You have just lost $79,000 – the $49,000 price difference plus $30,000 in holding costs – and you have lost it through delay.

The following graph shows the probability of a better offer – not just a higher price – plotted against cost. Given that cost is a function of time, you can also read this graph as the probability of a better offer as a function of time.

A graph showing probability of a better offer as a function of cost
what is the probability of a better offer?

Wait for something better to turn up? Something so good it will overturn the cost to benefit equation? Good luck. Once you are past a relatively short period of time, the numbers are against you.

There are two reasons why people make bad decisions in these scenarios. The first is that they move too soon and don’t explore the options. The second is that they move too late and are forced into suboptimal outcomes. The graph makes this obvious. While you are early in the cycle, you should be looking for better offers. However, once you cross the point where your costs are 15-20% of your range, you had better be very decisive.

A digression – at what point do the odds turn against you? The answer: when your cost ratio is around 1/2e, or roughly 0.18, which is where that 15-20% figure comes from. Half the magic number, 1/e, I have discussed before.

So much for the partial-information game. What I find intriguing is the “full-information” game. It could also be called the procrastinator’s delight.

In this scenario, you know that the next opportunity will be better than any previous. You may even know when it will arrive.

In technology decisions, Moore’s Law sits in the mind of every purchaser. In its popular form it says that the price/performance of technology doubles every 18 months. Moore was actually describing transistor counts on chips, but the idea now has much wider application, and people expect it of most technology. Any consumer electronics device you buy will be superseded quickly, and the new one will have better features and be cheaper.

Should you buy something now, or should you wait for the next, indubitably better iteration?

This is purely an arithmetic problem.

For the purposes of this discussion we are using price/performance as an analogue for cost/benefit.

First, you need to establish your cost of not having the device. Drawn on a graph, this will be a straight line, angled down. Next, you draw today’s benefit line: a straight line, angled upwards. You then draw the benefit line for the next generation. Now for the important part – this new line does not start at zero on the y-axis; it starts from the corresponding point on your cost line, because you have been bearing the cost of going without while you waited.

For Moore’s Law you should arrive at something like the following graph. Note that in this graph the time periods are six months, so you are waiting 18 months before acting.

A graph showing a benefit comparison using Moore's Law
benefit comparison – Moore’s Law

This tells us something immediately. If you wait 18 months for the next generation under Moore’s Law, you will achieve parity in three years, and be wildly better off thereafter.

But, is that good enough?

It doesn’t consider obsolescence and replacement. It does show that it takes some time to catch up, and once you consider the time cost of money, waiting becomes even less attractive.

Moore’s Law is very, very aggressive, and is not generally seen outside pure technology. Cars certainly don’t follow it, and neither do all consumer technologies. If the scenario were a 10% improvement every year, then waiting for the next generation would be something your grandchildren might thank you for. Don’t believe me? Draw the chart yourself.

The mathematics for your own chart is simple – it is just solving some simultaneous equations to find intersection points.
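If you would rather compute the crossover than draw it, here is a minimal sketch of that calculation. It treats the cost and benefit lines as straight cumulative lines, per the construction above; the rates and the improvement factor are placeholder assumptions to be replaced with your own numbers, so the crossovers printed below are illustrative only.

```python
def parity_point(benefit_rate, cost_rate, wait_periods, improvement):
    """Periods from today until 'wait for the better device' catches 'buy now'.

    Buy now : value(t) = benefit_rate * t
    Wait    : value(t) = -cost_rate * wait_periods
                         + improvement * benefit_rate * (t - wait_periods)
    Setting the two equal and solving for t gives the parity point."""
    extra = benefit_rate * (improvement - 1)
    if extra <= 0:
        return float("inf")   # the later device is no better: you never catch up
    return wait_periods * (cost_rate + improvement * benefit_rate) / extra

# Moore's-Law-style doubling after an 18-month wait (three six-month periods),
# with cost and benefit rates of 1 unit per period as placeholder assumptions.
print(parity_point(benefit_rate=1.0, cost_rate=1.0, wait_periods=3, improvement=2.0))  # 9 periods

# A 10% improvement after waiting one year (periods are years here):
# the payback stretches out to around two decades.
print(parity_point(benefit_rate=1.0, cost_rate=1.0, wait_periods=1, improvement=1.1))  # 21 years
```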

If your scenario were a 10x improvement next month, then you would be mad not to wait. Anything less substantial than Moore’s Law becomes, at best, debatable. In fact, it is most likely never to pay off, given the time cost of money.

The point of this exercise is simple. Unless there is a compelling reason to wait, such as a wildly improved cost/benefit equation, you are best placed to act now.

That, of course, is the whole point of this post.

The strategy of “waiting for something better” usually has only two outcomes – one rare, and one common. If you are unusually lucky and beat the probabilities, there may be some advantage in waiting. However, if you are no luckier than everyone else, the statistics and probabilities tell you that by procrastinating you will achieve an outcome that is less than ideal. You may never catch up to the offer on the table today. Further, the longer you delay, the worse it gets, and the hope that something better will turn up fades.

further reading

For the statisticians and mathematicians, please refer to the 1963 paper by Chow and Robbins, “On Optimal Stopping Rules”.

The paper can be obtained from springer.com

about the author

Keith Pfeiffer was born in the UK at an early age and migrated to Australia shortly thereafter. He has a passion for his technology career, literature, music performance, and of all things, Indian cuisine.
