Capacity Planning: A Play in Three Acts

The goal of capacity planning is to ensure that you always have sufficient but not excessive resources to meet customers’ needs in a timely fashion. The process unfolds as a play in three acts.

The first act is to instrument (measure) what’s going on. As Mark Twain said in an interview with Rudyard Kipling, “Get your facts first, and then you can distort them as much as you please.”

The second act is to forecast the expected workloads (the demand to be placed on the system), and in the third act you model various combinations of resources to determine the least costly combination that gets the job done with the response times and service levels you require. But as Shakespeare wrote in Hamlet’s “To be, or not to be” soliloquy, “Aye, there’s the rub.”

Act Three of our play is a model, and this model has to be validated, which means proven correct. No model can be extrapolated indefinitely beyond the range over which it was validated. If your measurements in Act One were based on 10 transactions a minute, your forecast in Act Two is for 500 transactions a minute, and your model was never validated beyond 60 transactions a minute, it is unlikely that the model will accurately predict behavior at 500 transactions per minute. Capacity planning is iterative and requires that you constantly revalidate your models.
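
The point about validated ranges can be made concrete with a small example. The following sketch is illustrative only (the data, function names, and ranges are assumptions, not from the text): it fits a simple linear model of response time versus transaction rate from Act One measurements and refuses to extrapolate beyond the range over which the model was validated.

  # Fit response_time = a + b * rate by least squares from Act One measurements.
  def fit_linear(rates, response_times):
      n = len(rates)
      mean_x = sum(rates) / n
      mean_y = sum(response_times) / n
      b = sum((x - mean_x) * (y - mean_y) for x, y in zip(rates, response_times)) \
          / sum((x - mean_x) ** 2 for x in rates)
      a = mean_y - b * mean_x
      return a, b

  # Predict response time, but flag any prediction outside the validated range.
  def predict(model, rate, validated_max):
      a, b = model
      if rate > validated_max:
          raise ValueError(f"Model validated only up to {validated_max} tx/min; "
                           f"revalidate before predicting at {rate} tx/min.")
      return a + b * rate

  # Act One: measurements taken at 10-60 transactions per minute (illustrative).
  rates = [10, 20, 30, 40, 50, 60]
  resp = [0.21, 0.24, 0.28, 0.33, 0.39, 0.47]   # seconds

  model = fit_linear(rates, resp)
  print(predict(model, 55, validated_max=60))   # within the validated range
  # predict(model, 500, validated_max=60)       # raises: needs revalidation first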

The key to success is making accurate assumptions. Classical assumptions for analysis include the following (a sketch of how some of these checks might look in practice follows the list):

  • The sample must be representative of the population for the inference or prediction to be valid (in other words, are you comparing apples to apples or cherries to grapefruit?).
  • The error is assumed to be a random variable with a mean of zero conditional on the explanatory variables (in other words, the variations between the predicted and measured values will sometimes be a little high and sometimes a little low, but they should average out to zero and should not depend on the explanatory variables).
  • The predictors must be linearly independent (i.e., it must not be possible to express any predictor as a linear combination of the others; for example, varying processor speed should not affect disk speed).
  • The variance of the error is constant across observations (in running the model across a variety of circumstances, the degree of accuracy should be roughly equal).
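
Here is a minimal sketch of how some of these checks might look in practice (assuming NumPy is available; the data and predictor names are illustrative assumptions, not figures from the text):

  import numpy as np

  measured  = np.array([0.22, 0.25, 0.29, 0.34, 0.40, 0.46])  # observed response times (s)
  predicted = np.array([0.21, 0.24, 0.28, 0.33, 0.39, 0.47])  # model's predictions (s)
  residuals = measured - predicted

  # Zero-mean error: the variations should average out to roughly zero.
  print("mean residual:", residuals.mean())

  # Constant variance: the spread of the error should be similar at low and high load.
  half = len(residuals) // 2
  print("variance at low load: ", residuals[:half].var())
  print("variance at high load:", residuals[half:].var())

  # Linear independence of predictors: a correlation near +/-1 between two
  # candidate predictors would mean one is (nearly) a linear function of the other.
  cpu_speed = np.array([1.0, 1.2, 1.4, 1.6, 1.8, 2.0])   # GHz
  disk_rate = np.array([120, 118, 121, 119, 122, 120])   # MB/s
  print("predictor correlation:", np.corrcoef(cpu_speed, disk_rate)[0, 1])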

Models that have been verified over a range of real-life conditions are said to be robust and are useful for prediction.

Capacity Management: An Old-New Technique
Capacity management in the clouds is an old-new story. In the early 1970s and continuing through the 1980s, capacity management—assembling just the right configuration of resources to meet response-time requirements at the lowest possible cost—was a hot field. Mainframe computers were complex to configure, had long lead times, and were expensive, often costing more than $1 million ($4 million, adjusted for inflation). It was vital to configure them correctly and to accurately model workloads and related resource utilizations to ensure acceptable response times. It is no less vital when deploying in the clouds. To configure correctly, we need a basic understanding of queuing theory.
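
To give a taste of that queuing theory, the classic single-server (M/M/1) result says that expected response time is the service time divided by one minus utilization, which is why response times climb sharply as a resource nears saturation. The sketch below uses that standard textbook formula; it is not something stated in this article.

  # Expected response time for an M/M/1 queue: R = S / (1 - U),
  # where S is the service time and U is the utilization (0 <= U < 1).
  def mm1_response_time(service_time_s, utilization):
      if not 0 <= utilization < 1:
          raise ValueError("Utilization must be in [0, 1) for a stable queue.")
      return service_time_s / (1.0 - utilization)

  for u in (0.50, 0.80, 0.90, 0.95):
      print(f"utilization {u:.0%}: response time {mm1_response_time(0.1, u):.2f} s")

A 0.1-second service time yields a 0.2-second response at 50% utilization but 2.0 seconds at 95%, which is the effect that makes correct sizing so important.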


