Forecasting growth in computer utilization is the key to successful capacity planning. Three basic techniques have been used, with varying degrees of success, to forecast computer utilization:
The author noted as far back as 1979 that these typical forecasting techniques are inadequate.
What are the disadvantages of using such techniques?
As an example, consider the real estate market: houses that had been appreciating in value for years suddenly began to lose value. For an example in our domain, an installation may experience flat growth this year but see a large workload increase when the applications currently under development go online, or when the company suddenly becomes famous. Simple trend analysis will never predict that type of growth, because it is caused by “exogenous” events (something coming from outside the system, such as a mention on The Oprah Winfrey Show, which can cause usage to skyrocket) rather than endogenous ones related to the system’s internal workings.
Key Volume Indicators to the Rescue
For many years, we have been using a concept called key volume indicators (KVIs) to overcome the deficiencies of forecasting by other methods. The tools required are a usage analysis program (such as Nimsoft, discussed earlier) and a regression program, found in the Microsoft Excel Analysis ToolPak add-in or in any standard statistical software package such as SAS.
The underlying principle behind this technique is that the end user, given sufficient information, understands his business best and should be responsible for predicting his own needs. He cannot be expected to predict computer usage, but he probably can predict business-related growth with a fair degree of success (and let’s face it, if he can’t, it’s his problem). The key volume indicator is a way of relating an application’s units of work to computer resource utilization. Appropriate KVIs must be chosen to prepare reliable forecasts: they must relate to computer usage and at the same time be business- and application-related in order to be forecastable by the users. A useful by-product of the process is the availability of unit costs for many applications, permitting comparisons among user constituencies and better cost estimates for planned applications.
Once the indicators have been identified, the user will prepare a forecast in terms of the key volume indicator units and the computer will translate these units into a forecast of computer resources. I/O operations, CPU-seconds, or any other measure of utilization can be forecast using this technique.
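The translation step can be sketched with an ordinary least-squares fit in a few lines of Python (a stand-in for the Excel Analysis ToolPak or SAS regression mentioned above). The indicator, monthly volumes, and coefficients below are invented purely for illustration.

```python
# Sketch of the KVI idea: fit CPU-seconds against a business volume
# indicator, then translate the user's volume forecast into a resource
# forecast. All figures are hypothetical.

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x (single indicator)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Historical months: invoices processed (the KVI) vs. CPU-seconds consumed.
invoices    = [10_000, 12_000, 15_000, 18_000, 20_000]
cpu_seconds = [ 5_100,  6_050,  7_600,  9_050, 10_000]

a, b = fit_line(invoices, cpu_seconds)

# The user forecasts 25,000 invoices next quarter; translate into CPU-seconds.
forecast_cpu = a + b * 25_000
print(round(forecast_cpu))   # -> 12486
```

The same fit works for I/O operations or any other utilization measure; only the dependent variable changes.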
Types of Workloads
Growth in corporate computer use will come primarily from three sources:
- Increased workloads for existing applications
- Environmental and geographic workload shifts
- New applications

Other factors influencing use are related to changes in processing time caused by program modifications, new techniques such as a change in database management system, or changes in run frequency.
Determining KVIs for an Application
Potential KVIs are selected for their forecastability, relationship to the application, and the availability of their historical data. The historical volume and computer utilization data is examined statistically in order to select potential volume indicators with the greatest correlation to computer utilization over time.
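The screening described here — correlating candidate indicators with historical utilization — can be sketched as follows. The indicator names and monthly figures are hypothetical.

```python
# Rank hypothetical candidate KVIs by the strength of their correlation
# with monthly CPU hours, and keep the strongest.

import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

cpu_hours = [110, 125, 140, 160, 180, 205]

candidates = {
    "orders_entered":  [2000, 2300, 2600, 3000, 3400, 3900],  # tracks usage
    "active_users":    [50, 50, 52, 51, 53, 52],              # nearly flat
    "reports_printed": [400, 380, 450, 390, 470, 410],        # noisy
}

ranked = sorted(candidates.items(),
                key=lambda kv: abs(pearson(kv[1], cpu_hours)),
                reverse=True)
best_kvi = ranked[0][0]
print(best_kvi)   # -> orders_entered
```

In practice the analyst would also check that the winning indicator is something the user can actually forecast, not merely the best statistical fit.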
Monitoring and Improving Forecastability
Forecasts will improve for three reasons:
- Users will be able to do a better job of forecasting KVIs than forecasting computer-related measures.
- The indicators will be chosen carefully and bear a known relationship to computer resource utilization.
- Users will prepare forecasts periodically, and as they gain experience comparing actual usage with their forecasts, their forecasts should improve.
An additional benefit from the use of KVIs is the collection of data relating to the computer resources required per KVI unit of work. This data can be used to develop standard costs for applications, to compare standard costs with actual costs to highlight variances, and to compare cloud-based costs to traditional server architecture costs. Comparisons of the costs of similar applications among divisions can assist in identifying inefficient programs and in reducing costs.
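A unit-cost comparison of this kind is simple arithmetic: resources per KVI unit times an internal rate. The divisions, volumes, and chargeback rate below are assumed for illustration.

```python
# Hypothetical unit-cost comparison: CPU-seconds per KVI unit, priced at
# an assumed internal chargeback rate, for two divisions running similar
# order-entry applications.

RATE_PER_CPU_SECOND = 0.02  # assumed chargeback rate, dollars

divisions = {
    "east": {"cpu_seconds": 9_000,  "orders": 18_000},
    "west": {"cpu_seconds": 15_000, "orders": 20_000},
}

unit_costs = {
    name: d["cpu_seconds"] / d["orders"] * RATE_PER_CPU_SECOND
    for name, d in divisions.items()
}

for name, cost in unit_costs.items():
    print(f"{name}: ${cost:.4f} per order")   # east $0.0100, west $0.0150
```

The 50 percent gap between the two divisions is exactly the kind of variance that flags a candidate for efficiency review.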
Determining Whether Resources are Adequate for Projected Demand
As we noted earlier, queueing models can be used to predict the effect of workload changes on response times and CPU utilization. For a given computer configuration and workload, the model will predict the average CPU utilization, average batch job turnaround times, and average terminal response times to be expected. The model can also be used to predict the effect of volumes different from the forecasts.
Usage data is not available for applications not yet installed. If a similar application is installed at another division, the key volume indicators and coefficients may be borrowed as a first approximation. If such comparable data is not available, the analyst will have to base his estimate on the time required to process the approximate number of CPU instructions and database operations per transaction. If a new application will replace an existing application, care must be taken to deduct from the total forecast all resource utilization to be displaced by the new application. After the application is placed in production, and the usage data becomes available, the procedure for existing applications should be followed.
Accuracy of Forecasts
What unusual conditions may arise when preparing forecasts using KVIs? The most obvious possibility is that the standard error reported by the regression program is unacceptably large. This indicates either that the KVIs selected are not, in fact, good predictors of resource utilization, or that the data are incomplete or reflect a temporary exceptional condition. This question can be resolved by inspecting the output from the regression program and noting the variance.
If the variance is large for most months, the potential KVIs were not found to correlate well with utilization, and different indicators must be selected. However, if the data generally correlate well, but correlate poorly for a few months, this would also impact the standard error of estimation. If possible, the data should be researched to determine whether there were any unusual conditions relating to processing the application system for those months where the variance is large. The non-representative data should then be disregarded and the regression program run once again, using the remaining data. This will often produce an acceptable result.
Another item of concern in some shops relates to distribution of the workload by time of day. An advantage of cloud computing is that resources can be added and subtracted to reflect varying arrival rates of transactions by time of day. However, a key point to remember is that computer capacity does not generally come in very small increments. The purpose of preparing a capacity forecast is to determine the need for additions or changes to the computer configuration.
The forecasting accuracy is sufficient if it can correctly predict the need for equipment changes. One useful way to test the sensitivity of a capacity prediction to small changes in user forecasts is to bracket the projected forecasts when preparing the capacity plan. This is especially easy to accomplish where a model is employed.
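Bracketing can be as simple as re-running the capacity check at, say, plus and minus 15 percent of the user forecast and seeing whether the equipment decision changes. The capacity ceiling and per-unit coefficient below are assumed.

```python
# Bracket the user's KVI forecast and test whether the upgrade decision
# is sensitive to forecast error. All numbers are hypothetical.

CAPACITY_CPU_HOURS = 600       # assumed monthly ceiling of current config
CPU_HOURS_PER_UNIT = 0.0042    # coefficient from the KVI regression

user_forecast = 130_000        # KVI units next year

for label, units in (("low",  user_forecast * 0.85),
                     ("base", user_forecast),
                     ("high", user_forecast * 1.15)):
    need = units * CPU_HOURS_PER_UNIT
    verdict = "upgrade" if need > CAPACITY_CPU_HOURS else "ok"
    print(f"{label}: {need:.0f} CPU-hours -> {verdict}")
```

Here the base and low cases fit, but the high bracket crosses the ceiling: the decision is sensitive to forecast error, so the planner should investigate the forecast further before committing.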
Dr. Buzen’s BEST/1 capacity planning tool is still available from IBM as part of the Performance Tools for iSeries licensed program. A variety of other modeling tools are available.
Java Modelling Tools (JMT) is a suite of applications developed by Politecnico di Milano and released under the GPL license. The project aims to offer a complete framework for performance evaluation, system tuning, capacity planning, and workload characterization studies. The current stable version of the suite encompasses six Java applications.
Make or Buy a Cloud
It’s not sufficient merely to produce a model of a configuration validated to meet the projected requirements. The next questions inevitably are: make or buy; internal cloud versus external; public versus private. The major issues affecting these decisions come in two flavors: tangible and intangible. Tangible considerations have a quantifiable dollar cost and value of service. Intangible costs and benefits are harder to quantify but should never be ignored, because they are often points of project failure.
As we have seen, folks move to external clouds for many reasons, but two stand out:
Sometimes, one comes at the expense of the other. Two very basic facts must always be remembered. First, if all the critical service attributes and the IT resources inventory are identical, the cost of a service purchased from a provider will always be higher than the in-house estimate, for the simple reason that the provider is (trying) to operate at a profit, while in-house costs are frequently underestimated and don’t include all costs. In our experience with multiple customers, a provider’s charges, like for like, are typically about 20 percent higher than the in-house service. A second fact of life is that economies of scale are very much alive and well in the IT world.
The cost per unit of IT resource decreases as the quantity increases: the cost per server for a 500-server complex (all costs included) is about double that of an identical configuration of 5,000 or more servers. Purchasers of cloud computing services need to establish at what point the outsourced cloud service becomes more cost effective, based purely on economic considerations. Large cloud vendors are good at cost containment and spread many of their overhead and fixed costs over a large quantity of devices, so for a modest-sized installation of fewer than 500 servers, they are often cheaper even when their profit margins are factored in.
Capacity planners now have a new mission, in addition to their traditional responsibilities: