Unfortunately, this belief often causes the effort to degenerate into a tendency to “wing it.” Since not everything that happens in a computer is intuitive or conclusively demonstrated, the results can be catastrophic.
First, Get Your Business Needs Down Clearly
As they say, when the boss says, “jump,” you need to ask, “how high?” Goals that are not clearly articulated cannot be achieved. You need to start by asking the hard questions:
Elasticity = Velocity + Capacity
A requirement for a quick ramp-up during peak customer usage periods, and only during those times, requires a high degree of elasticity. How efficiently can the system scale to your needs? If you need to ramp up too early, the benefit of scalability is diminished; scale too late, and your system performance deteriorates under the increased load. The goal is “just-in-time” scalability. Will the required ramp-up be fast at all times of the day and across all geographies?
And just how much capacity can you get? Will an additional 100 or 300 instances be there when you need them? How much human intervention is required to scale? Can it be accomplished automatically by setting policies?
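Scaling “automatically by setting policies” usually means codifying thresholds like the ones above. The sketch below is a minimal, provider-neutral illustration of such a policy; the function name, thresholds, and instance caps are all hypothetical, not any particular cloud vendor's API.

```python
# Illustrative threshold-based scaling policy (all names and numbers
# are hypothetical, not tied to any cloud provider's API).
def desired_instances(current, cpu_utilization,
                      scale_out_at=0.75, scale_in_at=0.25,
                      min_instances=2, max_instances=300):
    """Return the instance count a simple policy would request."""
    if cpu_utilization > scale_out_at:
        target = current * 2           # ramp up aggressively for peaks
    elif cpu_utilization < scale_in_at:
        target = max(current // 2, 1)  # ramp down more gently
    else:
        target = current
    return max(min_instances, min(target, max_instances))

print(desired_instances(10, 0.9))   # 20
print(desired_instances(10, 0.1))   # 5
print(desired_instances(200, 0.9))  # 300 (capped at max_instances)
```

The asymmetry (double on the way up, halve on the way down) is one common way to approximate “just-in-time” scalability: late scale-out hurts more than late scale-in.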
What are the propagation delays? Is a transaction made in your London office available minutes later for use by the Mountain View, California sales team trying to close an end-of-quarter deal? How long does it take the end user to complete a multistep workflow process, irrespective of the time of day, time of the month, or geographical location?
What Technologists Must Know to Manage Performance and Capacity
The following categories of system resources are often tracked by capacity planners.
CPU Utilization: The central processing unit is always technically either busy or idle; from a Linux perspective, it appears to be in one of several states, such as running user code (user), running kernel code (system), idle, or waiting for I/O (iowait).
Analysis of the average time spent in each state (especially over time) yields evidence of the overloading of one state or another. Too much idle time indicates excess capacity; excessive system time indicates possible thrashing (excessive paging), caused by insufficient memory and/or a need for faster I/O or additional devices to distribute loads. Each system will have its own signature while running normally, and watching these numbers over time allows the planner to determine what constitutes normal behavior for a system. Once a baseline is established, changes are easily detected.
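The per-state breakdown described above can be derived from the cumulative tick counters Linux keeps in `/proc/stat`. A minimal sketch, using an illustrative sample line rather than a live reading:

```python
# Sketch: per-state CPU percentages from a /proc/stat "cpu" line.
# The sample line is illustrative; on a real Linux host you would
# read the first line of /proc/stat instead.
def cpu_state_percentages(stat_line):
    fields = stat_line.split()
    names = ["user", "nice", "system", "idle", "iowait",
             "irq", "softirq", "steal"]
    ticks = [int(v) for v in fields[1:1 + len(names)]]
    total = sum(ticks)
    return {name: 100.0 * t / total for name, t in zip(names, ticks)}

sample = "cpu  4705 150 1120 16250 520 30 45 0"
pct = cpu_state_percentages(sample)
print(round(pct["idle"], 1))    # 71.2 -> mostly idle: excess capacity
print(round(pct["system"], 1))  # 4.9  -> rising system time can flag thrashing
```

Because the counters are cumulative since boot, a capacity planner would normally sample the line twice and work with the deltas; the baseline-versus-current comparison described in the text is exactly a comparison of such deltas over time.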
Interrupts: Most I/O devices use interrupts to signal (interrupt) the CPU when there is work for it to do. For example, SCSI controllers will raise an interrupt to signal that a requested disk block has been read and is available in memory. A serial port with a mouse on it will generate an interrupt each time a button is pressed/released or when the mouse is moved. Watching the count of each interrupt can give you a rough idea of how much load the associated device is handling.
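“Watching the count of each interrupt” in practice means sampling cumulative counters (as exposed in `/proc/interrupts` on Linux) and converting deltas to rates. A small sketch with illustrative device names and counts:

```python
# Sketch: per-device interrupt rates from two successive samples of
# cumulative interrupt counts. Device names and values are illustrative,
# in the style of /proc/interrupts on Linux.
def interrupt_rates(before, after, interval_s):
    """before/after: dicts of {device: cumulative interrupt count}."""
    return {name: (after[name] - before[name]) / interval_s
            for name in before}

t0 = {"scsi0": 120_000, "serial": 4_500, "eth0": 980_000}
t1 = {"scsi0": 126_000, "serial": 4_520, "eth0": 1_040_000}
rates = interrupt_rates(t0, t1, interval_s=10)
print(rates["scsi0"])  # 600.0 interrupts/sec on the disk controller
```

A disk controller sustaining hundreds of interrupts per second while the serial port sees a handful is the kind of rough load picture the text describes.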
Context Switching: Input/output devices and processors are mismatched in terms of speed. Time-slicing bridges this gap and makes computers appear to be doing multiple jobs at once by allocating slices of processor time to multiple applications. Each task is given control of the system for a certain “slice” of time, and when that time is up, the system saves the state of the running process and gives control of the system to another process, making sure that the necessary resources are available. This administrative process is called context switching. In some operating systems, the cost of this task-switching can be fairly expensive, sometimes consuming more resources than the processes being switched. Linux is very efficient in this regard, but by watching the amount of this activity, you will learn to recognize when a system exhibits excessive task-switching time.
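Context-switch activity is itself just another cumulative counter (Linux reports it as the `ctxt` line in `/proc/stat`), so “watching the amount of this activity” again reduces to a rate computed from two samples. A minimal sketch with illustrative values:

```python
# Sketch: context switches per second from two samples of the cumulative
# counter Linux exposes as the "ctxt" line in /proc/stat.
# The sample values are illustrative.
def ctxt_switch_rate(ctxt_before, ctxt_after, interval_s):
    return (ctxt_after - ctxt_before) / interval_s

rate = ctxt_switch_rate(8_400_000, 8_460_000, interval_s=10)
print(rate)  # 6000.0 switches/sec; judge it against the system's baseline
```

As with CPU states, the absolute number means little by itself; it is a sustained departure from the machine's established baseline that signals excessive task-switching.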
Memory: When too many processes are running and using up available memory, the system will slow down as processes are paged or swapped out to make room for other processes to run. When the time slice is exhausted, that task may have to be written out to the paging device to make way for the next process. Memory-utilization graphs help highlight memory problems.
Paging: Page faults are said to occur when available (free) memory becomes scarce, at which point the virtual memory system will seek to write pages in real memory out to the swap device, freeing up space for active processes. Today’s disk drives are fast, but they haven’t kept pace with the increases in processor speeds. As a result, when the level of page faults increases to such a rate that disk arm activity (which is mechanical) becomes excessive, response times will slow drastically as the system spends all of its time shuttling pages in and out. This, too, is an undesirable form of thrashing. Paging in a Linux system can also be decreased by loading needed portions of an executable program into pages on demand, rather than preloading them. (In many systems, this happens automatically.)
Swapping: Swapping is much like paging. However, it migrates entire process images, consisting of many pages of memory, from real memory to the swapping devices, rather than page-by-page.
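The memory, paging, and swapping symptoms above all show up as counters (Linux keeps page-fault counters such as `pgmajfault` in `/proc/vmstat`). A hedged sketch of how a planner might turn two samples of a major-fault counter into a rate; the sample values are illustrative:

```python
# Sketch: major page faults per second from two samples of a cumulative
# fault counter, in the style of the "pgmajfault" counter in Linux's
# /proc/vmstat. Sample values are illustrative.
def major_fault_rate(vmstat_before, vmstat_after, interval_s):
    delta = vmstat_after["pgmajfault"] - vmstat_before["pgmajfault"]
    return delta / interval_s

before = {"pgmajfault": 52_000}
after = {"pgmajfault": 52_450}
print(major_fault_rate(before, after, interval_s=30))  # 15.0 faults/sec
```

A rate near zero is normal once a program's working set is resident; a sustained climb is the early signature of the paging-driven thrashing the text warns about.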
Disk I/O: Linux maintains statistics on the first four disks: total I/O, reads, writes, block reads, and block writes. These numbers can show uneven loading of multiple disks and show the balance of reads versus writes.
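One simple use of those per-disk counters is the read-versus-write balance the text mentions. A sketch, with an illustrative table of per-disk read and write counts standing in for the real statistics:

```python
# Sketch: read/write balance per disk, in the spirit of the per-disk
# read and write counters Linux maintains. The stats table below is
# illustrative, not a live reading.
def rw_balance(disks):
    """disks: {name: (reads, writes)} -> {name: fraction that are reads}"""
    return {name: reads / (reads + writes)
            for name, (reads, writes) in disks.items()}

stats = {"sda": (90_000, 10_000), "sdb": (20_000, 60_000)}
balance = rw_balance(stats)
print(round(balance["sda"], 2))  # 0.9  -> read-heavy disk
print(round(balance["sdb"], 2))  # 0.25 -> write-heavy disk
```

The same numbers also expose uneven loading: if one disk is handling most of the total I/O, moving busy filesystems to the quieter spindles redistributes the load.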
Network I/O: Network I/O can be used to diagnose problems and examine loading of the network interface(s). The statistics show traffic in and out, collisions, and errors encountered in both directions.
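For diagnosis, the raw error and collision counts matter less than their ratio to total traffic. A minimal sketch, with illustrative packet counts in the style of per-interface counters such as those in Linux's `/proc/net/dev`:

```python
# Sketch: receive-error ratio for a network interface, from cumulative
# per-interface counters (values are illustrative).
def error_ratio(rx_packets, rx_errors):
    """Fraction of received packets that arrived with errors."""
    return rx_errors / rx_packets if rx_packets else 0.0

print(error_ratio(2_000_000, 40))  # 2e-05: tiny, but watch the trend
```

A ratio that is small and stable is normal line noise; a ratio that climbs over time points at a failing cable, duplex mismatch, or overloaded interface.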