Having established what aspects of a process are important to deliver customer satisfaction, it is necessary to ensure that these aspects are properly controlled, in order to deliver the required outcomes.
A process using 100% inspection
The logical way to overcome the problems associated with this type of system is to apply preventative techniques at the operation stage to ensure that the product is produced to the required quality. Such a system is shown in schematic form in the figure. The approach is based on Statistical Process Control (SPC), a statistical method of data collection and analysis that monitors the operation and controls it to its maximum potential. This enables the operation to be carried out in confidence that the final product will be good.
The application of statistical process control
The origins of SPC date back to the inter-war period and are based on the work of Walter Shewhart (1980), who in 1927 identified the use of control charts to detect process variation. The man seen to have most influenced the development of SPC as a technique, and to have popularized its use, is W. Edwards Deming. Deming was a disciple of Shewhart and was sent to Japan at the end of World War Two to help redevelop Japanese industry. Amongst other philosophies he propounded the principles and practices of SPC; the Japanese listened, took up his teachings with enthusiasm, and the rest is, as they say, history.
The core principle of SPC is the belief in the need to understand the variation in a process and to manage it on that basis. The long-term aim of SPC is to minimize variation in processes so that customer requirements are more closely met than before. There are three key elements in achieving this aim:
Special and Common Causes of Variation
Variation is part of our everyday lives. Both at work and in our private lives we make allowances for its effects, from the process of getting to work in the morning to the output of a complex manufacturing system. However, whilst a seat-of-the-pants approach to deciding how long we allow ourselves to get to work may be perfectly adequate, a similarly haphazard approach to managing processes at work is not desirable. We need to get a quantitative feel for the variation in our processes. There are two basic elements to this variation: the central tendency and the spread. We need a handle on both of these, since they are vital to a successful process. It’s no good being the right temperature on average if, to achieve this, you’ve got one foot in the fire and one in the fridge!
At this stage it is important to note the two potential causes of variation that can affect a process. These will be illustrated by means of a simple example of driving to work in the morning: even when we set off at exactly the same time, following the same route, in the same car, it is apparent that the arrival time will vary.
The crucial difference between the two types of variation lies in their effect on the process. Common cause variation affects the overall spread of the process (so, for example, a journey with a lot of traffic lights would tend to have a wide variation, as the variation caused by red or green at each light would add up), but it does not affect predictability. A process which is subject only to common causes will be predictable (within limits), so we know that our journey to work might take between 20 and 30 minutes provided that nothing odd happens. We cannot, of course, predict the exact time it will take tomorrow, but we can make sensible decisions with regard to process management.
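The additive but predictable nature of common cause variation can be sketched with a small simulation (all numbers here are invented for illustration):

```python
import random
import statistics

# Illustrative sketch: a journey time is a fixed driving time plus a small,
# independent delay at each of several traffic lights. Each delay is
# unpredictable, but the total stays within a stable band -- common cause
# variation leaves the process predictable within limits.
random.seed(1)

def journey_minutes(n_lights=10):
    base = 20.0  # minutes of driving with no delays
    # each light adds between 0 and 1 minute depending on red/green
    return base + sum(random.uniform(0.0, 1.0) for _ in range(n_lights))

times = [journey_minutes() for _ in range(200)]
print(f"mean  = {statistics.mean(times):.1f} min")
print(f"range = {min(times):.1f} .. {max(times):.1f} min")
# No single journey is predictable, but the band the journeys fall in is.
```

Every run stays between 20 and 30 minutes, yet no individual journey time can be forecast exactly, which is precisely the situation described above.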
On the other hand, a special cause will tend not only to increase variation but also to destroy predictability. For example, if you were involved in a road traffic accident you would expect the journey to take longer. It would not, however, be possible to estimate the effect; it might be 10 minutes to exchange insurance details with anyone else involved, or if the car was no longer fit to drive you might miss the whole day at work. If a process is unpredictable it is not possible to make any sensible management decisions; you could not, for example, allow an extra 30 minutes for your journey time if you knew you were going to have an accident.
Accordingly, a process which is subject only to common cause variation is described as being “In Statistical Control”. This is sometimes reduced to “In Control” or described as “Stable”. This essentially means it is predictable, and we know what is coming (within limits). When a process is under the influence of special causes it is described as being “Out of Statistical Control”, “Out of Control” or “Unstable”.
To effectively manage a process we need to be able to distinguish between In Control and Out of Control conditions. To do this we need to establish what the natural limits of the common cause variation are. To begin this process we need to put the data into context.
The first step in putting data into context is to see it as part of the history of the process. This is best achieved by the use of run charts. Such diagrams allow judgments to be made about process trends or shifts. They often also compare the current status of the process to the target or budget associated with that process.
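A run chart in miniature can be sketched in a few lines (all numbers invented): points are plotted in time order against a target, so trends and shifts become visible.

```python
# Minimal text run chart: each observation in time order, compared to a target.
times = [24, 26, 23, 27, 25, 28, 24, 31, 26, 25]  # journey minutes, in sequence
target = 25

above = sum(t > target for t in times)  # how often the target was exceeded
for i, t in enumerate(times, start=1):
    bar = "*" * (t - 20)  # crude scale: one '*' per minute over 20
    flag = " <- above target" if t > target else ""
    print(f"obs {i:2d} | {bar:<12} {t}{flag}")
print(f"{above} of {len(times)} observations above target")
```

Even this crude plot raises exactly the questions discussed next: is the run of points above target a significant shift, or just noise?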
While this may be a significant improvement on making judgments based on the comparison of two points, it is still not very scientific. Questions arising might include: when is a trend significant? How much of a shift has to occur before we act? How does the target relate to the process performance?
A run chart
Shewhart Charts: Application of Economic and Scientific Principles
The lack of convincing answers to these questions shows the vulnerability of this approach. Shewhart uses the empirical rule for homogeneous data (Wheeler, 1995), which suggests that 3 standard deviations is an appropriate level at which to set up rules by which we can make consistent judgments about changes in the process – these limits are called the ‘natural’ or ‘control limits’.
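The 3-standard-deviation rule can be sketched as follows (data invented). Note that for simplicity this sketch estimates dispersion with the global sample standard deviation; Shewhart charts in practice estimate it from within-subgroup ranges, as described below.

```python
import statistics

# Sketch of natural (control) limits: centre line at the mean,
# limits at +/- 3 standard deviations.
data = [24.1, 25.3, 23.8, 26.0, 24.7, 25.5, 24.2, 25.1, 23.9, 25.6]

centre = statistics.mean(data)
sigma = statistics.stdev(data)  # sample standard deviation as the estimate
ucl = centre + 3 * sigma        # upper control limit
lcl = centre - 3 * sigma        # lower control limit
print(f"centre={centre:.2f}  LCL={lcl:.2f}  UCL={ucl:.2f}")
```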
A control chart
The concept of natural limits for a process means that we can distinguish significant changes from insignificant ones: Special Causes from Common Causes of variation. Since the decision rules are based upon characteristics of all homogeneous data sets rather than the specific attributes of one particular distribution, this is a very robust model.
Shewhart’s general approach to process control is to take a subgroup of the data and extrapolate from the results of this subgroup to make predictions for the population. The two elements of the subgroup to which control is applied are the average and the range. It is appropriate at this point to discuss the relative roles of these two elements. Both are necessary for proper control.
The average chart is concerned with variation between subgroups. The control limits are based upon 3 sigma for the subgroup average distribution. They are essentially testing whether individual subgroup averages vary more than could be expected given the variability within individual subgroups. To this end the control limits are calculated using the average range of subgroup data as an estimate of this short-term variability.
For each subgroup calculate the average of the data and plot on the chart.
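An average (X-bar) chart calculation might be sketched like this (data invented). The conventional tabulated constant A2 converts the average subgroup range into 3-sigma limits for subgroup averages; A2 = 0.577 for subgroups of size 5.

```python
import statistics

# Sketch of average (X-bar) chart limits from subgrouped data.
subgroups = [
    [24.9, 25.1, 24.8, 25.2, 25.0],
    [25.3, 24.7, 25.0, 25.1, 24.9],
    [24.8, 25.2, 25.1, 24.9, 25.0],
    [25.0, 25.1, 24.7, 25.2, 24.8],
]
A2 = 0.577  # tabulated constant for subgroup size 5

xbars = [statistics.mean(g) for g in subgroups]             # points to plot
grand_mean = statistics.mean(xbars)                         # centre line
rbar = statistics.mean(max(g) - min(g) for g in subgroups)  # average range
ucl = grand_mean + A2 * rbar
lcl = grand_mean - A2 * rbar
print(f"centre={grand_mean:.3f}  LCL={lcl:.3f}  UCL={ucl:.3f}")
```

Note how the limits for the averages are built from the average range, i.e. from within-subgroup variability, exactly as described above.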
The range chart is concerned with variation within subgroups. The control limits are based upon 3 sigma for the subgroup range distribution. They are essentially testing if the variation within each subgroup is similar to the variation within the other subgroups. To this end the control limits are calculated using the average range of subgroup data as an estimate of this within subgroup variability.
For each subgroup calculate the range of the data and plot on the chart.
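The range (R) chart companion calculation can be sketched similarly (same invented data). D3 and D4 are the conventional tabulated constants; for subgroups of size 5, D3 = 0 and D4 = 2.114.

```python
import statistics

# Sketch of range (R) chart limits from subgrouped data.
subgroups = [
    [24.9, 25.1, 24.8, 25.2, 25.0],
    [25.3, 24.7, 25.0, 25.1, 24.9],
    [24.8, 25.2, 25.1, 24.9, 25.0],
    [25.0, 25.1, 24.7, 25.2, 24.8],
]
D3, D4 = 0.0, 2.114  # tabulated constants for subgroup size 5

ranges = [max(g) - min(g) for g in subgroups]  # points to plot
rbar = statistics.mean(ranges)                 # centre line: average range
ucl = D4 * rbar
lcl = D3 * rbar
print(f"centre={rbar:.3f}  LCL={lcl:.3f}  UCL={ucl:.3f}")
```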
Shewhart has set down methods of calculation for the control limits for each of the charts. These are based on the assumption of 3 sigma limits for both average and range charts. They will not be discussed in detail here, but are covered in “Six Sigma: Principles and Practice”.
It is worth noting that the choice of 3 sigma is an economic rather than a statistical one. Shewhart (1980) states this in his seminal work on the topic. At this level he considers that it would be economic to find and fix the causes of any point outside the limits, but uneconomic to do the same for points inside the limits.
Out of Control Conditions
The purpose of calculating the control limits is to support the identification of out of control conditions and subsequent process learning. There are a number of rules for detecting out of control conditions (Wheeler, 1995), but for the moment we shall only use rule 1: where a data point falls outside the control limits, a special cause is said to have occurred.
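Rule 1 amounts to a simple check against the limits (data and limits invented for illustration):

```python
# Sketch of detection rule 1: any point falling outside the control
# limits is flagged as a special cause for investigation.
lcl, ucl = 20.0, 30.0
observations = [24.2, 26.1, 23.8, 31.5, 25.0, 27.3, 19.4, 24.9]

special_causes = [
    (i, x) for i, x in enumerate(observations, start=1)
    if x < lcl or x > ucl
]
for i, x in special_causes:
    print(f"point {i} ({x}) is outside the limits -- investigate")
```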
When an out of control condition is observed it is necessary to take appropriate action. The first point to remember is that no out of control point should be ignored. The chart can be seen as the voice of the process; if the process says that something has changed you must always listen and look for the special cause of the situation. To ignore this warning is to run a process whose output you have no confidence in. In the ideal case the process should be stopped until the cause has been found and eradicated. However, this is unlikely to be possible in every instance, so it may be necessary to run with an unresolved special cause potentially present. In such a case it will be important to ensure that inspection-based controls are in place to protect the customer until stability has been regained.
The mechanisms for taking action will vary depending upon the situation in which you find yourself. To make the response to out of control points easier, it is desirable to keep alongside (or preferably on) the chart a log of everything which happens which might have an impact on the variability of the process. This will obviously include such things as shift, operator, tool and batch changes, but might also include observations about ambient temperature, passing traffic, tea breaks etc. In fact, the more detailed the better. As an example, it was found on one turning process that the opening of nearby external doors for the passage of factory traffic was sufficient in winter to reduce the local ambient temperature to such a degree as to have a significant effect on the process. Had this factor not been identified on the process log it is likely that this special cause would have gone undetected for much longer.

The first port of call, then, when an out of control point occurs should be the process log. In the majority of cases this will allow you to tie a special cause to an effect. If this is not the case then a brainstorm will need to be carried out (possibly supplemented by a cause and effect diagram) to establish what elements of the process (in its broadest sense) and its environment might have been responsible for the disruption. Normal problem solving disciplines will need to be applied to ensure that the right solution is arrived at.
These activities will need to involve all process-related local personnel and possibly technical experts. Please note that in the best organisations such activities are not merely reserved for the resolution of special causes; learning from and responding to the chart will be shared between the local team in regular informal meetings around the chart. In this way reduction of common as well as special causes can be undertaken even at the local level.
Do not content yourself with tweaking the process when an out of control condition occurs. The point of SPC is to improve, not adjust. There are, of course, occasions when adjustment is the correct short-term response, but consideration should be given to how to make the adjustment unnecessary (or less frequent) in the future.
SPC can be applied to any process where the output can be measured. However, it makes sense to concentrate on areas of most immediate benefit, taking into account things like customer complaints, build problems, high scrap/rework, high quality costs etc. The characteristics to control using SPC are the same ones to which priority is given for any form of control: those that are important to the customer and those with which we are presently experiencing difficulties.
Control limits are calculated using subgroup data and it is conventional to wait until 20 subgroups have been generated before performing the calculation. It is necessary to recalculate limits once a significant positive change in the process has been identified and cemented in by cause analysis or direct action. Do not recalculate limits as a result of negative changes to the process; find out why they happened and remove the cause to restore the process to its original equilibrium position.