This series of articles on customer service has looked at identifying who the customer is and at establishing systems to provide the service that customers will value. A third cardinal principle of assuring high-quality customer service is understanding variation in systems, and being able to distinguish special causes of variation from common causes.
What is known as “point mentality” represents a knee-jerk response to what appears to be a problem. We often treat medical symptoms, for example, as single events, when in fact they may be related to a larger condition. If the stock market goes down slightly on a single day, selling everything in a panic would represent point mentality. Markets, by their nature, fluctuate, so watching trends rather than single points makes sense for investments. Collecting data and charting it provides a visual image of trends, both in the stock market and in every process that is measurable. Sometimes conditions do not allow time to evaluate patterns in data and demand an immediate response; emergencies require one. Most of the time, however, we save time, energy, and other resources by examining data over a period of time and making decisions based on trends or other patterns in the operation of the system itself. The fact that a child fails one third-grade spelling test does not mean that he or she will be a failure in life—or even in spelling.
In another example, a classroom temperature wavered from cold to hot throughout the day. Before studying the problem, the inclination of students was to assign blame: “Why can’t the first-period class leave the thermostat alone?” and “Tell the afternoon class to turn the temperature up!” By understanding that much of the variation was due to natural causes, students were able to focus on keeping the temperature steady instead of moving the thermostat abruptly from highs to lows. They recommended to the maintenance staff that the thermostat be moved, and continued to record data to assure themselves of improvement in the system.
If one explains to a room full of people that on the count of 3, everyone should clap simultaneously, do you suppose that there will be only one giant, simultaneous clap? Of course not; there is variation in the system. Instead of a single sound, there will be a rippling sound of applause, no matter how carefully the performance is orchestrated. Can you count on finding exactly 49 pieces of candy in a package of chocolate-covered peanuts? If you examine enough packages, you will find that they may actually contain anywhere between 47 and 50 pieces. Are you being cheated? No; you are simply seeing common-cause variation.
When you leave your house every morning at exactly 7:12, can you expect to arrive at work at exactly the same time each day? No—in the case of traffic, there will be common-cause variation (timing of traffic lights, pace of traffic, number of cars on the road at the same time) as well as special-cause variation (an accident on the highway, delays in your carpool, flat tires, etc.). Fleeting events produce special-cause variation; imperfections in the system itself generate common-cause variation.
The key to improvement lies in understanding this variation, so that decisions can be based on trends in data rather than only on intuitive reactions. This involves recognizing special causes and distinguishing them from common causes. Without this distinction, managers are likely to make two kinds of mistakes. The first is ascribing a variation or problem to a special cause (e.g., “The operator was late to work”) when it is really due to a common cause (there aren’t enough operators for a particular process). The second involves assuming that variation is due to a common cause when it is really due to a special one.1
How does one tell the difference between special- and common-cause variation and avoid the mistakes that can follow from confusing the two? The answer lies in the use of control charts, on which data is collected and analyzed with respect to trends and patterns that can be acted upon. In the 1920s, Walter Shewhart developed the idea of 3-sigma-limit control charts, in which control limits are set three standard deviations above and below the process average. These limits, displayed on the control chart, are generated by the data itself (collected over time) and help to clarify the distinction between common- and special-cause variation. Charting data offers the advantage of visual representation, so trends and instability in processes can be detected and acted upon. Every system has some degree of variation, as anyone who has thrown darts at a dartboard is well aware. The key to improvement lies in understanding the cause of variation, and whether that cause lies in the system itself (the dartboard is not mounted firmly, for example, or the bull’s eye is not clearly discernible in dim lighting) or in a special cause (blindfolding the thrower, perhaps, or moving the target while the dart is in the air).
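To make the arithmetic concrete, 3-sigma limits can be computed directly from a run of measurements. The sketch below uses invented daily commute times (in minutes) like those described earlier, with a plain sample standard deviation as the sigma estimate; dedicated SPC tools usually estimate sigma from moving ranges instead:

```python
# Hypothetical daily commute times in minutes (data invented for illustration).
times = [26, 29, 27, 31, 28, 30, 27, 29, 28, 45, 27, 30]

n = len(times)
mean = sum(times) / n
# Sample standard deviation as a simple sigma estimate.
std = (sum((t - mean) ** 2 for t in times) / (n - 1)) ** 0.5

ucl = mean + 3 * std  # upper control limit
lcl = mean - 3 * std  # lower control limit

# Points beyond the limits signal possible special causes (e.g., a flat tire).
special = [(i, t) for i, t in enumerate(times) if t > ucl or t < lcl]
print(f"mean={mean:.1f}, UCL={ucl:.1f}, LCL={lcl:.1f}")
print("possible special causes:", special)
```

Here the 45-minute commute falls above the upper limit and is flagged for investigation, while the day-to-day spread between 26 and 31 minutes stays inside the limits and reflects common causes.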
Once the concept of variation is grasped, one can begin to work on the system to reduce the amount of variation. This work involves collecting data, studying causes, testing improvement theories, and constantly evaluating the impact of improvement strategies. An understanding of variation supports the larger understanding of the system, and this understanding supports the ways in which the customers’ needs are served. Failing to understand the interconnectedness of these concepts may result in addressing the wrong issues to improve the system. This would be like placing a bucket under roof leaks rather than assessing the pattern of leaking and then repairing the roof. Studying data helps to avoid chasing irrelevant issues and keeps the focus on the larger ones.
If you were to ask employees to list the greatest problems that they see in the organization, you are likely to see a great variety of problems listed—some trivial, some vitally important. Separating the trivial from the critical amounts to far more than collecting opinions about these problems. But knowing which issues are the biggest problems is important in terms of allocating resources and addressing needs. Decisions about improvement efforts demand insight that is based on data collection and analysis. Customer surveys, statistical monitoring of processes, check sheets, and other tools offer ways to collect data about a variety of processes. A Pareto chart can support visual analysis of the output of these approaches. The alternative to robust observation through data gathering and analysis is responding to every issue or problem as it comes up. If a receptionist complains that he needs to walk too far to retrieve work from the printer, responses might include everything from snide remarks about his need for exercise to purchasing another printer and placing it closer to his desk.
But by analyzing the needs of all those who use the printer and understanding the amount of time it takes for each to walk to the printer, a more sound decision might be reached with respect to the placement of the printer or even to the number of print jobs demanded of a single printer.
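The tallying behind a Pareto chart is easy to sketch. The complaint categories and counts below are invented for illustration; sorting by frequency and accumulating percentages shows which few issues account for most of the trouble:

```python
from collections import Counter

# Hypothetical complaint log; categories and counts are invented.
complaints = (
    ["long walk to printer"] * 14
    + ["paper jams"] * 9
    + ["out of toner"] * 4
    + ["slow print queue"] * 2
    + ["misrouted jobs"] * 1
)

# Sort categories by frequency, highest first, as a Pareto chart does.
counts = Counter(complaints).most_common()
total = sum(count for _, count in counts)

cumulative = 0
rows = []
for category, count in counts:
    cumulative += count
    rows.append((category, count, round(100 * cumulative / total, 1)))
    print(f"{category:22s} {count:3d}  {rows[-1][2]:5.1f}%")
```

In this invented log, the top two categories account for over three-quarters of all complaints, which is where improvement resources would do the most good.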
In a hospital setting, a visitor may walk down a patient hallway where many beds are empty and conclude that the hospital’s population is down. It may in fact be down, for that floor or at that moment; but collecting data on hospital admissions and dismissals will lead to a deeper analysis of the current situation, so that steps can be taken to address problems where they actually exist. This process supports long-term customer satisfaction by demonstrating a clear understanding of real challenges and responding to them with both short- and long-term solutions.
Technology makes it easy to get the most out of data collection. Statistical process control (SPC) software uses data from a specific process to show trends, indicate the stability of a system, and point to where improvement is needed. The chart below indicates, for example, that the process being analyzed shows two indicators of special causes: one on the individuals chart (central tendency), and one on the moving range chart (variability).
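As a sketch of the arithmetic behind such software, the limits for an individuals and moving range (I-MR) chart can be computed by hand. The measurements below are invented, and the factors 2.66 and 3.267 are the standard Shewhart constants for moving ranges of subgroup size two:

```python
# Invented process measurements, in arbitrary units.
data = [10.2, 10.4, 10.1, 10.3, 12.2, 10.2, 10.5, 10.3, 10.4, 10.2]

# Moving ranges: absolute difference between consecutive points.
mr = [round(abs(b - a), 1) for a, b in zip(data, data[1:])]

x_bar = sum(data) / len(data)
mr_bar = sum(mr) / len(mr)

# Individuals chart limits: X-bar +/- 2.66 * average moving range.
x_ucl = x_bar + 2.66 * mr_bar
x_lcl = x_bar - 2.66 * mr_bar
# Moving range chart upper limit: 3.267 * average moving range.
mr_ucl = 3.267 * mr_bar

out_individuals = [x for x in data if x > x_ucl or x < x_lcl]
out_ranges = [r for r in mr if r > mr_ucl]
print("individuals beyond limits:", out_individuals)
print("moving ranges beyond limits:", out_ranges)
```

With this invented data, the spike to 12.2 is flagged on the individuals chart, and the 2.0-unit jump that follows it is flagged on the moving range chart—the same two kinds of signal an SPC package would highlight.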
1 Deming, W. Edwards. Out of the Crisis (Cambridge, MA: MIT Press, 1986), p. 319.