Worth a look: Let your control charts tell a story

Matt Savage

I found this article in the May issue of Continuous Improvement, put together by IHI. What was most enlightening about the article by Bob Lloyd is that your background may affect how you choose to evaluate data, and thus how you respond to it.

I have been exposed to control charts for nearly 30 years, and I believe most process data are worth displaying in control chart form. We are all routinely exposed to data as simple binary comparisons: the stock market went up compared to yesterday, the temperature is cooler today than on this date last year, our company had x% growth over last year, and so on. The numbers are interesting, but control charts tell a story.

Helping Leaders “Blink” Correctly

In the first of two articles, IHI’s Bob Lloyd describes two of four core skills health care leaders need to use data appropriately in decision making: understanding the messiness of improving health care, and determining why you are measuring. Without these capacities, Dr. Lloyd argues, we run the risk of going off in the wrong direction in the “blink of an eye.”

Read the article in Healthcare Executive

Worth a look: Should Toyota’s recall be blamed on quality?

Matt Savage

Like you, I have heard about the myriad of problems Toyota has been having lately. I heard about the sticking accelerator, the brake problem on the Prius, and a recall that will top 8 million vehicles. USA Today called this “Toyota’s quality fiasco” http://www.usatoday.com/money/autos/2010-02-05-toyota-recall-friday_N.htm. Toyota’s president Akio Toyoda has stated, “Let me assure everyone that we will redouble our commitment to quality as the lifeline of our company” http://news.yahoo.com/s/ap/toyota_recall. As I read this, I wondered: is it really a quality problem?

No doubt Toyota has a problem, but was the accelerator problem caused by poor quality? You might recall that one solution to the accelerator problem involved the floor mat: the mat would be modified to minimize the chance of it causing the accelerator to stick. So what is the root cause of this problem? Were the fibers used in the floor mat faulty? Were the floor mats sized incorrectly? Did the materials supplier produce defective materials? There are many possibilities.

Toyota is known for its precise specifications. So let’s assume that the floor mats, brake pedals, brake lines, etc. were manufactured to tight tolerances and functioned as they were designed. If this is the case, isn’t the problem related to the design rather than to the quality of the parts produced? If root cause analysis identifies that the problem is with the design, then the media should call it “Toyota’s design fiasco.”

Of course at the end of the day, what really matters is that all automotive manufacturers learn from Toyota’s problems and take steps to prevent an issue such as this from occurring again.

I’d like to hear your thoughts on “Toyota’s quality fiasco.”

An improved improvement chart

Matt Savage

If you have worked with count charts with large denominators, you have probably seen control limits that seem too narrow to be of much value. The p-chart is one of the attributes charts with this flaw.

A p-chart is based on two counts: 1) the number of non-conforming items (the numerator) and 2) the number of items inspected (the denominator). If you look at the glass as half full rather than half empty, you might count the number of conforming items instead of non-conforming ones. In either case, when the denominator is large, a problem may be present.

Consider the following p-chart from a plastic shopping bag manufacturer.

This chart shows the percent of plastic bags that failed a particular test. The bag manufacturer inspects about 20,000 bags in each sample and sees about 600 failures. As you can tell by looking at this chart, the limits seem too tight to be useful. The data are said to be over-dispersed relative to the binomial assumption behind the p-chart, and that is what makes the limits so narrow.
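To make the arithmetic concrete, here is a minimal sketch of the classical p-chart limit calculation in Python; the counts below are made-up numbers in the spirit of the bag example, not the manufacturer’s actual data:

```python
import math

def p_chart_limits(defectives, inspected):
    """Classical p-chart limits: p-bar +/- 3*sqrt(p-bar*(1 - p-bar)/n).

    `defectives` and `inspected` are parallel lists, one entry per sample.
    Uses the average sample size for n, which is common when the sample
    sizes are roughly equal.
    """
    p_bar = sum(defectives) / sum(inspected)
    n_avg = sum(inspected) / len(inspected)
    half_width = 3 * math.sqrt(p_bar * (1 - p_bar) / n_avg)
    return p_bar, max(p_bar - half_width, 0.0), p_bar + half_width

# Roughly 20,000 bags inspected per sample, roughly 600 failures.
defectives = [590, 612, 601, 585, 620, 598]
inspected = [20000] * len(defectives)

p_bar, lcl, ucl = p_chart_limits(defectives, inspected)
print(f"p-bar = {p_bar:.4f}, limits = [{lcl:.4f}, {ucl:.4f}]")
```

With subgroups of 20,000 the limits span well under one percentage point, so ordinary batch-to-batch variation in the failure rate lands outside them even when nothing unusual has happened. That is the over-dispersion problem in a nutshell.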

Continue reading

Stats tip: How can I be “in control” if I don’t know what it is?

Matt Savage

I recently received the following question:

‘The process certification program at my company says that in order to certify a process it must be in control, be capable and be centered. Capability is measured by the process Cp and centering is measured by the Cpk. What measurement is used to determine if a process is “in control”? Is there a crisp definition of “in control”?’

“In control” is a term used to describe a process that is predictable and does not contain any special causes of variation. A special cause is something you did not expect to occur. I often refer to these as hiccups because, like a hiccup, you do not get them often.

There are many out-of-control or special cause tests you can use to help identify whether the system you are evaluating appears to have special causes of variation. In general, if one of the out-of-control rules is broken, you have license to investigate the hiccup or out-of-control point. Upon investigation, you will determine whether the anomaly is a special cause and, if it is, what action to take.
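As a concrete illustration of the simplest of these tests (a single point beyond the 3-sigma limits), here is a minimal sketch for an individuals chart. The data are invented, and the MR-bar/1.128 sigma estimate is simply the conventional choice for an individuals chart, not something specified in the question above:

```python
def beyond_limit_points(values):
    """Flag points outside the 3-sigma limits of an individuals (X) chart.

    Sigma is estimated from the average moving range (MR-bar / 1.128).
    This checks only the most basic out-of-control rule: a single point
    beyond a control limit.
    """
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    sigma_hat = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    ucl, lcl = mean + 3 * sigma_hat, mean - 3 * sigma_hat
    flagged = [i for i, x in enumerate(values) if x > ucl or x < lcl]
    return lcl, ucl, flagged

# Fictitious data with one "hiccup" at index 7.
data = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 12.5, 10.1, 9.9]
lcl, ucl, flagged = beyond_limit_points(data)
print(f"LCL = {lcl:.2f}, UCL = {ucl:.2f}, points to investigate: {flagged}")
```

A flagged point is only an invitation to investigate; whether it is truly a special cause is a judgment you make after looking into it.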

In short, if you look at a control chart and it shows only common cause variation, the process is said to be in control, and you can be comfortable predicting its future performance from its past.

Stats tip: A run of 5? Or 6? The debate continues.

Matt Savage

I received a lot of e-mails in response to my last blog entry, and a few of you have posted comments on the blog. It is fantastic to see such a lively debate! And while I like to win a debate as much as the next guy, I am more concerned with utility. After all, if your control charts don’t give you information that helps you control your processes, what’s the point?

We must remember the purpose of a control chart: to provide guidance on when to investigate and when not to investigate. Essentially, a control chart should strike an effective balance between reacting too quickly and not reacting quickly enough. While it is evident that we aren’t all using the same operational definition of a run, the nice thing about SQCpack and CHARTrunner is that they allow you to define your own run rules.
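To show how that operational definition plays out, here is a minimal sketch of one common run rule: k or more consecutive points on the same side of the center line. The data and center line are invented, and this is not how SQCpack or CHARTrunner implement the rule internally; it simply shows how changing the run length changes what gets flagged:

```python
def runs_on_one_side(values, center, run_length):
    """Return the starting indexes of runs of `run_length` or more
    consecutive points that all fall on the same side of `center`."""
    flagged, start, side = [], None, 0
    for i, x in enumerate(values):
        s = 1 if x > center else -1 if x < center else 0
        if s != 0 and s == side:
            continue                              # run continues
        if side != 0 and i - start >= run_length:
            flagged.append(start)                 # previous run was long enough
        start, side = (i, s) if s != 0 else (None, 0)
    if side != 0 and len(values) - start >= run_length:
        flagged.append(start)                     # run reaching the last point
    return flagged

data = [5.2, 5.4, 5.1, 4.8, 4.7, 4.6, 4.9, 4.8, 5.3, 5.5]
print(runs_on_one_side(data, center=5.0, run_length=5))  # [3]: flagged
print(runs_on_one_side(data, center=5.0, run_length=6))  # []: not flagged
```

The same five points below the center line are a signal under one definition and noise under the other, which is exactly why agreeing on the definition matters more than winning the debate.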

Note the two distinct runs of points between samples 8 and 18 on the following chart of fictitious data.

Continue reading

Stats tip: A run of 6? or 7?

Matt Savage

A CHARTrunner customer in the UK recently contacted me to ask why her control chart was not flagged with an out-of-control condition.

Specifically, note the run down of points in her chart. I replied that the out-of-control test she defined is looking for seven consecutive samples that are decreasing. I agree that a run of points exists; however, my assertion is that there are six consecutively decreasing points.

She counts seven consecutively decreasing points. How many consecutively decreasing points do you count on the chart above?

I used an analogy to explain. When you count the steps in a staircase, the landing you start from is not counted as a step; it is simply the beginning point, and it is neither increasing nor decreasing. The first step up counts as an increase (and, going down, the first step below the top counts as a decrease), but the starting point itself does not. In the same way, the first point in a run is just the starting value, and each later point is counted as increasing or decreasing relative to the point before it. I am not aware of any method that counts a single point as both increasing and decreasing.

I also pointed to Acheson J. Duncan’s well-respected text, “Quality Control and Industrial Statistics,” Fifth Edition. On page 429 he states: “… Thus in the series 5, 4, 6, 8, 10, 12, 11, there is a run up (increasing) of 4, since there are four increases in a row. Likewise, 7, 10, 8, 6, 5, 4, 3, 2, 4 illustrates a run down of 6.”
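Duncan’s convention is easy to check in code: count the decreases between consecutive points, not the points themselves. A minimal sketch follows; the second series is a hypothetical stand-in for the customer’s data, since her chart isn’t reproduced here:

```python
def longest_run_down(values):
    """Length of the longest run down, counted as the number of
    decreases between consecutive points (Duncan's convention)."""
    longest = current = 0
    for previous, x in zip(values, values[1:]):
        current = current + 1 if x < previous else 0
        longest = max(longest, current)
    return longest

# Duncan's example: a run down of 6.
print(longest_run_down([7, 10, 8, 6, 5, 4, 3, 2, 4]))  # 6
# Seven consecutively falling points still contain only six decreases.
print(longest_run_down([9, 8, 7, 6, 5, 4, 3]))         # 6
```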

I’m interested in your thoughts. When a run up or down exists, where do you begin to count the number of consecutively increasing or decreasing values in the run?

Stats tip: Control limits need to be calculated using the correct method

Matt Savage

I often tell others that a control chart is one of the most effective and easy-to-use quality tools. Some argue that experimental design is more effective. Maybe so, but can you teach a novice experimental design as quickly as you can teach them to use a control chart?

A control chart is a simple tool that works well for many applications. One key component of a control chart is the control limits. Without control limits, you don’t have much … unless, of course, you like run charts. So if the control limits are such a key part of a control chart, why do so many problems and questions exist related to control limits?
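I won’t spoil which specific miscalculation the rest of the post addresses, but one of the most common is computing individuals-chart limits from the overall standard deviation instead of from the average moving range. Here is a minimal sketch of the difference, with made-up data that contain a shift:

```python
import statistics

def limits_from_moving_range(values):
    """Individuals-chart limits the conventional way:
    mean +/- 2.66 * MR-bar (2.66 = 3 / 1.128)."""
    mean = statistics.fmean(values)
    mr_bar = statistics.fmean([abs(b - a) for a, b in zip(values, values[1:])])
    return mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

def limits_from_overall_sd(values):
    """The common shortcut: mean +/- 3 * overall standard deviation.
    Special-cause variation inflates these limits, hiding the very
    signals the chart is supposed to reveal."""
    mean, sd = statistics.fmean(values), statistics.stdev(values)
    return mean - 3 * sd, mean + 3 * sd

# A stable stretch followed by an upward shift (invented data).
data = [5.0, 5.1, 4.9, 5.0, 5.2, 4.8, 5.1, 6.4, 6.5, 6.3]
print("moving-range limits:", limits_from_moving_range(data))
print("overall-sd limits:  ", limits_from_overall_sd(data))
```

With these numbers the moving-range limits flag the shifted points, while the overall-standard-deviation limits are wide enough to hide them.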

Continue reading

Control charts for monthly presentations

Matt Savage

I recently returned from a training session at a hospital that has been using CHARTrunner for a number of months. One individual, whom I’ll call Rita, presents her quality improvement charts every month in a quality meeting with her managers. She has 22 charts and wanted an easier way to automate the process of getting ready for this monthly presentation.

Her current process is to display a chart, copy it to the Windows Clipboard, switch to PowerPoint, paste the chart, resize it, and then repeat this for every chart. I feel Rita’s pain. The process is time-consuming and error-prone.

The good news is that there is an easier way! We automated this process to the point that Rita never needs to launch CHARTrunner interactively. Instead, she clicks a desktop icon each month that runs a command-line task to update her 22 charts in her PowerPoint presentation. Two clicks now accomplish what previously took her two full days: the custom application we developed saves Rita two days of work each month! I have set this up for many organizations, and I’d like the opportunity to do the same for you.
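For readers who want a feel for what this kind of automation looks like, here is a rough sketch using python-pptx. To be clear, this is an assumption on my part for illustration only: the solution we built for Rita drives CHARTrunner itself from the command line, and the file names below are hypothetical. The sketch assumes each chart has already been exported as a PNG:

```python
from pptx import Presentation
from pptx.util import Inches

prs = Presentation()
blank_layout = prs.slide_layouts[6]          # blank layout in the default template

for i in range(1, 23):                       # 22 charts, one per slide
    slide = prs.slides.add_slide(blank_layout)
    slide.shapes.add_picture(f"chart{i:02d}.png",
                             Inches(0.5), Inches(0.5), width=Inches(9))

prs.save("quality_meeting.pptx")
print("Rebuilt quality_meeting.pptx with 22 chart slides.")
```

Scheduling a script like this (or a CHARTrunner command-line task) to run before the meeting is what turns a two-day chore into two clicks.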

If you are interested in learning more about a system that automates the process of getting ready for your recurring meetings, let’s talk.

Stats tip: What everyone ought to know about Cpk vs. Ppk

Matt Savage

In a class on capability analysis that I recently taught, several participants asked: “What is the difference between Cpk & Ppk?”

The quick answer is… ‘1 letter’. The mathematical answer is that each statistic uses a different calculation for the standard deviation: Cpk is typically based on a within-subgroup (short-term) estimate of sigma, while Ppk uses the overall sample standard deviation.

The practical answer is…’it depends’. Now I know, some of you are saying statistics (math) isn’t subjective. I mean, we’re dealing with numbers here! Yes, the numbers are real and yes, you should trust the results. (Unless, of course, someone like Bernard Madoff is gathering the data for you.)
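Here is a minimal sketch of both indexes for individual measurements. The data and specification limits are invented, and for subgrouped data the within sigma would come from R-bar/d2 or a pooled standard deviation rather than the moving range used here:

```python
import statistics

def cpk_ppk(values, lsl, usl):
    """Cpk uses a within (short-term) sigma, here estimated from the
    average moving range (MR-bar / 1.128); Ppk uses the overall sample
    standard deviation. Same formula, different sigma."""
    mean = statistics.fmean(values)
    mr_bar = statistics.fmean([abs(b - a) for a, b in zip(values, values[1:])])
    sigma_within = mr_bar / 1.128
    sigma_overall = statistics.stdev(values)

    def index(sigma):
        return min(usl - mean, mean - lsl) / (3 * sigma)

    return index(sigma_within), index(sigma_overall)

# Invented measurements with a slow upward drift, and invented specs.
data = [10.1, 10.2, 10.0, 10.1, 10.2, 10.5, 10.6, 10.5, 10.7, 10.6]
cpk, ppk = cpk_ppk(data, lsl=9.5, usl=11.0)
print(f"Cpk = {cpk:.2f}, Ppk = {ppk:.2f}")
```

Point to point the process looks tight, so Cpk comes out comfortably high, but the drift shows up in the overall standard deviation and pulls Ppk down. Which number you should care about is exactly the ‘it depends’ part.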

Continue reading

Worth a look: Mike Micklewright on Quality Digest

Matt Savage

I regularly follow Quality Digest articles and videos and have come to really enjoy the material written by Mike Micklewright. His material is not just educational; it is usually presented in an entertaining fashion. Hey, if you can make quality information a little entertaining, then you have some talent! You can find many of his articles on the Quality Digest site by entering Mike’s name in the search box at http://www.qualitydigest.com/.

For example, in this video, http://www.qualitydigest.com/inside/quality-insider-video/viewpoint-mike-micklewright.html, Mike shares his opinions on the new ISO 9001:2008 standard. If your company is ISO registered or you are interested in more information about the standard, there are many sources of information on the web; Google gave me more than I could have anticipated. Yet in this short video, Mike Micklewright examines the eight quality management principles and what the editors did, or more to his point, what they did not do, with the standard.

Mike owns his own company, Quality Quest, and if you are interested in more of his material, Google his name or visit http://www.mikemick.com/. What helpful quality resources do you follow online?