Data flows into spreadsheets and database tables at an astonishing rate. It is stacking up on personal computers, laptops, tablets, database servers, smartphones, and in the cloud. In these growing mountains of data there is insight to be found: insight that can lead to gains in quality and ultimately in profitability.
How do we extract the value in this data?
Two important principles apply. First, we must follow a systematic approach. Second, we must transform the data into visualizations. The systematic approach is important because we will encounter so many data sets over time and because we want consistent gains from understanding this data. The visualizations are important for communication and broader understanding. Our brains are wired to understand pictures, charts, and graphs more efficiently than we understand words and lists of numbers. Often, we need several members of a team to understand a data set before improvements can be made and sustained. The visualizations are important in developing this shared understanding.
To become useful for decision-making, data must be presented in an easy-to-understand form that makes its meaning clear. Football scores might be recited as raw data: 14-21, 7-3, 28-32, 17-0, and so on. The data itself is undoubtedly accurate, if it has been collected carefully, but by itself it means nothing to someone interested in the outcome of the games. Understanding the data comes only with clarity, and that understanding is generated primarily through a series of questions that must be pursued.
Of course, the main question comes down to, “So what?” A score of 21-17 has no significance unless I know that it reflects the fact that Alabama beat LSU in a football game. The first question, then, may be “Who won the game?” With a single set of data, of course, the significance becomes clear as soon as this question is answered. When data is more complex, it is essential to orchestrate a series of questions about its meaning. If the data is collected in order to improve a process, these questions will follow the traditional Plan-Do-Study-Act cycle that structures the improvement plan.
The PDSA cycle is a methodical, step-by-step approach to using data for improvement. The “Plan” step includes understanding the system by defining it and collecting baseline data about how that system is operating. To be useful, this data should be charted as a control chart, or at least a run chart, so that trends relevant to the process become visible. Assessing the current situation and analyzing the factors that contribute to problems or inefficiencies in the system call for tools such as Ishikawa (cause-and-effect) diagrams, Pareto analysis, and control charts.
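A Pareto analysis of the kind mentioned above is simple to sketch: tally problems by category, sort the categories from most to least frequent, and track the cumulative percentage to see which few categories account for most of the trouble. The complaint categories and counts below are purely illustrative assumptions, not data from the text:

```python
# Minimal Pareto analysis sketch. The complaint list is hypothetical,
# invented only to illustrate the technique.
from collections import Counter

complaints = [
    "slow service", "cold food", "slow service", "wrong order",
    "slow service", "cold food", "slow service", "noise",
]

# Tally each category and sort largest first.
counts = Counter(complaints).most_common()
total = sum(n for _, n in counts)

# Walk the sorted categories, accumulating the share of all complaints.
cumulative = 0
for category, n in counts:
    cumulative += n
    print(f"{category:12s} {n:3d}  {100 * cumulative / total:5.1f}% cumulative")
```

Here the single largest category ("slow service") accounts for half of all complaints on its own, which is exactly the kind of insight a Pareto chart is meant to surface.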
Consider an example: collecting data on the time that elapses between the moment a restaurant meal is ready for pickup by a server and the moment the plate is delivered to the waiting customer might yield a simple list of minutes: 5, 7, 4, 8, 10, and so on. By itself this list provides no information and offers no insight into the process. The “so what” can be teased out only by charting the data and observing its trends.
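As a sketch of how such charting begins, the five delivery times quoted above can be turned into individuals (X-mR) control chart limits. The 2.66 factor is the standard XmR scaling constant (3 divided by d2 = 1.128 for subgroups of two); everything else below comes straight from the listed values:

```python
# X-mR (individuals) control chart limits for the delivery-time
# data from the text, in minutes.
times = [5, 7, 4, 8, 10]

# Center line: the mean of the individual values.
mean = sum(times) / len(times)

# Average moving range between consecutive observations.
moving_ranges = [abs(b - a) for a, b in zip(times, times[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Standard XmR limits: center line +/- 2.66 * average moving range.
ucl = mean + 2.66 * mr_bar
lcl = max(0.0, mean - 2.66 * mr_bar)  # a delivery time cannot be negative

print(f"center line = {mean:.2f} min")
print(f"UCL = {ucl:.2f} min, LCL = {lcl:.2f} min")
```

Points plotted against these limits, in time order, are what separate routine variation from a signal worth investigating; five observations are far too few for a real chart, so this is only the arithmetic, not a usable baseline.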
Developing a theory of improvement in the process also demands questions and systematic responses to these questions. “What would happen if…?” often generates meaningful improvement projects that might be initiated. Again, though, testing the improvement theory demands careful data collection and analysis. Studying the results of a theory and deciding whether that theory really works can be successful only by looking at the data in meaningful ways. And “looking at” must be more than a figurative term: charting data and looking at its trends are critical to deriving meaning from the data.
What seems clear is that data yields information only when it is analyzed; otherwise, it will continue to stack up on personal computers, laptops, tablets, database servers, smartphones, and in the cloud, burying the bewildered observer. It can be tamed only through analysis built on visual representations: control charts, Pareto diagrams, and the other tools of statistical analysis.