The process of analyzing gage variability is often highly structured, beginning with an examination of the gages themselves for sensitivity to temperature changes, magnetic fields, and other physical factors. These sources are the easy ones to identify. A second area of variability lies in the gage operators themselves, who may differ in training, experience, fatigue, and even attitude.
Collecting data offers clues to sources of variability. But when this disciplined analysis fails to uncover real reasons for variability, it may be time for the Sherlock Holmes of variability to look at operational definitions—often the most overlooked consideration when evaluating variation among measurement devices.
Elementary, my dear Watson? Perhaps, but vague or nonexistent operational definitions can nonetheless introduce real variation into gage output. “In the opinion of many people in industry, there is nothing more important for transaction of business than use of operational definitions. It could also be said that no requirement of industry is so much neglected” (Deming, 276).
Taking time off for vacation seems to be a diminishing phenomenon among American workers.
If you get off the highway and take an alternate route when traffic slows to one lane, you are making a prediction. Likewise, if you decide to invite someone to dinner, that too is a prediction. The scientific method? Predictive in nature. Every time you make a decision, you are making a prediction of an outcome, and choosing one over another based on this prediction.
Prediction skills become second nature because of this daily application. These predictions may not be based on data or evidence, but involve some subjective guess about a preferred outcome. In the case of choosing a traffic route or a dinner date, it’s clear that not much data is involved. The decision involves subjective interpretations, intuitive hunches, and guesses about potential outcomes.
Will data analysis really enhance prediction accuracy? There are no guarantees without a certain amount of understanding of data, of variation, and of process performance.
“A hot dog at the ball park is better than steak at the Ritz.”
At least that’s what Humphrey Bogart is said to have commented. With the summer season underway and ball parks in full swing, hot dogs at the ball park, on the grill, and in the lunchbox will help to celebrate National Hot Dog Month in July. And many agree that there’s nothing like a hot dog with mustard. Or relish or ketchup or smeared with chili.
The perennial answer is, of course, “It depends.”
For decades, Shewhart control charts have provided users with information about process stability. On all Shewhart charts, data is plotted over time, in order from oldest to most current. The traditional control chart, an old standby, is not the only possibility when it comes to garnering information from process data, however. In monitoring processes with small drifts or shifts, for example, the exponentially weighted moving average (EWMA) chart may offer an improvement over traditional Shewhart control charts.
But again, that depends. Certain processes—for example, in the chemical industry—benefit from understanding small shifts or drifts in a process. For other industries, Shewhart control charts do the job quite effectively.
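To make the contrast concrete, here is a minimal sketch of an EWMA chart calculation, using the standard textbook recursion and time-varying control limits. The smoothing weight (lam = 0.2), limit width (L = 3), and the sample data are illustrative defaults, not values from any particular process.

```python
# Minimal EWMA control-chart sketch. Each EWMA point blends the newest
# observation with the running average: z_i = lam*x_i + (1-lam)*z_{i-1}.

def ewma_chart(data, mean, sigma, lam=0.2, L=3.0):
    """Return a list of (ewma, lower limit, upper limit) per observation."""
    z = mean  # the EWMA is initialized at the process target/mean
    points = []
    for i, x in enumerate(data, start=1):
        z = lam * x + (1 - lam) * z
        # Exact limits widen with i and approach an asymptote.
        width = L * sigma * (lam / (2 - lam) * (1 - (1 - lam) ** (2 * i))) ** 0.5
        points.append((z, mean - width, mean + width))
    return points

# Hypothetical data: a modest upward drift from a mean of 10 (sigma = 1).
# Every individual value stays inside Shewhart 3-sigma limits (7 to 13),
# yet the EWMA crosses its upper limit at observation 8.
obs = [10.5, 10.8, 11.0, 11.2, 11.3, 11.4, 11.5, 11.5, 11.6, 11.6]
for n, (z, lcl, ucl) in enumerate(ewma_chart(obs, mean=10.0, sigma=1.0), 1):
    flag = "  <-- signal" if not (lcl <= z <= ucl) else ""
    print(f"obs {n:2d}: EWMA {z:7.4f}  limits [{lcl:7.4f}, {ucl:7.4f}]{flag}")
```

Because each plotted point carries a weighted memory of past observations, small sustained shifts accumulate in the EWMA long before any single point would breach Shewhart limits.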
Meeting compliance requirements and standards is sufficient to meet objectives of injury and accident prevention, and assure the health and safety of all employees—right?
Among the “Ten top business trends that will drive success in 2016,” reported in an end-of-2015 Forbes article by author and consultant Ian Altman, was the point that “Top performing companies will focus on connecting customers.”
Citing examples that include Uber, Airbnb, Kickstarter, and others, Altman notes that these companies may own no real estate and have no funds to invest, and yet they are among the most successful firms of 2016. He attributes their success, in part, to the fact that in the case of Uber, for example, “they excel at connecting riders with drivers.” Altman predicts that “The most valuable companies will connect buyer to seller, or consumer to content.”
Does this signal a return to customer service as a priority?
When Frederick Winslow Taylor advanced the principles of “scientific management” in 1909, he was hailed as a master of efficient production. In the context of the new century’s focus on science, his principles were met with approval of manufacturers, who saw opportunities to improve productivity and enhance profitability.
The principles that Taylor advanced were based on the beliefs that there is one “right” way to do each job, that workers are motivated by money, and that close monitoring of processes assures that the most efficient methods could be applied. In one of his experiments, he studied the precise movements that were involved in bricklaying, timing each movement and outlining in specific step-by-step moves the most efficient way to lay bricks.
Taylor’s approach may indeed have improved productivity and streamlined processes in manufacturing. What it gained in efficiency, however, it lost in terms of pride of workmanship, individual responsibility, and the motive for innovation. Unfortunately, its effects linger even a century later in the attitudes of managers toward their workers and in workers’ perceptions of their jobs. Its effects may even be seen in education, where rigorous testing assesses only mastery of highly specific content, with little emphasis on individual motivation or creativity.
As World Quality Month gives way to holiday celebrations and November’s focus fades into the distant past, facing a new year in the darkness of December may represent an opportunity to pay attention to issues of developing and managing technology, and to contemplate the future of a company or organization.
Last month’s issue of Harvard Business Review, with a cover story related to “What really keeps CEOs awake at night,” addressed the timing of innovative technologies in an article authored by Ron Adner and Rahul Kapoor (https://hbr.org/2016/11/right-tech-wrong-time). We all know of technological innovations that have been released too late and missed the revolution (the article cites Blockbuster’s failure to address the shift from rentals to streaming, for example), as well as those that have been ready too soon, falling into a market that does not perceive their value.
To avoid the “right tech, wrong time” scenario, Adner and Kapoor suggest looking more closely at the ecosystems that support technologies. Understanding the competition between the new and the old ecosystems can help to assure more accurate predictions about the timing of transitions, and to render decisions about allocating resources more effective.
If you’re keeping track of exercise in your daily life, your electronic tracker is loaded with data, but seeing trends and patterns requires charting: turning those raw numbers into visual information.
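As a small sketch of what that charting might start from, the snippet below smooths hypothetical daily step counts with a seven-day trailing moving average so a trend becomes visible; the data and window size are illustrative assumptions, not tracker output.

```python
# Smooth noisy daily step counts so the underlying trend stands out.

def moving_average(values, window=7):
    """Trailing moving average, reported once a full window is available."""
    return [sum(values[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(values))]

# Hypothetical two weeks of step counts from a fitness tracker.
steps = [8200, 7600, 9100, 6800, 7400, 8800, 9500,
         8900, 9700, 9200, 10100, 9800, 10400, 10900]

for day, avg in enumerate(moving_average(steps), start=7):
    print(f"day {day:2d}: 7-day average = {avg:8.1f}")
```

Plotted over time, the averaged series makes gradual improvement visible where the raw daily values mostly show noise.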