CHAPTER FIVE: UNDERSTANDING VARIATION
from
When "Good Enough" Isn't Good Enough,
Core Ideas of Total Quality
© by Ends of the Earth Learning Group 1998
by
Linda Turner and Ron Turner
TABLE OF CONTENTS
Chapter One | Chapter Two | Chapter Three | Chapter Four | Chapter Five | Chapter Six | Chapter Seven | References and Copying Rights |
This is the part of Total Quality that sends shudders down the spines of people with math phobias. Visions of complicated graphs and abstract math symbols made out of the Greek alphabet can be intimidating. Above all, Deming's call for a statistician to work at the right hand of all senior managers sounds like an admission that this is the stuff of math majors and out of the reach of ordinary mortals.
That's really not the case, however. The importance of grasping the significance of variation is too profound to be dismissed. That significance can, moreover, be learned without any knowledge of algebra or advanced math. What follows here is a quick and simple explanation of variation. The Total Quality paradigm starts with two key assumptions.
The first of these assumptions is that any process is going to make errors. Those errors may be made at an extremely low rate, but the process will make them nonetheless.
To see this for yourself, take a pen and draw a row of 25 capital "A's" freehand.
One of the "A's" will be tallest. Call this "A" your worst "A" and ask yourself, "Why
did this 'A' come out tallest?"
When we have done this exercise with groups, some people have had their tallest "A"
at the beginning of the sequence, some have had it at the end, while still others had
it somewhere in the middle. People generally will try to blame the "tallness" on its
placement in the sequence by saying things like, "I started strong, then fell off," or
"I started out weak and then ended strong." Or, "There must have been a bump in the
paper," or whatever.
None of these explanations truly works, because if we have people draw another 25
"A's," they will again come up with a tallest "A," and it will probably fall somewhere
else in the sequence than it did the first time.
The truth is that there is no single explanation for why one "A" came out taller than
the others. In other words, if we asked, "Why are some of your 'A's' taller than
others?" you wouldn't be able to give us an answer. Your writing system produced
variable results, with some "A's" worse than others, even though to you it felt like
you were doing the same thing each time you drew an "A."
Once when we did the 25 "A's" exercise, a member of the class volunteered to draw
the "A's." This individual had known we would do the exercise and so had snuck in
a rubber stamp. The resulting 25 A's were significantly better than those which could
be produced freehand, but they weren't perfect. There was still variation and we
could still ask, "Why did some of the 'A's' come out smeared, or tilted, or fainter
than others?" Computer-printed "A's" come out even better than a rubber stamp's, but if you look closely, you will discover they are not identical either. Particularly aggravating to us is that the color tones differ significantly from computer to computer even though we send the same instructions to each one.
The same is true about any process. All processes produce variable results. That means that sometimes when errors occur, there is no special cause you can blame. In these cases, we say the system was at fault.
Assume you again take up your pen to draw another "A." Before you do so, predict
what it will look like. When most people are asked to do this, their initial response
is to say, "I can't do that. Out of the first 25 'A's,' there was too much variation."
You could, however, predict a range in which the "A" will be drawn. There must be
an upper limit and a lower limit for height, width, squishiness, color, and so on.
Predicting upper and lower limits of a system is the best way to describe any process.
Notice that relying on averages is not ideal. In fact, the next "A" you draw will
probably not be identical to the average "A" of the first 25.
As a general rule, you need to start describing all processes in terms of their variation
and not simply in terms of their averages. Knowing that a process produces three
errors per day on average is not nearly so interesting as knowing that errors routinely
vary between zero errors and ten errors.
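The difference between describing a process by its average and describing it by its variation can be sketched in a few lines of code. The daily error counts below are invented for illustration; they are chosen so the numbers match the example in the text.

```python
# Hypothetical daily error counts for one work process (invented data).
daily_errors = [3, 1, 5, 2, 0, 4, 3, 7, 2, 1, 3, 6, 2, 4, 3, 0, 5, 2, 3, 10]

average = sum(daily_errors) / len(daily_errors)
lowest, highest = min(daily_errors), max(daily_errors)

# "About three errors per day on average" hides most of the story;
# the observed range tells you what the process actually delivers.
print(f"average errors/day: {average:.1f}")
print(f"observed range:     {lowest} to {highest} errors/day")
```

Both numbers come from the same twenty days of data, but only the range warns you that a ten-error day is part of this process's normal behavior.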
Draw your "A." It probably came out within your predicted limits and is probably
shorter than the tallest "A." This is because the odds of it being shorter are 24 in 25
(a 96% chance).
If we had yelled at you when you drew your tallest "A" warning that we didn't want
any more of these tall "A's," we would have mislearned that "yelling works" since
your next "A" had a probability of 24 in 25 of being shorter.
By the same token, if we had "retrained" you in order to ensure there were no more tall "A's," we would have mislearned that our training worked.
Let's suppose that one of the 25 "A's" was best by whatever measurement criteria you
pick. Assume that following one of these outstanding work days, we rewarded you
with a bonus and encouragement: "Keep up the good work. That's what we need to
see around here."
Following our boost to your self-esteem, as hard as you try, there is a 24 in 25 chance
that the next "A" you draw will be worse. This is because out of 25 work episodes,
one is bound, due to random luck, to be best. The chance of repeating that best result
is only 1 in 25.
We would now mislearn a second myth as a result of seeing your results decline:
"Encouragement doesn't work. People go soft when you praise them." Many
supervisors and teachers started out with humane notions, but gradually over time fell
victim to the twin myths, "Yelling works and encouragement doesn't" because they
didn't understand the nature of variation.
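The twin myths can be demonstrated with a small simulation. Nothing below comes from the book's data; the code simply generates 26 "work episodes" of pure system luck many times over, "yells" after the worst of the first 25, "praises" after the best, and tallies how often the next episode makes the intervention look effective.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

trials = 10_000
yelling_seemed_to_work = 0
praise_seemed_to_backfire = 0

for _ in range(trials):
    # 26 episodes of pure system luck: a quality score for each episode.
    scores = [random.random() for _ in range(26)]
    first_25, next_score = scores[:25], scores[25]
    # "Yell" after the worst of the first 25: the next episode is almost
    # always better, so yelling *seems* to work.
    if next_score > min(first_25):
        yelling_seemed_to_work += 1
    # "Praise" after the best of the first 25: the next episode is almost
    # always worse, so praise *seems* to backfire.
    if next_score < max(first_25):
        praise_seemed_to_backfire += 1

print(f"yelling 'worked'   {yelling_seemed_to_work / trials:.0%} of the time")
print(f"praise 'backfired' {praise_seemed_to_backfire / trials:.0%} of the time")
```

Both tallies come out near the 24-in-25 (96%) figure from the text, even though neither yelling nor praise had any effect at all.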
System-caused errors occur in random patterns. Unfortunately, humans are terrible
at sensing random behavior. Whenever we observe good or bad events happening
in a run, we tend to credit special circumstances rather than luck.
This is why gamblers will swear that when they have a run of good luck, they are
more likely to roll a "7" than when coming off a run of bad luck. The truth,
however, is that the chances of rolling a "7" are no better coming off a lucky
run than off an unlucky one.
Professional basketball players will similarly claim that their chances of making a
basket are better when coming off a run of several baskets in a row than when coming
off a run of several misses in a row. When statisticians studied this, they discovered
that professionals are just as likely to make the next basket coming off hot streaks
as off cold streaks (Gilovich, Vallone, and Tversky, 1985).
Most people do not recognize randomness. Instead they blame runs of errors and runs of successes on special causes. The favorite special cause is to look for someone to blame or credit.
This is not to suggest that special causes are never at fault. When basketball players
are sick, they undoubtedly will make fewer baskets. Similarly when people are better trained or more talented, they will do better on
average than others.
But we have to be extremely careful about asserting specialness because when we are
wrong, we mislearn some devastating myths.
Take up a pen again, but this time use a different colored ink. Draw another "A."
We ask again, "Why did this 'A' come out different?" You can give us a ready
answer this time. You used a different colored ink pen. There is, in other words, a
special cause at work.
We can recognize this special cause because the results fall outside the predicted limits of our
process. In this case, the color did not come out within the predicted color limits of
your first 25 "A's."
It is critical that we measure performance of our work processes and systems so that
we know what normal variation is. When results fall within those limits, we should
blame the results on system luck. When results fall outside those limits-- and only
when they fall outside those limits-- we should look for a special cause.
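The rule in this paragraph, blame the system inside the limits and hunt for a special cause only outside them, reduces to a few lines. The limits and results below are invented for illustration.

```python
def diagnose(result, lower_limit, upper_limit):
    """Classify one day's result against the process's normal limits."""
    if lower_limit <= result <= upper_limit:
        return "system luck -- redesign the system if you want better results"
    return "outside normal variation -- look for a special cause"

# Suppose this process normally produces 0 to 10 errors per day.
print(diagnose(7, 0, 10))   # inside the limits: blame the system
print(diagnose(14, 0, 10))  # outside the limits: a special cause is worth hunting
```

The hard part is not the comparison but the discipline of measuring the process long enough to know what the limits actually are.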
If people don't know what their system normally produces, then when errors spike higher one day than the day before, there will be a temptation to blame someone or something, even when it is intellectually recognized that the spike might have been due to normal variation.
The workers will be bitter and angry. If there was no attitude problem before, there may well be one afterwards. Their self-confidence may be shaken if they come to see themselves as untrustworthy, especially since, to them, they did nothing different on the "bad" day than on previous "good" days.
Perhaps worst of all, you would never discover that you had falsely blamed the workers, because you would have observed an error rate that declined (just as we would have observed a "better" "A" following the tallest "A"). Instead of learning from your mistake, you would be patting yourself on the back and saying something like, "Am I good or what?" For this reason, it is essential that accusations of special causes not be made lightly.
For the moment, let's assume you decided to hold off accusing anyone when there was an error spike. If the system was at fault, you would observe errors drop back toward the long-term average on the next day, and your assessment that the spike was due to the system would be confirmed.
Let's assume, though, that your caution was mistaken and there really was a special cause that needed attention. This would be like blaming the different colored "A" on system luck instead of on the fact that you used a different pen.
When we are patient and withhold making accusations of special causes, our patience
will be rewarded in two ways: (1) if there is no special cause, then we won't make the
mistake of falsely accusing someone and (2) if there is a special cause, it will soon
become obvious because the problems will repeat themselves.
In real-life studies of work processes, it has been discovered that about 10% to 15%
of the time, errors are caused by special causes, and about 85% to 90% of the time,
errors are caused by the system itself. When a special cause is at fault, we need to
make repairs. When the system is at fault, we need to redesign the system so that it
will deliver better results.
When you lack the data necessary to define the upper and lower limits for normal variation, you are better off blaming the system for errors since the odds will be with you. The great advantage of blaming the system is that with system redesign, anyone working in the system will get better results.
The significance of recognizing that all processes make errors is that people need to learn how to diagnose accurately whether an error came from a special cause or from the system itself. The first question problem solvers should ask themselves is, "Is this due to the system, or is there a special cause at work?"
At home, people can sense the underlying principle by considering two heating
systems, each of which maintains a mean temperature of 70 degrees. Would you
prefer the system which ranged from 65 to 75 degrees or the system that ranged from
68 to 72 degrees? The desire for consistency is obvious.
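The two thermostats can be compared numerically. Both temperature traces below are invented; each averages 70 degrees, and only the spread distinguishes them.

```python
# Hypothetical hourly readings from two heating systems, both averaging 70°F.
system_a = [65, 75, 67, 73, 65, 75, 70, 70]  # swings between 65 and 75
system_b = [68, 72, 69, 71, 68, 72, 70, 70]  # swings between 68 and 72

for name, readings in (("A (65-75)", system_a), ("B (68-72)", system_b)):
    mean = sum(readings) / len(readings)
    spread = max(readings) - min(readings)
    print(f"system {name}: mean {mean:.0f} degrees, spread {spread} degrees")
# Identical averages; only the spread tells you which house is comfortable.
```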
Unfortunately any system which uses upper and lower specifications or which has
adopted minimum performance requirements will not value reduced variation.
Instead such systems will encourage "working to the minimum" and/or keeping all
products simply within specifications.
Ford Motor Company discovered this phenomenon when they observed that
transmissions made in Japan were more reliable than ones made in the U.S. even
though both equally met engineering specifications. The Japanese used only 30% of
the available tolerance whereas the Americans used 70%. In other words, the
Japanese were more consistent. This wound up making warranty costs for the
Japanese transmissions one-third less than for the American transmissions.
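The tolerance comparison is a simple ratio: observed spread divided by the width of the engineering specification. The part measurements and spec limits below are invented; only the 30% and 70% figures come from the text.

```python
def tolerance_used(readings, spec_low, spec_high):
    """Fraction of the available engineering tolerance a process actually uses."""
    return (max(readings) - min(readings)) / (spec_high - spec_low)

# Invented measurements on a part whose spec allows 10.0 +/- 1.0 units.
tight_line = [9.7, 10.3, 9.8, 10.2, 9.75, 10.25]   # consistent, like the Japanese line
loose_line = [9.3, 10.7, 9.5, 10.5, 9.4, 10.6]     # in-spec but spread out

print(f"tight line uses {tolerance_used(tight_line, 9.0, 11.0):.0%} of tolerance")
print(f"loose line uses {tolerance_used(loose_line, 9.0, 11.0):.0%} of tolerance")
```

Every part on both lines passes inspection; the difference shows up only when you measure variation rather than pass/fail.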
Reducing variation for a sales staff will mean that customers are treated the same
regardless of who gets the phone call. Reducing variation in health care will mean
that patients will be asked the same core questions regardless of which physician sees
them. Reducing variation in an education setting means that when students sign up
for a course, the same basic content will be taught regardless of who is the instructor.
Ultimately, in order to reduce variation, the work force must start talking with one another and develop "how-to" standards with which everyone can live.