KT and Six Sigma – Complementary or Competitive?

We often run into discussions with clients about whether the Kepner-Tregoe Problem Analysis process fits in with the Six Sigma work they are doing. Their questions usually cast the two techniques as competitors; it’s almost as if they feel they have the capacity to use only one tool, so which one should it be?

To our mind, this is a false contrast. KT Problem Analysis and Six Sigma tools like the Ishikawa fishbone diagram were designed to accomplish entirely different ends. Six Sigma is about “Common Cause” problems, where factors inherent to the product or the process cause deviations over time. Think of any machine—all machines produce heat and all machines vibrate. Over time, that heat will eventually dry out and crack the seals and gaskets, and the vibration will eventually work all the bolts loose. This is just in the nature of things—inherent variation. Six Sigma attempts to isolate those inherent sources of variation and to reduce them.


KT Problem Analysis was designed to attack “Special Cause” problems, in which something has changed or is different and causes a problem: a change in settings; a new operator; a difference in raw material suppliers; a sudden anomaly in environmental conditions—any or all of these, separately or in combination, can cause problems.  

This shows up in the first question KT uses to qualify whether a particular problem fits the KT process: “Do we have a deviation?” If the answer is, “No, it’s always been this way,” then you are probably talking about Common Cause.

Even within Special Cause problems, the Ishikawa technique can work brilliantly if used correctly and in the proper sequence. If misused, blending the two together can be a disaster.

For example, I recently visited a client who had tried to marry the two together, intuitively, and had done it precisely wrong. They had a group of Subject Matter Experts who had not been trained in KT and were somewhat resistant to it—they wanted to do Fishbones. Fishbones were intuitive; they relied on experience and knowledge, not on specific data, and they gave the Experts a chance to use their expertise. The company also had KT-trained facilitators who wanted to use KT’s Is / Is Not technique, in which we begin by specifying the symptoms of the problem. Their compromise was to start by doing Fishbones and then to do an Is / Is Not based on the Fishbone variables they found most persuasive.

You can see what’s wrong with this picture: they were starting with causes and then looking at symptoms. It was precisely backwards, a closed-loop system. The result was that all the causes they identified, problem after problem, were “the usual suspects”: vague culprits like “inadequate cleaning procedures,” “below-standard inspection vigilance,” or the dreaded “operator error.” They threw some procedural fixes at the problems . . . and the problems recurred. Things had gotten so bad by the time KT arrived on the scene that they were ready for some additional help.

Within Special Cause problems, we prefer to use Ishikawa diagrams after the symptoms of the problem have been specified. We already know which batches were affected and which were not, which clients complained and which did not, and perhaps even which days of the week the problematic product was made on and which days it was not. At this point, we’re asking what about that product, that deviation, that time, that location, that amount is distinctive, odd, curious, or different. And we often use the Ishikawa categories as prompts to elicit this data:

  • What is unique about Model 200 in terms of crewing?
  • What is different about the right side of the unit in terms of materials?
  • What is unique about ten days ago in terms of methods or measurement?
  • What is distinctive about the second-floor Lab in terms of environment?

Used in this way, at this point in the analysis, Ishikawa diagrams can work brilliantly.

If you think about it, this sequence closely parallels the classic scientific method: start inductively to establish an epidemiological pattern of the problem’s behavior; then deduce distinctions, changes, and possible causes from that pattern; finally, test those possible causes against the data. Exploratory data to confirmatory data, inductive to deductive, parts to wholes to parts—this should sound familiar.

As a parallel, we find uses for the KT process in Common Cause problems. What we have discovered is that if you do a quick-and-dirty Is / Is Not to specify the symptoms of the problem, you may be able to eliminate three or four of the major bones of the fish right away: “Well, if it Is happening on Line 1 but Is Not happening on Line 2, we can ignore environmental factors, materials, and manpower—the two lines are in the same room, share the same stores, and rotate the same crews.” Fifteen minutes invested up front in this way can save hours of painful deliberation downstream and significantly narrow your search space.
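The pruning logic described above can be sketched in a few lines of code. This is purely illustrative, not a KT or Six Sigma artifact: the category names, factors, and the rule for what the two lines share are made-up assumptions standing in for the facts a real Is / Is Not would surface.

```python
# Illustrative sketch: using one quick Is / Is Not fact to prune fishbone branches.
# All names here (categories, factors, shared conditions) are hypothetical examples.

fishbone = {
    "manpower":    ["crew rotation", "new operator"],
    "materials":   ["supplier change", "raw stock batch"],
    "methods":     ["revised procedure", "new settings"],
    "environment": ["humidity", "temperature"],
    "machines":    ["worn seals", "loose bolts"],
}

# Is / Is Not fact: the problem Is on Line 1 and Is Not on Line 2, yet both lines
# share these categories (same room, same stores, same rotating crews) -- so those
# branches cannot explain the difference between the lines and can be set aside.
shared_between_lines = {"manpower", "materials", "environment"}

remaining = {cat: factors for cat, factors in fishbone.items()
             if cat not in shared_between_lines}

print(sorted(remaining))  # -> ['machines', 'methods']
```

Two of the five major bones survive the fifteen-minute check; the subsequent Fishbone discussion starts from those, not from the whole diagram.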

So, these two tool sets are not in a competition. They work better for different kinds of issues, at different stages in the analysis process. And when used together they work better than used separately. It’s just a matter of using the right tool for the job. 

Dr. Jamie Weiss, Senior Consultant, Kepner-Tregoe, Inc.

Also by Dr. Weiss, The Fallacy of People Problems
