Seeing the Data Sets You Free; You Can't Manage a Secret...

Achieving genuine quality improvement requires closely managing the underlying parametric data, not just the top-level pass/fail attributes. Explore ways to improve by seeing the data.

by Dann Gustavson

Executives and managers reviewing data.

The title is paraphrased from a talk Alan Mulally gave at the Stanford Business School while he was CEO of Ford Motor Company. (Note 1) His point was that to tackle a business problem in a meaningful way, it is absolutely essential to get everyone working from the same set of data or information. Seeing the data together is important, even if – especially if – the data show “bad news.”

A few years ago I joined a company as the Operations Manager in printed circuit board assembly and had my own “A-ha!” experience that corroborates Mulally’s observation. One of the operating division’s practices was to review each production business unit’s process yield data and trends, and report them in a Bimonthly Yield Meeting. My team’s review consisted of tables and bar charts showing the overall first-pass yields of the various models of circuit boards we built for each product line. Accompanying commentary written by one of the process engineers described the failure modes for the top 3 causes of test failures in the period, followed by an attempt to explain the underlying or “root” causes.

Perhaps not surprisingly, given the volume and variety of circuit boards tested in two months’ time, every historical report I looked at offered some variation of “different failures, random causes, no trends, check back again in two months.” This approach was sufficient to keep first-pass yields hovering around 90%, even as circuit board complexity had increased steadily over the prior few years as products added features.

As part of the company’s continuous improvement process, a training company was engaged to teach a workshop on Six-sigma principles. Engineers from each business unit participated in a week-long classroom session followed by a supervised project over several months. Many good things came out of the workshop and the projects; one of the best was solidifying the notion that to improve the quality of the process, it was not enough simply to tabulate the test failures over a long period of time. It was necessary to get beneath the “pass-fail” view – in close to real time – to investigate which defects occur and what causes them, and then to correct the underlying causes in order to maintain process control.
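To make that shift concrete: getting beneath the pass-fail view means tallying failures by board model, circuit location, and defect mode, rather than just counting passes and fails. Here is a minimal sketch of such a daily tally; the board names, locations, and defect codes are hypothetical, invented purely for illustration:

```python
from collections import Counter

# Hypothetical one-day ICT failure log: (board_model, location, defect_mode).
failures = [
    ("BoardA", "C7",  "VALUE_LOW"),
    ("BoardA", "C7",  "VALUE_LOW"),
    ("BoardA", "U3",  "OPEN_PIN"),
    ("BoardB", "R22", "VALUE_HIGH"),
    ("BoardA", "C7",  "VALUE_LOW"),
]

# Tally by (model, location, mode) instead of an aggregate pass/fail count.
# A recurring triple points at a specific assignable cause; a single
# yield percentage hides it.
pareto = Counter(failures)

for (board, loc, mode), count in pareto.most_common():
    print(f"{board:>6} {loc:>4} {mode:<10} x{count}")
```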

Here’s an example from a few weeks into that new Six-sigma paradigm.

A board called the “MCU Board” has a high failure rate in the ICT (In-Circuit Test, an automated board test system) for a capacitor at circuit location C16. The production line repair technicians remove and replace that part on approximately 30 boards per week because the tester flags them as defective; that is, the measured value is outside the acceptable tolerance that’s programmed into the test. When the part (or any part) fails, the board is set aside for repair and retest before it can be used in product. I’m the new guy here, so I can ask naïve questions for a few more weeks, and when I ask about this C16 failure, the responses are something like: “Oh yeah, that’s normal”; “Been like this as long as we can remember”; “It’s only about 5 percent of the boards.”

Nobody thinks of it as a problem; this is just the way it is.

In reviewing the ICT failure data, however, I noticed something curious and brought it up with the engineering team. “The same component type and value as C16 is used in a dozen locations on the board, but the other ones never fail. How can it be that the bad ones are always placed in C16?” 

Dead silence in the room for three full minutes while that sank in. Then one brave soul volunteered that maybe the test program limits were not set properly for C16. Another engineer suggested that maybe C16 was right and the other locations’ test limits were set too wide (meaning some of them should be failing, too). Still another wondered whether the circuit design had other components around C16 that prevented the ICT tester from making a proper measurement. At that point an eager problem-solver emerged and said he would start digging into the cause of this obvious problem as soon as the meeting broke up.

That afternoon he excitedly reported to me that the test program’s pass-fail window for C16 was indeed biased toward measured values above the center of the part tolerance, increasing the chance that parts measuring low, yet still within tolerance, would fail the test. On the next six-hundred-piece run of that board, he checked the measured values for all the affected capacitors and confirmed the root cause of the problem: incorrect test limits. Seeing that, he easily corrected the program so that parts measuring within tolerance (essentially all of them) would pass.
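A minimal numeric sketch of what a biased test window does; the capacitor value, tolerance, and amount of bias below are hypothetical, since the article does not give the actual numbers:

```python
# Hypothetical part: a 100 nF capacitor with a +/-10% tolerance, so any
# measurement between 90 and 110 nF is a good part.
PART_LO, PART_HI = 90.0, 110.0        # nF, the component's real tolerance

# Correct ICT limits would be centered on the part tolerance.
def centered_test(measured_nf):
    return PART_LO <= measured_nf <= PART_HI

# Biased limits like those found in the MCU Board program: the window is
# shifted upward, so parts measuring low but still within tolerance fail.
BIASED_LO, BIASED_HI = 97.0, 110.0    # nF, hypothetical shifted window

def biased_test(measured_nf):
    return BIASED_LO <= measured_nf <= BIASED_HI

# A part measuring 94 nF is a good part, yet the biased test rejects it.
for value in (94.0, 101.0, 108.0):
    print(f"{value} nF  centered: {'PASS' if centered_test(value) else 'FAIL'}"
          f"  biased: {'PASS' if biased_test(value) else 'FAIL'}")
```

Re-centering the window on the part tolerance means only genuinely out-of-tolerance parts fail – which is what the corrected program did.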

Behind the tech-geek discussion, what happened there? Right off the bat, we got a 5 percentage-point improvement in MCU Board yield for the cost of around 4 hours of an engineer’s time. The technicians who used to remove and replace 30 of those parts each week no longer had to, saving labor cost. The bonepile of defective boards shrank because the failed MCU Boards that had been awaiting repair were retested against the corrected limits, with no repeat C16 failures. The scrap rate went down, saving material cost. Post-repair retest went down, saving labor cost.

These are all good things, but there was an even bigger payback.

Seeing the Data That Have Always Been There

This first success got people excited about what other 2% or 3% or 5% problems might be lurking in the data, now that we knew another good place to look. The data had always been there; now we could see them. Over the next couple of weeks, each engineer took responsibility for two or three of the highest-impact boards, with impact based mainly on either high production volume or low first-pass yield. We began tracking the test results daily for individual circuit board models, rather than looking at the aggregate just before the bimonthly meeting. We set Six-sigma control limits based on the “voice of the process” and took actions to reduce the process defect rate, along with other actions to reduce the observed lot-to-lot variation.
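For daily pass/fail counts, one standard way to let the “voice of the process” set the limits is a p-chart: compute the average defect fraction across lots and flag any lot that falls more than three standard errors from it. A minimal sketch, with made-up daily lot counts:

```python
import math

# Hypothetical daily ICT results for one board model: (tested, failed).
daily_lots = [(120, 13), (118, 10), (125, 14), (122, 11), (121, 28), (119, 12)]

total_tested = sum(n for n, _ in daily_lots)
total_failed = sum(d for _, d in daily_lots)
p_bar = total_failed / total_tested          # average defect fraction

# p-chart control limits at +/-3 sigma, computed per lot because lot sizes
# differ. A point outside the limits signals an assignable cause worth
# investigating that day, not two months later.
for n, d in daily_lots:
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    ucl = p_bar + 3 * sigma
    lcl = max(0.0, p_bar - 3 * sigma)
    p = d / n
    flag = "INVESTIGATE" if not (lcl <= p <= ucl) else "in control"
    print(f"n={n}  p={p:.3f}  limits=({lcl:.3f}, {ucl:.3f})  {flag}")
```

In this made-up run, the fifth lot falls above the upper limit and would have triggered an investigation the same day.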

Truly seeing the data may require finding a different way of looking at them.

Gradually, people’s mindset shifted away from “90% yield is good enough” toward viewing defect reduction and elimination – problem solving and continuous improvement – as our process engineering team’s reason for being. And just as big, if not bigger: people had begun to learn that even if the data show “bad news,” putting them in front of the manager was not a career-limiting action. On the contrary, seeing the data led to a serious dialogue about their meaning and implications, and to a search for the “good news” solutions to be obtained.

OK, Dann – What’s My Take-away?

One of the biggest benefits of Six-sigma is that, when properly implemented, it allows leaders to unleash the analytical and creative talent of engineers and other professionals, so that those experts – the ones closest to the work and the problems – are empowered to do something with a positive impact. After a few successes the approach becomes ingrained and self-sustaining. Not every attempt succeeds; sometimes you will find yourself up a blind data alley and have to rethink your approach. Overall, however, as your team members use the tools they become more effective with them, and the trend in your results will be positive.

In my case, the circuit board engineering team used the tools and methods we learned to drive first-pass yields above 99.5% in about two years, meaning the process defect rate was reduced from 10% to less than 0.5%. Because only 1 board in 200 required rework due to test failure, instead of 1 in 10, we reduced our board assembly rework crew from 8 full-time people to one part-time person, and transferred the others to value-adding operations. All because we found a way to see the data that were already there for us.

As Alan Mulally put it, you can’t manage a secret.

Another example of Next Level Leadership!

Note 1.  Search on “Alan Mulally Leaders Must Serve With Courage”

Dann Gustavson, PMP®, Lean Six-Sigma Black Belt, helps Program Managers and their teams achieve superior results through high-impact program execution: preparing, structuring, and running successful programs in product engineering, manufacturing operations (including outsourcing), and cross-functional change initiatives.

Contact Dann@Lean6SigmaPM.com.