Tuesday, November 30, 2010

Sample Size for Process Capability



It's important, no, necessary that new processes be developed with high process capabilities because a process that operates close to a specification limit is an expensive process. If you can develop the process such that it is six sigma capable, you probably won't hear much from it ever again. And that's great.


A colleague and I were reviewing a Cpk study at work recently where the result was 1.2 and the sample size was 25. Note that a Cpk statistic is a point estimate and it has confidence limits. The larger the sample, the narrower the limits. So if we put 95% confidence limits around the calculated 1.2 with n = 25, we get limits of 0.84 and 1.56. That's quite a risky process if it is as low as 0.84.


Sample sizes for process capability studies aren't well understood. I recommend 100 or more, and the sample needs to be taken over several runs, not all at once. A commonly used formula for the confidence interval of a Cpk statistic is Cpk ± Z*sqrt(1/(9n) + Cpk^2/(2(n-1))), where Z is 1.96 for 95% confidence. If you are responsible for a new process, make sure your calculated Cpk statistics include the confidence limits.
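
To make the arithmetic concrete, here is a minimal Python sketch of that interval calculation. The function name and the example numbers are mine (the 1.2 / n = 25 study above, plus a hypothetical n = 100 follow-up), and the formula is the normal approximation, so dedicated statistical software may report slightly different limits.

    import math

    def cpk_confidence_interval(cpk, n, z=1.96):
        """Approximate confidence interval for a Cpk point estimate.

        Normal-approximation formula: Cpk +/- z * sqrt(1/(9n) + Cpk^2/(2(n-1)))
        z = 1.96 gives a two-sided 95% interval.
        """
        half_width = z * math.sqrt(1.0 / (9.0 * n) + cpk ** 2 / (2.0 * (n - 1)))
        return cpk - half_width, cpk + half_width

    # The study discussed above: Cpk = 1.2 estimated from 25 samples
    print(cpk_confidence_interval(1.2, 25))    # roughly (0.84, 1.56)

    # The same point estimate with the recommended 100+ samples
    print(cpk_confidence_interval(1.2, 100))   # roughly (1.02, 1.38)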






Sunday, April 5, 2009

Measurement System Variability






I am working with a client who has experienced an increase in variability in the thickness of their product. I was able to get my hands on just over a year's worth of recorded data and the plot was quite revealing. The chart is attached - please ponder it for a moment.


Their process has some sources of variation between batches, but look at how the data points appear after June 2008 (middle of the graph). At that point in time, someone replaced the measuring tool. The older digital indicator provided five decimal places whereas the new digital indicator provided only three. This shows up as an increase in the short-term variability in the data.


The measurement system is one of the sources of variability in a data set and in many cases, it is the largest source. It is also the easiest source to correct, and it is easy to prevent.


I took the original data set and plotted the differences between successive data points. Here, it is clear that the new tool increases the variability of the measurement system.
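
If you want to try the same differencing on your own data, here is a minimal sketch; the thickness values are made up for illustration and simply stand in for readings listed in date order.

    import matplotlib.pyplot as plt

    # Hypothetical thickness readings, in date order (not the client's actual data)
    thickness = [0.50012, 0.50031, 0.49987, 0.50024,   # higher-resolution indicator
                 0.500, 0.501, 0.499, 0.500, 0.502]    # three-decimal indicator

    # Successive differences: a step up in their spread signals a change
    # in short-term (measurement) variability, not in the process itself
    diffs = [b - a for a, b in zip(thickness, thickness[1:])]

    plt.plot(diffs, marker="o")
    plt.axhline(0, color="grey", linewidth=0.5)
    plt.xlabel("Observation")
    plt.ylabel("Difference from previous reading")
    plt.title("Successive differences in thickness")
    plt.show()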


The wrong tool can be very costly - rejecting good product just because of measurement error.


Wednesday, November 12, 2008

Measuring Tools

I saw a video clip from a high technology organization that described the importance of precision in measuring their parts. But the video showed a fellow using a dial indicator. It seems that a lot of people are not aware of the variability in this rather simple tool. If precision is really important, you don't use a tool that introduces 0.005" to 0.010" of variability to the measurement process.

Measuring tools are used to assign a number to some feature of a part. In assigning this number, the tools vary. Sometimes a little and sometimes a lot. We can all appreciate the fact that if a lot of people measured the same part with the same measurement tool, there would be some disagreement in the responses. And if the same person measured the same part with the same measuring tool many times over, there would still be some disagreement. These are the sources of variation in the measuring process.

It is important, indeed necessary that the variability (repeatability and reproducibility) introduced by the measuring tool be known. Otherwise you make too many of the two types of errors - you accept product that should be rejected and you reject product that should be accepted.
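
A simple crossed study - a few operators, a few parts, repeat measurements - is enough to put numbers on those two components. Here is a rough sketch of the idea in Python; the data layout and values are hypothetical, and the calculation is a simplified illustration rather than the full gauge R&R procedure you would get from dedicated software.

    import statistics

    # Hypothetical crossed gauge study: operator -> part -> repeat measurements
    data = {
        "op_A": {"part1": [2.01, 2.02, 2.01], "part2": [1.98, 1.99, 1.98]},
        "op_B": {"part1": [2.03, 2.04, 2.03], "part2": [2.00, 2.01, 2.01]},
    }

    # Repeatability: spread of repeat measurements within each operator/part cell
    within_cell_vars = [statistics.pvariance(reps)
                        for parts in data.values() for reps in parts.values()]
    repeatability_sd = statistics.mean(within_cell_vars) ** 0.5

    # Reproducibility: spread between operator averages on the same parts
    operator_means = [statistics.mean([m for reps in parts.values() for m in reps])
                      for parts in data.values()]
    reproducibility_sd = statistics.pstdev(operator_means)

    print(f"repeatability (equipment) sd:   {repeatability_sd:.4f}")
    print(f"reproducibility (operator) sd:  {reproducibility_sd:.4f}")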

Tuesday, October 28, 2008

Data Ends the Debate




A few years ago we were having a discussion on whose product caused a problem. A wood products manufacturer created the box and a plastics supplier moulded the cap. The cap and box did not fit very well. The wood products guy said the plastic caps were distorted and each one was different. His stance was "not my problem." The plastics fellow said the plastic parts would distort out of the mould but not enough to cause the problem we were seeing. Going nowhere.


A couple of fellows took a load of the wood boxes and started measuring them. One particular dimension was tied to the problem. The Excel chart shows the distribution of this dimension over dozens of boxes. Clearly, one of the holes was being drilled too far from the edge.
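
Producing that kind of chart takes only a few lines; here is a minimal sketch with made-up measurements standing in for the real box data.

    import matplotlib.pyplot as plt

    # Hypothetical hole-to-edge measurements from a batch of boxes (mm)
    hole_to_edge = [12.1, 12.0, 12.2, 12.1, 13.4, 13.5, 12.0, 13.6, 12.1, 13.5]

    plt.hist(hole_to_edge, bins=10)
    plt.xlabel("Hole-to-edge distance (mm)")
    plt.ylabel("Number of boxes")
    plt.title("Distribution of the suspect dimension")
    plt.show()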

With data, the debate was over and we found the root cause quickly.
It's a simple example, but this approach is not common. Many of our problem solving efforts fail because we just don't have the right data.

Thursday, October 16, 2008

Why We Don't Solve Quality Problems

The reason why problems do not get solved is that the decision makers do not have the right data to work with. We sit in a meeting room with a team of people discussing the problem and all we get are opinions. Mostly, they're wrong. Opinions are formed in the absence of data.

Other times we have someone in the discussion who may have authority or who may just be assertive. They will present a good case but it's just another opinion. Assertiveness isn't always a good thing. Organizations have spent lots on solutions that weren't.

The name we give to a problem often suggests a root cause. One example comes from a high volume packaging line. A particular problem was termed a "feed problem", which suggests that the feeder was the root cause. But the real problem turned out to be the orientation of the item. Because the center of gravity was not at the center of the item, some items wobbled. Once this was identified and shown to be true, the fix was easy. But for months, the "feed problem" was not solved because people were looking at the conveyor system and not at the item on the conveyor.

Labelling a defect incorrectly is common and it causes delays.
Another reason we misdiagnose problems is that we use the wrong sample size. This happens a lot. If you find two defects and trace them back to supplier A and not supplier B, we all know that isn't enough evidence to call it a supplier defect. But what sample size is right? How many do we need to be 95% confident? 99% confident? Most people don't know how to determine the required sample size, and that also causes delays as solutions are implemented that don't solve the problem.
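
There are standard ways to answer that. As one simple illustration, here is a sketch of the sample size needed to see at least one defect with a chosen confidence, assuming defects occur independently at some rate; the defect rates used below are just examples, not from any real case.

    import math

    def sample_size_to_see_a_defect(defect_rate, confidence=0.95):
        """Smallest n so that P(at least one defect in n parts) >= confidence,
        assuming defects occur independently at the given rate."""
        return math.ceil(math.log(1 - confidence) / math.log(1 - defect_rate))

    for p in (0.10, 0.01, 0.001):
        n95 = sample_size_to_see_a_defect(p, 0.95)
        n99 = sample_size_to_see_a_defect(p, 0.99)
        print(f"defect rate {p:.1%}: n = {n95} for 95% confidence, {n99} for 99%")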

There are a lot of reasons why we are not so effective at problem solving and we've all experienced solutions that proved to be wrong. Collecting evidence and using statistical methods can make your problem solving efforts a lot more successful.

Tuesday, October 14, 2008

Productivity in Problem Solving

I've been a black belt for many years now and have been involved with a lot of problem solving assignments. These projects require charters, meetings, checklists and reports. You tend to have conversations about the problems with dozens of people, one at a time. There's a lot of routine work involved, as with any project.

But nearly everything I learned about the problem came from analysis of data. Collecting evidence about process behaviour or running a designed experiment provides new insights almost every time. So if you're tackling a difficult problem, you need to be on the floor, or in the lab, measuring parts and collecting data.

Solving quality problems is a scientific endeavour. If there's no data, it isn't science.

Thursday, September 18, 2008

Know Your Process

In an injection molding factory, most parts will have a mold and cavity ID molded on the back of the part. In this way, when there is a quality problem we can quickly know where the parts came from. Not many processes are set up with such convenience.

I was able to work with a high tech company that had a high reject rate, 2.7%, coming from a long and complex process. There were several workstations for each process step, so the number of possible paths that each product could have taken was in the hundreds. This is a problem in many high volume manufacturing companies. Not many can run one-at-a-time serial production.

One of the process steps was a heating cycle. Thousands of parts were soaked at an elevated temperature for 24 hours. When they were removed from the ovens, the parts were handed to several people for more work before making their way to a final test station. It was here at final test where the 2.7% failed.

In this assignment I had each product labelled (Sharpie marker) with the shelf number from the oven (from 1 to 12). We were very surprised to see that nearly all failures were labelled 1, 2 or 3. The bottom three shelves in the oven were responsible for almost all the failures! The temperature in the oven dropped off sharply near the bottom and these parts were not getting processed adequately.
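
Once the parts are labelled, the tally itself is trivial. Here is a minimal sketch; the records are made up for illustration, standing in for (shelf number, pass/fail) pairs collected at final test.

    from collections import Counter

    # Hypothetical records: (oven shelf number, passed final test?)
    records = [(1, False), (2, False), (3, False), (1, False),
               (4, True), (5, True), (7, True), (12, True), (3, False)]

    failures = Counter(shelf for shelf, passed in records if not passed)
    totals = Counter(shelf for shelf, _ in records)

    for shelf in sorted(totals):
        rate = failures[shelf] / totals[shelf]
        print(f"shelf {shelf:2d}: {failures[shelf]}/{totals[shelf]} failed ({rate:.0%})")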

There were so many opinions on a possible root cause and the real root cause wasn't on the list! Collecting data may be tedious but there's no other way to know your process.