When I worked in manufacturing, there were losses and then there were losses. When it comes to process yield, what does "normal" yield actually mean? And how do we know when a "streak" of 100% days is really an achievement?
These are similar questions. In the past I would have pointed to statistical process control tools. Control charts would be a good start, and Cp and Cpk would tell you how your process stacks up globally. In this case, however, I think we can come up with something more intuitive and easier to explain.
Instead of forcing the data to fit the tools, let's take a completely different approach and use Bayesian techniques to answer both questions. Bayesian analysis produces probability distributions, which most people find easy to understand. Let's start with the first question: What's "normal" when it comes to yield in our process?
The first thing we need is a model that describes the distribution of observations. We already know that a normal distribution doesn't work well. We want something that looks more like this:
The distribution is pushed up against 100% but has a long tail towards zero. This is a modified gamma distribution I made that looks a lot like some real processes I have seen. (I took a gamma distribution, then flipped, normalized, scaled, and shifted it to fit between zero and one.)
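Here is a minimal sketch of that flip-and-shift construction. The shape and scale values are my own placeholders chosen to produce a yield-like curve; the post doesn't give the actual parameters it used.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical gamma parameters -- small scale keeps the losses modest,
# so most of the mass ends up near 100% yield.
shape, scale = 2.0, 0.04

# Draw gamma samples (the "losses"), flip them so the peak sits near 1.0,
# and clip so everything stays inside [0, 1].
raw = rng.gamma(shape, scale, size=10_000)
yields = np.clip(1.0 - raw, 0.0, 1.0)

print(yields.mean())             # bulk of the mass sits near 1.0
print(np.quantile(yields, 0.05)) # long tail reaching down toward zero
```

A histogram of `yields` reproduces the general shape described above: a sharp mode just below 100% with a skewed tail toward lower yields.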
The next thing we need is some actual data from the process in question. I will be faking that data here for obvious reasons. (For some reason, no one likes their process yield data being made public.)
We will use a Monte Carlo simulation to find parameter values that set the position and width of the distribution above so that it closely matches the data from the real process. Visualize stretching, squeezing, and moving the peak around. That's pretty much what the simulation does.
We're setting our priors based on the real data. (If that sentence doesn't make sense to you, don't worry about it.)
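The fitting step can be sketched as a crude Monte Carlo search: draw candidate parameter pairs from broad priors and keep whichever pair makes the observed (flipped) yields most likely. This is a stand-in for the post's actual simulation; a real analysis would use a proper MCMC sampler, and both the fake-data parameters and the prior ranges here are my assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Fake daily yields standing in for the real process data.
losses = rng.gamma(2.0, 0.03, size=200)
data = np.clip(1.0 - losses, 0.0, 1.0)

# Map yields back onto gamma support: a yield of 0.97 is a loss of 0.03.
flipped = 1.0 - data

# Random search over (shape, scale) drawn from wide uniform priors,
# keeping the candidate with the highest log-likelihood.
best_ll, best_params = -np.inf, None
for _ in range(20_000):
    shape = rng.uniform(0.5, 10.0)
    scale = rng.uniform(0.001, 0.2)
    ll = stats.gamma.logpdf(flipped, a=shape, scale=scale).sum()
    if ll > best_ll:
        best_ll, best_params = ll, (shape, scale)

print(best_params)
```

With the fitted parameters in hand, you can draw from the fitted distribution and read off quantiles like the 95% lower bound discussed below.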
Here are the results:
I zoomed in to show that the distribution is much tighter and the mode is much closer to 1. That means this process has better yields than my example. You can also see that 95% of the time you won't see less than 81.4% yield.
That number might be mildly disappointing; however, with more data we could narrow those numbers down further. (There are two values below 90% in the data I used, so this is probably a good representation of the process.)
So the answer to the first question is: Anything greater than 81.4% is pretty normal for this process. (The upper limit is 100% obviously.)
On to the second question: How do we know when a "streak" of 100% days is really an achievement?
At this point, it's tempting to use the model we just used to create some random data and then analyze that data to see how many times we see particular streaks. Unfortunately that isn't going to work well. The reason is the model returns metric values and we are really interested in just whether or not we had losses, not the yield. (In other words, the model we built returns things like 0.985. We'd have to place a lower limit on what "100%" really means, which is tied to volume. That's a road I don't want to go down if I don't have to. In this case, I don't have to so I won't.)
A binomial model fits the bill better here. We convert the existing data we used before into ones and zeros: one means a lossless day and zero means a day with losses. Then we use a logistic model to find reasonable values for the average percentage of days without losses.
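The conversion-and-fit step might look like the sketch below. Instead of the post's logistic model I use the conjugate Beta-Binomial shortcut, which gives the same kind of answer (a posterior distribution over the lossless-day rate) with less machinery; the indicator data and the flat prior are my assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Fake indicator data standing in for the converted yield series:
# one = a lossless day, zero = a day with losses.
lossless = rng.binomial(1, 0.82, size=200)

# With a flat Beta(1, 1) prior, the posterior for the lossless-day
# rate is Beta(1 + successes, 1 + failures).
successes = lossless.sum()
failures = len(lossless) - successes
posterior = stats.beta(1 + successes, 1 + failures)

print(posterior.mean())           # average lossless-day rate
print(posterior.interval(0.95))   # 95% credible interval
```

The posterior mean is the "average percentage of days without losses" quoted below, and the credible interval tells you how much the limited data leaves that rate uncertain.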
This tells us that on average 82.1% of the days don't have any losses at all. (Contrast that with the earlier finding about the average daily yield. Those aren't the same thing.) Based on this information, we can now simulate a bunch of days and count up the lossless streaks.
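The streak simulation can be sketched like this: flip a biased coin once per day using the fitted rate, then tally the lengths of every run of consecutive lossless days. The run-counting loop and the exact streak definition are my assumptions, so the fractions it prints won't necessarily match the post's figures exactly.

```python
import numpy as np

rng = np.random.default_rng(2)

p_lossless = 0.821   # posterior mean lossless-day rate from the fit
n_days = 100_000

# One simulated day per coin flip: 1 = lossless, 0 = losses.
days = rng.binomial(1, p_lossless, size=n_days)

# Collect the length of every run of consecutive lossless days.
streaks = []
run = 0
for d in days:
    if d:
        run += 1
    elif run:
        streaks.append(run)
        run = 0
if run:
    streaks.append(run)
streaks = np.array(streaks)

frac7 = (streaks >= 7).mean()    # streaks at least a week long
frac16 = (streaks >= 16).mean()  # much rarer sixteen-day streaks
print(frac7, frac16)
```

Longer streak thresholds are geometrically rarer, which is why the sixteen-day figure below is so much smaller than the seven-day one.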
You can see from the simulation that we would expect a streak of 7 days or longer 18.5% of the time. So a one-week stretch with no losses is not all that unusual. A sixteen-day stretch would be much more surprising: we'd expect a streak of 16 days or longer only 3% of the time.
As you can see, Bayesian modeling can give us the answers we need in situations where traditional statistical process control would be difficult. I have complete control over the function I use for curve fitting, and this model could easily be adapted for cascading process steps. The approach is amazingly flexible and the results are easy to explain.