The Fallacy of People Problems…and How to Resolve Them

One of the most oft-cited statistics in pharmaceutical manufacturing is that 80% of all reportable deviations are “people problems,” deficiencies of human performance. Clients report similar internal estimates ranging from 40% to 90%. This statistic shows up in our studies of Corrective and Preventive Action (CAPA) processes and investigation reports, and it is even cited on the Food and Drug Administration Web site. Despite the pervasiveness of people-caused problems, the specific causes attributed are few in number: failure to follow standard operating procedures, skipped or mis-sequenced steps, and improper documentation.

But do all of the problems classified as “human factors issues” really indicate a deficiency on the part of a person? Perhaps not.

Are These Really People Problems?

Consider the case of the “red specks.”

In this instance, a manufacturer of tablets found “red specks” in their final product inspection. Potency was within specifications, dissolution was unaffected, and stability trials were underway, but the appearance specifications required that the pills be white, not white with red specks. Initially there was the usual finger-pointing and cause-jumping: “It must be some kind of degradation.” “I’ll bet it has to do with the coating material from product X getting into the mix somehow.” “It wasn’t us folks in Production.” “We tested it twice in the lab and it showed up both times.”

When none of this led to either a root cause or subsequent corrective action, the focus shifted to the symptomology of the red specks. Someone asked what, in fact, the specks were, and after some debate, the specks were tested. A spectrographic analysis revealed that the red specks were particles of ferrous oxide.

Those of us who are not biochemists may know of this by its more common name: “rust.”

The investigators scratched their heads. “Rust?” they said. “Do we have any rust?” They developed a detailed process map, considered those steps of the process that might create rust, and went out and took samples. They found no rust.

They broadened their scope to incoming materials and took samples of all of the materials that went into the mix. Lo and behold, the investigators found small particles of ferrous oxide in drums that held one of the excipients for the blend. We’ll call it excipient X. They checked unopened drums of X, and there it was again: more rust. “Aha,” they thought. “We have found the problem, and it is not us.”

The manufacturer sent an officious letter to the supplier of excipient X. It stated what they had discovered, provided charts and graphs for evidence, and demanded that the supplier perform a thorough root cause analysis and detail what corrective and preventive actions would be implemented to make the rust problem go away.

Through official channels, the supplier responded, “We have conducted a thorough investigation and have determined that the source of the deviation is located in the drum we use for mixing excipient X. The inside of the lid of the drum has rust on it. When operators close the lid on that drum too forcefully, the rust flakes off the inside of the drum and gets into the mix. We have classified this as a people problem. Our corrective action is to retrain our people.”

An astonished phone call followed, asking the supplier what the retraining would focus on. The answer: “Telling them not to slam the lid so hard.”

Is this really a “people problem”?

While this anecdote may provide a chuckle, it is unfortunately based on a real occurrence. And the situation is far from unique. Consider the case of the “black flecks.” A different manufacturer found black flecks in a blend and immediately had them analyzed. The report indicated that the flecks were a black rubber gasket material, and an FDA-approved gasket material at that. This was not good: the blend went into a chewable tablet. Chewing a tablet is one thing, but chewing rubber is another. The problem was traced to its source, a gasket in one of three adjacent mixing machines. The old gasket, which was indeed old and worn, was replaced, and a stainless steel screen was installed at the output port of the machine, just in case. The black flecks disappeared.

Unfortunately, three months later, they began to get reports of “shiny flecks” in about a third of their samples, which, when analyzed, turned out to be, you guessed it, little shards of stainless steel screen material.

This incident was considered to be a mechanical issue, not a people problem. But isn’t it really a people problem at heart, an error of omission by humans?

The Cause, and the Cause of the Cause

At a philosophical level, a colleague of mine has always claimed that, when you get down to “the cause of the cause of the cause,” the root cause of all root causes, there are only two options: human fallibility or God’s will. Neither is a cause we can do much about with effective corrective and preventive action. We need to work at a level of analysis where we can have an impact on the results, and human fallibility/God’s will is perhaps too deep to accomplish this objective.

In the case of the “red specks,” it is clear that a human designed the mixing drum, and the process in which it was deployed, in such a way that it would rust during normal use. And a human decided that rust was a natural and acceptable part of the lid’s functioning and could not be prevented, only managed. A less philosophical and more pragmatic analysis would suggest aiming the corrective actions at the rust and not at the people who made it flake off. Why not strip the rust off and slap on some Rustoleum so that no matter how hard the lid is slammed, there would be no rust to flake off into the mix? This would get at the cause of the cause, and would prevent rust from forming in the first place.

In the case of the “black flecks,” a human failed to ask: Why is the gasket in this machine degrading faster than the gaskets in the two adjacent machines? Why just now? What is distinctive about this machine, this gasket, this timing? What has changed about the things that are unique to this machine? To the degree that the problem was instigated by a “special cause,” we can detail the change that led to it and correct it at the root. To the degree that it is a built-in “common cause,” endemic to all three gaskets but showing up in this one first by happenstance, it can be minimized, though not eliminated entirely, by instituting a detailed preventive maintenance schedule that ensures the gasket is replaced before it is compromised. Instead of taking these corrective or preventive actions, the group took “adaptive action.” They assumed that gasket damage was a fact of life and adapted their process to it by “inspecting quality in” with the stainless steel screen. When they did, no one asked what might go wrong once a screen was installed to catch the loose bits of gasket material.

Too often, what prevents effective action is not a lack of systematic analytical logic but more mundane concerns. Getting rid of the rust would have cost more money and taken more time. Changing the process or changing the gasket would have taken time. They might have had to revalidate the process to ensure that there was no rust. An investigation report might have prompted an FDA visit, looking for other parts of this process, or other processes, affected by rust. Who knows what other problems, rust-related or not, might be found? Even worse, any serious process change might have required filing a New Drug Application. And who knows where that might lead? Retraining the operators and taking adaptive actions are easier, cheaper, and less risky, at least in the short term. Chewing rubber is one thing; chewing stainless steel is another.

Apply Rigor at the Start

Even classic “people problems,” such as skipping a step in the standard operating procedure (SOP), touching the wrong surface and not re-gloving immediately, or not entering batch yield information at the right time or in the right format, need to be examined. The standards of problem analysis for a “mechanical problem” demand that we state problems with enough granularity to be actionable. Why should the analysis of a “people problem” be any less specific? If someone skipped a step, then who and which step? If someone touched the wrong surface, then who and which surface? Does this happen a lot? What are the trends? Why is it always this surface? Why just at this time? If someone failed to document batch yield, who, where, and when?

Precisely stating the defect or deviation and who or what was involved can help us visualize and understand what has happened. “Operator JW skipped step 3.2.5.4 in procedure 34-B.” “Maintenance Technician AR, in the process of adjusting belt speed on line 3, brushed up against the fill-nozzle at station 15.” “Supervisor JT entered the batch yield data for batch 040315B in kilograms instead of pounds.” These statements provide a concise starting point for analysis and follow a path that leads toward eliminating the deviation at its source.
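For teams that log deviations electronically, such precisely stated problems map naturally onto a simple structured record. The sketch below is purely illustrative; the field names are invented for this example and are not drawn from any particular quality system.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DeviationStatement:
    """A precisely stated deviation: who or what was involved, the specific
    defect, and where it occurred. Field names are illustrative only."""
    who: str       # person or piece of equipment involved
    defect: str    # the specific deviation, stated narrowly
    location: str  # procedure step, line, station, or record affected

    def as_sentence(self) -> str:
        return f"{self.who} {self.defect} ({self.location})."


# The three example statements from the text, restated as structured records.
examples = [
    DeviationStatement("Operator JW", "skipped step 3.2.5.4", "procedure 34-B"),
    DeviationStatement(
        "Maintenance Technician AR",
        "brushed against the fill-nozzle at station 15 while adjusting belt speed",
        "line 3",
    ),
    DeviationStatement(
        "Supervisor JT",
        "entered the batch yield for batch 040315B in kilograms instead of pounds",
        "batch record",
    ),
]

for statement in examples:
    print(statement.as_sentence())
```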

A Model of Cause Analysis

Once we have a place to start, the causes may lie with the operator, the maintenance technician, or the supervisor. Or they may not. To determine cause, we need a model. Classic Problem Analysis analyzes “special-cause” variation by asking:

What is it?                  What is it not?
Where is it?                 Where is it not?
When is it?                  When is it not?
What is the extent of it?    What is the extent of it not?
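As a minimal illustration of how these questions can be captured and compared, the sketch below records an is/is-not pair for each dimension of the “red specks” case. The entries are loosely paraphrased from the story above, and the structure is a hypothetical aid, not part of any particular problem-analysis tool.

```python
from dataclasses import dataclass, field


@dataclass
class IsIsNotPair:
    """One dimension of an is/is-not problem specification."""
    dimension: str
    is_observed: str
    is_not_observed: str


@dataclass
class ProblemSpecification:
    """Collects the is/is-not pairs for a single deviation."""
    deviation: str
    pairs: list = field(default_factory=list)

    def add(self, dimension: str, is_observed: str, is_not_observed: str) -> None:
        self.pairs.append(IsIsNotPair(dimension, is_observed, is_not_observed))

    def summary(self) -> str:
        lines = [f"Deviation: {self.deviation}"]
        for p in self.pairs:
            lines.append(
                f"  {p.dimension:<7} IS: {p.is_observed:<42} IS NOT: {p.is_not_observed}"
            )
        return "\n".join(lines)


# Illustrative entries based on the "red specks" case described above.
spec = ProblemSpecification("Red specks found in white tablets at final inspection")
spec.add("What", "red specks of ferrous oxide in the blend", "potency or dissolution failures")
spec.add("Where", "lots containing excipient X", "lots using other excipients")
spec.add("When", "at final product inspection", "at earlier in-process checks")
spec.add("Extent", "seen in repeated lab tests of the lot", "every lot in the campaign")

print(spec.summary())
```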

Using this method may narrow the search to a given person doing a particular thing at a specific time, but it may fail to address the uniquely human sources behind the question: why?

Once we have narrowed the range of possibilities, we need to turn to a model not of mechanical cause-and-effect, but of human performance. In this view, human performance is the result of a system of forces that act together to drive behavior.

This model points to several different sources of performance problems. Let us start with the Performer, and admit that there are people out there doing jobs they are not qualified to do. The test question is: “Could this person do this task if their job, or their life, depended on it?” If the answer is yes, then there is no deficiency in the performer. However, for each of us, some tasks are simply beyond our capabilities, and no amount of training would improve our performance. In that case, retraining is not the option; replacement is. People cannot be expected to do what is impossible for them to learn.

Next, consider the Response. This factor asks, “How clear is the desired behavior that we want from the performer?” “Are we asking for a quantum leap in performance or just a slight tweak?” The Response often exposes problems caused by changes to the SOP. Perhaps the standards are unclear, the changes too drastic, or the expectations unreasonable. It is common to encounter 57-step SOPs that demand the dexterity of patting your head while rubbing your stomach. They just cannot be accomplished easily or consistently, if at all. In these cases, the SOP needs to be changed. If it cannot be changed, training will be required on a constant basis.

To test the Situation, ask if the signal to engage in the desired response is clear and unambiguous to the performer or muddled with other priorities and expectations. In the world of pharmaceutical manufacturing, knowing when to call something a deviation and to begin the analysis can be murky. Employees may be told that quality matters, that precision is important, and that documenting every deviation is necessary. But is this message delivered at even half the volume of the one that says: Keep the line running? Included in the Situation factor is how well the environment supports the desired behavior. Are people expected to do a lot of writing in a room with no flat surfaces and little light? Is a problem-solving meeting working as well as it might when it is held in a space that requires goggles and earplugs?

Perhaps the most significant factor in the performance system model is Consequences. This factor reminds us that people do what they do because they get rewarded for doing it and punished for not doing it. A truism in management circles states: to see what you have been rewarding, look at the results you are getting.

But the model is more subtle than that. It posits that there needs to be a balance of short-term and long-term consequences for both the individual and the organization. For example, if the individual sees the desired performance as negative or punishing, he or she can still be motivated to do it if there is a reasonable expectation of positive consequences in the longer term. This is a classic tradeoff: It’s a pain to do this, and it’s going to make my life crazy for a while, but if I do it without complaint, it will be good for my career down the road. The same applies to organizational consequences. A serious problem in the first month of a multi-year production campaign can justify shutting down the line for a time, if doing so will produce a ten percent increase in productivity for the campaign. In contrast, there is no long-term benefit to shutting down the line for a complete revalidation on the last day of a multi-month run.

Individual and organizational consequences also must be balanced. If the corporation always sacrifices meeting its objectives so that individual workers can feel better, it will not stay in business long. And if the individuals suffer constant, negative consequences so that the organization can prosper, they will seek employment elsewhere, where more of their goals can be met.

A back-order situation encountered at a medical device company illustrates the effects of unbalanced consequences. Our consulting team was asked to analyze some issues in the shipping process. We discovered a huge back-order problem. Surprisingly, the products on back-order were not special orders but common everyday products, the highest-volume SKUs in the product mix. No one knew why this occurred until we learned about the incentive plan in Production. It rewarded volume based on skewed criteria that drove Production to turn out odd lots of weird stuff. The consequences for Production were out of balance with those for the organization, rewarding performance that harmed the company.

The most subtle aspect of the model is in how it defines consequences. Not everything is seen as universally rewarding or punishing. Positive consequences must be regarded as positive by the performer. An employee recognition program that offers a personal lunch with the president as a reward might make as many people run screaming in terror as it attracts. One client company recounted how they had tried three times to conduct such a program, only to see it backfire every time because the rewards weren’t universally positive. Once, the rewards were too trivial (free magazine subscriptions, for those three Americans who do not have an army of junior high school kids peddling them in their neighborhood). Then they were too extravagant (a $5,000 reward that led to rampant fraud and corruption). Finally, they were just plain strange (pizza with the president, go figure).

In pharmaceutical manufacturing, there are often consequences built into the system that punish spotting problems and engaging in root cause analysis. In many firms, whoever first notices the deviation owns it and is responsible for assembling a team, gathering data, doing analysis, and, in many instances, writing up the investigation report. For many, these are seen as negative consequences, onerous tasks to perform on top of regular responsibilities. There is the risk of management visibility, a constant push from Production to finish the analysis and get back to making product, and resistance from colleagues who are concerned that the analysis might not show them in the best light. The analysis itself can be less of a systematic process of gathering and arraying data and more of a knock-down war among vested interests. Being caught in the middle can be unpleasant. Given all this, it is no wonder that many people are reluctant to go out of their way to notice deviations: “Problem? I don’t see a problem.”

On the other hand, letting something slip has few, if any, negative consequences for the individuals in the short term. It is easy, and all too common, for production people to think: As long as the batch meets specifications, who is to know if a step was skipped or reversed, or if a signature was affixed during the process or after review? Chances are it will be three to six weeks before the batch fails specs, or two to twenty-four months before a patient complains. Whatever happened or didn’t happen might well be long forgotten.

These abundant negative consequences, and the lack of positive consequences in the short term, discourage the reporting of deviations. A client recently received a patient complaint of a one-inch bolt in a sealed bottle of capsules. They traced it back to a hinge-arm on a cottoner machine, used right before the bottles are sealed and capped. It was a peculiar bolt; there was only one like it in the plant. It appeared that, if the bolt had worked itself loose, it could have fallen into a bottle before the cotton was inserted. The details are not worth troubling over here, but it was striking that the nut that secured the bolt was never found. Someone must have found that loose nut, looked at it, and tossed it in the trash without writing it up in the batch records or reporting it to anyone. And someone must have noticed that the cottoner wasn’t working correctly because the hinge-arm was missing a bolt and a nut, and then replaced them, without noting what had happened. When the complaint supervisor was asked how probable it was that her people could have done this, she rolled her eyes and said, “Don’t ask, don’t tell.”

Because, to be blunt, what was in it for them?

Finally, consider how Feedback factors into the model. If nothing ever tells you about the consequences of your responses, you will continue to do what you have been doing, assuming that it is working. If everyone knows Production’s average yield but no one has a clue what the reject rate is, the message is clear. If neither the SOPs nor SOP training explain precisely why you can’t skip step 3.2.5.4, and what impact skipping it has not only on grinding but on mixing and encapsulation, then you have no reason to be especially vigilant. And if the only real feedback is a yearly list of generalities, followed by a modest monetary reward, what behavior can be expected to change?
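To pull the five factors together, the sketch below frames them as a screening checklist that could be run against a suspected “people problem.” The factor names come from the model described above; the test questions, the screening function, and the example answers are hypothetical illustrations, not an established diagnostic tool.

```python
from enum import Enum


class PerformanceFactor(Enum):
    """The five factors of the performance-system view described above."""
    PERFORMER = "Could this person do the task if their job, or their life, depended on it?"
    RESPONSE = "Is the desired behavior clear, reasonable, and achievable as written?"
    SITUATION = "Is the signal to act unambiguous, and does the environment support the behavior?"
    CONSEQUENCES = "Do short- and long-term consequences, for the person and the organization, favor the desired behavior?"
    FEEDBACK = "Does anything tell the performer about the results of what they do?"


def screen_people_problem(answers: dict) -> list:
    """Return the factors whose test question was answered 'no':
    the places where corrective action should focus."""
    return [factor for factor, ok in answers.items() if not ok]


# Hypothetical screening of the skipped-step example from the text.
findings = screen_people_problem({
    PerformanceFactor.PERFORMER: True,      # the operator is capable of performing the step
    PerformanceFactor.RESPONSE: False,      # a 57-step SOP is hard to execute consistently
    PerformanceFactor.SITUATION: True,
    PerformanceFactor.CONSEQUENCES: False,  # keeping the line running is rewarded more than reporting
    PerformanceFactor.FEEDBACK: False,      # no one explains why step 3.2.5.4 matters downstream
})

for factor in findings:
    print(f"{factor.name}: {factor.value}")
```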

The Locus of Leverage for Corrective Actions

The performance system model leaves room for retraining as a corrective action for a people problem, but only when the deficiency is in the performer, and even then only some of the time. Some people are simply not trainable, some skills are not transferable, and the optimal solution is rarely “more of the same.” Instead, most corrective actions for performance problems involve addressing the system itself: its balance of consequences, its feedback mechanisms, and its stated goals, targets, and objectives. In short, the solution lies with Management making it clear that quality, in all its aspects, is the priority. This is not done with words and slogans but with rewards and measures and metrics and behavior. And finally, the solution lies in addressing the common people problem with as much rigor and analytical precision as the most challenging mechanical or biochemical problem.
