In their book Influencer: The Power to Change Anything, Kerry Patterson et al. suggest that you can change anything by addressing two fundamental questions. Change requires both motivation (Is it worth it?) and ability (Can I do it?). In a pilot program to improve customer service at a state-run health organization, we used this model to help participants say yes to both questions, embrace sustainable changes, and achieve remarkable business results.
Kepner-Tregoe was called in to help this large health organization address customer service issues in a program that provides medical services and equipment to people with disabilities. The time required to complete a customer application was too long and growing, and no one was sure why. Some thought the increasing cycle time was due to growing volume; others thought the customer service officers were too slow; and some had no idea what was going on.
Through a simple diagnostic method, KT quickly identified that the increasing cycle time was due not to the application process itself but to how the process was framed. As in too many organizations, the process was framed from a functional perspective rather than a customer-outcome perspective. Using customer outcomes as the objective, KT reframed the process and designed a five-week pilot to change how work was done and to prove that there was a better way.
Using the influencer model, here’s what was done.
|            | Motivation: Is it worth it?              | Ability: Can I do it?    |
| Personal   | Make the Undesirable Desirable           | Surpass Existing Limits  |
| Social     | Harness Peer Pressure                    | Find Strength in Numbers |
| Structural | Design Rewards and Demand Accountability | Change the Environment   |
Personal: Make the Undesirable Desirable
First, we had to define the vital behaviors that really drove customer outcomes. We called this small list of vital behaviors our “principles of change.” They were guideposts for all decisions during our pilot, and were so important that we hung the list on the wall for easy reference.
Principles of Change
• Focus on customer outcomes
• All work must be visible
• No work leaves the lanes
• Processing is end-to-end
• Outliers are managed
• Decisions are made at the lowest level possible
Before the pilot, customer applications were managed by the whims and preferences of the Customer Service Officers (CSOs). If a CSO went on holiday, took a day off, or simply deemed an application incomplete, the application would sit on a desk or shelf, be put in a drawer, or be placed in the dreaded communal cabinet known as the "follow-up drawer," which had no owner. This made managing the applications a nightmare because no one knew how many applications were out there at any one time or how long they had been sitting with the CSOs. The follow-up drawer held some applications that were over a year old.
To make the undesirable desirable and to ensure that all work was visible, we eliminated the follow-up drawer and had everyone retrieve all applications from their hiding spots. We made a fundamental shift: work no longer belonged to the individual; it belonged to the group, and the group had to manage the workflow.
Individuals could work on only one application at a time and had to drive it through to one of three possible outcomes. Clear trays were set up at the input of the process for all new or unprocessed applications, and at the outputs for the three possible outcomes: completed, follow-up, or requiring clinical advisor review. This way, everyone involved could easily see how much work was coming in, going out, or sitting as work in progress (WIP) at any point in time. To ensure we could make data-driven decisions, someone was asked to count the number of applications in each tray at the beginning and the close of each day. The new approach was initially met with resistance because people found it hard to give up their individual files and folders. They got over this when they realized that making the work visible allowed resources to be rapidly redeployed to wherever work was stacking up, and they felt less pressure to hide work they were struggling with.
After a few days, it was literally possible to see the work moving through the process.
Personal: Surpass Existing Limits
At the beginning of the pilot, we didn't set performance expectations. We simply measured daily results and made them visible on the wall through simple charts that graphed Daily Volume In vs. Daily Volume Out for each of the designated lanes. As the pilot progressed, an interesting thing happened. While the daily volume in was largely out of our control, the daily volume out got significantly better and better. Were the CSOs suddenly more efficient? Not really; they were essentially doing the same work. Were they motivated to clear the backlog from the old follow-up cabinet? Not really; there was no drive to do so. What they had now was a simple way to gauge their own performance. They knew what a good day looked like and what a bad day looked like. As a result, they were naturally inclined to have more good days than bad, so they were inspired to complete their work more effectively.
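The measurement behind those wall charts is simple bookkeeping: end-of-day WIP equals starting WIP plus the day's volume in minus the day's volume out. As an illustrative sketch only (the lane data and numbers below are hypothetical, not figures from the pilot), the daily tally might look like:

```python
# Illustrative sketch of the daily "Volume In vs. Volume Out" tally.
# All counts below are hypothetical examples, not data from the pilot.

def daily_wip(start_wip, volume_in, volume_out):
    """End-of-day work in progress: starting WIP + arrivals - completions."""
    return start_wip + volume_in - volume_out

# One week of hypothetical tray counts for a single lane: (in, out) per day.
week = [
    (22, 15),
    (18, 20),
    (25, 24),
    (17, 21),
    (20, 23),
]

wip = 40  # hypothetical backlog at the start of the week
for day, (vol_in, vol_out) in enumerate(week, start=1):
    wip = daily_wip(wip, vol_in, vol_out)
    trend = "good day" if vol_out >= vol_in else "bad day"
    print(f"Day {day}: in={vol_in:2d} out={vol_out:2d} WIP={wip:2d} ({trend})")
```

Charting nothing more than these three numbers per lane per day was enough to let the team see, at a glance, whether the backlog was shrinking or growing.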
Social: Harness Peer Pressure
Every morning before work began, we ran a quick 15-minute huddle with the entire team. CSOs were encouraged to raise any concerns about the process, share what they had learned the previous day, ask questions about the results, and voice any complaints. In the early days there were many complaints, so rather than providing a solution, we used turnaround questions to ask what they thought we should do. We agreed to try any suggestion as long as everyone else agreed. If everyone agreed, we made the solution visible by recording it on an easel and tried it for that day. The following day we reassessed: What worked well? What didn't work well? What should we do differently? Once everyone saw that we were willing and open to try anything they suggested, more solutions were offered, and peers began to critique the solutions constructively, building consensus. Some ideas were insignificant, but others greatly improved the operating process and, more importantly, created a dialogue and an opportunity to do things differently. Because everyone had to agree, people really thought ideas through, improving the quality of their suggestions.
Social: Find Strength in Numbers
When we made the fundamental shift that gave the lane or group ownership of the work, not the individual, an interesting thing happened. People began to collaborate more within their own lanes and they asked for support from other lanes when they saw work building up. Because others could see work levels, they were more inclined to help out in other areas. There were more conversations about the applications they were working on, and they began to coach and mentor one another. The ability to share knowledge and experiences was something relatively new for them, and they were happy to help each other out to get an outcome rather than letting a bad application sit. In feedback at the end of the pilot, they said they really enjoyed the ability to learn from one another and to discuss issues without escalating applications. As a group they were much stronger at finding solutions and making decisions than they were individually.
Structural: Design Rewards and Demand Accountability
Prior to the pilot, individuals could have stacks of applications in their drawers and no one would know. As a result, there was no accountability and little satisfaction in a job well done. Because all the work was now visible and results were tallied each day and each week, the team enjoyed its successes, held each other accountable, and would ask why something was in follow-up. Now responsible for clearing their lane's backlog and able to see the applications with the oldest dates, they designed a new follow-up process that gave every application a clear set of triggers for action.
Structural: Change the Environment
One of the biggest changes implemented was the seating arrangement. Prior to the pilot, seating was random, based on personal preference. Clinical advisors, the primary decision makers on certain applications, sat separated from the CSOs. When a clinical advisor was needed, the CSO would have to track one down, have a quick discussion about the case, and wait for a response. If the clinical advisor had other duties, the application would sit with them indefinitely, often for days or weeks, resulting in longer cycle times.
During the pilot, the teams were rearranged based on the complexity of the applications and were separated into different lanes or work streams. Each lane or work stream had a clinical advisor seated with the CSOs to ensure their availability to review cases and provide counsel on applications as they arose. In the beginning, the clinical advisors were resistant to this arrangement because they thought it would increase their workload, pull them away from other duties and make it difficult for them to upskill in different types of applications. In practice the opposite was true. By integrating the clinical advisors into the workflow, they were able to overhear discussions and provide coaching and consultation on the spot. This resulted in less work escalated to them, more knowledge transfer and more trust in the CSOs’ decisions. Because they were able to see the workload in each lane, they were able to manage their time to complete other duties more effectively. Flexible seating arrangements incorporated strategically placed hot desks, enabling the clinical advisors to shift to different lanes based on volume and to advise on different types of applications and services. This helped them upskill on applications they normally would not work on, with support from another advisor.
The CSOs also benefited from the new seating arrangements: they could quickly shift between lanes to deal with incoming volume, developed expertise by sitting with the clinical advisors, and enjoyed opportunities to work on different types of applications.
The Power to Change
After the five-week pilot, the average cycle time from when an application entered the system to when the customer received an outcome was approximately 5 to 6 days. Before the pilot, the cycle time had been 30 days. Compared with the same period in the previous year, when the average had been over 100 days, the improvement was so dramatic it was almost unbelievable. To ensure accuracy, a number of different sensitivity analyses were conducted, verified, and confirmed. In the nearly 12 months since the pilot was run, the team has maintained this level of performance while finding other ways to improve quality.
A year later, the CSOs and clinical advisors were asked to reflect on the change.
One CSO noted, "…(In December 2013) according to our count sheet, we had 1,932 equipment request forms outstanding, the oldest being from June. As of today (March 2015), we have 127, the oldest being from five days ago."
A clinical advisor said candidly, "I used to sit next to two very quiet workers and have had to adapt to a different style of working." But she continued, "I am enjoying coming to work a lot more at the moment. I feel proud to work for (the organization) again, as I feel like we offer good customer service."
Another CSO noted, "Because we have some good strong practices in place, we now feel in control, instead of being overwhelmed, and this makes communicating with our consumers and prescribers so much easier. I am sure they must feel a lot more confident in us and the service we provide."
By executing this pilot, we achieved the objective of shifting the organizational focus to the customer, improving both the customer experience and the work experience within the organization. And we could validate how team members answered the two questions that drive change: Is it worth it? and Can I do it? In this case, the clear answer to both was yes.