Excerpts from "Driving Customer Satisfaction"
Andy Taylor, in Harvard Business Review
- Satisfying customers is a challenge for any service company. It's one thing for people to believe in providing superior service and quite another to ensure that busy, ambitious, and largely autonomous managers stay focused on delivering it.
- The problem was, the operating managers who were driving our growth (and who were being richly rewarded for that) didn't believe we had a serious, systemic problem. Building a consensus that something really was broken, then focusing the team's full energy on fixing it, proved to be a long and complicated process.
- How did we improve the consistency of our service? We started by developing a solid method for measuring customer satisfaction. And we involved the full organization in the process, rather than just hiring consultants to handle it.
- By letting the operating managers help build the questionnaire, we encouraged them to own the measurement.
- Then we worked hard to make the index as actionable as possible. At first we reported scores by region or group. But the regional managers wanted to zero in on individual branch performance; that way, the branches that were having service problems couldn't hide behind the high aggregate scores. We agreed to it, because the end result was a tool that would help hold branch offices accountable.
- In the process, we exploded the myth that good managers already know where they need to improve.
- Similarly, we found that "complete satisfaction" doesn't mean perfection. Our customers, we learned, care most about friendly service. Some of our "completely satisfied" customers had experienced glaring problems with their rentals, but what they remembered most was how quickly and courteously our people resolved the problem. That message gave branch managers a concrete target to shoot for. And it led us to focus the survey on those "completely satisfied" customers, who we learned were three times more likely than "somewhat satisfied" customers to choose Enterprise again.
- As the index gained legitimacy, we made a big deal about it. We posted the scores prominently in our monthly operating reports - right next to the net profit numbers that determined managers' pay. The operating managers were able to track how they were doing, and how all their peers were doing, because we had ranked everybody, top to bottom.
- Two years into the process, everyone generally understood the scores and accepted their validity. But we were still missing something: a sense of urgency. We got the message that it was time to put teeth into our efforts. We revamped our criteria for promoting employees: Field managers couldn't move up without achieving customer satisfaction scores at or above the company average. This big gun wouldn't have worked at the start, but with the ground carefully prepared, it was effective.
- Finally, in the late 1990s, the scores began to rise. After being stuck in the high-60% range for years, the company average for "completely satisfied" reached the high-70% range in 2001, and it's still climbing, along with our market share. More important, we've narrowed the gap between the top- and bottom-performing branches - from 28 percentage points when we began to 11 percentage points now, a key indication that we are delivering more consistent service.
- We certainly could have done a better job of preparing people for what became a long and highly experimental process. But we don't regret the process itself. We learned a lot about our customers and ourselves - including the lesson that quick-fix solutions won't work when it comes to delivering something as vital as high-quality customer service.