It’s not surprising that, of the 344 companies that offered views on their strategic priorities in the last six months, 321 listed “improved customer care” at the top of their list. Customer care is also a major cost item, a big component of opex for service providers and a significant expense for enterprises. Even equipment vendors cite customer care at or near number one in priority and (on average) in the top five as a cost-reduction target. So, in summary, everyone wants more care for less money.
Wishing won’t make it so, of course. The only way to make something better and cheaper at the same time is to make some fundamental changes in how it’s being done. That means going back to objectives and solutions and making new matches. One thing that’s characterized the last six months is an increased acceptance of this basic truth, and improved progress in actually following that advice.
Companies tend to approach customer care objectives by addressing issues; after all, if there’s nothing “wrong” with the customer care approach in play, it’s reasonable to focus on making it cheaper without making it less effective. So, let’s start there.
The problem list for customer care, in order of mention by companies, is lack of relevance, lack of accessibility, speed and accuracy of the solution, and lack of follow-up. There are some common threads that wind through these criticisms, but we need to at least take a high-level look at each.
The “relevance” problem is serious because a lot of customers complain that there doesn’t seem to be any connection between customer care interactions and the problems or questions that lead to customer contact. One company said that a survey of their customers showed that over half their customer care interactions were abandoned within minutes because the early exchange seemed a total waste of time. A lot of this is due to call-center or chatbot steering of the original interaction.
Accessible care seems a given, but it’s a growing problem because of the convergence of communications options on the common framework of Internet services. My Internet is down, so I can’t send emails or go to a support page on a website. I may also be unable to call or text, depending on just how converged my services are. Even if a customer can use a smartphone to report a wireline outage, it’s typically difficult to link the reporting channel with the problem source. Where there are multiple providers involved, things are even worse.
Nobody who’s ever made a support contact will be surprised that customers are generally unhappy with how fast they get a response and the accuracy of the response. Two-thirds of callers feel that support interactions take them through a list of possible problems rather than trying to get information that would narrow focus to things that actually relate to the symptoms they’re trying to report. Interestingly, the complaint rate on this point is roughly the same for human-agent interactions and chatbot/call director interactions.
Follow-up is a recent issue, having exploded in the last two years to almost match the speed/accuracy complaint in terms of customer impact. Customers are frustrated because support initiatives don’t validate that the recommended steps have worked, or don’t proactively make note of the fact that they’ve failed to address the problem. One of the most common customer quotes offered in customer care assessments was “if I knew what was wrong I wouldn’t have called”; another was “can’t you see what I’m seeing?”
That’s a pretty good lead into the next phase, which is to try to find some common threads in the customer care issues being presented. The threads companies see themselves are (again, in order) too much time wasted framing the customer problem to assign it to a support channel, lack of visibility into the problem as the customer would see it, too many different technologies involved in what a customer sees as a single activity, and difficulties in communication during support interactions. Here, it’s best to look at how customer support specialists (CSS) see the issues to understand why they happen and how they could be addressed.
To the CSS teams, the core problem here is, as one put it, “trying to see through the wrong end of a telescope.” Users aren’t equipped to act as agents in a problem isolation process, and often can’t even relate the problems they’re having. As a result, support processes have to spin their wheels trying to gather basic information before they can do anything relevant, and when they’ve gathered the information it’s often wrong. CSS types think that what’s needed most of all is a local agent component that would run on user devices. The component would have multiple roles, each addressing an issue in customer care.
The first role would be local status monitoring, either dynamic or on request. Every device has some root capabilities and some visibility into the services it consumes. The local agent would collect that and make it available, either via telemetry or presented to the user in the form of a report that could be read, texted, emailed, etc.
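To make that first role concrete, here’s a minimal sketch of what local status collection might look like. The function names, the fields reported, and the choice of a DNS lookup as the reachability probe are all my own illustrative assumptions, not anything the CSS teams specified; a real agent would probe link state, latency, and application health as well.

```python
import json
import platform
import socket
from datetime import datetime, timezone

def collect_local_status() -> dict:
    """Gather the basic device/service visibility a local agent could report,
    either via telemetry or on user request. Stdlib-only sketch."""
    status = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "host": socket.gethostname(),
        "os": platform.platform(),
        "dns_ok": False,
    }
    try:
        # Cheap reachability check: can we resolve a well-known name?
        socket.gethostbyname("example.com")
        status["dns_ok"] = True
    except OSError:
        # No resolution means the report itself becomes useful evidence.
        pass
    return status

def render_report(status: dict) -> str:
    """Format the status so it can be read, texted, or emailed by the user."""
    return json.dumps(status, indent=2)
```

The point of `render_report` is the “presented to the user” path: even when telemetry can’t reach the support center, the user can pass the report along over whatever channel still works.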
The second role would be user interaction for information collection, which would gather “subjective” issues in the form of questions, complaints, etc. CSS personnel believe this kind of offline information-gathering would be more acceptable to the user than interacting with an agent or agent process after making a support request. The local agent could also combine this subjective information with status information it could collect, and by doing so improve the interaction from the customer’s perspective, as well as improving the support response.
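A sketch of that second role, merging the user’s subjective answers with the objective status the agent collected. The `ProblemIntake` structure and the payload layout are hypothetical; the idea is only that both sides of the story travel together in one support request.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ProblemIntake:
    """Offline questionnaire the agent walks the user through before any
    support contact is made. Field names are illustrative assumptions."""
    symptom: str
    started_when: str
    answers: dict = field(default_factory=dict)

def build_support_request(intake: ProblemIntake, device_status: dict) -> dict:
    """Combine the user's subjective report with objective device status,
    so the support center sees both in a single payload."""
    return {"subjective": asdict(intake), "objective": device_status}
```

Because the intake happens before any agent or chatbot interaction, the user answers at their own pace, which is the acceptability point the CSS personnel raise.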
The third role would be to establish a trouble ticket chain to represent the customer issue being addressed. The ticket data would be stored locally as well as exchanged with the support center, so that when a problem occurred or a question arose, the user and the CSS could relate it to past issues where support was required, and so that the support center could follow up (directly with the local agent or with the customer) to ensure the question/issue was resolved.
The final role would be to act as a conduit for interaction with a support center specialist or a chatbot (AI, likely). CSS personnel believe that this all works best if all the local agents on user devices were able to interact on behalf of other devices, meaning that a local agent was a conduit for support regardless of the state of a particular tech element. If your PC can’t connect, use your phone. If your network service doesn’t work, a device agent that can access a different service can stand in.
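The relay behavior in that last role can be sketched as agents that fall back to a peer with a working service path when their own is down. The class and its delivery strings are purely illustrative; the point is the “if your PC can’t connect, use your phone” routing.

```python
class DeviceAgent:
    """Sketch of a local agent that can relay another device's support
    traffic when that device's own service path is down."""

    def __init__(self, name: str, online: bool):
        self.name = name
        self.online = online
        self.peers: list["DeviceAgent"] = []

    def send_to_support(self, payload: dict) -> str:
        if self.online:
            return f"{self.name} delivered: {payload['summary']}"
        # Fall back to any peer agent whose service path still works.
        for peer in self.peers:
            if peer.online:
                return peer.send_to_support(payload) + f" (relayed for {self.name})"
        raise ConnectionError(f"{self.name}: no path to support center")
```

This also addresses the accessibility problem raised earlier: the reporting channel no longer has to be the same (possibly broken) service the complaint is about.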
What CSS types globally would like to see is a standard way of interacting with local support agents, which would encourage the development of competing agents that all work to support a community of products. This is of most interest to companies that provide tech services or products to consumers or SMBs, because they believe the benefits would be greatest where user literacy is lowest.
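One way to express that standard interaction surface is as a shared interface contract that competing agent implementations would satisfy. The method names below are hypothetical placeholders, not any actual standard; the sketch just shows that a support center coded against the interface could drive any vendor’s agent.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class SupportAgent(Protocol):
    """Hypothetical common interface for local support agents, so any
    support center can interoperate with any compliant implementation."""

    def status(self) -> dict: ...            # local device/service status
    def intake(self, question: str) -> str:  # offline information gathering
        ...
    def open_ticket(self, summary: str) -> str: ...  # trouble-ticket chain
```

Any vendor class that implements these three methods would satisfy the contract structurally, with no inheritance from a shared base required.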
That doesn’t mean this idea gets broad support on the supplier side. Over a third of companies say they believe their senior management would reject the whole idea for fear it would undermine support differentiation. That means, though, that almost two-thirds think it would be accepted, which reflects the growing concern that customer care is out of control overall, and that the industry will have to take collective steps to address the issues. That’s a good sign in itself.
Where these issues are especially relevant is in the service provider space, using the term in its most general way, meaning everything from basic communications to cloud and SaaS. The business of the future is the creation and delivery of experiences, because they’re more differentiable and valuable than simple things like connectivity. The problem is that they’re also way more complicated, and customer care is way more important for them. I’ll look at that point, and what it means both technically and in business terms, in a later blog.