Artificial Intelligence Chat Is Evolving Faster Than IVR

Although it doesn’t feel like all that long ago, back in the 90s one of the most important factors in a call center’s success was the ability to route a customer to the right support agent with the IVR (Interactive Voice Response). Countless hours were spent identifying the most efficient call routing patterns and expert agent capabilities to ensure that your request reached the right person quickly. This technology is still widely used today, and there are still teams at the largest companies programming IVR systems to accomplish much the same goal.

As the standard for customer support evolved, there were many attempts to improve both the function of IVRs and the customer experience around them, reducing hold times and providing more relevant support faster. Even today, some companies use their IVR system as a way to keep customers on hold when agents are inundated with calls, rather than to provide a solution.

For those of us who’ve worked in the voice industry for some time, we’ve seen first-hand the attempts to meet a customer’s need before the call ever reaches an agent. First there was expert agent routing, which delivered your call to the agent most qualified to help you. Then came advances in voice recognition, which today has evolved into a very effective tool for increasing containment rates and deflecting calls from live agents. My two favorite examples of the power of voice recognition are Cox Communications and Capital One, both of which combine great recognition with great routing.

Our memory, however, is short. It wasn’t so long ago that we were all pulling our hair out punching digits into the phone or constantly repeating “agent”, “Agent”, “AGENT”, “AGENT!!!”.

Whether it was the limits of computational power or the sheer cost of developing and implementing advanced call center technology, it took decades for phone systems to front-end the customer support process as efficiently as they do today. Thankfully we all survived to see it without boiling over from the hypertension usually associated with calling a customer service department.

Bad customer experience, however, is definitely not the case with Chat Artificial Intelligence (Chat AI). While we still hear about the shortcomings of Chat AI, like disconnected conversations and robotic responses, these experiences are usually the product of chatbots with limited AI functionality or early-stage deployments. Increases in computational power and massive advancements in machine learning are driving excellent customer experiences that improve over time.

When was the last time you heard of a technology actually performing better on its own, without a ton of additional development work or continuous updates? Well, that’s the case with Artificial Intelligence. Like a person, the more experience it has interacting with customers and information, the better it performs, with little need to be manually improved or fine-tuned.

Today, AI chat can answer the large majority of customer requests, and because Artificial Intelligence learns as it is used, customers increasingly prefer to interact through AI chat and avoid the frustrations commonly associated with calling a contact center agent.

Towards smarter data - accuracy and precision

There is a huge amount of information out there, and it is growing. To use it efficiently and increase our competitive advantage we need to evolve and start using information in a smart way, concentrating on data that drives business value because it is accurate, actionable, and agile. Accuracy is an important measure that determines the quality of data processing solutions.

How is accuracy calculated?

It is easy to do with structured data, because the requirements can be formalized. It is less obvious with unstructured data, e.g. a stream of social feeds, or any data set that involves natural language. Indeed, sentences in natural language are subject to multiple interpretations and therefore allow a degree of subjectivity. For example, should the sentence ‘I haven’t been on a sea cruise for a long time’ qualify for a data set of people interested in going on a cruise? Both answers, yes and no, seem valid.

In these cases an argument has been put forward endorsing a consensus approach, which polls data providers, as the best way to judge data accuracy. This approach essentially claims that the attributes with the highest consensus across data providers are the most accurate.
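
To make the consensus idea concrete, here is a minimal sketch in Python; the provider names and labels are hypothetical:

```python
# Majority vote across hypothetical data providers labeling the same record.
from collections import Counter

def consensus_label(provider_labels):
    """Return the label most providers agree on, plus its vote share."""
    votes = Counter(provider_labels.values())
    label, count = votes.most_common(1)[0]
    return label, count / len(provider_labels)

# Three providers disagree on the cruise sentence from above:
labels = {
    "provider_a": "interested_in_cruise",
    "provider_b": "interested_in_cruise",
    "provider_c": "not_interested",
}
label, share = consensus_label(labels)
print(label, round(share, 2))  # interested_in_cruise 0.67
```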

At nmodes we deal with unstructured data all the time, because we process natural language messages, primarily from social networks. We do not favor this simplistic approach: it is biased, inviting people to make assumptions based on what they already believe to be true, and it makes no distinction between precision and accuracy. The difference is that precision measures only what you got right among the results you returned, while accuracy also accounts for what you got wrong, including what you missed. Accuracy is a more inclusive and therefore more valuable characteristic.
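
To pin that distinction down, here is a small worked example using the standard confusion-matrix definitions; the exact metrics nmodes uses are not spelled out in this post, so the numbers are purely illustrative:

```python
# Precision vs. accuracy from confusion-matrix counts:
# tp/fp = true/false positives, tn/fn = true/false negatives.

def precision(tp, fp):
    # Of everything the system labeled positive, how much was right?
    return tp / (tp + fp)

def accuracy(tp, tn, fp, fn):
    # Of everything the system judged, how much was right overall?
    # This also penalizes the positives the system missed (fn).
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical run: 80 correct positives, 5 false alarms, 10 missed positives.
tp, tn, fp, fn = 80, 40, 5, 10
print(f"precision = {precision(tp, fp):.2f}")         # 0.94
print(f"accuracy  = {accuracy(tp, tn, fp, fn):.2f}")  # 0.89
```

A system can thus show high precision while its accuracy reveals the cases it silently dropped.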

Our approach is:

a) to validate data against third-party independent sources (typically of academic origin) that contain trusted sets and reliable demography. Validating nmodes data against these sources allows us to verify that our data achieves the greatest possible balance of scale and accuracy (a sketch of this check follows the list).

b) to enrich the existing test sets by purposefully including examples that are ambiguous in meaning and intent, and by providing additional levels of categorization to cover these examples.
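
As a rough illustration of step a), scoring our labels against a trusted reference set might look like the following; the record keys and label names are hypothetical:

```python
# Measure accuracy on records we did not label ourselves.

def validate(our_labels: dict, reference_labels: dict) -> float:
    """Fraction of shared records whose reference label we reproduce."""
    shared = set(our_labels) & set(reference_labels)
    if not shared:
        raise ValueError("no overlap with the reference set")
    correct = sum(our_labels[k] == reference_labels[k] for k in shared)
    return correct / len(shared)

ours = {"msg1": "cruise_intent", "msg2": "no_intent", "msg3": "cruise_intent"}
reference = {"msg1": "cruise_intent", "msg2": "no_intent", "msg3": "no_intent"}
print(f"accuracy vs reference: {validate(ours, reference):.2f}")  # 0.67
```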

Accuracy becomes important as businesses move from the rudimentary data use typical of the first Big Data years to the more measured and careful approach of today. Understanding how it is calculated and the value it brings helps in achieving long-term sustainability and success.

What Is an AI Engine and Do I Need It?

Chatbots and assistant programs designed to support conversations with human users rely on natural language processing (NLP). This is a field of scientific research that aims at making computers understand the meaning of sentences in natural language. The algorithms developed by NLP researchers helped power the first generation of virtual assistants such as Siri or Cortana. Now the same algorithms are made available to the developer community to help companies build their own specialized virtual assistants. Industry products that offer NLP capabilities based on these algorithms are often called AI engines.

The most powerful and advanced AI engines currently available on the market are (in no particular order): IBM Watson, Google DialogFlow, Microsoft LUIS, and Amazon Lex.

All these engines use intents and entities as the primary linguistic identifiers that convey the meaning of incoming sentences, and all of them offer conversation flow capability. In other words, intents and entities help determine what the incoming sentence is about. Once the incoming sentence is correctly identified, you can use the engine to provide a reply. Repeating these two steps many times creates a conversation, or dialog.
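
Each vendor has its own SDK for this; the toy keyword matcher below only mimics the shape of that identify-then-reply loop, and its intents, entities, and replies are all made up:

```python
# A toy intent/entity matcher illustrating the two-step cycle:
# (1) identify the sentence, (2) pick a reply.
import re

INTENTS = {
    "book_cruise": {"cruise", "sail", "voyage"},
    "greeting": {"hello", "hi", "hey"},
}
REPLIES = {
    "book_cruise": "Great, which month works for your cruise?",
    "greeting": "Hello! How can I help you today?",
    None: "Sorry, I didn't get that.",
}
MONTHS = {"january", "june", "july", "december"}  # one toy entity type

def detect_intent(text):
    words = set(re.findall(r"[a-z']+", text.lower()))
    for intent, keywords in INTENTS.items():
        if words & keywords:
            return intent
    return None

def detect_entities(text):
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w in MONTHS]

user = "I'd love to go on a cruise in June"
intent = detect_intent(user)          # step 1: identify the sentence
print(intent, detect_entities(user))  # book_cruise ['june']
print(REPLIES[intent])                # step 2: provide the reply
```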

In terms of language processing ability and simplicity of user experience, IBM Watson and Google DialogFlow are currently ahead of the pack. Microsoft LUIS is capable too; still, keeping in mind that Microsoft is aggressively territorial and likes users to stay within its ecosystem, LUIS is most efficient when used together with other Microsoft products such as MS Bot Framework.

Using an AI engine’s conversation flow to create dialogs makes building conversations a simple, almost intuitive task, with no coding involved. On the flip side, the engine’s conversation flow limits your ability to make conversations feel natural. The alternative, delegating the conversation flow to the business layer of your chatbot, adds richness and flexibility to your dialog but makes the process more complicated, as it now requires coding. You cannot sell the cow and drink the milk at the same time, can you?
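
For a sense of what the business-layer option involves, here is a minimal sketch; the state names are hypothetical, and the intent strings are assumed to come from an upstream engine call such as the one sketched earlier:

```python
# Dialog state tracked in the chatbot's own code rather than in the engine.

class CruiseDialog:
    def __init__(self):
        self.state = "start"

    def step(self, intent):
        # An explicit state machine allows richer branching than a
        # drag-and-drop flow, at the cost of writing and maintaining code.
        if self.state == "start" and intent == "book_cruise":
            self.state = "awaiting_month"
            return "Which month would you like to sail?"
        if self.state == "awaiting_month" and intent == "give_month":
            self.state = "done"
            return "Booked! Anything else I can help with?"
        return "Sorry, let's start over."

dialog = CruiseDialog()
print(dialog.step("book_cruise"))  # Which month would you like to sail?
print(dialog.step("give_month"))   # Booked! Anything else I can help with?
```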

Amazon Lex lacks the semantic sophistication of its competitors. One could say, somewhat metaphorically, that IBM Watson was created by linguists and computer scientists while Amazon Lex was created by salespeople. As a product it is well packaged and initially pleasing to the eye, but once you start digging deeper you notice the limitations. Also, Amazon has traditionally excelled in the voice recognition component (Amazon Alexa) and not necessarily in actual language processing.

The space of conversational AI is fluid and changes happen rapidly. The existing products are evolving continuously and a new generation of AI engines is in the process of being developed.
