Feb 23, 2018
Meet Eliza, the Mother of AI
Today, Artificial Intelligence seems to be the buzz of every major enterprise. Salesforce is formally announcing Einstein this fall, IBM has worked on Watson for years now, and after 20 years of working with AI, Microsoft has made a few attempts to bring the technology to the market. With all this activity, you may be asking yourself what kind of impact AI will have on you and your business, and where you might want to look to investigate the possibilities Artificial Intelligence represents.
Before we discuss how AI will impact customer support and the consumer experience, and how you might leverage it in your contact center, I thought it would be fun to take a look at where AI got its start.
The term AI was coined by computer scientist John McCarthy, who organized the 1956 Dartmouth Conference to advance the ideas and technologies associated with machine intelligence. While this collective of thought leaders and scientists made huge advancements through programs at MIT and elsewhere, most of their work circulated only in academic circles.
Not many people were aware of Artificial Intelligence, how it worked, or its potential uses until around 1964, when MIT computer scientist Joseph Weizenbaum wrote Eliza, a program based on Natural Language Processing that could question and respond to human input in a way that almost sounded like a real human being. Eliza, with almost no information about human responses, used scripts and pattern matching to simulate the exchanges that might occur between two people.
The most famous of these simulations, highlighting AI's ability to intersect with modern needs and technology, was DOCTOR. DOCTOR could question and respond to a human in such a way as to almost sound like an actual psychotherapist. As the human subject made statements, DOCTOR asked questions and made statements relevant to the conversation as if it were a present and conscious being… almost.
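To make the scripts-and-pattern-matching idea concrete, here is a minimal sketch of an ELIZA-style responder. This is an illustration of the general technique, not Weizenbaum's original DOCTOR script: the rules, templates, and function names are invented for this example.

```python
import re

# Illustrative decomposition/reassembly rules: each pattern captures a
# fragment of the user's statement, which is reflected back in a template.
RULES = [
    (r".*\bI am (.*)", "How long have you been {0}?"),
    (r".*\bI feel (.*)", "Why do you feel {0}?"),
    (r".*\bmy (.*)", "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

# Swap first person for second person so reflected fragments read naturally.
REFLECT = {"my": "your", "i": "you", "am": "are", "me": "you"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECT.get(word.lower(), word) for word in fragment.split())

def respond(statement: str) -> str:
    # Try each rule in order; fall back to a neutral prompt if nothing matches.
    for pattern, template in RULES:
        match = re.match(pattern, statement, re.IGNORECASE)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!")))
    return DEFAULT
```

With no model of the world at all, `respond("I am worried about my job")` still produces a plausible-sounding follow-up question, which is exactly the effect that made Eliza famous.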
Over the years, computer scientists, whether academics or industry professionals, have worked tirelessly to improve upon these developments with the hope of delivering a program capable not only of asking and responding, but of understanding the context of a conversation. A program that can relate relevant data to its responses, thus providing value to the human it's conversing with, while helping to chart the course of the conversation, just as if you and I were talking over a cup of coffee or across a conference room table.
Why is this important, you may ask? With the introduction of Chatbots, we began to see some of the potential in Artificial Intelligence. Companies could now front-end customer chat interactions, making themselves more responsive to customers while shortening wait times and deflecting inquiries from the call center, which, as we all know, is hugely expensive.
The one problem with Chatbots? Customers hated dealing with limited technology that was cold, often incorrect, and frustrating. People are accustomed to the sterile nature of technology when they type numbers into a phone to be routed, but they expected a human to be chatting with them. These negative experiences have made a number of companies a little gun-shy about implementing true Artificial Intelligence. The last thing a business wants is a customer complaining, especially on social media, about a poor customer experience caused by a bad interaction with technology.
There is a significant difference between Chatbot technology and true AI, and consequently the outcomes and customer experience are proving to be very different. Where a Chatbot is more like an IVR, answering simple questions and routing customers to the correct agent, Artificial Intelligence is aware of the conversation and able to present relevant responses, providing faster answers, shorter customer interactions, and better customer service. I mean, if Eliza's DOCTOR could simulate a psychotherapist in 1964, what can AI do for your contact center today?
nmodes Technology - Overview
nmodes' ability to accurately deliver relevant messages and conversations to businesses is based on its ability to understand those messages and conversations. Once a system understands a sentence or text, it can easily perform the necessary action, e.g. route a sentence about buying a car to a car dealership, or a complaint about purchased furniture to the customer service department of the furniture company.
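The routing step described above can be sketched in a few lines. This is a hypothetical illustration using simple keyword overlap in place of nmodes' actual semantic analysis; the department names and keyword sets are invented for the example.

```python
# Map each hypothetical department to words that signal its intent.
INTENT_KEYWORDS = {
    "sales": {"buy", "buying", "purchase", "price"},
    "customer_service": {"complaint", "broken", "refund", "damaged"},
}

def route(sentence: str) -> str:
    """Route a sentence to a department based on keyword overlap."""
    words = set(sentence.lower().split())
    for department, keywords in INTENT_KEYWORDS.items():
        if words & keywords:  # any signal word present
            return department
    return "general"  # no intent recognized
```

Real semantic understanding is what makes this routing reliable: keyword matching alone would confuse "I want to buy a sofa" with "I want to return the sofa I bought," while a system that understands the sentence does not.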
Understanding sentences is called semantics. nmodes has developed a strong semantic technology that stands out in a number of ways.
Here is how nmodes technology is different:
1. Low computational power. We don't use the methods and algorithms deployed by almost everyone else in this space. The algorithms we use allow us to achieve a high level of accuracy while significantly reducing the required computational power. Most accurate semantic systems, e.g. Google's or IBM's, rely on supercomputers. By comparison, our computational requirements are extremely modest, yet we successfully compete with these powerhouses in terms of accuracy and quality of results.
2. Private data sources. We work extensively with Twitter and other social networks, yet at the same time we process enterprise data. Working with a private data source means the system must know details specific to that particular source. For example, if a system handles a web self-service solution for an online electronics store, it learns the names, prices, and other details of all products available at that store.
3. User-driven solution. Our system learns from user input, which makes it extremely flexible and as granular as needed. It supports both generic topics, for example car purchasing, and conversations concentrating on a specific type or model of car.
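The user-driven idea in point 3 can be illustrated with a toy classifier that grows from labeled example sentences supplied by a user. This is purely a sketch of the concept, not nmodes' implementation: the class, labels, and scoring are all invented, and word overlap stands in for real semantic matching.

```python
from collections import defaultdict

class UserTrainedRouter:
    """Toy classifier: users teach it labeled examples; it grows as
    granular as the labels provided (generic topics or specific models)."""

    def __init__(self):
        self.examples = defaultdict(set)  # label -> words seen in examples

    def teach(self, label: str, sentence: str) -> None:
        self.examples[label].update(sentence.lower().split())

    def classify(self, sentence: str) -> str:
        words = set(sentence.lower().split())
        scores = {label: len(words & seen) for label, seen in self.examples.items()}
        best = max(scores, key=scores.get, default=None)
        if best is None or scores[best] == 0:
            return "unknown"
        return best
```

Because the labels come from the user, the same mechanism supports a coarse topic like "car purchasing" or a label as narrow as a single model, which is the flexibility the point above describes.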