
Meet Eliza, the Mother of AI

Today, Artificial Intelligence seems to be the buzz of every major enterprise. Salesforce is formally announcing Einstein this fall, IBM has worked on Watson for years now, and after 20 years of working with AI, Microsoft has made a few attempts to bring the technology to the market. With all this activity, you may be asking yourself what kind of impact AI will have on you and your business, and where you might want to look to investigate the possibilities Artificial Intelligence represents.

Before we discuss how AI will impact customer support and consumer experience, and how you may leverage it in your contact center, I thought it would be fun to take a look at where AI got its start.

The term AI was coined by computer scientist John McCarthy in 1956, who went on to organize the Dartmouth Conference to advance the ideas and technologies associated with machine intelligence. While this collective of thought leaders and scientists made huge advances through programs at MIT and elsewhere, most of their work circulated only in academic circles.

Not many people were aware of Artificial Intelligence, how it worked, or its potential uses until around 1964, when MIT computer scientist Joseph Weizenbaum wrote Eliza, a program based on Natural Language Processing that could question and respond to human input in a way that almost sounded like a real human being. Eliza, with almost no information about human responses, used scripts and pattern matching to simulate the responses that might occur between two people.

The most famous of these simulations, highlighting AI's ability to intersect with modern needs and technology, was DOCTOR. DOCTOR could question and respond to a human in such a way as to almost sound like an actual psychotherapist. As the human subject made statements, DOCTOR asked questions and offered remarks relevant to the conversation as if it were a present and conscious being… almost.
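To make the idea concrete, here is a minimal, hypothetical sketch of the script-and-pattern-matching approach Eliza relied on. The rules and canned replies below are invented for illustration; they are not Weizenbaum's original DOCTOR script.

```python
import random
import re

# Hypothetical ELIZA-style rules: each pattern maps to canned replies that
# reuse the captured text, mimicking the script-plus-pattern-matching idea.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)",   ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"my (.*)",     ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
    (r"(.*)\?",      ["Why do you ask that?", "What do you think?"]),
]

def reply(user_input: str) -> str:
    """Return a DOCTOR-like response by matching the input against each rule."""
    text = user_input.lower().strip()
    for pattern, responses in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(responses).format(*match.groups())
    return "Please, go on."  # fallback keeps the conversation moving

print(reply("I am feeling anxious about work"))
# e.g. "How long have you been feeling anxious about work?"
```

Even a handful of rules like these can make a conversation feel surprisingly human, which is exactly the effect Weizenbaum observed.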

Over the years, computer scientists, whether academics or industry professionals, have worked tirelessly to improve upon these developments in the hope of delivering a computer program capable not only of asking and responding, but of understanding the context of a conversation: a program that can relate relevant data to its responses, providing value to the human it's conversing with while helping to chart the course of the conversation, just as if you and I were talking over a cup of coffee or across a conference room table.

Why is this important, you may ask? With the introduction of Chatbots, we began to see some of the potential of Artificial Intelligence. Companies could now front-end customer chat interactions, allowing them to be more responsive to customers while shortening wait times and deflecting inquiries from the call center, which, as we all know, is hugely expensive.

The one problem with Chatbots? Customers hated dealing with limited technology that was cold, often incorrect, and frustrating. People are accustomed to the sterile nature of technology when they punch numbers into a phone to be routed, but they expect a human on the other end of a chat. These negative experiences have made a number of companies a little gun-shy about implementing true Artificial Intelligence. The last thing a business wants is a customer complaining, especially on Social Media, about a poor customer experience caused by a bad interaction with technology.

There is a significant difference between Chatbot technology and true AI, and consequently the outcomes and customer experience are proving to be very different. Where a Chatbot is more like an IVR, answering simple questions and routing customers to the correct agent, Artificial Intelligence is aware of the conversation and able to present relevant responses, delivering faster answers, shorter interaction times, and better customer service. I mean, if Eliza's DOCTOR could simulate a psychotherapist in 1964, what can AI do for your contact center in 2016?
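For readers who want to see that distinction in code, here is a deliberately simplified, hypothetical contrast between a stateless, IVR-style Chatbot and an assistant that keeps conversational context. Neither snippet represents any particular vendor's product; the rules are invented for illustration.

```python
def chatbot_route(message: str) -> str:
    """IVR-style Chatbot: one message in, one routing decision out, no memory."""
    text = message.lower()
    if "billing" in text:
        return "Routing you to the billing queue."
    if "order" in text:
        return "Routing you to the order desk."
    return "Sorry, please rephrase your request."

class ContextAwareAssistant:
    """Keeps conversation state so later turns are read in light of earlier ones."""

    def __init__(self):
        self.context = {}

    def reply(self, message: str) -> str:
        text = message.lower().strip()
        if "order" in text:
            self.context["topic"] = "order"
            return "Sure, what is your order number?"
        if self.context.get("topic") == "order" and text.isdigit():
            return f"Thanks, looking up order {text} now."
        return "How can I help you today?"

assistant = ContextAwareAssistant()
print(assistant.reply("I have a question about my order"))
print(assistant.reply("48213"))  # understood only because of the earlier turn
```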

Interested in reading more? Check out our other blogs:

The Curious Case of AI Technology

The notion of Artificial Intelligence has been around for a while.

Yet, unlike other prominent technological innovations, such as electric cars or processor speeds, its progress has not been linear.

In fact, as far as industrial impact is concerned, there were times when there was arguably no progress at all.

The widespread fascination with AI started several generations ago, in the 1980s. This is when the pioneering work of Noam Chomsky on computational grammar led to a belief that human language capabilities in particular, and human intelligence in general, could be straightforwardly algorithmized. The expectation was that AI-based programs would have a significant and lasting industrial impact.

But despite unbridled enthusiasm and a significant amount of effort, the practical results were minuscule. The main outcome was disappointment, and AI became something of a dirty word for the next 20 years. Research was mostly confined to scientific labs, and although some notable results were achieved, such as the development of neural networks and the Deep Blue machine beating the reigning world chess champion, the general community was largely unaffected.

The situation started to change about 5-10 years ago with a new wave of industrial research and development.

We are now experiencing something of an AI renaissance, with bots, semantic search, self-service systems, and intelligent assistant programs like Siri taking over. In addition, scientific optimists are confidently predicting that we will reach the singularity during our lifetime.

The progress this time seems to be genuine. There are indisputable breakthroughs, but even more impressive is the breadth of industries adopting AI solutions, from social networks to government services to robotics to consumer apps.

For the first time AI is expected to have a huge impact on the community in general.

There is a vibe around AI that hasn't been felt in years. And with power comes responsibility, as they say: prominent thinkers such as Stephen Hawking have raised their voices about the dangers powerful AI poses to humanity. Still, as far as the current topic is concerned, this is all part of the vibe.

Despite the plethora of upcoming opportunities, it is important to observe that we have yet to advance beyond the anticipation stage. AI has not become a major industrial asset, no AI firm has reached unicorn status, and although major industrial players such as IBM are pivoting towards a fully fledged AI-based model, the shift has not yet manifested itself in business results.

We are still waiting for AI-based technology to disrupt the global community.

The overall expectation is that it is about to happen. But it hasn’t happened yet.

 


nmodes Technology - Overview

nmodes' ability to accurately deliver relevant messages and conversations to businesses is based on its ability to understand those messages and conversations. Once a system understands a sentence or text, it can easily perform the necessary action, e.g. route a sentence about buying a car to a car dealership, or a complaint about purchased furniture to the furniture company's customer service department.
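As a rough illustration of that routing idea, here is a hedged sketch of classifying a sentence's intent and handing it to a destination. The keyword lists, intent names, and destinations below are invented examples, not nmodes' actual models or interfaces.

```python
from dataclasses import dataclass

@dataclass
class RoutedMessage:
    text: str
    intent: str
    destination: str

# Toy intent lexicon; a real semantic system would use far richer models.
INTENT_KEYWORDS = {
    "buy_car": {"buy", "buying", "purchase", "car", "sedan", "suv"},
    "furniture_complaint": {"sofa", "couch", "table", "broken", "refund", "complaint"},
}

DESTINATIONS = {
    "buy_car": "car-dealership-sales",
    "furniture_complaint": "furniture-customer-service",
    "unknown": "general-queue",
}

def route(text: str) -> RoutedMessage:
    """Score each intent by keyword overlap and route to the matching destination."""
    tokens = set(text.lower().split())
    best_intent, best_score = "unknown", 0
    for intent, keywords in INTENT_KEYWORDS.items():
        score = len(tokens & keywords)
        if score > best_score:
            best_intent, best_score = intent, score
    return RoutedMessage(text, best_intent, DESTINATIONS[best_intent])

print(route("I want to buy a new car this month").destination)
# car-dealership-sales
```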

The understanding of sentences is called semantics. nmodes has developed a strong semantic technology that stands out in a number of ways.

Here is how nmodes technology is different:

1. Low computational power. We don't use the methods and algorithms deployed by almost everyone else in this space. The algorithms we use allow us to achieve a high level of accuracy while significantly reducing the required computational power. The most accurate semantic systems, e.g. Google's or IBM's, rely on supercomputers. By comparison, our computational requirements are extremely modest, yet we successfully compete with these powerhouses in terms of accuracy and quality of results.

2. Private data sources. We work extensively with Twitter and other social networks, yet at the same time we process enterprise data. Working with private data sources means the system must know details specific to that particular data source. For example, if a system handles a web self-service solution for an online electronics store, it learns the names, prices, and other details of all products available at that store.

3. User-driven solution. Our system learns from user input, which makes it extremely flexible and as granular as needed. It supports both generic topics, for example car purchasing, and conversations concentrating on a specific type or model of car, as sketched below.
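To illustrate points 2 and 3 above, here is a minimal, hypothetical sketch of a system that ingests a store's private product catalog and accepts user-defined topics. The class, product data, and method names are invented for illustration and do not reflect nmodes' actual implementation.

```python
class SelfServiceAgent:
    """Toy self-service system that learns store-specific (private) data
    and user-defined topics."""

    def __init__(self):
        self.catalog = {}    # product name -> details learned from the store's data
        self.topics = set()  # conversation topics added by the user

    def learn_catalog(self, products):
        """Ingest store-specific facts such as product names and prices."""
        for item in products:
            self.catalog[item["name"].lower()] = item

    def add_topic(self, topic):
        """Let the user make the system as granular as needed."""
        self.topics.add(topic.lower())

    def answer(self, question):
        """Answer price questions using only this store's private catalog."""
        q = question.lower()
        for name, item in self.catalog.items():
            if name in q:
                return f"{item['name']} costs ${item['price']:.2f}."
        return "I don't have that product in this store's catalog."

agent = SelfServiceAgent()
agent.learn_catalog([
    {"name": "UltraView 55 TV", "price": 699.00},
    {"name": "SoundBar Mini", "price": 129.50},
])
agent.add_topic("electric SUVs")  # a user-defined, arbitrarily specific topic
print(agent.answer("How much is the SoundBar Mini?"))
# SoundBar Mini costs $129.50.
```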
