The Curious Case of AI Technology

The notion of Artificial Intelligence has been around for a while.

Yet, unlike other prominent technological innovations, such as electric cars or processor speeds, its progress has not been linear.

In fact, as far as industrial impact is concerned, there were times when there was seemingly no progress at all.

The widespread fascination with AI started several generations ago, in the 1980s. This is when the pioneering work of Noam Chomsky on computational grammar led to a belief that human language capabilities in particular, and human intelligence in general, could be straightforwardly algorithmized. The expectation was that AI-based programs would have a significant and lasting industrial impact.

But despite unbridled enthusiasm and a significant amount of effort, the practical results were minuscule. The main outcome was disappointment, and AI became somewhat of a dirty word for the next 20 years. Research was mostly confined to scientific labs, and although some notable results were achieved, such as the development of neural networks and the Deep Blue machine beating the reigning world chess champion, the general community was largely unaffected.

The situation started to change about 5-10 years ago with a new wave of industrial research and development.

We are now experiencing somewhat of a renaissance of AI, with bots, semantic search, self-service systems, and intelligent assistant programs like Siri taking over. In addition, scientific optimists are confidently talking about reaching the singularity within our lifetime.

The progress this time seems to be genuine. There are indisputable breakthroughs, but even more impressive is the breadth of industries adopting AI solutions, from social networks to government services to robotics to consumer apps.

For the first time, AI is expected to have a huge impact on the community at large.

There is a vibe around AI that hasn't been felt in years. And with power comes responsibility, as they say: prominent thinkers such as Stephen Hawking have raised their voices about the dangers that powerful AI poses to humanity. Still, as far as the current topic is concerned, this is all part of the vibe.

Despite the plethora of upcoming opportunities, it is important to observe that we have yet to advance past the anticipation stage. AI has not become a major industrial asset, no AI firm has reached unicorn status, and although major industrial players such as IBM are pivoting towards a fully-fledged AI-based model, this has not yet manifested itself in business results.

We are still waiting for AI-based technology to disrupt the global community.

The overall expectation is that it is about to happen. But it hasn’t happened yet.

 

Interested in reading more? Check out our other blogs:

What Is an AI Engine and Do I Need It?

Chatbots and assistant programs designed to support conversations with human users rely on natural language processing (NLP). This is a field of scientific research that aims to make computers understand the meaning of sentences in natural language. The algorithms developed by NLP researchers helped power the first generation of virtual assistants such as Siri and Cortana. Now the same algorithms are available to the developer community to help companies build their own specialized virtual assistants. Industry products that offer NLP capabilities based on these algorithms are often called AI engines.

The most powerful and advanced AI engines currently available on the market are (in no particular order): IBM Watson, Google DialogFlow, Microsoft LUIS, Amazon Lex.

All these engines use intents and entities as the primary linguistic identifiers to convey the meaning of incoming sentences, and all of them offer conversation flow capability. In other words, intents and entities help the engine understand what the incoming sentence is about. Once the incoming sentence is correctly identified, you can use the engine to provide a reply. You can repeat these two steps any number of times, thus creating a conversation, or dialog.
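
To make the classify-then-reply loop concrete, here is a minimal, engine-agnostic sketch in Python. The detect_intent function is a toy stand-in, not any vendor's real API; each engine (Watson, DialogFlow, LUIS, Lex) exposes its own SDK for this step.

# Hypothetical, engine-agnostic sketch of the intent/entity loop.
# detect_intent stands in for a call to a real AI engine.

REPLIES = {
    "book_flight": "Where would you like to fly to?",
    "check_weather": "Which city's weather are you interested in?",
}

def detect_intent(sentence: str) -> dict:
    """Toy classifier: returns an intent label and (empty) entities."""
    text = sentence.lower()
    if "fly" in text or "flight" in text:
        return {"intent": "book_flight", "entities": {}}
    if "weather" in text:
        return {"intent": "check_weather", "entities": {}}
    return {"intent": "fallback", "entities": {}}

def reply(sentence: str) -> str:
    result = detect_intent(sentence)
    return REPLIES.get(result["intent"], "Sorry, I didn't understand that.")

# Repeating these two steps (classify, reply) turn after turn creates a dialog.
while True:
    user = input("You: ")
    if not user:
        break
    print("Bot:", reply(user))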

In terms of language processing ability and simplicity of user experience, IBM Watson and Google DialogFlow are currently ahead of the pack. Microsoft LUIS is solid too; still, keeping in mind that Microsoft is aggressively territorial and prefers users to stay within its ecosystem, it is most efficient to use LUIS together with other Microsoft products such as the MS Bot Framework.

Using the AI engine's conversation flow to create dialogs makes building conversations a simple, almost intuitive task, with no coding involved. On the flip side, it limits how natural your conversations can be. The alternative, delegating the conversation flow to the business layer of your chatbot, adds richness and flexibility to your dialog but makes the process more complicated, as it now requires coding. You cannot sell the cow and drink the milk at the same time, can you?
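
As a rough illustration of the second option, below is a hypothetical sketch of a conversation flow kept in the chatbot's business layer: the engine is only asked to classify each sentence, and a small state machine in your own code decides what to say next. The names here are invented for the example, not taken from any particular engine.

# Hypothetical sketch: conversation flow managed in the business layer.
# classify() stands in for a call to an AI engine's intent detection.

def classify(sentence: str) -> str:
    return "order_pizza" if "pizza" in sentence.lower() else "other"

class PizzaDialog:
    """Tiny state machine: the business layer, not the engine, drives the flow."""

    def __init__(self):
        self.state = "start"

    def next_reply(self, sentence: str) -> str:
        if self.state == "start":
            if classify(sentence) == "order_pizza":
                self.state = "ask_size"
                return "What size pizza would you like?"
            return "I can help you order a pizza. Just ask!"
        if self.state == "ask_size":
            self.state = "done"
            return f"Great, a {sentence.strip()} pizza is on its way."
        return "Anything else?"

dialog = PizzaDialog()
print(dialog.next_reply("I want to order a pizza"))  # asks for the size
print(dialog.next_reply("large"))                    # confirms the order

The price of this flexibility is exactly what the paragraph above describes: every branch of the conversation now lives in your code rather than in the engine's visual flow editor.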

Amazon Lex lacks the semantic sophistication of its competitors. One can say (somewhat metaphorically) that IBM Watson was created by linguists and computer scientists while Amazon Lex was created by salespeople. As a product it is well packaged and initially easy on the eye, but once you start digging deeper you notice the limitations. Also, Amazon has traditionally excelled in the voice recognition component (Amazon Alexa) and not necessarily in actual language processing.

The space of conversational AI is fluid and changes happen rapidly. The existing products are evolving continuously and a new generation of AI engines is in the process of being developed.

READ MORE

nmodes Technology - Overview

nmodes' ability to accurately deliver relevant messages and conversations to businesses is based on its ability to understand these messages and conversations. Once a system understands a sentence or text, it can easily perform the necessary action, e.g. route a sentence about buying a car to a car dealership, or a complaint about purchased furniture to the customer service department of the furniture company.
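
The routing step can be pictured with a short, purely illustrative sketch: once a message has been understood (classified), dispatching it to the right team is a simple lookup. The classifier, labels, and addresses below are assumptions made for the example, not nmodes internals.

# Illustrative routing sketch: understand the message, then dispatch it.
# This is not nmodes code; the classifier and routes are invented.

ROUTES = {
    "car_purchase": "dealership_sales@example.com",
    "furniture_complaint": "furniture_support@example.com",
}

def classify(message: str) -> str:
    """Toy classifier standing in for a real semantic engine."""
    text = message.lower()
    if "buy" in text and "car" in text:
        return "car_purchase"
    if "sofa" in text or "furniture" in text:
        return "furniture_complaint"
    return "unknown"

def route(message: str) -> str:
    return ROUTES.get(classify(message), "general_inbox@example.com")

print(route("I want to buy a car this weekend"))   # dealership_sales@example.com
print(route("The sofa I bought arrived damaged"))  # furniture_support@example.com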

The understanding of sentence meaning is called semantics. nmodes has developed a strong semantic technology that stands out in a number of ways.

Here is how nmodes technology is different:

1. Low computational power. We don't use the methods and algorithms deployed by almost everyone else in this space. The algorithms we use allow us to achieve a high level of accuracy while significantly reducing the computational power required. The most accurate semantic systems, e.g. Google's or IBM's, rely on supercomputers. By comparison, our computational requirements are extremely modest, yet we successfully compete with these powerhouses in terms of accuracy and quality of results.

2. Private data sources. We work extensively with Twitter and other social networks, yet at the same time we process enterprise data. Working with private data sources means the system must know details specific to that particular data source. For example, if a system handles a web self-service solution for an online electronics store, it learns the names, prices, and other details of all products available at that store (see the sketch after this list).

3. User-driven solution. Our system learns from user input, which makes it extremely flexible and as granular as needed. It supports both generic topics, for example car purchasing, and conversations concentrating on a specific type or model of car.
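
As a rough illustration of what learning a private data source might look like in practice (point 2 above), the sketch below backs a self-service bot with a store catalogue and uses it to answer product questions. The catalogue and the matching logic are invented for the example and say nothing about how nmodes actually ingests such data.

# Illustrative sketch: a self-service bot backed by a private store catalogue.
# The catalogue contents and the matching rule are invented for the example.

CATALOGUE = {
    "acme phone x": {"price": 499.00, "in_stock": True},
    "acme tablet s": {"price": 329.00, "in_stock": False},
}

def answer(question: str) -> str:
    text = question.lower()
    for name, details in CATALOGUE.items():
        if name in text:
            stock = "in stock" if details["in_stock"] else "out of stock"
            return f"The {name} costs ${details['price']:.2f} and is {stock}."
    return "I couldn't find that product in our catalogue."

print(answer("How much is the Acme Phone X?"))
# -> The acme phone x costs $499.00 and is in stock.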

READ MORE