
The Curious Case of AI Technology


The notion of Artificial Intelligence has been around for a while.

Yet, unlike other prominent technological innovations, such as electric cars or processor speeds, its progress has not been linear.

In fact, as far as industrial impact is concerned, there were times when there was arguably no progress at all.

The widespread fascination with AI started decades ago, in the 1980s. This is when the pioneering work of Noam Chomsky on formal grammar led to a belief that human language capabilities in particular, and human intelligence in general, could be straightforwardly algorithmized. The expectation was that AI-based programs would have a significant and lasting industrial impact.

But despite unbridled enthusiasm and a significant amount of effort, the practical results were minuscule. The main outcome was disappointment, and AI became somewhat of a dirty word for the next 20 years. Research became mostly confined to scientific labs, and although some notable results were achieved, such as the development of neural networks and the Deep Blue machine beating the reigning world chess champion, the general community was largely unaffected.

The situation started to change about 5-10 years ago with a new wave of industrial research and development.

We are now experiencing somewhat of an AI renaissance, with bots, semantic search, self-service systems, and intelligent assistant programs like Siri taking over. In addition, scientific optimists are confidently predicting that we will reach the singularity within our lifetime.

This time the progress seems to be genuine indeed. There are indisputable breakthroughs, but even more impressive is the breadth of industries adopting AI solutions, from social networks to government services to robotics to consumer apps.

For the first time, AI is expected to have a huge impact on the community at large.

There is a vibe around AI that hasn’t been felt in years. And with power comes responsibility, as they say: prominent thinkers such as Stephen Hawking have raised their voices about the dangers powerful AI poses to humanity. Still, as far as the current topic is concerned, this is all part of the vibe.

Despite the plethora of upcoming opportunities, it is important to observe that we have yet to advance past the anticipation stage. AI has not become a major industrial asset, no AI firm has reached unicorn status, and although major industrial players such as IBM are pivoting towards a fully fledged AI-based model, this has not yet manifested itself in business results.

We are still waiting for AI-based technology to disrupt the global community.

The overall expectation is that it is about to happen. But it hasn’t happened yet.


Interested in reading more? Check out our other blogs:

Beware the lure of crowdsourced data

Crowdsourced data can often be inconsistent, messy or downright wrong 

We all like something for nothing; that’s why open source software is so popular. (It’s also why The Pirate Bay exists.) But sometimes things that seem too good to be true are just that.

Repustate is in the text analytics game, which means we need lots and lots of data to model certain characteristics of written text. We need common words, grammar constructs, human-annotated corpora of text, and so on, to make our various language models work as quickly and as well as they do.

We recently embarked on the next phase of our text analytics adventure: semantic analysis. Semantic analysis is the process of taking arbitrary text and assigning meaning to the individual, relevant components. For example, being able to identify “apple” as a fruit in the sentence “I went apple picking yesterday” but to identify “Apple” the company in “I can’t wait for the new Apple product announcement” (note: even though I used title case for the latter example, casing should not matter).
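
To make the idea concrete, here is a minimal Python sketch of context-based disambiguation. It is a toy illustration only, not Repustate’s actual pipeline: real semantic analysis relies on trained models and human-annotated corpora, and the cue-word lists below are invented for the example.

    # Toy word-sense disambiguation for "apple": fruit vs. company.
    # Not Repustate's implementation; the cue lists are hypothetical.
    FRUIT_CUES = {"picking", "pie", "orchard", "eat", "fruit", "tree"}
    COMPANY_CUES = {"product", "announcement", "iphone", "stock", "ceo"}

    def disambiguate_apple(sentence: str) -> str:
        # Lowercase everything: as noted above, casing should not matter.
        tokens = set(sentence.lower().replace(",", " ").replace(".", " ").split())
        fruit_score = len(tokens & FRUIT_CUES)
        company_score = len(tokens & COMPANY_CUES)
        if company_score > fruit_score:
            return "Apple (company)"
        if fruit_score > company_score:
            return "apple (fruit)"
        return "ambiguous"

    print(disambiguate_apple("I went apple picking yesterday"))
    print(disambiguate_apple("I can't wait for the new Apple product announcement"))

A production system would replace the hand-picked cue lists with statistical context models, which is exactly why large annotated corpora matter.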


What Is an AI Engine and Do I Need One?

Chatbots and assistant programs designed to support conversations with human users rely on natural language processing (NLP). This is a field of scientific research that aims to make computers understand the meaning of sentences in natural language. The algorithms developed by NLP researchers helped power the first generation of virtual assistants such as Siri and Cortana. Now the same algorithms are available to the developer community to help companies build their own specialized virtual assistants. Industry products that offer NLP capabilities based on these algorithms are often called AI engines.

The most powerful and advanced AI engines currently available on the market are (in no particular order): IBM Watson, Google DialogFlow, Microsoft LUIS, and Amazon Lex.

All these engines use intents and entities as primary linguistic identifiers to convey the meaning of incoming sentences. All of them offer conversation flow capability. In other words, intents and entities help to understand what the incoming sentence is about. Once the incoming sentence is correctly identified, you can use the engine to provide a reply. You can repeat these two steps any number of times, thus creating a conversation, or dialog.
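
The following vendor-neutral Python sketch illustrates that two-step loop. The keyword matching is only a stand-in for the trained classifiers that Watson, DialogFlow, LUIS, or Lex actually use, and the intents and replies are hypothetical.

    # Sketch of the intent-classification / reply loop described above.
    INTENTS = {
        "check_weather": {"weather", "forecast", "rain"},
        "book_flight": {"flight", "fly", "ticket"},
    }
    REPLIES = {
        "check_weather": "Which city do you want the forecast for?",
        "book_flight": "Where would you like to fly?",
        None: "Sorry, I didn't understand that.",
    }

    def classify_intent(sentence: str):
        # Real engines use trained models; keyword overlap stands in here.
        tokens = set(sentence.lower().split())
        for intent, keywords in INTENTS.items():
            if tokens & keywords:
                return intent
        return None

    # Identify the sentence, then reply; repeating the two steps
    # turns single exchanges into a conversation.
    for utterance in ["Will it rain tomorrow?", "I need a flight ticket"]:
        print("user:", utterance)
        print("bot: ", REPLIES[classify_intent(utterance)])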

In terms of language processing ability and simplicity of user experience, IBM Watson and Google DialogFlow are currently above the pack. Microsoft LUIS is fine too; still, keeping in mind that Microsoft is aggressively territorial and likes users to stay within its ecosystem, LUIS is most efficiently used together with other Microsoft products such as the MS Bot Framework.

Using AI engine conversation flow to create dialogs makes building conversations a simple, almost intuitive task, with no coding involved. On the flip side, using AI engine conversation flow limits your ability to make conversations feel natural. The alternative, delegating the conversation flow to the business layer of your chatbot, adds richness and flexibility to your dialog but makes the process more complicated, as it now requires coding (a sketch follows below). You cannot sell the cow and drink the milk at the same time, can you?
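
As a rough sketch of the second approach, the chatbot’s business layer can own the dialog state explicitly while the engine only classifies each sentence. The state names, intents, and entities below are hypothetical and greatly simplified.

    # Conversation flow kept in the business layer as a small state machine.
    # An AI engine would supply `intent` and `entities`; values are mocked here.
    class BookingDialog:
        def __init__(self):
            self.state = "ASK_DESTINATION"
            self.destination = None

        def handle(self, intent: str, entities: dict) -> str:
            if self.state == "ASK_DESTINATION":
                if "city" in entities:
                    self.destination = entities["city"]
                    self.state = "CONFIRM"
                    return f"Book a flight to {self.destination}?"
                return "Where would you like to fly?"
            if self.state == "CONFIRM":
                if intent == "yes":
                    self.state = "DONE"
                    return f"Done! Flight to {self.destination} booked."
                self.state = "ASK_DESTINATION"
                return "Okay, where to instead?"
            return "This conversation is finished."

    dialog = BookingDialog()
    print(dialog.handle("inform", {"city": "Boston"}))  # Book a flight to Boston?
    print(dialog.handle("yes", {}))                     # Done! Flight to Boston booked.

The extra code buys you branching, validation, and persistent state that built-in flow editors make awkward, which is precisely the trade-off described above.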

Amazon Lex lacks the semantic sophistication of its competitors. One could say (somewhat metaphorically) that IBM Watson was created by linguists and computer scientists, while Amazon Lex was created by salespeople. As a product it is well packaged and initially looks pleasing to the eye, but once you start digging deeper you notice the limitations. Also, Amazon has traditionally excelled in the voice recognition component (Amazon Alexa) and not necessarily in actual language processing.

The space of conversational AI is fluid and changes happen rapidly. The existing products are evolving continuously and a new generation of AI engines is in the process of being developed.
