
When Big Data is not so big anymore

                                                   

We are inundated with information. There is so much of it around us that a special term, Big Data, was coined to emphasize its sheer size.

Dealing with a large amount of data is, of course, a problem, and various solutions have been created to address it efficiently.

At nmodes we developed a semantic technology that accurately filters relevant conversations. We applied it to social networks, particularly Twitter. Twitter is a poster child of Big Data. They have 500 million conversations every day. A staggering number. And yet, we found that for many topics, when they are narrowed down and accurately filtered, there are not that many relevant conversations after all.

On an average day, no more than five people on Twitter are looking for CRM solutions. Even fewer, about two per day on average, are explicitly asking for new web hosting providers, although many more are complaining about their existing providers (which may or may not mean they are ready to switch or are looking for a new option).

We often have businesses coming to us asking to find relevant conversations and expecting a large number of results. This, they assume, is what Big Data is supposed to deliver. Such an expectation is likely a product of our ‘keyword search dependency’. Indeed, when we run a keyword search on Twitter, in a search engine, or anywhere else, we get a long list of results. The fact that most of them (up to 98% in many cases) are irrelevant is often lost in the visual illusion of having this long, seemingly endless list in front of our eyes.

With quality solutions that accurately deliver only relevant results, we experience, for the first time, a situation in which there are no longer long lists of random results, only a handful of relevant ones.

This is so much more efficient. It saves time, increases productivity, clarifies the picture, and makes Big Data manageable.  

Time for businesses to embrace the new approach.

 

Interested in reading more? Check out our other blogs:

The Curious Case of AI Technology

                                                         

                                                                 

The notion of Artificial Intelligence has been around for a while.

Yet, unlike other prominent technological innovations such as electric cars or processor speeds, its progress has not been linear.

In fact, as far as industrial impact is concerned, there were times when allegedly there was no progress at all.

The widespread fascination with AI started several generations ago, in the 1980s. This is when the pioneering work of Noam Chomsky on computational grammar led to a belief that human language capabilities in particular, and human intelligence in general, could be straightforwardly algorithmized. The expectation was that AI-based programs would have a significant and lasting industrial impact.

But despite unbridled enthusiasm and a significant amount of effort, the practical results were minuscule. The main outcome was disappointment, and AI became somewhat of a dirty word for the next 20 years. Research became mostly confined to scientific labs, and although some notable results were achieved, such as the development of neural networks and the Deep Blue machine beating the reigning world chess champion, the general community was largely unaffected.

The situation started to change about 5-10 years ago with a new wave of industrial research and development.

We are now experiencing something of an AI renaissance, with bots, semantic search, self-service systems, and intelligent assistant programs like Siri taking over. In addition, scientific optimists are confidently predicting that we will reach the singularity within our lifetime.

The progress this time seems to be genuine indeed. There are indisputable breakthroughs, but even more impressive is the breadth of industries adopting AI solutions, from social networks to government services to robotics to consumer apps.

For the first time AI is expected to have a huge impact on the community in general.

There is a vibe around AI that hasn’t been felt in years. And with power comes responsibility, as they say: prominent thinkers such as Stephen Hawking have raised their voices about the dangers that powerful AI poses to humanity. Still, as far as the current topic is concerned, this is all part of the vibe.

Despite the plethora of upcoming opportunities, it is important to observe that we have yet to advance beyond the anticipation stage. AI has not become a major industrial asset, no AI firm has reached unicorn status, and although major industrial players such as IBM are pivoting towards a fully fledged AI-based model, the pivot has not yet manifested itself in business results.

We are still waiting for AI-based technology to disrupt the global community.

The overall expectation is that it is about to happen. But it hasn’t happened yet.

 


What Is AI Training?



AI training is a critical part of conversational AI solutions, the part that makes AI software different from any kind of software created before.
AI training is not coding; all other existing software, by contrast, is fully coded.

Let us consider a simple example:
We create chatbots for two companies: one sells shoes, the other sells cars. From the software standpoint, it is one chatbot solution, running either as an online service accessed remotely or as a program installed locally. In both cases, the companies receive two identical instances of the same software (one instance for the shoe company, another for the car company).
Yet for the first company the chatbot is supposed to talk about flip-flops, summer shoes, high heels, and so on. For the second company the chatbot is not expected to know any of that; instead, it should be able to support conversations about car brands and models, and should know how to tell a Toyota Camry from a Toyota Corolla. This knowledge of shoes and cars is not programmed, it is trained. It is not coded; it becomes part of the language processing capability that AI solutions such as chatbots have. And herein lies the major differentiation and advantage of AI solutions compared to traditional software.
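To make the distinction concrete, here is a minimal sketch in Python. It is purely illustrative: the class and data names are hypothetical and not part of any nmodes product. The point it shows is that the code is identical for both companies, and only the training data differs.

```python
# Purely illustrative sketch (hypothetical names): one engine, two training sets.

SHOE_STORE_TRAINING = {
    "ask_product": [
        "Do you have flip-flops in stock?",
        "Show me summer shoes",
        "Any high heels in size 7?",
    ],
}

CAR_DEALER_TRAINING = {
    "ask_product": [
        "What trims does the Toyota Camry come in?",
        "How is the Corolla different from the Camry?",
    ],
}


class Chatbot:
    """The same engine for every customer; behaviour comes from the training data."""

    def __init__(self, training_data):
        self.training_data = training_data

    def train(self):
        # A real system would fit a language model or intent classifier here;
        # for this sketch we only count the example utterances.
        examples = sum(len(v) for v in self.training_data.values())
        print(f"Trained on {examples} example utterances")


# Two identical instances of the same software, trained on different data.
shoe_bot = Chatbot(SHOE_STORE_TRAINING)
car_bot = Chatbot(CAR_DEALER_TRAINING)
shoe_bot.train()
car_bot.train()
```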

How to train AI?
There are several ways to do it. Sometimes the AI system can train itself, improving its linguistic ability over time. It can also be trained by professional linguists, and in some cases by the users themselves. The latter is the desirable scenario, because businesses know better than anybody else what they want their chatbot to talk about.
It is not easy, given the existing state of AI technology, and it usually requires a high level of technical knowledge. You may have heard mentions of intents and entities in chatbot discussions. These are examples of the linguistic elements that AI training is currently based on.
Without a proper understanding of what these linguistic elements are and of how the language acquisition process works in existing AI systems, it is better to leave AI training to professional linguists.
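As a rough illustration of what intents and entities look like, here is a small hypothetical sketch in Python. The field names and values are assumptions for illustration only, not the format of any particular chatbot platform.

```python
# Hypothetical illustration: an intent captures what the user wants,
# while entities are the specific details mentioned in the utterance.

intent = {
    "name": "compare_models",  # the user's goal
    "examples": [
        "How do I tell a Toyota Camry from a Toyota Corolla?",
        "What's the difference between the Camry and the Corolla?",
    ],
}

entities = [
    {"type": "car_model", "value": "Toyota Camry"},
    {"type": "car_model", "value": "Toyota Corolla"},
]

# During AI training, intents and entities like these (with their example
# phrases) are supplied to the system instead of new program code.
print(intent["name"], [e["value"] for e in entities])
```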
