
When Big Data is not so big anymore

                                                   

We are inundated with information. There is so much of it around us that a special term was coined - Big Data - to emphasize its sheer size.

Dealing with such a large amount of data is, of course, a problem, and various solutions have been created to address it efficiently.

At nmodes we developed a semantic technology that accurately filters relevant conversations. We applied it to social networks, particularly Twitter. Twitter is the poster child of Big Data, with 500 million conversations every day - a staggering number. And yet we found that for many topics, once they are narrowed down and accurately filtered, there are not that many relevant conversations after all.

No more than 5 people are looking for CRM solutions on Twitter on an average day. Even fewer - two per day on average - are explicitly asking for new web hosting providers, although many more are complaining about their existing providers (which may or may not mean they are ready to switch to a new option).

Businesses often come to us asking us to find relevant conversations and expecting a large number of results. This, they assume, is what Big Data is supposed to deliver. Such an expectation is likely a product of our ‘keyword search dependency’. Indeed, when we run a keyword search on Twitter, on a search engine, or anywhere else, we get a long list of results. The fact that most of them (up to 98% in many cases) are irrelevant is easily lost in the visual illusion of having this long, seemingly endless list in front of our eyes.
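To see the difference in miniature, here is a small, hypothetical sketch (the tweets and matching rules are invented for illustration and are not the nmodes technology): a plain keyword search returns every mention of ‘CRM’, while a relevance filter keeps only the conversations that actually express buying intent.

    # Hypothetical illustration only - not the nmodes semantic engine.
    tweets = [
        "Our CRM crashed again this morning, so frustrating.",
        "Looking for a CRM solution for a 10-person sales team. Any recommendations?",
        "Great article on the history of CRM software.",
        "We need a new CRM by next quarter - what should we evaluate?",
    ]

    # Keyword search: everything that mentions the term counts as a result.
    keyword_hits = [t for t in tweets if "crm" in t.lower()]

    # Relevance filter: keep only tweets that also express purchase intent
    # (a toy rule standing in for real semantic analysis).
    intent_markers = ("looking for", "recommendations", "need a new")
    relevant = [t for t in keyword_hits if any(m in t.lower() for m in intent_markers)]

    print(len(keyword_hits), "keyword results")  # 4
    print(len(relevant), "actually relevant")    # 2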

With quality solutions that accurately deliver only relevant results we experience, for the first time, a situation where there are no longer long lists of random results - only a handful of relevant ones.

This is so much more efficient. It saves time, increases productivity, clarifies the picture, and makes Big Data manageable.  

Time for businesses to embrace the new approach.

 

Interested in reading more? Check out our other blogs:

How nmodes technology is unique



nmodes AI is based on semantic algorithms, which require significantly less computational capacity than the standard machine learning algorithms used by the majority of conversational AI systems today.

As a result, the infrastructure requirements are drastically reduced. In simple terms, what Google Home or Amazon Alexa do with the help of supercomputers or advanced computer farms, nmodes AI can do on a basic server.

This gives nmodes the ability to delegate conversational capacity to its users. With the help of nmodes AI, every business (and individual) can create their own AI to handle the details of their business - products, customers, and so on - that are not accessible from the outside.

 

What is AI training?



AI training is a critical part of conversational AI solutions - the part that makes AI software different from any kind of software previously created.
AI training is not coding. All other existing software is fully coded; AI software is not.

Let us consider a simple example:
We create chatbots for two companies: one sells shoes, the other sells cars. From the software standpoint it is one chatbot solution, running either as an online service accessed remotely or as a program installed locally. In both cases there are two identical instances of the same software (one instance for the shoe company, another for the car company).
Yet for the first company the chatbot is supposed to talk about flip-flops, summer shoes, high heels and so on, while for the second company the chatbot is not expected to know any of that. Instead, it should be able to support conversations about car brands and models, and know how to tell a Toyota Camry from a Toyota Corolla. This knowledge of shoes and cars is not programmed, it is trained: it is not coded, but acquired as part of the language processing capability that AI solutions such as chatbots have. Herein lies the major differentiation and advantage of AI solutions compared to traditional software.
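To make the contrast concrete, here is a minimal, hypothetical sketch (the class and training phrases are invented for illustration, not taken from any specific chatbot framework): the code is identical for both companies, and only the training examples differ.

    # Hypothetical illustration: the same chatbot code, trained on different data.
    from collections import defaultdict

    class SimpleChatbot:
        """A toy intent matcher: identical code for every deployment."""

        def __init__(self):
            self.examples = defaultdict(list)  # intent -> list of example phrases

        def train(self, utterance, intent):
            self.examples[intent].append(utterance.lower())

        def classify(self, message):
            # Pick the intent whose examples share the most words with the message.
            words = set(message.lower().split())
            scores = {
                intent: max(len(words & set(ex.split())) for ex in exs)
                for intent, exs in self.examples.items()
            }
            return max(scores, key=scores.get)

    # Instance 1: trained for the shoe company.
    shoe_bot = SimpleChatbot()
    shoe_bot.train("do you have high heels in size 7", "browse_heels")
    shoe_bot.train("are flip-flops on sale this summer", "browse_summer_shoes")

    # Instance 2: the same software, trained for the car company.
    car_bot = SimpleChatbot()
    car_bot.train("what is the difference between the Camry and the Corolla", "compare_models")
    car_bot.train("which Toyota models do you have in stock", "list_models")

    print(shoe_bot.classify("any high heels available"))        # browse_heels
    print(car_bot.classify("Camry or Corolla, which is best"))  # compare_models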

How to train AI?
There are several ways to do it. Sometimes an AI system can train itself, improving its linguistic ability over time. It can also be trained by professional linguists, and in some cases by the users themselves. The latter is the desirable scenario, because businesses know better than anybody else what they want their chatbot to talk about.
It is not easy, however, given the existing state of AI technology, and it usually requires a high level of technical knowledge. You may have heard intents and entities mentioned in chatbot discussions; these are examples of the linguistic elements AI training is currently based on.
Without a proper understanding of what these linguistic elements are and how the language acquisition process works in existing AI systems, it is better to leave AI training to professional linguists.
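For readers curious what these elements look like in practice, here is a small, hypothetical training example (the format is a generic illustration, not the schema of any particular platform): each user utterance is labelled with the intent it expresses, and the phrases inside it are tagged as entities.

    # Hypothetical illustration of intent/entity training data (generic format,
    # not tied to any particular conversational AI platform).
    training_data = [
        {
            "text": "Do you have the Toyota Corolla in stock?",
            "intent": "check_availability",
            "entities": [
                {"value": "Toyota Corolla", "type": "car_model", "start": 16, "end": 30},
            ],
        },
        {
            "text": "How is the Camry different from the Corolla?",
            "intent": "compare_models",
            "entities": [
                {"value": "Camry", "type": "car_model", "start": 11, "end": 16},
                {"value": "Corolla", "type": "car_model", "start": 36, "end": 43},
            ],
        },
    ]

    # A trained model learns to map new utterances to these intents and to
    # extract the entity values, without any of this knowledge being coded.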
