
Towards smarter data - accuracy and precision


There is a huge amount of information out there, and it is growing. To use it efficiently and increase our competitive advantage, we need to evolve and start using information in a smart way, concentrating on data that drives business value because it is accurate, actionable, and agile. Accuracy is an important measure that determines the quality of data processing solutions.

How is accuracy calculated?

It is easy to do with structured data, because the requirements are formalizable. It is less obvious with unstructured data, e.g. a stream of social feeds, or any data set that involves natural language. Sentences in natural language are subject to multiple interpretations and therefore allow a degree of subjectivity. For example, should the sentence ‘I haven’t been on a sea cruise for a long time’ qualify for a data set of people interested in going on a cruise? Both answers, yes and no, seem valid.

In these cases, an argument has been put forward that a consensus approach, which polls data providers, is the best way to judge data accuracy. This approach essentially claims that the attributes with the highest consensus across data providers are the most accurate.
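To make the consensus heuristic concrete, here is a minimal sketch, assuming the rule amounts to a majority vote across providers (the function name and example values are hypothetical, not from any particular vendor):

```python
from collections import Counter

def consensus_value(provider_values):
    """Return the attribute value reported by the most data providers.

    provider_values: one reported value per provider for a single attribute.
    This majority-vote rule is the consensus heuristic described above;
    ties are broken arbitrarily by Counter's internal ordering.
    """
    counts = Counter(provider_values)
    value, _ = counts.most_common(1)[0]
    return value

# Example: three providers label a user as cruise-interested, one disagrees.
labels = ["cruise-interested", "cruise-interested",
          "not-interested", "cruise-interested"]
print(consensus_value(labels))  # cruise-interested
```

Note that the vote only measures agreement between providers, not correctness: if most providers share the same wrong assumption, the consensus confidently returns the wrong value.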

At nmodes we deal with unstructured data all the time because we process natural language messages, primarily from social networks. We do not favor this simplistic consensus approach: it is biased, inviting people to make assumptions based on what they already believe to be true, and it makes no distinction between precision and accuracy. The difference is that precision measures what you got right, while accuracy measures both what you got right and what you got wrong. Accuracy is a more inclusive and therefore more valuable characteristic.
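In standard classification terms, that distinction can be sketched as follows: precision looks only at the items you labeled positive, while accuracy also counts what you correctly rejected. The counts below are hypothetical, purely for illustration:

```python
def precision(tp, fp):
    # Of everything labeled positive, the fraction that was actually right.
    return tp / (tp + fp)

def accuracy(tp, tn, fp, fn):
    # Correct answers (both positive and negative) over all answers,
    # so mistakes in either direction pull the score down.
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical labeling run: 80 true positives, 90 true negatives,
# 20 false positives, 10 false negatives.
print(precision(80, 20))         # 0.8
print(accuracy(80, 90, 20, 10))  # 0.85
```

A labeler can score high on precision simply by labeling conservatively; accuracy also penalizes the positives it misses, which is why it is the more inclusive measure.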

Our approach is:

a) to validate data against third party independent sources (typically of academic origin) that contain trusted sets and reliable demography. Validating nmodes data against third party sources allows us to verify that our data achieves the greatest possible balance of scale and accuracy.

b) to enrich the existing test sets by purposefully including examples that are ambiguous in meaning and intent, and providing additional levels of categorization to cover them.

Accuracy becomes important as businesses move from the rudimentary data use typical of the first Big Data years to the more measured and careful approach of today. Understanding how it is calculated and the value it brings helps in achieving long-term sustainability and success.

 

Interested in reading more? Check out our other blogs:

AI unmasked: How a chatbot is different from a voice bot




The main difference is in the linguistic complexity. 

People express themselves differently when they speak compared to when they type. When we speak, we use more sentences and make them longer.

As a result, a voice bot needs more capable AI than a chatbot in order to handle a conversation and deliver the same customer experience.


If your business model allows it, it is better to start with a chatbot and add a voice bot on top of it.

This way you can gradually increase the complexity of your AI without compromising on your customer experience. 

 

AI unmasked: Why long-term success of your business depends on conversational AI



For a business to grow successfully, it needs to scale its sales, customer service, and marketing.

The only sustainable way to do this is to introduce an automated sales and customer experience service.

Conversational AI is the only available method for automating customer experience without reducing the quality of service. It comes in a variety of forms, such as chatbots, voice bots, virtual assistants, and cognitive agents. They all share the ability to scale and to deliver human-level quality of conversation.

 