
Towards smarter data - accuracy and precision


There is a huge amount of information out there, and it is growing. To use it efficiently and increase our competitive advantage, we need to evolve and start using information in a smart way: by concentrating on data that drives business value because it is accurate, actionable, and agile. Accuracy is an important measure that determines the quality of data processing solutions.

How is accuracy calculated?

It is easy to calculate with structured data, because the requirements can be formalized. It is less obvious with unstructured data, e.g. a stream of social feeds, or any data set that involves natural language. Indeed, sentences in natural language are subject to multiple interpretations and therefore allow a degree of subjectivity. For example, should the sentence ‘I haven’t been on a sea cruise for a long time’ qualify for a data set of people interested in going on a cruise? Both answers, yes and no, seem valid.

In these cases an argument has been put forward that a consensus approach, which polls data providers, is the best way to judge data accuracy. This approach essentially claims that the attribute with the highest consensus across data providers is the most accurate.

At nmodes we deal with unstructured data all the time because we process natural language messages, primarily from social networks. We do not favor this simplistic consensus approach: it is biased, inviting people to make assumptions based on what they already believe to be true, and it makes no distinction between precision and accuracy. The difference is that precision measures how much of what you returned was right, while accuracy also accounts for what you got wrong, including the items you should have returned but missed. Accuracy is a more inclusive and therefore more valuable characteristic.
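The distinction between the two measures can be made concrete with the standard classification formulas. This is a minimal sketch with illustrative counts, not nmodes data:

```python
def precision(tp, fp):
    # Precision: of the items we labeled relevant, what fraction truly were?
    return tp / (tp + fp)

def accuracy(tp, tn, fp, fn):
    # Accuracy: of all items, what fraction did we classify correctly,
    # counting both correct inclusions and correct exclusions?
    return (tp + tn) / (tp + tn + fp + fn)

# Illustrative counts: 40 true positives, 10 false positives,
# 45 true negatives, 5 false negatives (100 items total).
tp, fp, tn, fn = 40, 10, 45, 5
print(precision(tp, fp))          # 0.8  (40 of 50 returned items were right)
print(accuracy(tp, tn, fp, fn))   # 0.85 (85 of 100 items classified correctly)
```

Note that a system can have high precision while silently missing many relevant items (a large `fn`); accuracy penalizes those misses, which is why it is the more inclusive measure.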

Our approach is

a) to validate data against independent third-party sources (typically of academic origin) that contain trusted data sets and reliable demographics. Validating nmodes data against third-party sources allows us to verify that our data achieves the greatest possible balance of scale and accuracy.

b) to enrich existing test sets by purposefully including examples that are ambiguous in meaning and intent, and providing additional levels of categorization to cover these examples.
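Step (a) amounts to an agreement check against a trusted reference set. This is a minimal sketch, assuming our labels and the third-party labels are both available as dictionaries keyed by message id; all names and data below are hypothetical:

```python
def agreement_rate(our_labels, trusted_labels):
    """Fraction of messages where our label matches the trusted source.

    Only messages present in both sets are compared.
    """
    shared = our_labels.keys() & trusted_labels.keys()
    if not shared:
        return 0.0
    matches = sum(1 for mid in shared
                  if our_labels[mid] == trusted_labels[mid])
    return matches / len(shared)

# Hypothetical labels for a handful of messages.
ours = {"m1": "interested", "m2": "not_interested",
        "m3": "interested", "m4": "interested"}
trusted = {"m1": "interested", "m2": "not_interested",
           "m3": "not_interested"}
print(agreement_rate(ours, trusted))  # 2 of the 3 shared messages agree
```

Comparing only the shared ids keeps the check honest: messages the trusted source never labeled tell us nothing about our accuracy.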

Accuracy becomes important as businesses move from the rudimentary data use typical of the first Big Data years to the more measured and careful approach of today. Understanding how it is calculated and the value it brings helps in achieving long-term sustainability and success.

 

Interested in reading more? Check out our other blogs:

Scalable Yet Personalized

How can we offer businesses and organizations a solution that personalizes and scales the consumer interaction process at the same time?

Personalizing the user relationship process: Today, end users and consumers demand to be targeted individually and approached based on their actual interests. The nmodes AI (Artificial Intelligence) powered solution helps organizations accurately identify user needs in real time. Our solution delivers information on each user individually, thus providing the level of personalization required for successful customer service.

Scaling the user relationship process: Once the organization identifies a user and a problem that needs to be addressed, the next step is reaching out to that user individually. Currently this is a manual, non-scalable procedure. The nmodes AI (Artificial Intelligence) solution provides automated assistance to human personnel, including substitution when deemed appropriate, thus making the entire process scalable.

Today more than 90% of all organizations and businesses rely on solutions based on keywords, even though these solutions produce low-quality results insufficient for the new generation of personalized, scalable services.

The nmodes solution enables sustainable delivery of high-quality results, with a 5x cost reduction and up to a 45% increase in conversation (engagement) capacity.

 


When Big Data is not so big anymore


We are inundated with information. There is so much of it that a special term, Big Data, was coined to emphasize its sheer size.

Dealing with a large amount of data is, of course, a problem, and various solutions have been created to address it efficiently.

At nmodes we developed a semantic technology that accurately filters relevant conversations. We applied it to social networks, particularly Twitter. Twitter is a poster child of Big Data, with 500 million conversations every day. A staggering number. And yet we found that for many topics, once they are narrowed down and accurately filtered, there are not that many relevant conversations after all.

No more than five people are looking for CRM solutions on an average day on Twitter. Even fewer, two per day on average, are explicitly asking for new web hosting providers, although many more are complaining about their existing providers (which may or may not suggest they are ready to switch or looking for a new option).

We often have businesses coming to us asking us to find relevant conversations and expecting a large number of results. This is what Big Data is supposed to deliver, they assume. Such an expectation is likely a product of our ‘keyword search dependency’. Indeed, when we run a keyword search on Twitter, on search engines, or anywhere else, we get a long list of results. The fact that most of them (up to 98% in many cases) are irrelevant is often lost in the visual illusion of having this long, seemingly endless list in front of our eyes.
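The arithmetic behind that illusion is simple. The 98% figure comes from the text above; the list size here is hypothetical:

```python
# A keyword search returns a long list, but most results are irrelevant.
keyword_results = 1000           # hypothetical size of the returned list
irrelevant_share = 0.98          # up to 98% irrelevant, per the text

relevant_results = keyword_results * (1 - irrelevant_share)
print(int(relevant_results))     # ~20 relevant conversations buried in 1000

# An accurate semantic filter would return roughly those same ~20
# conversations as a short list, with no noise to scan past.
```

The long list and the short one contain about the same amount of useful information; the difference is how much irrelevant material the reader must wade through to find it.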

With quality solutions that accurately deliver only relevant results we experience, for the first time, a situation in which there are no longer big lists of random results, only a handful of relevant ones.

This is so much more efficient. It saves time, increases productivity, clarifies the picture, and makes Big Data manageable.  

Time for businesses to embrace the new approach.

 
