
Integrated Real-Time Data Boosts Content Delivery

How do you make content more relevant and appealing to the consumer?

This is a problem that has been on the minds of content creators for some time now. In our age of information abundance, it is not easy to stand out and make your voice heard. The competition for the consumer’s attention is escalating, and with the number of information sources ever increasing, it will only get tougher.

Traditionally, content delivery does not change across the target audience. A commercial or a blog post looks the same, and is experienced the same way, by every viewer and reader. We are so entrenched in this paradigm that we can hardly imagine it being otherwise.

It turns out that new technologies capable of capturing individual intent in real time open up new opportunities to create personalized experiences within content delivery.

This is how content can become more relevant: by becoming more personalized.

In a rudimentary form, we are already familiar with this approach from online advertising. Web and social platforms aim to personalize their promotional campaigns based on whatever scraps of behavioural patterns and interests they can squeeze out of our web searches. The problem, of course, is that the technologies powering these campaigns understand human behaviour poorly, so the results more often than not leave a great deal to be desired. To put it mildly.

nmodes has been working on semantic processing of intent for several years. We can now capture intent from unstructured data (human conversations) with 99% accuracy. (Interestingly, many businesses do not require this level of accuracy and are satisfied with 90%-92%, but we know how to deliver it anyway.)

We recently started to experiment with personalizing content by using available consumer intent.

We used Twitter because of its real-time nature.

We started by publishing a story on Twitter, divided into several episodes.

Meanwhile, we kept a constant stream of Twitter data flowing, concentrating on the intent to dine in Paris.

We then merged the content of the story with the consumer intent to dine in Paris, as captured by our semantic software. A simplified sketch of the idea follows.
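Our production pipeline is proprietary, so the short Python sketch below only illustrates the general shape of the merging step under stated assumptions: the hard-coded tweets stand in for the live Twitter stream, the keyword matcher stands in for nmodes' semantic intent capture, and the episode text is placeholder content.

```python
# A simplified, self-contained sketch of the merging step.
# All names and data here are illustrative: the keyword matcher stands in
# for nmodes' semantic intent capture, the hard-coded tweets stand in for
# the live Twitter stream, and the episode text is placeholder content.

STORY_EPISODES = [
    "Episode 1: Marie arrives in Paris hungry and hopelessly lost.",
    "Episode 2: A tiny bistro near the Seine changes her evening entirely.",
]

# Very rough proxy for "intent to dine in Paris"; the real system analyses
# whole conversations semantically rather than matching keywords.
DINING_CUES = {"dinner", "restaurant", "bistro", "eat", "dine"}
PARIS_CUES = {"paris", "seine", "montmartre"}


def has_dine_in_paris_intent(tweet_text: str) -> bool:
    words = set(tweet_text.lower().replace("?", " ").split())
    return bool(words & DINING_CUES) and bool(words & PARIS_CUES)


def merge(tweet_text: str, episode: str) -> str:
    # Deliver the next story episode to the consumer who expressed the intent.
    return f"In reply to {tweet_text!r}:\n  {episode}"


if __name__ == "__main__":
    incoming_tweets = [
        "Looking for a good restaurant in Paris tonight",
        "Just finished my tax return, so tired",
        "Where should we eat near the Seine?",
    ]
    matched = [t for t in incoming_tweets if has_dine_in_paris_intent(t)]
    for i, tweet in enumerate(matched):
        print(merge(tweet, STORY_EPISODES[i % len(STORY_EPISODES)]))
```

In the actual experiment the merged content was, of course, delivered back through Twitter in real time rather than printed to a console.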

This merging approach shows promising results: the engagement rate jumped above 90%.

Overall, we are only at the beginning of a tremendous journey. We know that other companies are beginning to experiment, and the opportunities for introducing artificial-intelligence-related technologies into content delivery are plentiful.

There is a long road ahead, and we have made one small step. But it is a step in a very exciting direction.

 


Beware the lure of crowdsourced data

Crowdsourced data can often be inconsistent, messy or downright wrong 

We all like something for nothing; that’s why open source software is so popular. (It’s also why The Pirate Bay exists.) But sometimes things that seem too good to be true are just that.

Repustate is in the text analytics game, which means we need lots and lots of data to model certain characteristics of written text. We need common words, grammar constructs, human-annotated corpora of text, and so on, to make our various language models work as quickly and as well as they do.

We recently embarked on the next phase of our text analytics adventure: semantic analysis. Semantic analysis is the process of taking arbitrary text and assigning meaning to the individual, relevant components. For example, it means being able to identify “apple” as a fruit in the sentence “I went apple picking yesterday”, but to identify “Apple” the company in “I can’t wait for the new Apple product announcement”. (Note: even though I used title case in the latter example, casing should not matter.)
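The post does not describe how Repustate actually resolves this ambiguity, so the following Python sketch is only a naive illustration of the problem itself: a hypothetical context-word heuristic that lowercases the input first, which is why casing does not matter.

```python
# A naive, hypothetical illustration of the "apple" vs. "Apple" ambiguity.
# This context-word heuristic is NOT how Repustate's semantic analysis
# works; it only demonstrates the disambiguation task described above.

COMPANY_CUES = {"product", "announcement", "iphone", "stock", "ceo"}
FRUIT_CUES = {"picking", "pie", "eat", "ate", "juice", "orchard"}


def classify_apple(sentence: str) -> str:
    # Lowercase everything up front, so casing does not matter.
    words = set(sentence.lower().replace(",", " ").split())
    if "apple" not in words:
        return "no mention of apple"
    if words & COMPANY_CUES:
        return "Apple, the company"
    if words & FRUIT_CUES:
        return "apple, the fruit"
    return "ambiguous"


if __name__ == "__main__":
    print(classify_apple("I went apple picking yesterday"))
    # -> apple, the fruit
    print(classify_apple("i can't wait for the new apple product announcement"))
    # -> Apple, the company (even with no title case in the input)
```

A real semantic engine relies on far richer context than a handful of cue words, but the lowercasing step captures the point about casing made above.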
