Tech Pioneers

Rob Key

CEO

Converseon

Winner 2026

Let’s start with you. Who are you, and what problem are you trying to solve in social intelligence?

I’m the founder and CEO of Converseon, an AI technology and intelligence consultancy, where we’ve been working diligently since 2008 (when we built our first machine-learning-based NLP model) to transform unstructured social, media, and voice-of-customer data into market-research-ready intelligence. We work continuously to tackle many of the most challenging data accuracy and quality issues through AI and data science, and we integrate our technologies deeply into many of the leading social listening platforms as well as provide them directly to brands.

When it comes to social data, what do you think is still misunderstood or underdeveloped?

Properly structured, social data is not only qualitative but also has quantitative and predictive value. For example, we have consistently seen that for many industries this data can predict business outcomes, such as sales or shareholder value, with about 85% confidence.
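One way to picture the predictive claim above is a lead-lag analysis: does a perceptual signal today correlate with a business outcome some months later? The sketch below is a toy illustration on synthetic data, not Converseon’s actual methodology; the variable names (`trust`, `sales`) and the two-month lag are assumptions for the example.

```python
# Toy sketch: find the lag at which a social-perception signal best
# predicts a business outcome, using synthetic monthly data.
import numpy as np

def lagged_correlation(signal, outcome, lag):
    """Correlate signal[t] with outcome[t + lag] (signal leads outcome)."""
    if lag == 0:
        return float(np.corrcoef(signal, outcome)[0, 1])
    return float(np.corrcoef(signal[:-lag], outcome[lag:])[0, 1])

rng = np.random.default_rng(0)
trust = rng.normal(size=36)                             # monthly "trust" index
sales = np.roll(trust, 2) + 0.3 * rng.normal(size=36)   # sales echo trust 2 months later
sales[:2] = rng.normal(size=2)                          # earliest months: no echo yet

# Scan candidate lags and keep the strongest one.
best_lag = max(range(6), key=lambda k: lagged_correlation(trust, sales, k))
```

On this synthetic series the scan recovers the two-month lead, which is the kind of relationship that makes a perception metric a “metric that matters” rather than a vanity metric.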

And this is important. In a world where brands are drowning in data, they are increasingly turning away from vanity metrics to focus on the “metrics that matter”: those that directly impact core organizational KPIs.

But it all has to start with “AI- and research-ready data”: accurate, validated, governed by humans, granular (entity and aspect models, for example), customized to brands and domains, metadata-rich, and normalized through techniques such as our Bayesian modeling to separate signal from noise.
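To make the “separate signal from noise” idea concrete, here is one classic normalization technique in that family: empirical-Bayes (Beta-Binomial) shrinkage, which pulls low-volume daily sentiment rates toward a long-run baseline so they don’t swing wildly. This is a generic textbook sketch, not Converseon’s actual model, and the prior values are illustrative assumptions.

```python
# Empirical-Bayes shrinkage of a daily positive-mention rate:
# a Beta(prior) belief updated with the day's observed counts.

def shrunken_rate(positives, total, prior_rate, prior_strength):
    """Posterior mean of the positive rate under a Beta-Binomial model."""
    alpha = prior_rate * prior_strength
    beta = (1 - prior_rate) * prior_strength
    return (positives + alpha) / (total + alpha + beta)

# A quiet day with 3/4 positive mentions: raw rate 0.75, but too few
# mentions to trust, so it is pulled strongly toward the 0.5 baseline.
raw = 3 / 4
smoothed = shrunken_rate(3, 4, prior_rate=0.5, prior_strength=100)

# A busy day with 750/1000 positives keeps nearly its raw rate.
busy = shrunken_rate(750, 1000, prior_rate=0.5, prior_strength=100)
```

The design point is that confidence scales with evidence: the same spike in sentiment means something different on 4 mentions than on 1,000, and normalization encodes that before any chart is drawn.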

Unstructured data makes up an estimated 80% of all data, yet most of it goes unused (as “dark data”) because of its complexity and noise. Fully tapping into this data to build effective AI and drive insights is a massive challenge, but also a massive opportunity. Social listening sits squarely in the middle of this.

What’s something you’ve seen lately, maybe a trend, a tool, or behaviour, that felt like a glimpse of the future?

We are most excited about, and deeply focused on, two areas. One is predictive intelligence. Our PRISM solution connects social data to business outcomes and provides scenario planning and future-casting to understand how specific changes in perceptual attributes (such as an increase in “trust”) will likely impact sales or another outcome metric. This helps organizations plan better for a world where even “real time is not fast enough.”

The second trend is also incredibly important: the emergence of “context engineering,” the discipline of structuring, validating, and governing the data, definitions, and meaning around AI systems so they produce accurate, reliable, and decision-ready intelligence.

Without engineered context (clear identification of entities, relationships, intent, sentiment drivers, confidence, and causality), unstructured data remains noisy, fragile, and unsafe for decision-making. Much of the core technology that brings social data alive is the same technology that can provide this essential context layer.
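The ingredients listed above (entities, aspects, sentiment, confidence) can be pictured as a structured record attached to each raw mention. The sketch below shows what such a context layer might look like as data; the field names and threshold are illustrative assumptions, not a real Converseon schema.

```python
# Minimal sketch of "engineered context": each raw mention is resolved to
# an entity and aspect, scored, and gated by confidence before it can
# feed decision-making.
from dataclasses import dataclass

@dataclass
class Mention:
    text: str
    entity: str          # resolved brand/product entity
    aspect: str          # e.g. "trust", "price", "service"
    sentiment: float     # -1.0 (negative) .. 1.0 (positive)
    confidence: float    # model confidence, 0.0 .. 1.0

def decision_ready(mentions, min_confidence=0.8):
    """Keep only records trustworthy enough for downstream decisions."""
    return [m for m in mentions if m.confidence >= min_confidence]

mentions = [
    Mention("Love the new support line", "AcmeBank", "service", 0.9, 0.93),
    Mention("acme?? idk lol", "AcmeBank", "trust", -0.1, 0.41),
]
kept = decision_ready(mentions)
```

Filtering on confidence is what makes the layer auditable: every record that reaches a dashboard or an LLM carries its own provenance and a score that can be validated after the fact.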

Our Conversus IQ technology uses our NLP technology, including rich and accurate metadata, to elevate social data into an enterprise context-engineering layer so that LLMs can be used reliably and effectively for reputation, brand, and social intelligence.

If we want social intelligence to be more than a tech category, what needs to change in how we build or buy the tech?

I view the tech category’s mandate as broader and more exciting than some do: moving from simple social or media “listening” to providing the features and solutions (the “context engineering”) that transform massive, often untapped unstructured data sources into trusted, accurate, enterprise-ready intelligence using human-governed AI.

To achieve this, it’s important for buyers of social intelligence tech to think in terms of “ecosystems” rather than specific platforms. In the AI era, ecosystems will win, specifically those that provide flexibility of data sourcing and pipelines, the customization and use of different NLP and AI models, and different engagement options (from dashboards to LLMs). We’re seeing strong movement by many of the top “listening” platforms to build in this flexibility.

And this is going to require increased discernment from social intelligence professionals purchasing tech. The question must shift away from who has the “best” dashboard toward who provides the best context: the data, semantics, governance, and orchestration that determine how social data is used in real enterprise environments. The best context engineering tech will operate at the entity, aspect, and attitudinal level; govern how models retrieve, reason, and summarize; reduce hallucinations; align outputs to domain- and organization-specific requirements; and enable validation, traceability, and confidence scoring. In doing so, it makes social intelligence defensible, not just in dashboards but also in boardrooms.

What’s the hardest part of turning data into action? And how do we make that easier without dumbing it down?

The lack of trust and confidence in the data and insights. For organizations to make decisions on this data there has to be greater confidence, and that, again, comes down to the data.

The industry continues to live in a paradox.   Never has there been more consumer, media, and voice-of-customer data available—across social platforms, news, reviews, forums, call-center transcripts, surveys, and first-party feedback. And yet, extracting reliable, decision-grade insight from this data has remained stubbornly difficult. At the same time, demand for accurate, actionable insights has never been higher.

The emergence of generative AI in 2023–2025 initially appeared to many to be a panacea. But the industry quickly learned a hard truth: large language models are only as good as the data and context they are given. Unchecked, they introduce unacceptable risks: rising costs, hallucinations that undermine trust, black-box outputs that fail audit and regulatory scrutiny, generic reasoning that ignores brand and category nuance, limited control over outputs, and little ability to tie results to business outcomes.

The result is a growing realization across enterprises: most of the value still locked in unstructured data remains inaccessible, while much of today’s generative AI value is trapped in experimental or low-risk use cases. LLMs alone are insufficient for high-stakes decisions involving brand, reputation, customers, regulation, or capital allocation.

This is why we are putting such an emphasis on context engineering and AI-ready data: to build trust and confidence in these systems and insights, and to drive greater adoption.

What’s a quote, concept, or model you return to often when things get messy?

I was reminded recently of the now-iconic 60 Minutes interview from the height of the dot-com boom, in which Razorfish CEO Jeff Dachis was roundly criticized, and his words dismissed as industry jargon, for saying that his consulting firm helped brands “recontextualize” themselves in the face of deep technological transformation.

Of course, Jeff was exactly on point.

More recently, the Yale Center of Management has advocated for an updated form of this through the concept of “refounding.” They argue persuasively that success in this moment will require more than incremental improvement or annual planning rituals. In an era where AI is rapidly transforming technologies, processes, and job definitions, often collapsing them entirely, the enduring sources of value must be embedded in an organization’s DNA. Purpose, rigor, and trusted capabilities become the anchor in the storm, and it’s in organizations’ best interests to return to them.

This is an exercise that I believe is essential for both organizations and industries in 2026, including the social listening industry. It’s certainly one we are doing here at Converseon.

What’s the last non-work thing you read, watched, or played that reshaped how you think?

I've found myself, for various reasons, becoming more deeply involved in helping to create natural ecosystems through permaculture. E. O. Wilson's The Diversity of Life is certainly a classic, with many lessons and observations relevant to the AI ecosystem humans are building.
