Monday, 22 December 2025

The myth of the 95% AI failure rate

 

Why your AI project is actually failing–and how to fix the foundation


You’ve probably seen the headline by now: “95% of generative AI pilots fail to deliver measurable ROI.” It has appeared in just about every recent article and presentation on AI – and it has most definitely sparked a familiar fear: that we’re living through another overhyped AI bubble.


But that number, taken at face value, misses the real story. Because what’s failing isn’t AI. It’s how organizations are trying to adopt it. Once you look past the headline and into what the research actually measured, a different picture emerges – one that’s far more practical, and far more fixable.


Here is a critical breakdown of what the research really showed, what it did not show, and the non-negotiable architectural mandate required to transition your projects from the volatile 95% bracket into the successful 5%.

1. What the research really showed: A failure of integration, not intelligence

The report from the MIT Media Lab’s Project NANDA (a group that advocates for a decentralised web of AI agents, and as such is somewhat biased in its view of the current state of AI) defined “failure” precisely: the inability of the AI pilot to transition beyond the proof-of-concept stage and achieve rapid revenue acceleration or a substantial, measurable return on investment (ROI).

In other words, these weren’t models that didn’t work. They worked just fine in controlled environments. However, they failed when they hit the real world.


The researchers describe this as a “learning gap” – the moment when an AI system leaves the lab and runs into fragmented data, unclear ownership, and workflows that were never designed for intelligence to plug into them.


So the takeaway isn’t “AI doesn’t work.”

It’s “we’re dropping AI into environments that aren’t ready for it.”


Why AI projects stumble: it’s usually not the tech


When AI initiatives stall, the real roadblocks are usually internal – specifically how we try to fit the technology into our existing organization and where we decide to spend the money.


First, let's talk about the integration problem


Dropping a powerful but generic tool like a Large Language Model (LLM) into a complex company is like trying to use a foreign body in a human system – it just doesn't mesh. For these tools to actually work, we need more than just the software; we need to break down the departmental silos, establish clear governance, and define exactly how the new AI connects with our current systems. Without that framework, the tech is essentially an outsider that can't access the necessary context to be effective.


The second big issue is strategic misalignment – what I call the “Visibility Trap” – a mismatch in where the money gets allocated.


A large share of AI budgets flows toward visible functions like Sales and Marketing. But MIT’s own data shows the highest measurable ROI comes from back-office automation – reducing repetitive internal work and operational drag. The conclusion is clear and simple: we often fund what looks impressive instead of what actually compounds value.

Bottom line: the AI isn’t failing. The organization is failing to prepare the ground it’s meant to operate on.

2. The core obstacle: the data readiness crisis

If strategic failure kills ROI in the pilot phase, data fragmentation kills the rollout in the production phase. Gartner data supports this operational challenge, showing that only 48% of AI projects successfully transition into production.


The single biggest blocker to rolling out AI is the state of enterprise data, which we can dissect into three components:


  1. The unstructured data problem:
    Most enterprise data – emails, tickets, documents, logs – is unstructured. It lacks consistent context and labeling, making it unusable for reliable, auditable AI unless it’s heavily cleaned first.

  2. The fragmentation trap:
    Customer context is spread across CRMs, ticketing systems, engineering tools, and ERPs. Stitching this together with brittle API calls doesn’t scale. It slows systems down and introduces failure points.

  3. The trust gap:
    When leaders can’t trace answers back to source data –or predict how the system will behave – they won’t rely on it for real decisions. That’s when projects quietly get shelved.


Without good data, there are no good decisions. That eternal truth is why so many promising projects end up abandoned: without a solid data foundation, they are simply set up to fail.


What do the successful 5% do differently?

To succeed where 95% of companies stall, you must pivot from model-centric optimization to a data-centric architectural foundation. This requires securing three critical capabilities: Context, Trust, and Action.


This is where a platform like Computer by DevRev fundamentally re-architects the problem, ensuring your AI initiatives are grounded in a ready-made, unified data layer.

How DevRev addresses the foundation problem

Let’s walk through the three things that DevRev does very differently – as part of our yearlong effort to architect Computer from the ground up for this purpose.

1. Context: fixing fragmentation with a unified data layer

DevRev replaces fragile data federation with physical consolidation. Using bi-directional sync, data from CRM, support, engineering, and other systems is continuously pulled into a single store. That data is then structured as a relationship-rich knowledge graph – linking customers to tickets, tickets to code, and code to documentation.

This turns scattered, unstructured data into something AI can actually reason over.
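To make the idea concrete, here is a minimal, hypothetical sketch of such a relationship-rich graph in Python. The node types, IDs, and relation names are illustrative assumptions, not DevRev’s actual schema.

```python
from collections import defaultdict

class KnowledgeGraph:
    """Toy knowledge graph: typed nodes plus named, directed relations."""

    def __init__(self):
        self.nodes = {}                 # id -> (type, attributes)
        self.edges = defaultdict(list)  # id -> list of (relation, target id)

    def add_node(self, node_id, node_type, **attrs):
        self.nodes[node_id] = (node_type, attrs)

    def link(self, src, relation, dst):
        self.edges[src].append((relation, dst))

    def neighbors(self, node_id, relation=None):
        return [dst for rel, dst in self.edges[node_id]
                if relation is None or rel == relation]

# Hypothetical records: customer -> ticket -> code -> documentation.
graph = KnowledgeGraph()
graph.add_node("cust-1", "customer", name="Acme Corp")
graph.add_node("tkt-7", "ticket", title="Login fails on SSO")
graph.add_node("commit-a1", "code", repo="auth-service")
graph.add_node("doc-3", "documentation", title="SSO setup guide")
graph.link("cust-1", "reported", "tkt-7")
graph.link("tkt-7", "fixed_by", "commit-a1")
graph.link("commit-a1", "documented_in", "doc-3")

# Traverse from a customer all the way to the documentation of the fix.
ticket = graph.neighbors("cust-1", "reported")[0]
commit = graph.neighbors(ticket, "fixed_by")[0]
print(graph.neighbors(commit, "documented_in"))  # → ['doc-3']
```

The point of the structure is that every hop is an explicit, queryable relation – exactly the kind of linked context an AI system can reason over, instead of stitching it together from brittle API calls at question time.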

2. Trust: grounding AI in auditable queries

To address governance and hallucination concerns, DevRev grounds conversational AI in an auditable layer. Natural-language questions are translated into standard SQL queries against the unified data layer. That means answers are:

  • Predictable

  • Traceable

  • Verifiable against source data

This is what makes enterprise-grade trust possible.
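As an illustration of the idea – not DevRev’s actual implementation – here is a minimal sketch in which a natural-language question is mapped to an auditable SQL query over a toy unified store. The schema and the question-to-SQL mapping are invented for the example; a real system would have an LLM generate the SQL and then log it for audit.

```python
import sqlite3

# Toy "unified data layer": a single queryable store.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tickets (id INTEGER, customer TEXT, status TEXT)")
db.executemany("INSERT INTO tickets VALUES (?, ?, ?)",
               [(1, "Acme", "open"), (2, "Acme", "closed"), (3, "Globex", "open")])

# Tiny question -> SQL template mapping (stand-in for model-generated SQL).
TEMPLATES = {
    "open tickets per customer":
        "SELECT customer, COUNT(*) FROM tickets "
        "WHERE status = 'open' GROUP BY customer ORDER BY customer",
}

def answer(question):
    sql = TEMPLATES[question]           # the generated SQL *is* the audit trail
    rows = db.execute(sql).fetchall()
    return sql, rows

sql, rows = answer("open tickets per customer")
print(sql)    # traceable: the exact query behind the answer
print(rows)   # → [('Acme', 1), ('Globex', 1)]
```

Because the answer is produced by a deterministic query rather than free-form generation, anyone can re-run the SQL and verify the result against the source data.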

3. Action: closing the learning gap

Insight alone doesn’t deliver ROI. Because DevRev is a system of record, its AI agents are write-enabled. They don’t just answer questions – they can update tickets, log bugs, assign ownership, and execute changes inside real workflows. That’s how the “learning gap” closes: insight turns into action, and action turns into measurable operational impact.
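A toy sketch of that idea – with purely illustrative record shapes and field names, not DevRev’s API:

```python
# Stand-in for the system of record that the agent is allowed to write to.
tickets = {7: {"status": "open", "owner": None}}

def act_on_insight(insight):
    """Turn an insight (e.g. 'this ticket belongs to the auth team')
    into a concrete update on the system of record."""
    ticket = tickets[insight["ticket_id"]]
    ticket["owner"] = insight["assign_to"]
    ticket["status"] = "in_progress"
    return ticket

updated = act_on_insight({"ticket_id": 7, "assign_to": "auth-team"})
print(updated)  # → {'status': 'in_progress', 'owner': 'auth-team'}
```

The difference from a read-only assistant is that the state of the workflow actually changed – which is what makes the impact measurable.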

The real lesson behind the 95%

The 95% failure rate isn’t a warning about AI, really. It’s a warning about treating AI like a plug-in instead of a system. GenAI success depends on foundations – context your AI can understand, trust your leaders can audit, and actions that move work forward automatically. When those are in place, AI stops being experimental – and starts compounding value.


If you want to dig deeper into DevRev, Computer, or any of the ideas here, I’m happy to continue the conversation.


All the best


Rik


Monday, 2 June 2025

The Enterprise Dilemma: Building vs. Buying AI-native CX Solutions


In today's ever-changing business landscape, enterprises face a critical decision when it comes to implementing AI-native CX solutions: should they build custom solutions from scratch, or buy existing platforms?

The Traditional Build Approach to AI in CX

Building custom CX solutions offers enterprises complete control over their implementation: they can fully customize their implementation, perfectly align it with specific business processes, maintain proprietary intellectual property, and keep direct control over feature development.

However, this approach comes with significant drawbacks, including high development and maintenance costs, extended time-to-market, and resource-intensive updates and improvements. Plus: whether you like it or not, there’s quite a bit of complexity to creating a solid AI solution – complexity that can only be met with significant skill!

The Traditional Buy Approach to AI in CX

Purchasing existing CX solutions provides several immediate benefits. Companies can deploy these solutions rapidly, leveraging proven functionality that has already been tested in the market and that has been engineered by highly specialized staff with very specific skills. These solutions come with regular updates and improvements managed by the vendor, and typically require a lower initial investment compared to building from scratch.

However, this approach also comes with notable limitations. Organizations often find themselves restricted by limited customization options and become dependent on the vendor's roadmap for new features and improvements. Additionally, there's a risk of misalignment between the pre-built solution and specific business needs, which can impact operational efficiency.

DevRev’s Hybrid Solution: A New Paradigm, validated by the industry

Modern platforms like DevRev are pioneering a hybrid approach that combines the best of both worlds: lots of out-of-the-box functionality that relieves you of the boring infrastructure related tasks, combined with extensive customization capabilities to tune the platform to your needs.


This innovative approach offers several distinct benefits: core functionality is available immediately, while maintaining the flexibility to customize and extend the platform according to specific business requirements.


This is not just DevRev saying this: McKinsey's 2024 "State of AI" survey shows that 75% of enterprises prefer solutions that offer both out-of-the-box functionality and extensive customization capabilities. This confirms the trend we have seen, and it aligns perfectly with the hybrid approach offered by modern platforms like DevRev.

Conclusion

The traditional build-vs-buy dichotomy is becoming obsolete. Modern enterprises need AI-Native solutions that combine immediate functionality with the flexibility to adapt to specific business needs. Platforms that offer this hybrid approach, like DevRev, represent the future of enterprise CX solutions.

By choosing a hybrid solution, enterprises can accelerate their digital transformation while maintaining the ability to differentiate their customer experience - truly offering the best of both worlds.

Let me know if you have any questions or comments. Would love to discuss.

All the best

Rik

Monday, 26 May 2025

Impedance Matching for DevRev

 

New, innovative products like DevRev are fascinating. They involve immeasurable quantities of hard work by lots and lots of people to get to market. But once you get there, how do you make it Super Easy(™) to communicate and make your audience understand the fruits of all that work? That. Is. Not. Easy.


This past week I have spoken to so many people, friends and contacts old and new, about this fascinating new adventure that I have embarked on. And I have felt like I really had to iterate multiple times to better tune the message of what it is that we provide to our customers. Communicate. Fail. Rinse and repeat. Until it works. Until it clicks.


In order to find that “click”, I kept coming back to the idea of Impedance Matching. Those of you with an engineering background will immediately understand: you need to match your message to the audience that will be receiving it, or else … stuff will get lost :) … Too little detail and people will be frustrated – too much detail and they will be overwhelmed.


So that’s why I started to think about different “levels of communication” for different “levels of audiences” that would understand different “levels of messages” for our different DevRev offerings. Here’s what I came up with.


Industry level - We want to make work matter. We want to connect builders to customers. We want to help build the world’s most customer-centric organisations.

These may sound like different objectives - but they aren’t. Especially for people that have seen the complexities of building digital products in today’s day and age, it will probably ring true. How many software engineers never see the fruits of their work in the hands of a customer? How many of them have actually never seen or heard the voice of their customer, literally? That’s not a very satisfying place to be. What if we could shrink that distance between builders and customers? What if we could give builders and buyers, dev’s and rev’s, a true voice in the conversation?


Company level - We want to solve the problem of Information Asymmetry in digital product building organisations: different teams have different access to different information. This problem is the root cause for many Customer Experience problems: siloed teams lead to a frustrating client experience that effectively limits growth.

Great companies excel at customer focus. They are obsessed with their customers’ success, with the value that they derive from the product – and will walk through fire to help the customer get there. There is no substitute for that – but there are lots of barriers along the way. Information silos are real; in fact, they have gotten worse since the moment SaaS 1.0 made it dead easy for every department to automate its departmental processes with yet-another-cloud-platform. Where did the holistic view of the customer go? That’s right – it disappeared. And with it, so did the truly exceptional customer delight.


CxO level - We want to offer new growth opportunities, by enhancing the customer experience at a lower cost. This means breaking down silos between tools and teams, bringing the data together, and using the latest Agentic AI technology to automate the automatable.

At DevRev, we make this a reality, today, by integrating the different tools in your different departments in a comprehensive Knowledge Graph that connects all the dots. Using that data, we can offer holistic search that reduces the information asymmetry, automated workflows and analytical capabilities on top of that. Using AI, we automate the time-consuming tasks, and make the cross-cutting information accessible through conversational interfaces. 



Customer Support - we want you to be able to help more customers quickly and efficiently, using the full information that is needed to do so, and leveraging AI assistance whenever possible. 

Leveraging DevRev, customers have seen significant drops in resolution times, much higher call deflection rates, faster customer service and, as a consequence, a higher net promoter score. As a result, the company can turn support from a cost center into a revenue generator.


Product Management - we want to break down the barriers between devs and revs, and make sure that you have all the information to better tune your development and support resources to your most valuable product parts. 

Understanding what is wanted and needed by your customers is not trivial, especially when you have layers of Chinese whispers standing between the engineers and their customers. With DevRev’s knowledge graph, a holistic customer view becomes accessible and actionable. With AI, we can aggregate requirements and align your resources. We can tune in to the customer voice, and foster long term success.


Head of data - as digital product organisations become successful and their departments grow, they become more complex. To deal with that complexity, many organisations have implemented departmental tools to optimize departmental processes – and by doing so have lost the overall picture. SaaS 1.0 has created data silos – we now face a real data integration challenge.

Using patented “Airdrop” technology, DevRev has successfully implemented a bidirectional syncing system for most sources of enterprise data in the cloud. CRM data from HubSpot or Salesforce, customer support data from Zendesk, Freshdesk or ServiceNow, product data from Jira / GitHub – it all comes together in a fully synced-up Knowledge Graph. This repository is searchable and actionable, and can drive new business processes in real time using AI and AI Agents. It allows us to leverage the holistic view of the data as additional context for better human and AI decision making.


Head of AI - leveraging the potential of AI is on everyone’s radar. Not doing AI is not an option - you do NOT want to fall behind. But how does one operationalise this amazing technology, without spending an arm and a leg and months/years of development time? How do you limit the risk, and ensure compliance? How do you prevent hallucinations and reputational damage? 

Turns out you don’t have to do it all yourself. DevRev has spent hundreds of person-years in design and engineering time to build a product offering that does it for you, fast, and at a much lower cost. Leverage the benefits, but don’t run the risks. We help you implement AI efficiently and effectively, and together we will unlock its potential for your organization.


I am hoping that these messages are a bit clearer. We have an incredible story to tell, but it’s like so many beautiful stories: there is more than one storyline. By tuning the story to the listener, by matching the impedance, I have been trying to make it easier to understand - whatever your background.


Looking forward to many more discussions in the next couple of days, weeks, months to come. It’s going to be an incredible journey.


Rik


Wednesday, 15 January 2025

Pattern Recognition: The Powerhouse Behind LLMs and all Real-Time AI

Real-time applications of Artificial Intelligence (AI) and Machine Learning (ML) are transforming industries, from finance to entertainment. One of the great benefits of this age of Generative AI (large language models and the like) is that it has opened up people’s imagination. People can now see that many things – almost anything – are possible with today’s real-time AI capabilities.
In this article, I would like to show the similarities, and therefore the analogies, between generative AI and two of the most popular applications of real-time AI: real-time fraud detection and real-time recommender systems. While these applications seem distinct, they share some fundamental aspects: recognizing patterns, and then acting on those patterns. Some would joke that fraud detection systems only “recommend that the fraudster is put in jail”. It is therefore useful to reflect on what the fundamental shared core of these, and many other, use cases actually consists of. In this article, we will argue that pattern recognition is that core.

Pattern recognition is how AI and ML identify trends in historical data, understand those patterns, and then utilize these insights to forecast future behavior. This approach is crucial in both fraud detection and recommender systems, enabling them to deliver real-time, insightful and actionable results. Sometimes the action may be to offer a new, previously unknown product to a returning customer (in the case of a recommender system). And sometimes the action may be that all the alarms go off and the authorities are notified to forcefully lead the bad guys to a safe place (in the case of a fraud detection system).

Large language models use Pattern Recognition

Large language models (LLMs) leverage advanced pattern recognition to understand, learn, and generate language. Trained on vast amounts of text data, these models analyze patterns in word usage, sentence structure, context, and relationships between concepts. This training enables them to develop a probabilistic understanding of language, identifying how words and phrases typically interact. 
When faced with a question or prompt, an LLM uses this knowledge to predict the most contextually relevant and coherent response by evaluating patterns similar to those it has encountered during training. By iterating on this process across diverse contexts, LLMs excel at producing nuanced, human-like answers that align with the input’s meaning and intent. In effect, the recognised language patterns are used to predict the highest-quality, most accurate response.
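The following toy example illustrates the principle at its simplest: a bigram model that learns next-word probabilities from a tiny corpus. Real LLMs use deep neural networks over subword tokens, but the pattern-recognition core – predict the likeliest continuation from patterns seen in training – is the same.

```python
from collections import Counter, defaultdict

# Tiny "training set" of text.
corpus = "the customer opened a ticket the customer closed a ticket".split()

# Learn the pattern: which word tends to follow which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Return the most probable next word and its probability."""
    counts = bigrams[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict("the"))  # → ('customer', 1.0)
```

Everything an LLM does is, at heart, a vastly scaled-up version of this step: recognise a pattern, then predict what comes next.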

Fighting Fraud with Pattern Recognition

Fraud detection aims to identify and prevent fraudulent transactions or activities. This requires analyzing large historical datasets to spot subtle but repetitive patterns that indicate fraudulent behavior. For instance, an e-commerce platform might analyze user behavior, transaction details, network activity - or even a combination of all of the above - to identify suspicious patterns.

Consider a sudden surge in purchases from a new account using multiple credit cards. This pattern deviates from normal user behavior and raises a red flag for potential fraud. Real-time fraud detection systems leverage pattern recognition to detect such patterns and make instantaneous decisions, blocking new incoming transactions that display patterns similar to the fraudulent ones seen before.



Building Robust Fraud Detection Models Requires:
  • Data Quality: High-quality data is essential for training accurate fraud detection models. This data should accurately reflect user preferences and behaviors.
  • Feature Engineering: Identifying and selecting relevant features that capture fraudulent patterns is crucial. For example, analyzing ratings and their positive/negative rating distributions can help identify suspicious users.
  • Robust Algorithms: Fraud detection models need to be robust to adversarial attacks, where fraudsters try to manipulate the system. Graph representations of the interactions between fraudsters and systems, like a Graph Convolutional Network (GCN) for example, offer a promising approach to learning robust user representations for fraud detection.
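As a minimal illustration, here is a rule-based sketch that flags the “new account, many cards, sudden surge” pattern described earlier. The thresholds are invented for the example; production systems learn such patterns from historical data rather than hard-coding them.

```python
from datetime import datetime, timedelta

def is_suspicious(account_created, transactions, window=timedelta(hours=1),
                  max_cards=2, max_txns=5):
    """Flag a surge of multi-card purchases from a freshly created account.

    Illustrative thresholds only: 'new' = under a day old; a surge = more
    than max_txns purchases or more than max_cards distinct cards within
    the recent time window.
    """
    latest = max(t["time"] for t in transactions)
    new_account = latest - account_created < timedelta(days=1)
    recent = [t for t in transactions if latest - t["time"] <= window]
    cards = {t["card"] for t in recent}
    return new_account and (len(cards) > max_cards or len(recent) > max_txns)

now = datetime(2025, 1, 15, 12, 0)
txns = [{"time": now + timedelta(minutes=i), "card": f"card-{i % 4}"}
        for i in range(8)]  # 8 purchases across 4 cards, within minutes

print(is_suspicious(account_created=now, transactions=txns))  # → True
```

The same purchases from a month-old account would not trip the rule – the "pattern" is the combination of signals, not any single transaction.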

Recommending the Perfect Choice with Pattern Recognition

Recommender systems aim to predict user preferences and interests, and suggest items users might enjoy. This promotes more – and more profitable – interactions with the systems in question, which could be shopping carts, media portals, or any other system that benefits from a more intimate relationship between the provider and the user. These systems learn from past user interactions, such as purchases, ratings, or browsing history, to identify patterns that indicate user interests.

Imagine a user who frequently purchases children’s books and leaves positive reviews for authors with a specific writing style. A simple recommender system can recognize this pattern and recommend other children’s books by similar authors. Real-time recommender systems utilize predictive pattern recognition to provide up-to-date suggestions based on the latest user interactions. A sophisticated pattern-based recommender system would learn how specific times and days of the week (e.g. mornings just before going to kindergarten, or evenings just before bed), the specific computers from which the system is accessed (e.g. home vs. work computers), and real-time stock availability matter in making the best possible decisions and recommendations.

Effective Recommender Systems Depend on:
  • Understanding User Behavior: Accurately modeling user preferences and interests from historical data is essential.
  • Capturing Contextual Information: Incorporating contextual data, such as time, location, and device, can improve recommendation relevance. For instance, a travel recommender system can use location and weather data to suggest suitable destinations.
  • Exploiting Multimodal Data: Utilizing multimodal data, like text reviews and images, provides a richer understanding of user preferences. Deep learning techniques, particularly Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs), have proven effective in handling multimodal data.
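To make the core mechanism tangible, here is a minimal collaborative-filtering sketch that recommends items which co-occur with a user's past purchases. The data and scoring are illustrative only; real systems add the contextual and multimodal signals listed above.

```python
from collections import Counter

# Toy purchase history per user.
purchases = {
    "alice": {"dino book", "space book", "bedtime stories"},
    "bob":   {"dino book", "space book"},
    "carol": {"dino book", "bedtime stories"},
}

def recommend(user, k=2):
    """Score unseen items by how much the user overlaps with other buyers."""
    liked = purchases[user]
    scores = Counter()
    for other, items in purchases.items():
        if other == user:
            continue
        overlap = len(liked & items)     # similarity = shared purchases
        for item in items - liked:       # only recommend unseen items
            scores[item] += overlap
    return [item for item, _ in scores.most_common(k)]

print(recommend("bob"))  # → ['bedtime stories']
```

The pattern being recognised is simply "users who bought what you bought also bought X" – the same predict-from-history core as the other two use cases.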

The Similarities between LLMs, Fraud Detection and Recommender Systems


At the core of LLMs, real-time fraud detection and recommender systems lies predictive pattern recognition. All of these applications rely on analyzing past data, understanding patterns, and leveraging insights to predict future behavior and take action based on that prediction. This shared foundation highlights the versatility and power of the approach.

Consider the similarities:
  • Pattern Recognition: All three applications aim to detect and understand patterns in user behavior and data.
  • Predictive Modeling: All three utilize historical data to predict future actions, whether fraudulent transactions, preferred items, or the next token.
  • Real-time Analysis: All three operate in real time, analyzing incoming data streams and generating immediate results.

The Hopsworks AI Lakehouse: The Foundation for Real-time Pattern Recognition Systems

Real-time AI pattern recognition applications, like LLMs, fraud detection and recommender systems, thrive on high-quality, readily accessible data. The Hopsworks AI Lakehouse emerges as a powerful solution, enabling organizations to build and deploy these applications efficiently. The Hopsworks AI Lakehouse is the centralized repository for storing, managing, and analyzing data from diverse sources. It integrates the capabilities of a data lake and a machine learning operations (MLOps) platform, providing a unified platform for data-driven AI initiatives.

It seems obvious at this point, but the Hopsworks AI Lakehouse offers significant benefits for predictive pattern recognition applications of all kinds:
  • Feature Engineering and Model Training: the AI Lakehouse facilitates feature engineering and model training by providing tools for data transformation, feature extraction, and model development.
  • Centralized Data Management: the AI Lakehouse provides a single source of truth for all data, simplifying data access and management for all teams that are developing predictive models.
  • Scalability and Performance: the AI Lakehouse is engineered to handle massive data volumes and supports real-time data processing, essential for real-time AI applications.
  • Unified governance: the AI Lakehouse allows for governance of source data, and provides the required explainability and transparency for the end result: the predictive pattern recognition system.

Wrapping up

Predictive pattern recognition is a transformative force driving real-time AI applications like LLMs, fraud detection and recommender systems. The Hopsworks AI Lakehouse solution empowers organizations to leverage this power effectively, providing a robust foundation for building and deploying real-time AI solutions.

Hopsworks simplifies the process of:
  • Data Ingestion and Management: Streamline the process of ingesting data from multiple sources and managing it centrally.
  • Feature Engineering: Provide tools for efficient feature extraction and transformation, enabling the creation of powerful predictive models.
  • Model Training and Deployment: Facilitate model training and deployment, making it easier to build and operationalize real-time AI applications.
With the combination of powerful AI algorithms and robust infrastructure, businesses can unlock the full potential of predictive pattern recognition, leading to enhanced security, less fraud, improved user experiences, and increased business value.

I hope this was a useful clarification of how different AI use cases share specific characteristics that are all facilitated by the AI Lakehouse.

Let me know if you would like to discuss!

Cheers

Rik