Thursday, 15 January 2026

Stop looking for AI-shaped holes!


Why enterprises need to reimagine what’s possible

If you talk to enterprise leaders about artificial intelligence today, a familiar pattern emerges. Most conversations start with a very concrete question: Which problem should we solve with AI first?

That question feels sensible. Enterprises are built around prioritization, ROI, and risk management. But when it comes to generative AI, this framing can also be a trap. Starting with narrowly defined problems often leads to narrowly defined outcomes – and that’s not where the real value of this technology lies.

Generative AI is not just another optimization tool. It’s a new way of designing work, coordination, and decision-making. And enterprises that treat it as a solution in search of a predefined problem risk missing much larger opportunities for innovation.

The limits of “problem-first” AI thinking

In many organizations, AI initiatives begin by looking at an existing workflow and asking whether AI can make it faster or cheaper. Reduce average handling time. Deflect a percentage of tickets. Summarize documents more efficiently.

All of that is useful – but it assumes the underlying process is fundamentally sound.

History suggests that transformational technologies rarely deliver their full value that way. Email didn’t just speed up memos. Cloud computing didn’t just reduce data center costs. Smartphones didn’t simply digitize paper workflows. Each of these technologies changed how work was structured in the first place.

Generative AI belongs in that same category. Its real impact comes not from incremental improvement, but from rethinking how work flows across people, systems, and data.

So what can we do differently? What specific steps could we take to NOT look for the AI-shaped hole, but to truly use this technology to its full potential?

Here are a few practical shifts to consider.


1. Change the mechanics




For many enterprise stakeholders, “generative AI” still means one thing: a chatbot that looks and behaves like ChatGPT. You ask a question, it responds with text, and you iterate in real time.

That model is familiar – but it is only one possible interaction pattern, and often not the most effective one in an enterprise setting.

Much of enterprise work is asynchronous. Requests arrive through email, internal portals, or messaging tools. Context accumulates over time. Decisions are rarely made in a single back-and-forth interaction. I have written about this in the past – and stand by it.


The distinction between synchronous and asynchronous communication media opens the door to a different kind of innovation: changing the mechanics of interaction. AI agents that operate via email, messaging platforms, or background workflows often align far better with how work actually happens. In many cases, the AI doesn’t need a conversation at all – it needs access to context, ownership, and the ability to act.


Simply changing the communication medium can already unlock major gains in adoption, efficiency, and user satisfaction – without changing the underlying AI model.
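To make the asynchronous pattern concrete, here is a minimal sketch in Python. Everything in it is illustrative – the `Request` shape and the injected `classify`/`act`/`escalate` handlers are hypothetical stand-ins, not a real agent framework:

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    """One asynchronous unit of work – an email, a portal form, a chat message."""
    sender: str
    subject: str
    body: str
    context: dict = field(default_factory=dict)

def handle_async_request(req, classify, act, escalate, threshold=0.8):
    """Process one request end to end, with no live conversation required.

    classify, act, and escalate are injected callables, so the same loop
    works over email, messaging, or background workflows.
    """
    intent, confidence = classify(req)
    if confidence < threshold:
        # Only pull a human in when the agent is unsure.
        return escalate(req, reason="low confidence")
    return act(req, intent)
```

The point of the sketch is the shape, not the handlers: the agent owns the request, uses accumulated context, and acts – no chat window in sight.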

2. Taking the process helicopter view




To go further, enterprises need to zoom out. Instead of examining AI opportunities task by task, it helps to take a “process helicopter” view of the organization.

From 50,000 feet up, two questions become especially revealing.
  • Which processes are currently performed by humans, are repetitive in nature, and consume a significant amount of time?
  • Which processes are performed by humans, deal primarily with unstructured information – emails, documents, chat messages, requests – and also consume a lot of time?
These questions cut across roles and departments. They surface work that exists not because it creates value, but because humans have historically been the only way to connect systems, interpret context, and move work forward.

With the current state of AI technology, many of these process steps are highly automatable. More importantly, they are often redesignable. Platforms like DevRev illustrate this by treating conversations, work items, and systems of record as part of a single operational fabric, rather than separate silos.

And tools like DevRev’s Agent Studio take the grunt work out of designing these processes – making it super easy to “land” the helicopter and improve your processes.

What this looks like in practice

To make this more concrete, consider a few common enterprise scenarios.
  • In customer support organizations, a large amount of work happens before an issue is ever resolved. Tickets are triaged, clarified, routed, enriched with context from product and CRM systems, and handed off between teams. Traditionally, this coordination work is invisible but time-consuming. AI can take ownership of much of this orchestration – reading incoming conversations, creating and updating work items, linking them to the right customers and products, and escalating only when human judgment is truly required.
  • In product organizations, feedback from customers often arrives as unstructured input scattered across emails, support tickets, call transcripts, and chat logs. Humans manually summarize this information and attempt to translate it into roadmap decisions. AI systems can continuously ingest these signals, cluster them by theme, and connect them directly to product work – shortening the distance between customer conversations and engineering action.
  • In internal operations, think about onboarding, access requests, or policy questions. These are rarely complex, but they are highly repetitive and distributed across multiple systems. Instead of creating yet another portal or chatbot, AI agents can operate across existing channels, interpret intent, gather context, and execute actions across systems – while keeping humans in the loop only when exceptions arise.
In all of these cases, the real innovation is not “AI answering questions.” It’s AI owning work, understanding context, and moving processes forward end to end.
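As a toy illustration of that orchestration idea – the routing keywords and the `crm_lookup` function below are made up for the example, not any real product’s API:

```python
# Illustrative triage: route an incoming ticket by keyword pattern and
# enrich it with (hypothetical) CRM context before any human sees it.
ROUTES = {
    "billing": ["invoice", "refund", "charge"],
    "engineering": ["crash", "error", "bug"],
    "success": ["onboarding", "training"],
}

def triage(ticket_text, crm_lookup):
    text = ticket_text.lower()
    team = next(
        (team for team, kws in ROUTES.items() if any(k in text for k in kws)),
        "human_review",  # escalate only when no known pattern matches
    )
    return {"team": team, "customer": crm_lookup(ticket_text)}
```

A real system would replace the keyword table with a model, but the division of labour is the same: the AI does the routing and enrichment; humans handle the exceptions.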

3. Structured ways to innovate with AI



As a last point, I would like to remind you that thinking differently about AI requires more than inspiration – it requires method. There are many methods out there that can help you innovate in a structured way, and every professional will have their preference. But it is clear that you can take a deliberate approach: strategically structure your thought process and come up with different ways to innovate in your organisation using AI.

Let me give you a few examples of approaches you could take:
  • One useful approach is Jobs To Be Done. Instead of focusing on tasks or tools, teams ask what job a process is truly trying to accomplish. AI often makes intermediate steps unnecessary, allowing entire workflows to collapse.
  • Another powerful technique is process inversion. Teams map a workflow and then ask what would remain if humans were removed from it entirely. This quickly reveals which steps exist because of historical constraints rather than real value creation.
  • A third approach is zero-based process design. Teams design workflows from scratch under the assumption that AI can read, understand, and act on unstructured information by default. Humans are then reintroduced intentionally, rather than by habit.
Finally, some organizations flip the question entirely by starting with AI capabilities instead of problems. They map what AI can reliably do today – classification, reasoning, summarization, decision support, autonomous action – and explore where those capabilities could enable entirely new operating models.

A different question to ask

Perhaps the most limiting question in enterprise AI strategy is: Where can AI help us with our current problems? Don’t do that - it only leads to self-limiting options! A far more powerful one is: If AI were native to our organization, what would we never design this way again?

Enterprises that ask that question won’t just automate faster. They will operate differently. And over time, that difference – not incremental optimization – is where lasting competitive advantage will be built.

At DevRev, we’d like nothing better than to work together on answering that intellectually challenging question. Let’s engage – and instead of solving problems, work together on unlocking opportunities!

Cheers

Rik

Monday, 22 December 2025

The myth of the 95% AI failure rate

 

Why your AI project is actually failing – and how to fix the foundation


You’ve probably seen the headline by now: “95% of generative AI pilots fail to deliver measurable ROI.” It’s been included in just about every article and presentation on AI recently – and it’s most definitely sparked a familiar fear: that we’re living through another overhyped AI bubble.


But that number, taken at face value, misses the real story. Because what’s failing isn’t AI. It’s how organizations are trying to adopt it. Once you look past the headline and into what the research actually measured, a different picture emerges – one that’s far more practical, and far more fixable.


Here is a critical breakdown of what the research really showed, what it did not show, and the non-negotiable architectural mandate required to transition your projects from the volatile 95% bracket into the successful 5%.

1. What the research really showed: A failure of integration, not intelligence

The MIT Media Lab’s Project NANDA report (NANDA advocates for a decentralised web of AI agents, and is as such somewhat biased on the current state of AI) defined “failure” precisely: the inability of the AI pilot to transition beyond the proof-of-concept stage and achieve rapid revenue acceleration or a substantial, measurable return on investment (ROI).

In other words, these weren’t models that didn’t work. They worked just fine in controlled environments. However, they failed when they hit the real world.


The researchers describe this as a “learning gap” – the moment when an AI system leaves the lab and runs into fragmented data, unclear ownership, and workflows that were never designed for intelligence to plug into them.


So the takeaway isn’t “AI doesn’t work.”

It’s “we’re dropping AI into environments that aren’t ready for it.”


Why AI projects stumble: it’s usually not the tech


When AI initiatives stall, the real roadblocks are usually internal – specifically how we try to fit the technology into our existing organization and where we decide to spend the money.


First, let's talk about the Integration problem


Dropping a powerful but generic tool like a Large Language Model (LLM) into a complex company is like trying to use a foreign body in a human system – it just doesn't mesh. For these tools to actually work, we need more than just the software; we need to break down the departmental silos, establish clear governance, and define exactly how the new AI connects with our current systems. Without that framework, the tech is essentially an outsider that can't access the necessary context to be effective.


The second big issue is strategic misalignment – what I call the “Visibility Trap” – which points to a strategic mismatch in where the money gets allocated.


A large share of AI budgets flows toward visible functions like Sales and Marketing. But MIT’s own data shows the highest measurable ROI comes from back-office automation – reducing repetitive internal work and operational drag. The conclusion is clear and simple: we often fund what looks impressive instead of what actually compounds value.

Bottom line: the AI isn’t failing. The organization is failing to prepare the ground it’s meant to operate on.

2. The core obstacle: the data readiness crisis

If strategic failure kills ROI in the pilot phase, data fragmentation kills the rollout in the production phase. Gartner data supports this operational challenge, showing that only 48% of AI projects successfully transition into production.


The single biggest blocker to rolling out AI is the state of enterprise data, which we can dissect into three components:


  1. The unstructured data problem:
    Most enterprise data – emails, tickets, documents, logs – is unstructured. It lacks consistent context and labeling, making it unusable for reliable, auditable AI unless it’s heavily cleaned first.

  2. The fragmentation trap:
    Customer context is spread across CRMs, ticketing systems, engineering tools, and ERPs. Stitching this together with brittle API calls doesn’t scale. It slows systems down and introduces failure points.

  3. The trust gap:
    When leaders can’t trace answers back to source data – or predict how the system will behave – they won’t rely on it for real decisions. That’s when projects quietly get shelved.


Without good data, there are no good decisions. That eternal truth is why so many promising projects end up abandoned – the lack of good data means they are simply set up to fail.


What do the successful 5% do differently?

To succeed where 95% of companies stall, you must pivot from model-centric optimization to a data-centric architectural foundation. This requires securing three critical capabilities: Context, Trust, and Action.


This is where a platform like Computer by DevRev fundamentally re-architects the problem, ensuring your AI initiatives are grounded in a ready-made, unified data layer.

How DevRev addresses the foundation problem

Let’s walk through the three things that DevRev – as part of our years-long effort to architect Computer from the ground up for this purpose – does very differently.

1. Context: fixing fragmentation with a unified data layer

DevRev replaces fragile data federation with physical consolidation. Using bi-directional sync, data from CRM, support, engineering, and other systems is continuously pulled into a single store. That data is then structured as a relationship-rich knowledge graph – linking customers to tickets, tickets to code, and code to documentation.

This turns scattered, unstructured data into something AI can actually reason over.
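To illustrate what “relationship-rich” buys you, here is a toy graph sketch in plain Python. The node names and edge types are invented for the example – the point is that a cross-system question becomes a traversal instead of a chain of brittle API calls:

```python
# Toy knowledge graph: nodes plus typed edges linking customers to
# tickets, tickets to code, and code to documentation.
edges = [
    ("cust:acme", "filed", "ticket:101"),
    ("ticket:101", "fixed_by", "pr:42"),
    ("pr:42", "documented_in", "doc:release-notes"),
]

def neighbours(node, relation=None):
    """Direct links out of a node, optionally filtered by edge type."""
    return [dst for src, rel, dst in edges
            if src == node and (relation is None or rel == relation)]

def reachable(start):
    """Everything transitively linked to a starting node."""
    seen, stack = set(), [start]
    while stack:
        for m in neighbours(stack.pop()):
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return seen
```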

2. Trust: grounding AI in auditable queries

To address governance and hallucination concerns, DevRev grounds conversational AI in an auditable layer. Natural-language questions are translated into standard SQL queries against the unified data layer. That means answers are:

  • Predictable

  • Traceable

  • Verifiable against source data

This is what makes enterprise-grade trust possible.
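A minimal sketch of the audit idea, using an in-memory SQLite table as a stand-in for the unified data layer. The natural-language-to-SQL translation itself would be done by a model; here the query is given, so the traceability part stands out – the answer always travels with the exact query that produced it:

```python
import sqlite3

def audited_answer(question, sql, conn):
    """Return the answer together with the query that produced it,
    so reviewers can verify it against the source data."""
    rows = conn.execute(sql).fetchall()
    return {"question": question, "query": sql, "rows": rows}

# A tiny stand-in for the unified data layer.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO tickets VALUES (?, ?)",
                 [(1, "open"), (2, "open"), (3, "closed")])

answer = audited_answer(
    "How many tickets are open?",
    "SELECT COUNT(*) FROM tickets WHERE status = 'open'",
    conn,
)
```

Because the query is plain SQL over known tables, the same question asked twice yields the same answer – which is exactly what makes it predictable and verifiable.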

3. Action: closing the learning gap

Insight alone doesn’t deliver ROI. Because DevRev is a system of record, its AI agents are write-enabled. They don’t just answer questions – they can update tickets, log bugs, assign ownership, and execute changes inside real workflows. That’s how the “learning gap” closes: insight turns into action, and action turns into measurable operational impact.

The real lesson behind the 95%

The 95% failure rate isn’t a warning about AI, really. It’s a warning about treating AI like a plug-in instead of a system. GenAI success depends on foundations – context your AI can understand, trust your leaders can audit, and actions that move work forward automatically. When those are in place, AI stops being experimental – and starts compounding value.


If you want to dig deeper into DevRev, Computer, or any of the ideas here, I’m happy to continue the conversation.


All the best


Rik


Monday, 2 June 2025

The Enterprise Dilemma: Building vs. Buying AI-native CX Solutions


In today's ever-changing and evolving business landscape, enterprises face a critical decision when it comes to implementing AI-native CX solutions: should they build custom solutions from scratch, or buy existing platforms?

The Traditional Build Approach to AI in CX

Building custom CX solutions offers enterprises complete control over their implementation: they can fully customize the solution, align it perfectly with specific business processes, maintain proprietary intellectual property, and directly control feature development.

However, this approach comes with significant drawbacks, including high development and maintenance costs, extended time-to-market, and resource-intensive updates and improvements. Plus: whether you like it or not, there’s quite a bit of complexity to creating a solid AI solution – complexity that can only be met with significant skill!

The Traditional Buy Approach to AI in CX

Purchasing existing CX solutions provides several immediate benefits. Companies can deploy these solutions rapidly, leveraging proven functionality that has already been tested in the market and that has been engineered by highly specialized staff with very specific skills. These solutions come with regular updates and improvements managed by the vendor, and typically require a lower initial investment compared to building from scratch.

However, this approach also comes with notable limitations. Organizations often find themselves restricted by limited customization options and become dependent on the vendor's roadmap for new features and improvements. Additionally, there's a risk of misalignment between the pre-built solution and specific business needs, which can impact operational efficiency.

DevRev’s Hybrid Solution: A New Paradigm, validated by the industry

Modern platforms like DevRev are pioneering a hybrid approach that combines the best of both worlds: lots of out-of-the-box functionality that relieves you of the boring infrastructure-related tasks, combined with extensive customization capabilities to tune the platform to your needs.


This innovative approach offers several distinct benefits: core functionality is available immediately, while you maintain the flexibility to customize and extend the platform according to specific business requirements.


This is not just DevRev saying this: McKinsey's 2024 "State of AI" survey shows that 75% of enterprises prefer solutions that offer both out-of-the-box functionality and extensive customization capabilities. This confirms the trend we have seen, and it aligns perfectly with the hybrid approach offered by modern platforms like DevRev.

Conclusion

The traditional build-vs-buy dichotomy is becoming obsolete. Modern enterprises need AI-native solutions that combine immediate functionality with the flexibility to adapt to specific business needs. Platforms that offer this hybrid approach, like DevRev, represent the future of enterprise CX solutions.

By choosing a hybrid solution, enterprises can accelerate their digital transformation while maintaining the ability to differentiate their customer experience - truly offering the best of both worlds.

Let me know if you have any questions or comments. Would love to discuss.

All the best

Rik

Monday, 26 May 2025

Impedance Matching for DevRev

 

New, innovative products like DevRev are fascinating. They involve immeasurable quantities of hard work by lots and lots of people to get to market. But once you get there, how do you make it Super Easy(™) to communicate and make your audience understand the fruits of all that work? That. Is. Not. Easy.


This past week I have spoken to so many people, friends and contacts old and new, about this fascinating new adventure that I have embarked on. And I have felt like I really had to iterate multiple times to better tune the message of what it is that we provide to our customers. Communicate. Fail. Rinse and repeat. Until it works. Until it clicks.


In order to find that “click”, I landed on the idea of Impedance Matching. Those of you with an engineering background will immediately understand: you need to match your message to the audience that will be receiving it, or else … stuff will get lost :) … Too little detail and people will be frustrated – too much detail and they will be overwhelmed.


So that’s why I started to think about different “levels of communication” for different “levels of audiences” that would understand different “levels of messages” for our different DevRev offerings. Here’s what I came up with.


Industry level - We want to make work matter. We want to connect builders to customers. We want to help build the world’s most customer-centric organisations.

These may sound like different objectives - but they aren’t. Especially for people who have seen the complexities of building digital products in today’s day and age, it will probably ring true. How many software engineers never see the fruits of their work in the hands of a customer? How many of them have actually never seen or heard the voice of their customer, literally? That’s not a very satisfying place to be. What if we could shrink that distance between builders and customers? What if we could give builders and buyers, devs and revs, a true voice in the conversation?


Company level - We want to solve the problem of Information Asymmetry in digital product building organisations: different teams have different access to different information. This problem is the root cause for many Customer Experience problems: siloed teams lead to a frustrating client experience that effectively limits growth.

Great companies excel at customer focus. They are obsessed with their customers’ success, with the value that they derive from the product - and will walk through fire to help the customer get there. There is no substitute for that - but there are lots of barriers to get there. Information silos are real; in fact, they have gotten worse since SaaS 1.0 made it dead easy for every department to automate its departmental processes with yet-another-cloud-platform. Where did the holistic view of the customer go? That’s right - it disappeared. And with it, so did truly exceptional customer delight.


CxO level - We want to offer new growth opportunities, by enhancing the customer experience at a lower cost. This means breaking down silos between tools and teams, bringing the data together, and using the latest Agentic AI technology to automate the automatable.

At DevRev, we make this a reality, today, by integrating the different tools in your different departments in a comprehensive Knowledge Graph that connects all the dots. Using that data, we can offer holistic search that reduces the information asymmetry, automated workflows and analytical capabilities on top of that. Using AI, we automate the time-consuming tasks, and make the cross-cutting information accessible through conversational interfaces. 



Customer Support - we want you to be able to help more customers quickly and efficiently, using the full information that is needed to do so, and leveraging AI assistance whenever possible. 

Leveraging DevRev, customers have seen significant drops in resolution times, much higher call deflection rates, faster customer service and, as a consequence, a higher net promoter score. As a result, companies can turn support from a cost center into a revenue generator.


Product Management - we want to break down the barriers between devs and revs, and make sure that you have all the information to better tune your development and support resources to your most valuable product parts. 

Understanding what is wanted and needed by your customers is not trivial, especially when you have layers of Chinese whispers standing between the engineers and their customers. With DevRev’s knowledge graph, a holistic customer view becomes accessible and actionable. With AI, we can aggregate requirements and align your resources. We can tune in to the customer voice, and foster long term success.


Head of data - as digital product organisations become successful, as their departments grow, they become more complex. To deal with that complexity, many organisations have implemented departmental tools to optimize departmental processes - and by doing so we have lost the overall picture. SaaS 1.0 has created data silos - we now face a real data integration challenge.

Using patented “Airdrop” technology, DevRev has successfully implemented a bidirectional syncing system for most sources of enterprise data in the cloud. CRM data from HubSpot or Salesforce, customer support data from Zendesk, Freshdesk or ServiceNow, product data from Jira or GitHub - it all comes together in a fully synced Knowledge Graph. This repository is searchable and actionable, and can drive new business processes in real time using AI and AI agents. It lets us leverage the holistic view on the data as additional context for better human and AI decision-making.


Head of AI - leveraging the potential of AI is on everyone’s radar. Not doing AI is not an option - you do NOT want to fall behind. But how does one operationalise this amazing technology, without spending an arm and a leg and months/years of development time? How do you limit the risk, and ensure compliance? How do you prevent hallucinations and reputational damage? 

Turns out you don’t have to do it all yourself. DevRev has spent hundreds of person-years in design and engineering time to build a product offering that does it for you, fast, and at a much lower cost. Leverage the benefits, but don’t run the risks. We help you implement AI efficiently and effectively, and together we will unlock its potential for your organization.


I am hoping that these messages are a bit clearer. We have an incredible story to tell, but it’s like so many beautiful stories: there is more than one storyline. By tuning the story to the listener, by matching the impedance, I have been trying to make it easier to understand - whatever your background.


Looking forward to many more discussions in the next couple of days, weeks, months to come. It’s going to be an incredible journey.


Rik


Wednesday, 15 January 2025

Pattern Recognition: The Powerhouse Behind LLMs and all Real-Time AI

Real-time applications of Artificial Intelligence (AI) and Machine Learning (ML) are transforming industries, from finance to entertainment. One of the great benefits of this age of Generative AI (large language models and the like) is that it has opened up people’s imagination. People can now see that many things – almost anything – are possible with today’s real-time AI capabilities.
In this article, I would like to show the similarities – and therefore the analogies – between generative AI and two of the most popular applications of real-time AI: real-time fraud detection and real-time recommender systems. While these applications seem distinct, they share some fundamental aspects: recognizing patterns, and then acting on those patterns. Some would joke that fraud detection systems merely “recommend that the fraudster be put in jail”. It is therefore useful to reflect on what the fundamental shared core of these, and many other, use cases actually consists of. In this article, we will argue that pattern recognition is that core.

Pattern recognition is how AI and ML identify trends in historical data, understand those patterns, and then use these insights to forecast future behavior. This approach is crucial in both fraud detection and recommender systems, enabling them to deliver real-time, insightful and actionable results. Sometimes the action is to offer a new, previously unknown product to a returning customer (in the case of a recommender system). And sometimes the action is that all the alarms go off and the authorities are notified to forcefully lead the bad guys to a safe place (in the case of a fraud detection system).

Large language models use Pattern Recognition

Large language models (LLMs) leverage advanced pattern recognition to understand, learn, and generate language. Trained on vast amounts of text data, these models analyze patterns in word usage, sentence structure, context, and relationships between concepts. This training enables them to develop a probabilistic understanding of language, identifying how words and phrases typically interact. 
When faced with a question or prompt, an LLM uses this knowledge to predict the most contextually relevant and coherent response by evaluating patterns similar to those it has encountered during training. By iterating on this process across diverse contexts, LLMs excel at producing nuanced, human-like answers that align with the input’s meaning and intent. In effect, the recognised language pattern is used to predict the highest-quality, most accurate response.
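The idea scales down to a toy you can run: a bigram model that “learns” which word tends to follow which, and predicts by pattern frequency alone. The corpus below is deliberately tiny and bears no resemblance to a real LLM’s training data – but the prediction-from-patterns mechanism is the same in spirit:

```python
from collections import Counter, defaultdict

# A deliberately tiny "training corpus".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: the simplest possible language model.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Most probable next word, judged purely by observed patterns."""
    return follows[word].most_common(1)[0][0]
```

Here `predict("the")` returns `"cat"`, because “cat” followed “the” more often than any other word. A real LLM replaces the count table with billions of learned parameters, but it is still pattern frequency, generalized.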

Fighting Fraud with Pattern Recognition

Fraud detection aims to identify and prevent fraudulent transactions or activities. This requires analyzing large historical datasets to spot subtle but repetitive patterns that indicate fraudulent behavior. For instance, an e-commerce platform might analyze user behavior, transaction details, network activity - or even a combination of all of the above - to identify suspicious patterns.

Consider a sudden surge in purchases from a new account using multiple credit cards. This pattern deviates from normal user behavior and raises a red flag for potential fraud. Real-time fraud detection systems leverage pattern recognition to detect such patterns and make instantaneous decisions about blocking new incoming suspicious transactions that display similar patterns as the fraudulent ones that were seen before.
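That kind of rule can be sketched in a few lines. The thresholds and transaction shape below are invented for illustration – real systems learn such patterns from data rather than hard-coding them:

```python
from collections import defaultdict

def flag_suspicious(transactions, max_cards=2, window_minutes=10):
    """Flag accounts matching a known fraud pattern: a burst of
    purchases on many distinct cards within a short time window."""
    cards = defaultdict(set)
    times = defaultdict(list)
    flagged = set()
    for t in transactions:
        acct = t["account"]
        cards[acct].add(t["card"])
        times[acct].append(t["minute"])
        burst = max(times[acct]) - min(times[acct]) <= window_minutes
        if len(cards[acct]) > max_cards and burst:
            flagged.add(acct)
    return flagged
```

A production system would score each transaction with a trained model instead of a fixed rule, so that new fraud patterns similar to old ones are caught too.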



Building Robust Fraud Detection Models Requires:
  • Data Quality: High-quality data is essential for training accurate fraud detection models. This data should accurately reflect user preferences and behaviors.
  • Feature Engineering: Identifying and selecting relevant features that capture fraudulent patterns is crucial. For example, analyzing ratings and their positive/negative rating distributions can help identify suspicious users.
  • Robust Algorithms: Fraud detection models need to be robust to adversarial attacks, where fraudsters try to manipulate the system. Graph representations of the interactions between fraudsters and systems, like a Graph Convolutional Network (GCN) for example, offer a promising approach to learning robust user representations for fraud detection.

Recommending the Perfect Choice with Pattern Recognition

Recommender systems aim to predict user preferences and interests, and suggest items users might enjoy. This promotes more frequent and more profitable interactions with the systems involved – which could be shopping carts, media portals, or any other system that benefits from a more intimate relationship between provider and user. These systems learn from past user interactions, such as purchases, ratings, or browsing history, to identify patterns that indicate user interests.

Imagine a user who frequently purchases children’s books and leaves positive reviews for authors with a specific writing style. A simple recommender system can recognize this pattern and recommend other children’s books by similar authors. Real-time recommender systems utilize predictive pattern recognition to provide up-to-date suggestions based on the latest user interactions. A sophisticated pattern-based recommender system would also learn how specific times and days of the week (e.g. mornings just before going to kindergarten, or evenings just before bed), the specific computer from which the system is accessed (e.g. home vs. work), and real-time stock availability matter in making the best possible decisions and recommendations.
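A deliberately tiny sketch of context-aware recommendation – the interaction log, item names, and context labels are all made up for the example:

```python
from collections import Counter

# (user, item, context) interaction log — purchases plus when they happened.
history = [
    ("u1", "bedtime-stories", "evening"),
    ("u1", "abc-book", "morning"),
    ("u2", "bedtime-stories", "evening"),
    ("u2", "lullaby-book", "evening"),
]

def recommend(user, context):
    """Suggest the most popular item in this context that the user
    has not bought yet — context is part of the recognized pattern."""
    owned = {item for u, item, _ in history if u == user}
    scores = Counter(item for _, item, c in history
                     if c == context and item not in owned)
    return scores.most_common(1)[0][0] if scores else None
```

Even this toy captures the point of the paragraph above: the same user gets different suggestions in the morning and in the evening, because context changes which patterns apply.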

Effective Recommender Systems Depend on:
  • Understanding User Behavior: Accurately modeling user preferences and interests from historical data is essential.
  • Capturing Contextual Information: Incorporating contextual data, such as time, location, and device, can improve recommendation relevance. For instance, a travel recommender system can use location and weather data to suggest suitable destinations.
  • Exploiting Multimodal Data: Utilizing multimodal data, like text reviews and images, provides a richer understanding of user preferences. Deep learning techniques, particularly Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs), have proven effective in handling multimodal data.

The Similarities between LLMs, Fraud Detection and Recommender Systems


At the core of LLMs, real-time fraud detection, and recommender systems alike lies predictive pattern recognition. All three applications rely on analyzing past data, understanding patterns, and leveraging those insights to predict future behavior and act on that prediction. This shared foundation highlights the versatility and power of the approach.

Consider the similarities:
  • Pattern Recognition: All three aim to detect and understand patterns in user behavior and data.
  • Predictive Modeling: All three utilize historical data to predict future actions, whether fraudulent transactions, preferred items, or the next word in a sentence.
  • Real-time Analysis: All three operate in real time, analyzing incoming data streams and generating immediate results.

The Hopsworks AI Lakehouse: The Foundation for Real-time Pattern Recognition Systems

Real-time AI pattern recognition applications, like LLMs, fraud detection and recommender systems, thrive on high-quality, readily accessible data. The Hopsworks AI Lakehouse emerges as a powerful solution here, enabling organizations to build and deploy these applications efficiently. It is the centralized repository for storing, managing, and analyzing data from diverse sources, integrating the capabilities of a data lake and a machine learning operations (MLOps) platform into a unified platform for data-driven AI initiatives.

It seems obvious at this point, but the Hopsworks AI Lakehouse benefits predictive pattern recognition applications of all kinds. It offers:
  • Feature Engineering and Model Training: the AI Lakehouse facilitates feature engineering and model training by providing tools for data transformation, feature extraction, and model development.
  • Centralized Data Management: the AI Lakehouse provides a single source of truth for all data, simplifying data access and management for every team developing predictive models.
  • Scalability and Performance: the AI Lakehouse is engineered to handle massive data volumes and supports real-time data processing, essential for real-time AI applications.
  • Unified Governance: the AI Lakehouse allows for governance of the source data and provides the required explainability and transparency for the end result: the predictive pattern recognition system.
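To make the feature engineering bullet concrete, here is a platform-agnostic sketch of the kind of real-time feature such a pipeline computes - note this is plain Python with invented sample data, not the actual Hopsworks API: a classic fraud-detection feature counting a user's transactions in the last hour.

```python
from datetime import datetime, timedelta

# Invented sample event stream: (user, timestamp, amount).
events = [
    ("u1", datetime(2025, 1, 1, 12, 10), 40.0),
    ("u1", datetime(2025, 1, 1, 12, 20), 90.0),
    ("u2", datetime(2025, 1, 1, 12, 30), 15.0),
    ("u1", datetime(2025, 1, 1, 12, 45), 300.0),
]

def txn_count_last_hour(user, now, events):
    """Feature: number of the user's transactions in the past hour."""
    cutoff = now - timedelta(hours=1)
    return sum(1 for u, ts, _ in events if u == user and cutoff < ts <= now)

now = datetime(2025, 1, 1, 13, 0)
features = {u: txn_count_last_hour(u, now, events) for u in ("u1", "u2")}
print(features)  # u1 made 3 transactions in the last hour, u2 made 1
```

In a feature store setup, features like this are computed once over the central data, stored under a shared name, and then served identically to model training and to real-time inference, which is exactly the "single source of truth" benefit listed above.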

Wrapping up

Predictive pattern recognition is a transformative force driving real-time AI applications like LLMs, fraud detection and recommender systems. The Hopsworks AI Lakehouse solution empowers organizations to leverage this power effectively, providing a robust foundation for building and deploying real-time AI solutions.

Hopsworks simplifies the process of:
  • Data Ingestion and Management: Streamline the process of ingesting data from multiple sources and managing it centrally.
  • Feature Engineering: Provide tools for efficient feature extraction and transformation, enabling the creation of powerful predictive models.
  • Model Training and Deployment: Facilitate model training and deployment, making it easier to build and operationalize real-time AI applications.
With the combination of powerful AI algorithms and robust infrastructure, businesses can unlock the full potential of predictive pattern recognition, leading to enhanced security, less fraud, improved user experiences, and increased business value.

I hope this was a useful clarification of how different AI use cases share specific characteristics that are all facilitated by the AI Lakehouse.

Let me know if you would like to discuss!

Cheers

Rik

Friday, 6 December 2024

Can you elevate your pitch with AI?



Working at Hopsworks has been a great experience for many reasons, but one of the main attractions for me personally has been, and still is, the proximity it offers me to some of the most exciting IT developments of our lifetime: the rise of Artificial Intelligence across countless business use cases.

Of course, much of that interest and fascination is fueled by the impressive achievements of Large Language Models (LLMs) and their applications. LLMs are such powerful tools - in capable hands, of course - that they can offer massive productivity enhancements and, therefore, open up new fields of application.

In my own daily work, I use LLMs (either Google's Gemini or OpenAI's various ChatGPT-based systems) very regularly - increasingly so. I have found them to be superbly useful tools for writing, summarizing, coding and, in general, learning. And recently I had a couple of amazing experiences that have simply been too good not to share. One of them I already wrote about: using ChatGPT as an interactive role-playing agent to practice objection handling. It is an astonishing experience.

But here's another one. I recently tried to generate a short "Elevator Pitch" for Hopsworks, which goes something like this:
The Hopsworks AI Lakehouse is unique: it provides organisations like yours with the data infrastructure for your Machine Learning systems, allowing you to streamline all your MLOps tasks, teams and processes quickly and efficiently. With the AI Lakehouse, all your stakeholders benefit. First, your individual data scientists, data engineers, and machine learning engineers benefit, because they can work with the same consistent operational infrastructure for all of their tasks. They save precious time by not having to integrate the infrastructure themselves, and can spend more time on their actual day jobs. Second, your data science or machine learning team leader wins, because the AI Lakehouse makes the team more efficient: they can do more with less and contribute more and better results back to the business. Third and last, your governance team wins, because the centralized infrastructure is much easier to govern, making compliance with the latest and upcoming AI regulations much easier. This is how Hopsworks makes the booming AI application space much more valuable and attainable for your organisation. 
I wanted to figure out a way to customize this "Pitch" for different potential prospects, and see if I could use AI tools to do so. So I tried a bunch of tools and found that they all have different strengths and weaknesses. The voice synthesis of ElevenLabs was clearly the best and most flexible around, but Google Vids also offered some amazing capabilities and could get me some impressively nice results with very little effort.

So: let me show you some of the results. Here's a Youtube playlist with some of the videos that I generated:



I thought that was pretty cool, but... I was also pretty underwhelmed by the lack of intonation and variation delivered by these AI voices. They are good - way better than the robo-voices of yesteryear - but they are nowhere near the quality of a real, human voice. To try and prove that (with my limited acting/voiceover skills), here's how I would deliver the same pitch:



There you go. I think it was amazing to see how far the technology has gotten already, and how easy it has become to make custom pitches for specific environments in a fairly automated way. But it's also pretty clear that we still have a way to go and that for now, personal and human content will stand out pretty clearly.

Hope that was a useful experiment. As always, I look forward to your comments and reactions!

Cheers

Rik

Monday, 2 December 2024

Training yourself with ChatGPT

Here's something I want to share. I have been using OpenAI's ChatGPT for some personal training, and I have also been sharing this with our Hopsworks team. One of the unbelievably cool things you can do with it is role-play specific topics. For example: Objection Handling - i.e. getting better at dealing with some of the objections that a prospect might throw at you. Let me give you a completely hypothetical example: I am going to try to handle the objections that a salesperson for a solar panel company might get from one of their prospects.

Role-playing part 1: Setting the scene

In ChatGPT, you can set the scene by explaining the type of situation that you are in: the product that you are selling, and the prospect that you are dealing with.

It will then respond with some very detailed guidance on the objections that you may encounter.

You can find the entire overview of all the objections in this chat transcript. The key is that at the end of the overview of all possible objections and counter-arguments, ChatGPT basically says "Would you like to role play this?"

Role-playing part 2: going back and forth


The coolest thing, then, is that the role play is not WRITTEN - it's ORAL. You literally talk to ChatGPT, it acts out the role of the prospect, and you can practice your handling of the prospect's objections like a tried and tested salesperson. Afterwards, you get a nice little transcript of the entire conversation, of course, for review.
Here's a little clip of the way I was addressing one particular concern:


I have found this method extremely useful and interesting. It's like having an endlessly patient, unemotional and always available teacher at your fingertips. I liked it a lot!

Hope you thought this was a useful article - looking forward to your feedback.

Cheers

Rik