Showing posts with label google. Show all posts

Friday, 6 December 2024

Can you elevate your pitch with AI?

 


Working at Hopsworks has been a great experience for many reasons, but one of the main attractions for me personally has been, and still is, the proximity it offers me to some of the most exciting IT developments of our lifetime: the rise of Artificial Intelligence across countless business use cases.

Of course, much of that interest and fascination is fueled by the impressive achievements of Large Language Models (LLMs) and their applications: in capable hands, LLMs are such powerful tools that they can offer massive productivity enhancements and, as a consequence, open up entirely new fields of application.

In my own daily work, I use LLMs (either Google's Gemini or OpenAI's different ChatGPT-based systems) very regularly - increasingly so. I have found them to be superbly useful tools for writing, summarizing, coding and, in general, learning. And recently I had a couple of amazing experiences that have simply been too good not to share. One of them I already wrote about: using ChatGPT as an interactive role-playing agent to practice objection handling. It is a baffling experience.

But here's another one. I recently tried to generate a short "Elevator Pitch" for Hopsworks, which goes something like this:
The Hopsworks AI Lakehouse is unique: it provides organisations like yours with the data infrastructure for your Machine Learning systems, allowing you to streamline all your MLOps tasks, teams and processes quickly and efficiently. With the AI Lakehouse, all your stakeholders benefit. First, your individual data scientists, data engineers and machine learning engineers benefit, because they will be able to work with the same consistent operational infrastructure for all of their tasks. They will save precious time by not having to integrate the infrastructure themselves, and can spend more time on their actual day jobs. Second, your data science or machine learning team leader will win, because the AI Lakehouse will make the team more efficient, allowing them to do more with less and contribute more and better end results back to the business. Third and last, your governance team will win, because the centralized infrastructure will be much easier to govern, making compliance with current and upcoming AI regulations much easier. This is how Hopsworks makes the booming AI application space much more valuable and attainable for your organisation.
I wanted to figure out a way to customize this "Pitch" for different potential prospects, and see if I could use AI tools to do so. So I tried a bunch of tools, and found that they all have their different strengths and weaknesses. I found that the voice synthesis of ElevenLabs was clearly the best and most flexible around, but then also found that Google Vids offered some amazing capabilities, and could get me some crazy nice results super easily.

So: let me show you some of the results. Here's a Youtube playlist with some of the videos that I generated:

 


I thought that was pretty cool, but... I was also pretty underwhelmed by the lack of intonation and variation delivered by these AI voices. They are good - way better than the robo-voices of yesteryear - but they are nowhere near the quality of a real, human voice. To try to prove that - with my limited acting and voiceover skills - here's how I would deliver the same pitch:



There you go. I think it was amazing to see how far the technology has gotten already, and how easy it has become to make custom pitches for specific environments in a fairly automated way. But it's also pretty clear that we still have a way to go and that for now, personal and human content will stand out pretty clearly.

Hope that was a useful experiment. As always, I look forward to your comments and reactions!

Cheers

Rik

Saturday, 24 April 2021

Making sense of the news with Neo4j, APOC and Google Cloud NLP

Recently I was talking to one of our clients who was looking to put in place a knowledge graph for their organisation. They were specifically interested in better monitoring and making sense of the industry news around their organisation. There are a ton of solutions to this problem, and some of them are really simple, off-the-shelf toolsets that you could implement just by handing over your credit card details. No doubt that could be an interesting approach, but I wanted to demonstrate to them that it could be much more interesting to build something - on top of Neo4j. I figured it really could not be too hard to create something meaningful and interesting - so I whipped out my Cypher skills and started cracking to see what I could do. Let me take you through that.

The idea and setup

I wanted to find an easy way to aggregate data from a particular company or topic, and import that into Neo4j. Sounds easy enough, and there are actually a ton of commercial providers out there that can help with that. I ended up looking at Eventregistry.org, a very simple tool - that includes some out of the box graphyness, actually - that allows me to search for news articles and events on a particular topic.

So I went ahead and created a search phrase for specific article topics (in this case "Database", "NoSQL", and "Neo4j") on the Eventregistry site, and got a huge number of articles (46k!) back. 
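To sketch what getting those articles into Neo4j could look like: assuming the Event Registry results are available as JSON at some URL, and that each article carries fields along the lines of a uri, title, date and source (the exact field names here are my assumptions, not gospel - check them against your actual response), APOC's apoc.load.json procedure can stream them straight into the graph:

```cypher
// Hedged sketch: import Event Registry articles with APOC.
// $url and the article field names are assumptions - adjust
// them to the actual shape of your Event Registry response.
CALL apoc.load.json($url) YIELD value
UNWIND value.articles.results AS a
MERGE (art:Article {uri: a.uri})
  SET art.title = a.title,
      art.date  = a.date
MERGE (s:Source {name: a.source.title})
MERGE (art)-[:PUBLISHED_BY]->(s);
```

Using MERGE on the article uri and the source name keeps the import idempotent, so re-running the query on overlapping result pages won't create duplicates.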

Tuesday, 25 April 2017

Autocompleting Neo4j - part 4/4 of a Googly Q&A

In the first, second and third posts in this series, I got round to finally answering some of the more interesting "frequently asked questions" that Google seems to be getting on the topic of Neo4j.
Today, we'll continue with the last part of that Q&A, and answer two more questions which - funnily enough - are kind of related. They both deal with the query language that people use to interact with their graph database. Neo4j has been pioneering openCypher, of course, but clearly there are alternatives out there - and people need to make an informed choice between query languages.

Monday, 24 April 2017

Autocompleting Neo4j - part 3/4 of a Googly Q&A

In the first and second posts in this series, I explained and started to explore some of the more interesting "frequently asked questions" that seem to surround Neo4j on the interwebs.
Today, we'll continue that journey, and talk about Lucene, transaction support, and SOLR. Should be fun!

2. Does Neo4j use Lucene

This one is a lot simpler to answer - luckily - than the scale question that we tackled in the previous post. The answer is: YES, Neo4j does indeed leverage the (full-text) indexing capabilities of Lucene to create "graph indexes" on specific node-label-property combinations.
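As a minimal sketch of what that looks like in practice - using the Neo4j 3.x Cypher syntax that was current at the time of writing, and an illustrative label/property of my choosing:

```cypher
// Neo4j 3.x syntax: create a schema index on a
// label/property combination, backed by Lucene.
CREATE INDEX ON :Person(name);

// Queries that match on that label/property combination
// can then use the index for fast lookups:
MATCH (p:Person {name: "Rik"})
RETURN p;
```

The index is used transparently: you don't reference it in the query, the planner picks it up whenever a MATCH filters on the indexed label and property.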

Friday, 21 April 2017

Autocompleting Neo4j - part 2/4 of a Googly Q&A

So in the previous post, I explained my plan of doing a series of blogposts around the most frequently asked Google questions as recorded and suggested by Google's Autocomplete feature.
We'll start this week with the most asked question of all - which I get all the time from users and customers - and it's the inevitable "scale" question. Let's do this.

1. Does Neo4j Scale

Let's start at the beginning, with the first question that lots of people ask: "Does Neo4j scale?" Interesting. It should not surprise anyone in an age of "big data", right? Let's tackle that one.


To me, this is one of the trickiest and most difficult things to answer - for the simple reason that "to scale" can mean many different things to many different people. However, I think there are a couple of distinct things that people mean by the question - at least, that's my experience. So let's try to go through those - noting that this is by no means an exhaustive discussion of "scalability" - just my €0.02.

Thursday, 20 April 2017

Autocompleting Neo4j - part 1/4 of a Googly Q&A

As you can probably tell from this blog, I have been working in the wonderful world of graphs for quite some time now - Neo4j remains one of the coolest and most inspiring products I have ever seen in my 20-odd years in the IT industry, and it certainly has been a thrill to be part of so many commercial and community projects around the technology in the past 5 years. Not to mention the wonderful friends and colleagues that I have found along the way.

One thing that does keep on amazing me in working with Neo4j is the never-ending
  • stream of use cases, industries and functional domains where graphs and graph databases can be useful
  • stream of new audiences that we continue to educate and inform on the topic. Every time we do a meetup or an event, we seem to tap a new source of people that are just starting their journey into the wonderful world of graphs - and that we get to talk to and work with along the way.
When dealing with these new audiences, it's also pretty clear that we... keep on having the same types of conversations time and time again. Every new graphista that joins the community is asking the same or similar kinds of questions... and most likely, they are going to google for answers.

This leads me to the topic of this blogpost, which is both fun and serious at the same time: we are going to try and autocomplete neo4j :) ...

Autocompleting? What's that again?

When we talk about autocomplete, we talk about this amazing technology that Google has built into its search functionality, which completes your search query as you type - oftentimes "guessing" what you will most likely be looking for before you have even thought about it... it can be pretty interesting, even eerily scary sometimes...

Thursday, 30 July 2015

Hierarchies and the Google Product Taxonomy in Neo4j

Quite some time ago, I wrote a blogpost about using Neo4j for managing and calculating hierarchies. That post was later reused in my book, as it proved very useful for explaining one of the key use cases for Neo4j: Impact Analysis and Simulation. So it should be pretty clear by now that HIERARCHIES ARE GRAPHS, right? I think so :) ...

Hierarchical Product Taxonomy

Recently, I was preparing for a very cool brown-bag session at a client's offices, and I wanted to include a demonstration around product taxonomies. These structures are typically presented as some kind of hierarchy/tree on many eCommerce websites - and are very well known to online users. So I wanted to find a taxonomy, and here, Google immediately came to the rescue. I found this page on the Google Merchant Center.

You can follow the link to the Excel file, and boom - there's your product Taxonomy for you.
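To give you a taste of where this is going: the taxonomy lists every category as a full " > "-separated path, so importing it as a hierarchy in Cypher essentially boils down to splitting those paths and MERGE-ing the parent-child links. A sketch - the file location and single-column layout are assumptions about how you've saved the export:

```cypher
// Sketch: import the Google product taxonomy as a hierarchy.
// Each line is assumed to hold one full category path, e.g.
// "Animals & Pet Supplies > Pet Supplies > Bird Supplies".
LOAD CSV FROM "file:///taxonomy.csv" AS line
WITH split(line[0], " > ") AS path
UNWIND range(0, size(path) - 2) AS i
MERGE (parent:Category {name: path[i]})
MERGE (child:Category {name: path[i + 1]})
MERGE (child)-[:PART_OF]->(parent);
```

Note that merging on the name alone would conflate categories whose leaf names repeat under different parents - in a real import you would key each Category on its full path instead.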

Thursday, 3 July 2014

Using LoadCSV to Import data from Google Spreadsheet

My colleague Rickard recently did a great graphgist on Elite:Dangerous trading. When I read his post, the first thing that struck me was that he had found a clever way to import data into Neo4j. And since I have been into data imports for a while, I decided to take it for a spin and write it up for you.

The mechanism uses two fundamental capabilities:

  • Google Spreadsheets have a great capability to export their data into CSV format. That export generates a unique URI from which you can download the CSV of a specific sheet in the spreadsheet. 
  • Neo4j's Load CSV capability can read data located at any URI. The data does not have to be local - it can be anywhere on the network.
So let's give this a try. 
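Putting those two capabilities together, a first smoke test can be as simple as pointing LOAD CSV at the sheet's export URI and eyeballing a few rows - the URL below is just a placeholder for the export URI we'll generate in a moment:

```cypher
// Smoke test: read a published Google Spreadsheet's CSV
// export directly over the network. Substitute the export
// URI of your own (publicly accessible) sheet for the
// placeholder URL below.
LOAD CSV WITH HEADERS FROM "https://docs.google.com/spreadsheet/pub?key=YOUR_KEY&single=true&gid=0&output=csv" AS row
RETURN row
LIMIT 5;
```

If that returns your rows as maps keyed by the sheet's header row, the plumbing works and you can move on to the real import.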

Preparing the Google Spreadsheet

What we want to do is to put some sample data into a spreadsheet first.
You can find that actual sheet over here. Clearly it's not a very big sheet - we are just trying to do a little test.

Now, in order to export this file to CSV, and to make that export accessible over the internet, we need two things: 
  1. the spreadsheet needs to be publicly accessible over the internet. Otherwise Google will ask you to authenticate first, and the Neo4j Load CSV process will not know how to do that.
  2. you need to generate the download URI of the CSV export. That is very simple too. First you do the export:


    And then you take a look at your browser download history to figure out what the URI of the download was. Easy:



    You can copy that URI from there (in this case it is this ugly thing!)- and then we move on to the next stage: importing into Neo4j.

Importing the data into Neo4j

That Import process is very simple now, with Load CSV. Here's the query that I wrote, which uses the URI of the CSV version of the spreadsheet (in green):

It's very simple. First I add the colours using a MERGE, then I connect the persons to their respective colours using a CREATE. Running that import gives me results in a matter of milliseconds, and with no intermediate steps:
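In case the screenshot is hard to read, here's a sketch of that query - note that the column names (Name, Colour) are my assumptions about the sheet's headers, and the URL is a placeholder for the real export URI:

```cypher
// Sketch of the import: MERGE the colours, then CREATE the
// persons and connect each one to their favourite colour.
LOAD CSV WITH HEADERS FROM "https://docs.google.com/spreadsheet/pub?key=YOUR_KEY&single=true&gid=0&output=csv" AS csv
MERGE (c:Colour {name: csv.Colour})
CREATE (p:Person {name: csv.Name})
CREATE (p)-[:LIKES]->(c);
```

MERGE ensures each colour node is created only once, while CREATE deliberately adds a new person node per row.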
And although this graph is very haphazard and not very interesting, I can query it straight away.


That's all there is to it really. Importing data was already way easier with LoadCSV, but thanks to Rickard, we can now do it straight from a Google Spreadsheet. Thanks Rickard!

Hope this was useful.

Cheers

Rik