Tuesday 27 October 2015

Podcast Interview with Luanne Misquitta, GraphAware

A few weeks ago I had a great conversation with Luanne Misquitta from GraphAware. Luanne has been part of our community for a long time, has written a lot on the Neo4j blog, and is just a generally lovely person to talk to. So we spent some time on a Skype call, and this is how that conversation went:

Of course, there's also a transcript of our conversation:
RVB: 00:01 Hello everyone. My name is Rik - Rik van Bruggen - from Neo Technology,  and here I am recording a podcast with someone from a very different part of the world - all the way from India, in Mumbai - Luanne Misquitta. Hi Luanne. 
LM: 00:17 Hi Rik, how are you? 
RVB: 00:18 I'm very well. Thanks for joining us. It's great to have someone from the Eastern part of the world joining us in the podcast [chuckles]. 
LM: 00:27 I'm glad to be here. 
RVB: 00:29 Yeah, super. Luanne, I think we've known each other for a couple of years. I first got to know you when you were writing about Flavor Networks, which I did a couple of blog posts about, as well. Actually, they were one of my most popular blog posts. How did you get into graphs, Luanne? Could you explain it to us? 
LM: 00:52 I've been working with graphs for about six years. I think that's close to the time there was this Neo4j challenge running to write an application on Heroku using Neo4j, and Flavorwocky was the application that I submitted, and eventually I won the challenge. Some time before that, I got into graphs because I was working for a company that had trouble managing the profile of a person. This was a people management company, so it dealt with all kinds of aspects - hiring, talent management, compensation, learning, all kinds of things. And central to this entire system was the concept of the profile of a person. Up until the time that I found Neo4j, it was modelled in a relational database, and at some point the table that stored data for the person profile was huge. It had over 200 columns, and most of them were null, and it was normalized and denormalized very frequently, and it just didn't make sense any more. So, I was looking for something to solve this problem and that's when I came across this thing called a graph database, and of course, that was Neo4j. At that time, there wasn't any Cypher, there wasn't the Neo4j server either, it was just good old embedded. 
RVB: 02:27 Good old embedded, yeah [chuckles]. 
LM: 02:30 And immediately I knew that was the thing to model this, because it was schema-free, there were direct connections between a person and everything he's done. So one of the problems we have is that when you're collecting information about a person's profile, there are no two profiles that are the same in terms of the kinds of attributes they capture on a person. Of course, there are the usual standard ones - your name and where you live - but there are things like gaps in employment history, there are gaps in their learning. There are different types of learning. You do your talent manage-- your talent section looks quite different depending on where you are, and what you are doing, in which company you are in. So these kinds of things are very hard to fit into rows and columns, and that's where I thought Neo4j fit in. Once I started with it, I have never looked back. It's really been the solution for most things. 
RVB: 03:27 Yeah, it's quite a common thing, right? You've got a really complex domain that is so difficult to model, store, and query in traditional relational systems, and then you take it to the graph world and a-- 
LM: 03:42 That's right. 
RVB: 03:42 --whole range of problems go away. 
LM: 03:44 That's right. 
RVB: 03:46 It's very, very cool. And how did you do the-- 
LM: 03:48 It wasn't just modelling the person. I'm sorry. 
RVB: 03:52 No, go for it. 
LM: 03:54 It wasn't just modelling the person, but just the fact that we could model it this way exposed a whole lot of insights that we did not think about earlier, and for the company that I was in that was really important. So, you weren't just interested in a model of a person and then spitting it back out to whoever is looking at your profile, but you wanted insights into potential jobs that this person might be good for, or if he's got a high flight risk, then who is going to replace him, what roles does he match. Is he in the wrong job at this time? Is he really sitting in your company working in a job that he hates, and he's blogging about other things? And the last one I mentioned was a real use case. I did a quick POC in about two hours one night, and I just picked up random data from one of our internal company social networks, and the first thing that surfaced was this guy who was sitting and doing Java development, quite miserable, but he was blogging about and doing sample applications on iOS. At the very same time, the very same company was looking for people to work in their new iOS team. And here he was sitting under their noses, and no one knew about him. So these were the kind of things you could source immediately, which staring at a table-- 
RVB: 05:19 You would never find it. 
LM: 05:20 Impossible. Almost impossible. 
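The kind of profile model Luanne describes maps naturally onto a graph. A minimal, purely hypothetical Cypher sketch (all labels and properties invented for illustration - not her actual model):

```cypher
// Hypothetical person profile: each person carries only the
// attributes and connections that actually apply to them.
CREATE (p:Person {name: "Asha"})
CREATE (p)-[:LIVES_IN]->(:City {name: "Mumbai"})
CREATE (p)-[:WORKED_AT {from: 2010, to: 2013}]->(:Company {name: "Acme"})
CREATE (p)-[:COMPLETED]->(:Course {name: "iOS Development"})
// No 200-column table: a person with no courses simply has no
// COMPLETED relationships, instead of a row full of nulls.
```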
RVB: 05:22 Totally right, very cool. So I'm hearing the modelling advantages but what did the flaring-- the flavor example have to do with that? That was an interesting one as well? How did you get into that one? Are you a chef? 
LM: 05:39 Well, I love food. No, I'm nowhere near a chef but I love food, I love cooking. And I was thinking of what am I going to submit to this contest? Most of the enterprise-y things were done and I was quite bored with enterprises at that time.
Then I thought about food, I like food. It was something that I had been reading about recently, actually a book called The Flavor Bible, which lists ingredients and what they pair with. It rests on the classic flavor triangle: two ingredients pair with a third, and you can then combine them and produce something that tastes really good. So I thought that is a good graph problem, and it was really simple, but it was a domain that I liked and I thought it was fun to work with. That's how Flavorwocky came about. 
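That pairing idea is tiny when expressed in Cypher. A hedged sketch of what such a model could look like (ingredient names and the PAIRS_WITH relationship are my own illustration, not necessarily the app's actual schema):

```cypher
// Two ingredients that each pair with a third can be combined.
CREATE (c:Ingredient {name: "chicken"}),
       (g:Ingredient {name: "garlic"}),
       (l:Ingredient {name: "lemon"}),
       (c)-[:PAIRS_WITH]->(g),
       (g)-[:PAIRS_WITH]->(l),
       (c)-[:PAIRS_WITH]->(l);

// Find the "triangles": third ingredients pairing with both of two others.
MATCH (a:Ingredient {name: "chicken"})-[:PAIRS_WITH]-(third),
      (b:Ingredient {name: "garlic"})-[:PAIRS_WITH]-(third)
RETURN third.name;
```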
RVB: 06:40 Luanne, recently - more recently, at least - you actually started working full-time with Neo projects at GraphAware, right? You're part of the GraphAware team-- 
LM: 06:49 That is right, I work for GraphAware now. Yeah. 
RVB: 06:53 How is that going for you [chuckles]? 
LM: 06:56 Very great. There can't be anything better than working with Neo4j all day. At the moment, I am part of the Spring Data Neo4j team, and as you know, we've just released our Spring Data Neo4j 4 in September. So it's going really well. We've produced some great stuff and-- 
RVB: 07:18 Very cool. I am a big fan. Last night I met up with Michal in London and we were doing some Czech beer tasting [chuckles]. It was an eventful evening. Very cool. 
LM: 07:33  What do you know? 
RVB: 07:35 No, don't get started on that one [chuckles]. That's going to be a very long podcast then. No. Let's talk about the future, Luanne. What does the future hold? You've been involved in some really exciting projects like, for example, Spring Data Neo4j. Where do you see this going? Can you share some of your perspectives? 
LM: 08:00 Well, I'll share with you what I've been noticing over the past couple of years and-- and as you know, I do the trainings for Neo4j in India as well, so I've been looking at various kinds of audiences coming through for at least two years now. And there has been a definite shift in attitude towards graph databases from the early days, where it was a real struggle to get people to understand why it's important. Although they got it, it was still something that they would really have to fight very hard to use in their organization. That has changed significantly over the last, well, even the last six months really, where you have people who already know about graph databases, and they can immediately see where it's going to fit into solving their problems. I think with the kind of reputation that Neo4j has, it's now becoming easier to get Neo4j used in these kinds of companies, and I'm not talking about startups, which pick up Neo4j very quickly, but the larger, mid-sized companies and enterprises. I think what I would like to see at some point, and fairly soon, is when graph databases become something that you just use. If you're planning a new project, it's very common to say, "Hey, we need a database," and no one will challenge you. You'll go and you'll pick up a database and you'll use it, typically an RDBMS. If you were to say, "Hey, we need a graph database," then it should be exactly that. Pick it up and use it. You shouldn't have to be debating for months over whether it's better than SQL or not and whether it fits or not. I'm really looking at that day as a defining moment for graph databases, where they are just used, and people know when to use them and why to use them, and there aren't any questions about should I or should I not, unless it's a really stupid use case. That's what I think would be the ultimate future for Neo4j and graph databases. 
RVB: 10:20 I'm looking forward to that day as well. And I think lots of us work towards that goal and it would be a great thing. What about the Spring Data stuff? Is that ready for prime time right now? Are you guys planning new stuff there? 
LM: 10:37 Yes, it is. The Spring Data Neo4j 4 that was released in September is, of course, ready. We are always planning new things. As you know, Spring Data Neo4j 4 currently supports the remote Neo4j server mode only, and it was written from the ground up to actually support that, so it's really fast. We've broken it up into-- it's not only Spring Data Neo4j 4. It actually depends on a new library called the Neo4j Object Graph Mapper. So, if you want really fast OGM and not Spring, you can use the OGM directly. If you're a Spring person, then, of course, Spring Data Neo4j uses the OGM under the covers. So there are a lot of features planned for both of those. Very shortly, one of them is support for embedded Neo4j, and a whole lot of the new protocol, which is [crosstalk] coming up in Neo4j. Support for those, as well as-- there is so much to do, but we are continuing to work on that and you should see some releases out pretty soon I hope. 
RVB: 11:51 Super. And I'm hoping that I'll see you at GraphConnect in San Francisco? 
LM: 11:55 I think you will, yeah. 
RVB: 11:57 Yeah, well, looking forward. That's super, great. All right, thank you for coming on the podcast, Luanne. It's been a great chat and I really appreciate it. 
LM: 12:06 Thank you Rik - it was great talking to you again. 
RVB: 12:08 Same here, and I look forward to seeing you on the West Coast. 
LM: 12:12 Yeah, soon. 
RVB: 12:14 Cheers, bye. 
LM: 12:15 Bye-bye.
Subscribing to the podcast is easy: just add the RSS feed or add us on iTunes! Hope you'll enjoy it!

All the best


Tuesday 20 October 2015

Counting down to GraphConnect San Francisco 2015

Yay! Tomorrow is GraphConnect - so it's going to be a big day for me, for sure. Meeting with lots of wonderful customers and community members - it's going to be a great event.

Now as you may remember from GraphConnect Europe earlier this year, I have a little habit of creating "schedule graphs" - essentially the conference schedule as a graph. We all know the "tabular schedule" of course:

But I - naturally - want to look at this data in a more likeable format, i.e. as a graph.

So I have a very simple way of doing that. I spent half an hour or so arranging the data into a Google Doc:

I then make it public and allow it to be downloaded as a CSV file, and then use Cypher to load the data into a shiny new Neo4j database. The import script is very similar to the one that I created for GraphConnect Europe, and is now also available on GitHub. Just copy and paste these statements into your Neo4j database, and you will have the schedule graph loaded locally in seconds. The result looks something like this:
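To give you a flavour of what such an import looks like, here is a minimal, hypothetical LOAD CSV sketch - the column names (Speaker, Talk, Room, Time) and the placeholder URL are invented for illustration; the real statements live in the GitHub repository:

```cypher
// Hypothetical schedule CSV; replace the placeholder with your
// sheet's public CSV export URL and match your own column names.
LOAD CSV WITH HEADERS FROM "<public-sheet-csv-url>" AS row
MERGE (s:Speaker {name: row.Speaker})
MERGE (t:Talk {title: row.Talk})
MERGE (r:Room {name: row.Room})
MERGE (ts:TimeSlot {time: row.Time})
MERGE (s)-[:PRESENTS]->(t)
MERGE (t)-[:IN_ROOM]->(r)
MERGE (t)-[:AT_TIME]->(ts)
```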

If you don't want to be loading the data locally, I have also created a little GraphGist page for you to play around with. You can find that over here.

Hope you enjoyed that, and I look forward to seeing you today/tomorrow/Thursday at one of the conference events!



Wednesday 14 October 2015

Stockholm meetup got a bit out of hand

Tonight we had a lovely meetup in lovely Stockholm with our friends at HiQ. They have a wonderful office and event area overlooking downtown Stockholm, a perfect setting to sit back and talk graphs. So we did. My friend and colleague David Montag did a wonderful talk, and I tried to tell people how to really NOT mess up their Neo4j project. The presentation is over here:

But somewhere along the way, I also started talking about my lack of a social life (so sad!!!) and the fact that I have a bit of fun with graph karaoke. I know - it's stupid. But then, about half an hour before my talk, someone said "Can you do that for "Roxanne"?" ... and I - being a big Police fan and all - thought: alright, Challenge Accepted.

So here's what I did:
  • I grabbed the lyrics from AZLyrics.
  • I put them into this Google Sheet.
  • I made the Google Sheet available to the public (important).
  • I downloaded that sheet as a CSV file, and grabbed the download URL.
  • And then I ran this Cypher query over it:
 LOAD CSV WITH HEADERS FROM "https://docs.google.com/a/neotechnology.com/spreadsheets/d/1WK-AKp-KNegaQ5-hbS79wvaNEj_SDj971iCuX9GMDSo/export?format=csv&id=1WK-AKp-KNegaQ5-hbS79wvaNEj_SDj971iCuX9GMDSo&gid=0" AS csv
 WITH csv.Sequence AS seq, csv.Songsentence AS row
 UNWIND row AS text
 WITH seq, reduce(t = tolower(text), delim IN [",",".","!","?",'"',":",";","'","-"] | replace(t, delim, "")) AS normalized
 WITH seq, [w IN split(normalized, " ") | trim(w)] AS words
 UNWIND range(0, size(words) - 2) AS idx
 MERGE (w1:Word {name: words[idx], seq: toInt(seq)})
 MERGE (w2:Word {name: words[idx+1], seq: toInt(seq)})
 MERGE (w1)-[r:NEXT {seq: toInt(seq)}]->(w2)
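Once loaded, you can read the word chain back out with a simple match - something like this (a sketch; the exact return shape is up to you):

```cypher
// List the word-to-word steps, grouped by the line (seq) they belong to.
MATCH (w1:Word)-[n:NEXT]->(w2:Word)
RETURN n.seq AS line, w1.name AS word, w2.name AS next_word
ORDER BY line;
```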

And that's it. Graph is ready. Next I find the song in Spotify, play it a few times to rehearse, and then we get this:

So there you go. I hope you enjoyed it as much as I did. And don't forget to register for GraphConnect next week!!!



Tuesday 13 October 2015

Podcast Interview with Chris Skardon, freelance developer

Here's an interview that got me really excited: Chris Skardon from Tournr recently took charge of the Neo4j .Net driver development, and that was a good enough reason for us to have a chat. Being a "4j" product, the Microsoft ecosystem has long felt a little underserved by the graph database community, and we are trying to make up for that. Hopefully, this interview will give it some of the credit that it deserves, and many more people will get "stuck in" to moving this environment forward. Here's our chat:

And of course here's the transcript of our conversation:
RVB: 00:02 Hello, everyone. My name is Rik, Rik Van Bruggen from Neo Technology and here I am again recording a lovely new episode for our Neo4j podcast series. Today, it's interesting, I'm joined here by someone from the UK, Chris Skardon from Tournr. Hey, Chris. 
CS: 00:21 Hey, Rik. How are you? 
RVB: 00:22 I'm very very well and yourself? 
CS: 00:24 I'm good, thank you. 
RVB: 00:24 Super. Super. Hey Chris, we've been interacting and have known each other for a couple of years already, and you've been in the Neo4j ecosystem for a while. But I was triggered to get you on this podcast because of the .Net client announcement. Right? 
CS: 00:41 Yes. Yeah. Exactly. 
RVB: 00:42 Absolutely. So tell us who you are, Chris, and what have you been doing with Neo for the past couple of years. 
CS: 00:49 Right. Well, I'm Chris. I'm a freelance developer. I spend pretty much all my time in the .Net framework. A couple of years ago a client came to me and asked me to have a look at how to draw up one of their systems for them. I spent some time going through various NoSQL databases to try and hunt out the best one for their environment. And came across Neo4j probably about version 1.6, I think. I've kind of stuck with it since then. 
RVB: 01:34 It's addictive. Isn't it [laughter]? 
CS: 01:36 Yeah. No. It's very addictive. I use lots of different databases at the same time and I'm not restricted to just one or the other, but I find myself gradually moving more and more towards just putting everything into Neo, because it works and does everything I need it to do. 
RVB: 01:55 Super. That's great to hear. What was the main attraction when you started going with it? What was the domain like or what was the main reason for going to Neo? 
CS: 02:07 Well, the domain is a large... The thing with databases is you can pretty much fit any domain into any database. 
RVB: 02:17 I agree. 
CS: 02:17 It doesn't really matter. You can always fudge something into it if you want to. What I found with Neo was that it just fitted around me, without me having to translate anything. I didn't have to write a set of translation tables in my SQL Server or start putting weird links in my document database to try and hook things around and find things. It was just automatically linked, and it makes navigating through the domain a lot easier. I mean, the domain in this aspect was-- it's very hard to describe really. It's basically a graph [laughter]. The graph database just fit it exactly as it is. When they described it to me, it was a graph. They drew it out on paper, it looked like a graph. Everything they've ever drawn has been graph-based. It just was what it should always have been. Neo was fast and straightforward, and it's worked well for me since then. 
RVB: 03:19 Super. So my second question on these podcasts is always, "Why did you get into graph databases?" So it's mostly around the modeling then for you. Right? The modeling fit that drove you there. 
CS: 03:34 Yeah. Yeah. I think the modeling fit and being able to query things. Obviously, starting off from 1.6, Cypher was fairly basic and you were still doing queries using Gremlin. 
RVB: 03:51 Those were the days. 
CS: 03:53 Yeah [laughter]. Back in the day. Gradually, over time it's moved much more towards Cypher, and Cypher's just a much nicer language to use. It's a lot clearer to read, though you can still easily get it wrong sometimes. In general, you can take something that's written in Cypher and just read it and it makes sense. You know what you're pulling out because you can see it. 
RVB: 04:22 Yeah. 
CS: 04:23 I think as Cypher gets more and more things added to it, it will just get better and better. 
RVB: 04:29 Yeah. That's good. Absolutely. So what about the .Net client? How and why did you get into that? That's probably an interesting one as well. 
CS: 04:38 Well, being a .Net developer, the first thing pretty much every .Net developer will do is fire up Visual Studio, create yourself a little console test app to hook against the database, and head into the NuGet world and do a quick search for Neo4j. So NuGet's our package manager - the equivalent for Ruby I think is Gems. I'm not sure what PHP or Python have, but it's just a package manager. You do a search for Neo4j and the top one that pops up is Neo4jClient. There are a couple of other small ones which are developed independently by other people, but generally I'm going to go with the biggest one, because it's the one that seems to have the most documentation and users on it, and is most likely to give me help when it goes wrong. 
RVB: 05:35 Yes. 
CS: 05:37 I started using it. After a little while, there were a few things you want to have in it, or there were a few bugs, and I issued a few pull requests. I started answering questions on Stack Overflow - firstly, to help myself learn how to do the different things that people are trying to do, and also to help the community, because the more people who can use it easily, the more people there are to evangelize, in a sense, and get it out there. Gradually, over time, I just took over. I started doing a few more pull requests and answering more and more on Stack Overflow. Then we hit a hiatus for a year or so where the original author Tatham had to step back because of many things, life generally. 
RVB: 06:24 Yes [laughter]. 
CS: 06:25 And-- 
RVB: 06:27 Life gets in the way. 
CS: 06:28 It does get in the way. 
RVB: 06:29 Yeah. 
CS: 06:30 We spent a lot of time re-routing the pull requests; really small pull requests were maybe getting pushed through fine. The big one which we missed for a while was transaction support. 
RVB: 06:41 Yes. 
CS: 06:43 The pull request that existed for that was massive. I mean, it was good, but it was such a big change that Tatham would never have had the time to do it. It kind of languished there for a while. 
RVB: 06:59 Yes. 
CS: 07:01 Then about four or five months ago, Tatham popped up on the project page and said "I don't have time. I'm looking to hand this over to someone." I had been doing a lot of helping out on Stack Overflow and stuff, so I said "I'll take over." So, long story short, I have. 
RVB: 07:25 Super. Yeah, I can't tell you how happy I and we are about that, because obviously it's a big development community and it's really important for us to have someone maintain and help with that. So thank you so much for doing that, I really appreciate it. So maybe just a final question here: where is this going, Chris? Where is this going from your perspective? How do you see graphs evolve in the .Net world in general, and how do you see the .Net client, the Neo4jClient, evolve in the next months and years? 
CS: 08:03 Sure. 
RVB: 08:04 Any comments on that? 
CS: 08:05 Yeah. Okay. The client itself - we'll start with that. I think that's going to keep pace with updates to the actual server itself. So 2.3 comes out soon, and hopefully we're able to cater for everything that's in there. Then when version 3 comes out, we're looking at - well, I'm looking at - adding the ability to use the Bolt serialization underneath it instead of just using REST. 
RVB: 08:32 Lovely. 
CS: 08:36 I'd like to get it to be as good as some of the other NoSQL databases are, in terms of .Net usage. That's just ending up adding a few niceties around it, and investigating whether something like a LINQ driver is feasible or even worthwhile doing - basically, seeing how it goes and keeping on pushing it and adding new bits into it and making it faster and better. In terms of graphs in the .Net world, hopefully more people will start to use it. I'm lucky - I work for myself, so I can help pick the databases. 
RVB: 09:24 Yes. 
CS: 09:24 And I don't have the problem of being weighed down by a big load of servers behind me. They all run SQL Server and they've paid for the licenses, so they are going to use their SQL Servers until their licenses run out. I can pick faster databases, different databases. I think .Net is moving that way now. People are starting to use different databases more and more. Hopefully, Neo fits in there well. It's a well-known graph database that performs well, and .Net interacts fine with it. I don't have any problems with it. I've actually gone through and converted one over the last couple of months from a document database to Neo4j. Aside from a drastic cut in code base-- 
RVB: 10:14 Yes. 
CS: 10:17 -- it hasn't been that hard to do. I'm very pleased with it. I think it gives me a lot more forward momentum. I can do a lot more with my projects now when they're based around the graph than I could do with a document. I have a lot less problems with them. It's a lot easier to do. 
RVB: 10:39 Super. Cool. Well, I mean, that's great input and great feedback, and I'm sure lots of people are excited to see the .Net client, but also Neo4j and the .Net world, evolve that way. Thanks again for your help. Good luck with Tournr and your projects [laughter] for different clients, and I think I'll see you and buy you a beer at GraphConnect. Right? 
CS: 11:06 Yeah. See you then. 
RVB: 11:08 I'm looking forward to that, Chris. Very cool. Thank you for coming on the podcast. Really appreciate it and I'll talk to you soon. 
CS: 11:15 Yes. Same to you too, Rik. 
RVB: 11:16 Bye. 
CS: 11:17 Bye.
Subscribing to the podcast is easy: just add the RSS feed or add us on iTunes! Hope you'll enjoy it!

All the best


Friday 9 October 2015

Graph Karaoke for the weekend: Return to the Moon

I have always been a fan of The National, and when Matt Berninger embarked on a new band thingy, I of course had to check it out. Now this song has been in my head all week - so I made a little "graph karaoke" for you guys - hope you enjoy!

See you at GraphConnect for some more of this stuff?


Tuesday 6 October 2015

Podcast Interview with Jesus Barrasa, Neo Technology

Over the past couple of months, we have been doing a lot of work at Neo4j to try to better explain the value of Neo4j to our prospects and customers. This has been a true team effort, with lots of engineers, marketeers and sales folks participating in articulating how complex technology can be used to add true value to business processes. And as we did that, we added some really talented people to our team. One of them is the person that I am interviewing in this particular podcast episode: Jesus Barrasa. Jesus has a lot of experience with graphs and even a (particular kind of) graph database - so going in I knew it was going to be an interesting chat. And guess what: it was. Listen on:

Here's the transcript of our conversation:
RVB: 00:00 Hello everyone. My name is Rik, Rik Van Bruggen from Neo and here I am again recording a Neo4j graph database podcast and my guest today is all the way in the UK. Jesus, hi Jesus. 
JB: 00:13 Hi Rik. How are you? 
RVB: 00:14 Hey. I'm always scared of pronouncing your name in the wrong way. I'm sorry. 
JB: 00:19 You did great. You did great [laughter]. I've heard much worse than that so that was brilliant. 
RVB: 00:24 Okay [chuckles]. Okay, Jesus, you just joined Neo a couple of months ago as a pre-sales engineer but you have a long-standing history with graphs. You did a PhD on the subject if I'm not mistaken, right? 
JB: 00:37 That's correct, yeah. It all started probably more than ten, nearly 15 years ago, so quite a while yeah. 
RVB: 00:43 Yeah. 
JB: 00:43 And you're right. It all started in the semantic technology space, in the RDF space, so that was the first time I was exposed to modeling data as graphs, and yeah, that's been a long story. After the PhD I worked for a company in London called Ontology, where we used graphs to solve problems for companies, mostly in the telecommunications sector. And yeah, as you say, two months ago I joined the field engineering team in the London office. 
RVB: 01:18 That's super. What was your PhD about exactly then? 
JB: 01:21 Well, at the time… I did model-- I mean, I formalized a way of translating relational schemas into ontologies. Ontologies are the way you represent metadata in the RDF model. I don't know if we will have time to talk about that, but yeah, it's actually an automated mapping between relational schemas and ontologies. That's what I-- 
RVB: 01:45 There's a lot of people looking at that I think, still today, I get that question quite often. 
JB: 01:50 Absolutely. 
RVB: 01:51 So another question because you've been working so long in the RDF world, what's the difference between the RDF semantic technology space and the property graph model of Neo4j - what's the key difference for you? 
JB: 02:04 All right. I'd say you can answer this question from two perspectives, because obviously RDF is a representation paradigm and there are the different implementations, the triple stores that you can find in the market. But as a model, I think the main thing they have in common is the fact that they both represent data as a graph. And that makes them very, very close to each other. The difference I would say is that RDF is as simple as it can be. It's only based on the notion of URIs to identify resources - or nodes, if you want to draw the parallel with a property graph - and there's this single construct called the triple: subject, predicate, object. And that's all you have to model your domain. And of course I think that's the biggest difference, because in the property graph model, you have nodes with properties, you have relationships with properties, and they have this brilliant thing that's whiteboard friendliness: the way you conceive it, the way you model it in your head, on your whiteboard, is exactly the way it's represented and stored physically. Whereas in RDF there's still this gap where you have to translate that into triples. Things that may sound simple, like giving a weight to a relationship - so there's a connection between Rik and Jesus because we work together, and you want to give a weight to this relationship. That's something that's completely natural in the property graph, but it's not something you can do directly on a triple, so you have to model this relationship probably as an intermediate resource. There's a bit of a gap, and I think that makes it sometimes less intuitive, sometimes a bit less human if you want. So that's the difference. 
RVB: 04:06 Well [crosstalk] actually that was sort of what I was hoping you would say because I've always been told that the difference is really on the predicate, the fact that it's so difficult to qualify a predicate in the RDF model. 
JB: 04:20 That's [crosstalk] exactly. Then there are other interesting things in RDF, the whole idea of being able to use the model itself to represent the metadata, the ontology and that gives you in certain cases some interesting powerful things you can do like querying all the data and the metadata at the same time. But I would say the biggest thing is more what they have in common and it's the fact that they look at data as a graph, a set of connected nodes. 
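The weighted-relationship example Jesus gives translates directly into Cypher. A minimal sketch (the names come from the conversation; the relationship type and property name are invented for illustration):

```cypher
// Property graph: the weight lives directly on the relationship.
MERGE (r:Person {name: "Rik"})
MERGE (j:Person {name: "Jesus"})
MERGE (r)-[w:WORKS_WITH]->(j)
SET w.weight = 0.9
// In plain RDF, the triple :Rik :worksWith :Jesus has nowhere to
// put the weight; you would have to reify the relationship as an
// intermediate resource instead.
```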
RVB: 04:49 What attracted you to the graph in the first place? Why did you get into this field in the first place if you don't mind me asking? 
JB: 04:55 Well, yeah, sure. I think there are two things. One was this incredible flexibility. At the time when I started, we didn't talk about NoSQL; that's a concept that was coined later. It's the idea of being able to start storing data without having to model it up front. I thought that was brilliant, and that gives you an incredible flexibility - this schemaless model, or at least implicit schema, depending on how you talk about it. This flexibility was one of the things, and the other one, I'd say, was the possibility of inferring new knowledge based on the information in your graph: you identify patterns and you can enrich your graph with new knowledge based on the data elements that you have. So I think probably these two things attracted me to this world, and I find them unbelievably useful and powerful. 
RVB: 05:51 You know, after your couple of months at Neo, what do you think are the most interesting use-cases that you've seen so far? Anything that jumps out? 
JB: 06:01 Sure. Well, I'm amazed-- the thing is, the great thing about Neo is how you can use and update your graph in real time, at what speed you can not only read it and query it but also keep it up to date. Being able to identify fraud rings, for example, is one of the cases that I like the most: being able to pick up the status of your accounts, your users, their information, the transactions that they're carrying out, and at the same time be able to detect the patterns that identify a fraud ring. That's one of the ones that I enjoy the most. 
RVB: 06:43 And do that in real time you mean? 
JB: 06:44 Exactly, the real time is the key thing and that's pretty impressive yeah. 
RVB: 06:48 It's not like with my experience a couple of weeks ago with my Amex card, I get a fraudulent transaction on Friday and I get a call on Monday [laughter]. 
JB: 06:57 Oh yeah, just in time right. 
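As a rough illustration of the fraud-ring idea discussed above - purely a sketch, not anything from the interview, with hypothetical account and identifier data - accounts that share identifiers such as a phone number or address form connected components in a graph, and a component linking several accounts is a candidate ring:

```python
# Hypothetical sketch: accounts sharing identifiers (phone, address)
# form connected components; a component with several accounts is a
# candidate fraud ring.

from collections import defaultdict

edges = [
    ("acct1", "phone:555-0100"),
    ("acct2", "phone:555-0100"),
    ("acct2", "addr:1 Main St"),
    ("acct3", "addr:1 Main St"),
    ("acct4", "phone:555-0199"),
]

def fraud_rings(edges, min_accounts=3):
    # Build an undirected adjacency map of accounts and identifiers.
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, rings = set(), []
    for node in adj:
        if node in seen:
            continue
        # Depth-first traversal to collect one connected component.
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        accounts = {n for n in comp if n.startswith("acct")}
        if len(accounts) >= min_accounts:
            rings.append(accounts)
    return rings

print(fraud_rings(edges))
# [{'acct1', 'acct2', 'acct3'}]
```

In a live system, the equivalent pattern match would run against the graph as transactions arrive, which is what makes the real-time detection Jesus mentions possible.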
RVB: 06:59 Just in time [laughter]. Didn't really work. Very cool. So where do you see this going, Jesus, where's the industry taking us do you think? 
JB: 07:09 Well, I think adoption is growing. It's amazing, the number of different organizations and companies that we talk to. I can't think of a single vertical, a single sector, that would not benefit from scenarios where modeling data as a graph adds incredible value, so I think adoption will definitely grow. That's one of the things, and the other one that I think is going to be key is integrating the graph with the rest of the data architecture. These days there are so many alternatives for representing data, and some of them are adequate for certain scenarios. I'm all for peaceful co-existence with all the other approaches. So integration, I think, is going to be the other important one: being able to expose the graph in ways that make it easy to inter-operate with other stores. Sometimes not all of the information is going to be in your graph, but the extremely valuable information in your graph will need to be combined with some external information. That's one case where you will want to visualize it in different ways using BI tools - you name it, there are so many elements now in data architectures that integration, I think, is going to be the other important aspect that we'll see developing in the next few years. 
RVB: 08:22 There was one thing that I wanted to ask you and I obviously forgot - I'm so good at this podcasting thing [chuckles]: you've done a lot of work on virtualization of data, right? [crosstalk] And that links to that integration story, I suppose. 
JB: 08:36 Exactly. Exactly. I worked in the data integration space with a data virtualization company in the couple of years between Ontology and Neo, and yes, I'm particularly interested in that. Data virtualization is a way of integration that's based on this idea of wrapping the sources and making them look as if they were relational even though they're not, so you're not copying the data into centralized stores. You leave the data where it is and you define the logic to extract it and combine it, and I think that's a powerful paradigm for new ways of representing data, like the graph, to make them easy to integrate with other technologies and other types of stores. 
RVB: 09:27 So in a case like that, Neo4j would be one of the sources for virtualization? Is that what I'm--? 
JB: 09:32 That's correct [crosstalk]. An essential one, that's the thing, because what matters in the end is what value is in your source. Neo can perfectly be the core in an MDM scenario, for example. It can be your master model, and you can link it with the rest of the sources that provide the detailed information about the entities. But exactly, it would be one of the sources, and the data virtualization layer would expose it in a way that's easily consumable by, say, BI tools in analytic scenarios, or that's one of the-- 
RVB: 10:05 Are there any examples of that yet? You know, like open-source virtualization tools that integrate with Neo [crosstalk]? 
JB: 10:10 Well, there's not much, to be honest. There's one quite limited community edition from one of the vendors, called Denodo, which is the one I used to work for. There's another one from JBoss, but I'd say there's not much available out there, Rik. I mean, JBoss would be the obvious option. I'm actually now trying to work a little bit with it and trying to build some integrations with Neo. That's what you can find. 
RVB: 10:40 I think this is kind of like a community call for help, you know [crosstalk]. 
JB: 10:45 Yeah [crosstalk]. I definitely hope to be publishing something soon, at least some ideas, some small examples that can inspire people to look at this. 
RVB: 10:55 That would be great. Cool. I think we're going to wrap up here - we like to keep these podcasts quite short and snappy - but thanks a lot for sharing your perspective. I think that was very interesting, although, because of my limited presentation skills, a little bit chaotic [laughter]. 
JB: 11:13 Right. No, it was great [chuckles]. Great to have this chat with you Rik. 
RVB: 11:15 Thank you, Jesus. And I'll see you soon yeah. 
JB: 11:18 Lovely. Cheers. 
RVB: 11:19 Bye. 
JB: 11:20 Bye now.
Subscribing to the podcast is easy: just add the RSS feed or add us in iTunes! Hope you'll enjoy it!

All the best