Monday, 27 February 2023

The 3 concerns in data modeling, and their value

In a previous article, I alluded to the fact that the confusion around three words (model, schema, and metadata) is something our industry really needs to clarify and get straight. That confusion has a historical reason: in the past four decades, when relational DBMSs ruled the earth, there was no real need to distinguish these three concepts: the model and the schema were basically intertwined. But that has changed with NOSQL, and today we really do have to be more specific, because we have more variability - we can have more or less modeling, and more or less schema, as we choose. Tools like Hackolade and its polyglot data modeling have already made that very clear as well.

The main reason why I think being specific about this is important is that it also allows us to be specific about the underlying concerns that these words speak to.

Friday, 17 February 2023

Data Modeling and the eternal struggle between business & IT

In a previous article that I published on this platform, I talked about the confusion that I observed in my first couple of weeks in the industry, between some very important and meaningful terms: model, schema and metadata. In that article, I tried to alleviate some of that confusion by offering some important context and clarifications. In this article, I will try to build on that.
A long time ago, my friend Peter Hinssen wrote a book called "Business/IT Fusion". This image was borrowed from its cover. More info on https://www.peterhinssen.com/books/business-it-fusion.

Part of the confusion in the terminology stems, I believe, from the fact that in traditional relational database management systems, your physical model and your schema were joined at the hip. It would be extremely weird to have one without the other. In fact, in relational data modeling, your schema would literally be the output of your physical modeling, and would therefore be a different representation of the same thing. The physical model would be human-readable, and the schema would be machine-readable. That all changed with NOSQL database management systems (document databases like MongoDB, or graph databases like Neo4j), where you could have "the data be the model", and where the enforcement of the model would be completely optional - and usually not even considered before the system was taken into production.
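That contrast can be made concrete in a few lines of code. The sketch below uses Python's built-in sqlite3 as a stand-in for any relational store, and a plain list of dicts as a stand-in for a "schemaless" document collection - it is an illustration of the principle, not of any particular product's API:

```python
import sqlite3

# Relational: the schema is declared up front and enforced on every write.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT NOT NULL, age INTEGER)")
conn.execute("INSERT INTO person (name, age) VALUES (?, ?)", ("Ann", 42))

try:
    # A column that is not in the schema is rejected outright.
    conn.execute("INSERT INTO person (name, nickname) VALUES (?, ?)",
                 ("Bob", "Bobby"))
except sqlite3.OperationalError as e:
    print("relational store refused the write:", e)

# Document-style: each record carries its own shape; nothing is enforced.
collection = []
collection.append({"name": "Ann", "age": 42})
collection.append({"name": "Bob", "nickname": "Bobby"})  # different shape, no error
print(len(collection), "documents stored")
```

In the relational half, the model and the schema are the same declaration; in the document half, there is no schema at all unless you choose to add one later.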

Monday, 13 February 2023

Confusing words in Data Modeling

For a few weeks now, I have been helping out my friends at Hackolade with some really interesting work around their core product, Hackolade Studio, and how to make it even more successful in the marketplace. This means talking to a LOT of people - current customers, active users, partners, friends in the industry, and more. It's been fantastic to talk to so many interesting people, and to learn so much from their insights.

During these conversations, I have noticed that there’s quite a bit of work to be done to clarify and straighten out the meaning of the words that we use in these conversations. I have noticed that in the NOSQL data modeling space, we are not always very precise with our words - and that this imprecision can lead to all kinds of misunderstandings. Specifically, I have been struck by the confusion around 3 words: model, schema, and metadata.

Tuesday, 24 January 2023

The Agility Angst

First of all: let me start by showing my age. I was born in 1973, and yes, that does mean that there's a big, and I mean BIG, party coming later this year. Big Five Oh looming around the corner - and to be honest I am very, very fine with that. I am happy with my age - except when I am being dropped like a baby at the back of the cycling bunch. Then not so much - dammit.

What does my age have to do with anything? Well - it basically means that I grew up with a very, very different type of software development practice. OO was still young. The Mythical Man-Month was still a thing. And methodologies all depended on strict, rigid waterfall development models.

Nothing wrong with the odd waterfall, of course, but with the emergence of more and more, and better and better, software tools, libraries and frameworks, this whole idea of the time-consuming waterfall process has kind of become irrelevant. Who would still go through this entire process and ask their users to wait for months before they could give any feedback? What users would accept that kind of arrogance on the part of their IT department? Seriously?

Wednesday, 18 January 2023

The modeling mismatch


After having spent 10+ years in the wonderful world of Neo4j, I have been reflecting a bit about what it was that really attracted me personally, and many - MANY - customers as well, to the graph. And I thought back to a really nice little #soundcloud playlist that I made back in the day: I basically went through dozens of #graphistania #podcasts that I had recorded, and specifically went back to the standard question that I was asking my interviewees on the podcast: WHY do you like graphs??? WHY, for god's sake!!!

Unsurprisingly, people very often came back with the same answer: it's the DATA MODEL. The intuitive, connected, associative, visual, understandable structure that we humans love interacting with: the labeled property graph (LPG). Have a listen to what people were saying:
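For readers who haven't met the term before: a labeled property graph is simply nodes and relationships, where both carry labels/types and a bag of key-value properties. A minimal sketch in plain Python (purely illustrative - this is not Neo4j's actual data structure, and the property values are made up):

```python
# Minimal sketch of a labeled property graph (LPG):
# nodes and relationships both carry labels/types and key-value properties.
nodes = {
    1: {"labels": {"Person"}, "props": {"name": "Rik"}},
    2: {"labels": {"Podcast"}, "props": {"title": "Graphistania"}},
}
relationships = [
    {"from": 1, "to": 2, "type": "HOSTS", "props": {"role": "host"}},
]

# Traversal is just following relationships - the model reads like the domain.
for rel in relationships:
    src = nodes[rel["from"]]["props"]["name"]
    dst = nodes[rel["to"]]["props"]["title"]
    print(f"{src} -[{rel['type']}]-> {dst}")
```

That readability is arguably the whole point: the structure on disk looks like the whiteboard drawing of the domain.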


Thursday, 12 January 2023

Pastures wide and green

Dear friends,

A few weeks ago, I sent out this tweet:


Since that time, a lot has happened: a new year has started, a lot of fun was had with family and friends, a lot of bike rides were ridden, and a lot of personal and professional conversations were had. Obviously, it will take more time to move on from the wonderful world of graphs, but I feel increasingly positive about the journey behind me, and the journey ahead of me.

With time, it's also starting to get clearer and clearer for me that I have learned so much about the world in the decade that I spent with Neo4j. Here are some bullets of what I learned:

  • Building great companies is hard. Period. It takes hard work and persistence to get to any kind of success.
  • Making people work together towards a valuable vision makes a world of difference. Graphs are incredibly good at helping the world make sense of data - and there is not a shadow of a doubt about that in my mind.
  • You can't build a process that beats the efficiency of organic collaboration of groups of people aiming for a common goal. 
  • The intrinsic motivation that you get from that goal will make people walk through fires. I voluntarily worked my ass off for Neo4j for many years - because I believed in it. Still do.
  • Practitioners slash developers are ah-may-zing. They are the fuel that drives IT innovation - not the CIO up there in the boardroom. Pampering practitioners and making them love your software makes a world of sense, as they will sow the seeds of commercial success. The days of wining and dining your way to a deal are gone. Forever.
  • Helping practitioners be more effective inside their organisations is what I think a salesperson is supposed to do. Not selling TO them, but with them, building the technical and the value case for the investment - together. 
  • Selling with them means overcoming the inherent inertia that any complex system/organisation will thrive on. People don't like new things, because they don't like the uncertainty that comes with that novelty.
  • Overcoming uncertainty is at the core of selling high-tech software. The only way to do that is to focus on maximising the quantifiable value of the software, and minimizing the perceived risk of its adoption.
  • Honesty, authenticity and empathy are core to long term success. Anyone can get a quick hack success. In the long run, that never pays off.
That's just a small sample of some of the great lessons that I learned over the past decade. Going forward, I am going to think through the different options that I have, and figure out where I can apply my experience most effectively. I want to find a company in a domain that I love, with a mission that I can get behind, and a team that I can fit into.

I know I will find that place - but it may take some time. I am going to give myself that time, and in the meantime have lots of great conversations with lots of fun and interesting people. With time, that will lead me to pastures wide and green.

So: if you want to have a chat - please hit me up! You can reach me through this blog, or on the usual social media (Twitter, LinkedIn). 

All the best

Rik

Tuesday, 15 November 2022

A 2nd, better way to WorldCupGraph

Hours after publishing my previous blogpost about the WorldCupGraph, I actually found a better, more up-to-date dataset that contained all the data on the actual squads that are going to play in the World Cup in Qatar. I found it on this wikipedia page, which lists all the tables with the squads, some player details, coaches etc., as they were announced on the 10th/11th of November.

So: I figured it would be nice to revisit the WorldCupGraph, and show a simpler and faster way to achieve the results of the previous exercise. I have put this data in this spreadsheet, and then downloaded a .csv version:

These two files are super nice and simple, which means we can use the Neo4j Data Importer toolset to import them really easily.
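To give a feel for how little structure is needed, here is a tiny sketch in Python of what reading two flat CSV files like these looks like. The column headers and rows below are hypothetical miniatures, not the exact contents of the spreadsheet above:

```python
import csv
import io

# Hypothetical miniature versions of the two CSV files
# (the real files' headers and rows may differ).
squads_csv = "squad,coach\nBelgium,Roberto Martinez\n"
players_csv = "player,squad,position\nKevin De Bruyne,Belgium,Midfielder\n"

squads = list(csv.DictReader(io.StringIO(squads_csv)))
players = list(csv.DictReader(io.StringIO(players_csv)))

# In the graph, each squad row becomes a node, each player row becomes a
# node plus a relationship, joined on the shared 'squad' column.
for p in players:
    print(f"{p['player']} -[PLAYS_FOR]-> {p['squad']}")
```

The Data Importer does essentially this mapping for you through a visual interface: pick the files, mark which columns become nodes, and which shared columns become relationships.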