Graph-Centered AppDev Series, Episode 1 [REPLAY] What’s wrong with your application?

By Will Evans / VP of Consulting

March 10, 2021

Events

Reading Time: 24 minutes

(hint: it’s missing graph at the core) … Join Graphable.ai, the world’s leading Graph Database, Graph Data Science and Knowledge Graph consultancy, for this video series, episode 1 replay:

Watch the 45-minute episode 1 replay with Graphable’s Director of Data Science Dr. Lee Hong and VP of Strategy & Innovation Will Evans:

Episode 1 synopsis:

Running a successful business in today’s digital world requires applications that meet the unique needs of this modern context. Yet, most applications end up being slow and inflexible, expensive and difficult to improve and evolve. Now more than ever, having applications that can connect and move data flexibly and seamlessly is a critical requirement. In this webinar series, we will walk you through the steps that can transform how you design, build, and run your applications, leveraging GraphDBs as a core building block.

Video transcript:

[00:00:00] Will Evans: All right. Thanks, everyone, for joining us today. As you guys can see, today we’re going to talk about what’s wrong with your application. This is an idea for a webinar series that we came up with over the course of a couple discussions. We realized something as we were looking through a lot of our clients and what kind of things we were working on – that we were seeing a pattern emerge. We wanted to bring that out into a webinar that we could talk about.

We’ve got a lot of things coming up here. We’re planning out this schedule throughout the entire year. We’ll fully admit that we came up with this entire plan and then got some feedback that people don’t really like webinars – they’re a little bit worn out on how much content is put in front of them and on boring presenters running through PowerPoint slides. So we’re trying to make this a little bit more interesting, and as we go through the year, hopefully our production quality is going to increase. But bear with us a little bit today.

What we really want to cover today is the overview and talk about what’s wrong with your application. We’re going to touch on a couple of these pieces today, and then as we go in the future, feel free to sign up, and we encourage you to sign up, for any of our future webinars where we’ll focus on specific aspects of applications from the backend, master data management, serverless framework. We’re doing a question and answer session with one of our clients and then moving on to the user interface, business intelligence, graph data science, and some overall lessons learned.

Today, what we’ll be covering specifically is going to be looking at an overview. We’re going to talk about sports a little bit and get into our friction points and some of our example use cases down the line. I’m going to move ahead into our overview.

The main thing that we want to drive home as we really get started on this is that bad apps have consequences. They’re not just something that’s this esoteric idea where you can have an application that’s bad and it doesn’t matter. When business applications are bad, there’s real life consequences for it.

Sometimes they’re not as dramatic as Citigroup paying off all of Revlon’s debt and then being unable to get $900 million of it back, but I think we can all look at this user interface and see that this is a bad application. It’s confusing. It’s clearly old. One of the laws of UX design is that most people spend most of their time in applications other than yours, so your application should look like other people’s, because that’s how users are being trained.

While we’re not going to talk about purely UX today, it’s a main view of how people engage with applications, and we’re going to make the argument that bad UX often actually stems from a bad core of the application because when people are forced into situations where they have a lot of relational tables, they have a lot of esoteric rules, it means their UI has to try and deal with that, and they can’t update their application because they have to keep thinking about all of these rules that the database has set up for them, and it prevents you from being able to make changes going forward.

Another example that we want to look at is – this was something that happened a couple of years ago, but again, a bad application. This is the dropdown menu – actually, this is an update to the dropdown menu after an analyst actually sent out an erroneous missile alert to all of Hawaii. The top option, DRILL PACOM (DEMO), sends out a message to an internal service, and the bottom option, PACOM, sends out a message to every resident of Hawaii warning them that there’s an incoming missile.

Again, because the way users experience applications is through the UI, this is where our screenshots are, but this comes back down to really bad full stack application design. It’s not something that was necessarily bad 5 years ago or 10 years ago or 15 years ago, but the world has now evolved and we can create smarter applications. Lee?

Dr. Lee Hong: I think one of the interesting things in that first line about sports – it’s not just the fact that Will and I like sports and have had experience

[00:05:00] in that domain to a good extent, but that you can think about your app as a team sport. You’ve got different components, different lines, different parts: your databases, your backend. For a soccer person, you’ve got your defenders, you’ve got your midfield – that’s your API set. You’ve got your forwards who end up scoring the goals for you, and that’s where people interact with your applications.

One of the interesting things as Will and I were talking about this process is the games in every sport that we’ve played change over time. The rules will change. As those rules change and different components start changing, you will have to update different pieces of your puzzle. The company that originally supports your database no longer supports it, just as an example, so you have to retire your best defender. Now you’ve got to update one little piece that then cascades down.

I think one of the things that Will and I talk about is nobody goes into the process of designing an application with the idea of making it bad or making it difficult for everyone to use. I think that’s something we both learned over time. It’s almost a consequence of little, little things coming together to make your life difficult overall. I’m sure, Will, you can speak to that from experience as well, where designing and making your frontend work with your backend is actually a pretty sticky process.

Will Evans: Yeah, absolutely. This is a little image to highlight what we were talking about here, and this is another example of California’s vaccine appointment website. People interact with the website, but there’s this whole other set of layers behind that where you’ve got whatever database they used to design their website. I’m not positive, but I’m pretty sure it’s not a graph. Then we have an API on top of that.

What we really want to highlight here is that when you have issues and restrictions in the database level, those issues cascade up your system. So if you have a really inflexible relational database at the core, then you’re going to end up with an inflexible API. And if you have an inflexible API, it means that you’re going to end up with an inflexible frontend, which means that 10 years from now, people are going to ask, “Why is your application all gray and have a bunch of buttons with no indicators of whether they’re good or bad? Why is it these impossible-to-understand dropdowns?”

It really comes back down to the database. We’ve got a little bit of an image of this later on, but if we start to dig into a little bit more of the details of that friction in the database cascading up through our application, through the API into the frontend, we have these complex relational schemas. In our sports analogy, which we’ve got later on, we’ll talk about this creating star players (no pun intended to star schemas) within your system.

You have complex relational schemas that are difficult to manage and require a ton of expertise. They’re very semantically difficult to understand. So you have systems that you can’t bring new developers into. There’s lots of joins linked together. You have complicated queries that again are hard to understand, that are often slow, and you have to do “creative” things to connect your data in the way that you want to across many unnecessary keys.

Then we have these really – actually, Lee, why don’t you take the middle one?

Dr. Lee Hong: I think the interesting part was – coming from the data science side, I’m primarily the end consumer of data – the interesting piece is that you end up having to cut a lot of corners because I want my data to return in a decent amount of time. So instead of saying, “Give me the most optimal path,” what I do is cut corners to generate an output that returns quickly enough so that the upstream process or the API gets its data in time.

As part of that process, then that affects usually the frontend team, which Will runs a lot of the time. The frontend team has to come up with creative interactions with the data. To that first example with the Revlon and Citigroup, it just happened that you needed to enter an additional set of keys because the frontend didn’t lock you into a particular account. You actually had to enter the account number. And unfortunately, the account number with organizations that have that large of an amount of money going through is about an 8-9 digit number. So it’s quite easy for you to confuse the two.

As that happens, there are a lot of these handoffs of the data that then cascade upward to loose data governance, which is a problem that I’ve had to deal with a lot from a data science perspective. In data science, you require knowing, “Am I actually looking at the right piece of data going over time?”

[00:10:00] What you end up with is a gradual migration away from the original source of the data, where I called something X, and over time it starts to morph. As the next layer, the next table takes it up, I forget to make my update, and then over time the data doesn’t mean what I thought it meant before. And unfortunately, then that affects the frontend, so I now need to talk to my frontend developer and say, “Can you now make this interface work in a way that I have to double enter data so that I have both sources to resolve my reference?”

Now that I know, I basically force the user to enter two pieces of data to make sure they’re talking about the right thing. We’ll start seeing these things cascade upward, and I think, Will, this is an interesting area as far as getting to the design part of making your application frontend work and making that possible for a user to make sure that the user is actually entering the right data so that they’re getting what they want.

Will Evans: Yeah, absolutely. I think you just highlighted exactly what we’re talking about here, which is the rules in the database moving up and user interfaces being a symptom, but not a cause of bad applications. They’re a piece, but not the whole piece. Again, going to many unnecessary steps, not being able to quickly retrieve that relevant information and then share it with our users and have those defaults, because being able to get out to the necessary pieces of the system in order to retrieve the right data becomes too difficult.

One of the things that we wanted to really talk about was, again, our sports view of application development. This is going to be something that really weaves through all of our different webinars. It worked out way better than we ever anticipated it would.

The rules of the game change and evolve over time. In sports, there are rule changes. In football, there are new penalties. In soccer, they redefine offsides. In all of these different sports, things migrate over time. Where you have a team that’s successful 5 years ago, it might not be able to make it in the same league today.

This is the same thing in the business world. We have new regulations that come out, like GDPR. We have new types of digital money and we have new transactions. We have new expectations from consumers. Those are new rules. Players mature and age. We’ve got different technologies that either fall out of favor or come into favor, or become displaced by something else – like graph databases emerging as a strong new player in the field as relational databases get a little bit older.

You have a lot of different competitors and other teams playing in the same space as you. You’re not out there on a field by yourself; you’re competing against other people and playing in a league. You might have Amazon coming in as the new retail giant. As a Patriots fan, I can say the Patriots are a little bit less competitive than they used to be.

There’s new methods of training, there’s new methods of team building. And most importantly, as a level set at the end here, we’re not suggesting for your applications that you can build an NFL team and then go win the World Series. It doesn’t work. You need to create different teams, or different applications, for different sports. There’s different use cases and different requirements.

But if you have a good application, if you have a team that doesn’t have star players, if you have a team where people are able to adapt and change over time – you don’t have one superstar like Tom Brady who then moves to another team and you’re not as good – that gives you that flexibility, which a relational system doesn’t. You have this core in your database that gives you the flexibility to build very egalitarian teams on top of your graph structure.

That’s really one of the biggest things in relational systems. Relational schemas favor one central table because connecting between tables is very difficult. If I have one piece of information here and then I need to go four tables across to get some other piece of information, that’s very difficult. So what do I do? I try and keep it as close to this table as I can, and that might mean adding another column, or it might mean adding a table really close by. What that means is that in the end, this table defines my application.

[00:15:00] Making any changes to it becomes incredibly difficult. Replacing it becomes incredibly difficult. I can’t replace it with anything else. I can’t add new information easily. Whereas in a graph schema, you have the ability to not have these superstar players. You have a lot of flexibility, and you can get deep connections within your application very easily.
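To make that contrast concrete, here is a minimal sketch in plain Python (with entirely made-up entities) of the point above: in a graph model the schema is just nodes and relationships, so a brand-new relationship type is added without touching a central table, and a deep connection is one traversal rather than a chain of joins.

```python
from collections import deque

# Hypothetical data for illustration: users, orders, and products as nodes.
edges = [
    ("user:alice", "order:1001"),      # alice PLACED order 1001
    ("order:1001", "product:widget"),  # order 1001 CONTAINS a widget
    ("user:bob", "user:alice"),        # bob REFERRED alice -- a new edge type, no migration needed
]

# Build an adjacency map; deep connections become a traversal, not a join chain.
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def hops(start, goal):
    """Breadth-first search: how many relationships separate two entities?"""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if node == goal:
            return depth
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return None

print(hops("user:bob", "product:widget"))  # 3 (bob -> alice -> order:1001 -> widget)
```

The same traversal code keeps working as new node and edge types appear, which is the flexibility being described: no central table defines the application.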

So what we want to talk about now is a few use cases that we have seen over time. Lee, I want you to start off by talking about the fraud detection use case. This is one of our clients that we worked with recently over the past year – actually, the past 18 months – and one of their main pieces was in fraud detection. Lee, I think if you could talk a little bit about the challenges of fraud detection on a relational system compared to in a graph, that would be helpful.

Dr. Lee Hong: I think the interesting challenge with doing any kind of fraud detection is making sure that you can connect the pieces or connect the dots, because most of the time, fraud detection happens as a process of making sure I know the person who’s reporting to me and is saying, “Hey, I am Lee Hong. I actually am Lee Hong and I can prove it.” You need to be able to stitch all those pieces of the puzzle together. I’ve got data coming in from different sources, so I’ve got my table schema, then I’ve got another table that’s coming in or maybe a JSON file that I need to integrate. So now everything starts getting really, really expensive.

To what Will was saying just now, having that flexible schema in the background is really critical, and making sure that schema is able to receive all these different inputs from external systems that are coming in and at the same time look inside my own database to see if that user already existed before or if that user used a different username, used a different email and is actually the same person or the same entity that’s used the same credit card or posted the same profile picture.

I think the example from the table perspective to the NFL team is to imagine having every different possible play scenario captured in your roster. You would need to have 1,000 players on your roster to be able to accommodate every different type of play that you have, and that makes it very, very complicated from the table perspective – having almost a column for every possible outcome.

The goal of what we’re trying to discuss here is to think about this from a much more flexible perspective. I think really that’s what helps those interactions with the front. So if somebody’s coming in, they want to be able to log in, and they say, “I’m Lee Hong, and I can prove that I’m not Will Evans.” That’s an interesting piece of this puzzle. From the frontend perspective – and I think, Will, this is the interesting piece – is accommodating that and making sure that’s quick enough so that the user can get through really quickly and not get frustrated and get where they need to go without having to go through 1,000 different steps.

Will Evans: Yeah. You touched a little bit on the master data management aspects of it originally, and then you touched a little bit on, in a graph system, the ability to make those deep connections. I think that’s something we’ve seen in fraud that’s difficult in relational databases. “Okay, we know that this Lee Hong guy is sketchy. He’s committed fraud in our application. We blocked him.” Most people have applications now that are intelligent enough to block one connection. So we say, “Lee Hong and Will Evans work at the same company. Will’s probably sketchy too.”

But the ability to go out from Lee to Will to Kyle to whoever else and keep going out four or five or six hops on a large-scale application is really difficult in a relational system. Could you talk a little bit about how that’s easier in a graph and the power there?

Dr. Lee Hong: I think that’s the benefit of switching to a schema that’s more flexible, like a graph, because I can start to chain my path to my first degree connections, my second degree connections, and my third degree connections.

The critical piece that I think a lot of people forget is the flexibility of being able to combine multiple dimensions of a person. I’m not just me because of my name; I’m me because I may work at the same company – I work at Graphable. I live at a particular address. I drive a particular kind of car. All these little pieces come together to define who I am, and there may be other data sources to confirm or deny that. I think that’s the interesting thing, to say two people are related, but

[00:20:00] there are so many ways that two people can be similar or be related. We may even just share the same email handle or username. Those are little things or subtle little messages or hints that you want to be able to capture.

I think having that flexibility really drives that simplicity for data management and making sure that you can govern your data over time. That way you don’t have this explosion of columns and fields that you need to keep maintaining in a particular table, and that makes life a lot easier.
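The multi-hop expansion described here can be sketched in a few lines of plain Python (the accounts, emails, and addresses are invented for illustration): starting from one flagged account, a breadth-first traversal collects everything within k hops, where a relational schema would pay one extra join per hop.

```python
from collections import deque

# Hypothetical fraud-ring data: accounts linked through shared identifiers.
edges = [
    ("acct:lee",  "email:x@example.com"),
    ("acct:will", "email:x@example.com"),  # shares an email handle with lee
    ("acct:will", "addr:12 Main St"),
    ("acct:kyle", "addr:12 Main St"),      # shares an address with will
]

adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def within_hops(start, k):
    """Map of every node reachable from `start` in at most k hops to its distance."""
    seen = {start: 0}
    frontier = deque([start])
    while frontier:
        node = frontier.popleft()
        if seen[node] == k:
            continue  # cutoff reached; don't expand further from here
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen[nxt] = seen[node] + 1
                frontier.append(nxt)
    return seen

ring = within_hops("acct:lee", 4)
suspects = sorted(n for n in ring if n.startswith("acct:") and n != "acct:lee")
print(suspects)  # ['acct:kyle', 'acct:will']
```

Note how kyle only surfaces at four hops out (account, email, account, address, account); that is exactly the kind of chained relationship that gets dropped when an application only checks one connection deep.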

Will Evans: Yeah, absolutely. I think it’s a good segue into one of our other use cases that we want to talk about, on user management, which is really around a couple different aspects. The two things that you’ve seen, and I know you experience this a lot – this is a use case where we’re dealing with a company that was doing user management, social networking, connecting people type applications. They actually came to us with a graph database already in hand.

I think the two things I’m really hoping to focus on here are the power that that brought them in terms of being able to actually scale their application and talk a little bit about what they were doing, and then some of the other issues they ran into. I know you spent a lot of time on Saturdays digging into some application code to debug. So talking about some of the issues they saw in really their architecture and how we recommend changing that as well.

Dr. Lee Hong: I think one of the interesting things with user management in particular is the drive towards artificial intelligence to block users. One of the interesting things that happened with Twitter, at some point they started blocking anybody who was tweeting the word “Memphis” in order to prevent people from sending out their personal information.

The concept here is that at some point, there needs to be a level of human intervention, making that administration of your users easy so that you can have a human in the loop to restrict some of that information and not having the computer make the decisions for you.

I think, Will, to your point, the interesting challenge was being able to serve that data back quickly and being able to monitor the number of potential bad users that existed in the system. I think that gets to the point of the flexibility in the middle layer that you’ve been talking about and that we’ll get to cover in a later webinar around serverless architecture. How do you speed up those handoffs? Essentially, like all those handoffs on the field. How do you pass the ball more quickly and get the ball where you need it to go?

Will Evans: I think that was one of the biggest things on that serverless architecture. They ended up with this incredibly monolithic full stack deployment, or single application server deployment, which created a lot of headache.

One of the things that we went back and forth on Slack over and over again with them was, “What environment are we running against? Have these changes been propagated from environment to environment? Are our environments identical?” If we’re trying to move from staging to production, then best practice is to have staging and production be identical. Staging is where we make sure it works. They had environments that were not identical, and we moved them towards some CloudFormation templates because they didn’t want to go fully serverless.

But one of the biggest strengths we found with serverless for these applications is being able to have multiple identical environments, to quickly spin up environments for testing and spin them back down at very low cost, and to define, through infrastructure as code, easy-to-understand and repeatable full stack environments – from your database to your user management to your actual API functions. It becomes incredibly fast, and you don’t end up having to hunt through an entire application to find one configuration that might’ve changed, or dig through all your Java, Python, or whatever API code for errors, when actually it’s just that you haven’t configured your VPN correctly and don’t have access to the database.
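As an illustration of the “identical environments through infrastructure as code” point, here is a toy CloudFormation-style fragment (the resource and bucket names are hypothetical): one template, parameterized only by environment name, stamps out staging and production identically.

```yaml
# Toy example only -- names and resources are hypothetical.
Parameters:
  EnvName:
    Type: String
    AllowedValues: [staging, production]

Resources:
  AppAssets:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub "myapp-${EnvName}-assets"
```

Deploying both environments from this one file guarantees they differ only in the parameter value, which is what makes “it works in staging” actually mean something for production.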

Dr. Lee Hong: The interesting piece about that, Will, was we had a whole bunch of different languages that we had to deal with at the same time. We were running some stuff in JavaScript, we had some stuff in Java, some stuff in Cypher, which is the Neo4j language, and we had some stuff running even in Python. So we had all these different pieces trying to come together.

[00:25:00] I think the hardest part we were struggling with was whenever data wasn’t getting returned quickly enough, everything would get bogged down.

Will Evans: Yeah, that was the biggest thing, absolutely, was having these big heavy applications waiting for things, and how difficult that can be to have applications that are waiting when you’ve got a lot of different timing. Serverless and event-driven architecture helps deal with a lot of those problems as well.

One of the use cases that I know we wanted to talk about was a dispatching and assignment use case where we’re looking at sending out notifications, doing scheduling, dealing with pieces of equipment and the like. Our client came to us and said, “Hey, we’re looking to build this application. We have some ideas,” and we convinced them to go towards a graph architecture – the exact one that we’re talking about throughout this whole webinar series. This full stack build that we were working on was actually the inspiration for this.

What we were finding is that moving towards the graph as the core of the application – so we have a graph at the core, and it drives all of the API and the frontend interface. That being said, there are still reporting needs. There’s still slicing and dicing people want to see. People want to see pretty bar charts. We’re a BI analytics company as well; we understand that it’s important. You need to run your business. Network graphs look really cool, and we’ll look at one a little bit later. They look really cool, but they aren’t necessarily the best for understanding, “How many users do I have this month?” That’s something that still is better done in BI.

What we ended up with was reversing some of the paradigm. In a lot of clients we come into, they have this big heavy relational structure, and they want to add a graph off to the side, and they want to do a little bit of stuff in that graph and then they want to push it back into their relational structure, which to us is really just backwards. It works, it’s a good first step, but what we’re trying to drive towards is having graphs be the core of your application for that flexibility, for that speed.

Real life applications are incredibly connected. You have users who are connected to 10 different things, and we’re trying to pull those all back quickly. In a relational structure, that’s very difficult. So why use a relational structure for your application when you could use a graph and make things easier and more flexible?

And then there is still a role for small ancillary relational structures for reporting, off on the side, where you’re trying to query into your graph, bring some data out, set it up for slicing and dicing, set it up for those filters, make it super fast. In our recommendation, we still might even put it in the cloud, use some of those Software as a Service BI tools. But don’t make the relational structure the core of your application, because it’s really only better for one piece – the BI. Make the graph the core of your application because it’s better for the application, and have BI be an added thought at the end.

We have been able, throughout the course of this engagement with our client, to build their graph schema, develop an application on top of it. We’ve really been able to build out this full enterprise application incredibly quickly for a number of reasons. I’m actually just going to switch – one second here.

If I just go back into this PowerPoint here for a second and go back to our overview, we’ve covered all of these aspects with them. We looked at the backend. We said, “A relational structure with some massive tables in the middle is not going to be a good setup for this.” As we talked about in one of the middle slides, you end up making these esoteric rules, which is one of the biggest problems with relational databases. In order to make it work, in order to understand it, you have to say, “This is only going to be one-to-one, or it’s only going to be one-to-many or many-to-many.”

You have to define those, and you have to make them clear because you have to put a join table in the middle. If that needs to change down the line, it can become almost impossible. Or if I need to add a new type of setting down the line – I want to say, “I had two reports earlier; I want to add a third report” – that can become almost impossible in some relational structures because of all of the extra metadata, metarules, that they force you to create.

In this graph backend, we’re able to actually create a semantic structure, and that’s what we’ll talk about as we get into our backend session in more detail: those semantic rules that we can leverage in a graph so that new people, new developers –

[00:30:00] there’s always going to be developer turnover, so you need to design for that from the beginning. What we want to do is actually create a schema that is both robust and flexible, but also semantically intelligent and expressive such that new developers can come in and understand a few basic rules and understand the entire schema.

It deals with master data management. In this case, it’s not really users, but looking at different pieces of equipment, equipment hierarchies, understanding how to move up and down that hierarchy. We can do it in a limitless fashion within a graph, which is almost impossible in a relational application.

When we look at serverless, we’re completely scalable. We can deploy new environments in minutes, which is incredibly powerful for testing. Question and answer, obviously we didn’t do with our client. User interface – one of the biggest things is being able to look at the schema and design the user interface. We’ve come up with some structures to make it very simple for the frontend to keep some of that graph flexibility rather than have an API in the middle that cuts out that flexibility based on relational structures.

Then, as we talked about, we’re bringing that data from the graph into a business intelligence tool. We have our relational schema in there; we’re not saying that relational is dead. It just maybe doesn’t belong at the core of your application. And then on top of that, as we get more data, we’re moving towards graph data science, understanding patterns of behavior, and being able to do more predictions for the users.

Let me go back to our application. One of the things I wanted to talk about was reducing friction. Lee, you put some of this together. One of the main things, and it’s going to be the main point throughout this whole series, is starting with a flexible database in the background. Start with a graph database, and then from there – Lee?

Dr. Lee Hong: The interesting piece, and the thing that Will and I will talk about, and I think it will be the theme of the entire webinar series, is the concept of design-driven data and data-driven design and melding those two together.

I think even though data science is where I make my living, data science is not something that’s a must-have. It’s something to think about, but it’s not a necessity. The most important thing is making sure that your design is driven by the data you have and works well against the backend, and that your design on the frontend is also driven by the data you have and the structure in the background. That’s why having that flexibility is going to be critical.

But one of the major upsides of having a graph is I know that I always have good referential integrity and I can trace the path from where the frontend receives the data, puts it in the background, and passes it along back to the user. Anything that changes, anything that gets updated, it continually moves along.

Will Evans: Yeah, as we said, reducing friction. What I want to jump into now on some of these points is how we create some of these schemas and just give a little sneak preview into our webinar of next week, when we’ll talk about the backend, highlight one of the really impressive products that we resell, which is Hume, and look at how we can use that to create a graph database schema really briefly, just as a little teaser for next week.

If I open up Hume here – in Hume, we have a concept of knowledge graphs, and it’s also just a really nice schema design tool for a graph database. In our applications, because we’re leveraging graph at the core, there’s always that value in being able to access our graph directly, especially in the data science and analysis aspect where we can actually have core actions.

If we open up one of our existing knowledge graphs, we can see the types of schemas we can define where we go towards a graph system rather than a relational system.

In this graph database, this is one of our use cases that we look at in terms of making beer recommendations. In this graph, we have beers. We have reviews. We have users. And then something that we’ve done is we’ve actually extracted, using Hume, some named entities from these reviews in terms of appearance, flavor,

[00:35:00] glass type, texture. These are connected to reviews, and it’s really interesting because – excuse me as I go back to my visualization – I can start to navigate around in this graph. Something that we can see in the power of our graph is that if I look at these reviews, each review might have multiple flavors.

You can see on here – that’s a little bit small. If I expand this, this review mentions several flavors. I’ve got a review and it mentions breads and it mentions a fruity malty flavor. Then from there, if I get rid of some of my other nodes, I can explore out and I can say, “Give me the other reviews that mention this flavor.” From here, I can expand my selection.

What I’m doing here is obviously navigating with my mouse, but queries in a graph can operate in the exact same way. The speed you’re seeing in my navigation is the actual speed at which your application retrieves data. So you can move from something like an individual flavor to a review, to other flavors in that review, to all of the reviews that mention that flavor, and then we could look at all of the beers that those reviews are about. That’s relatively interesting. But then we could also enable some grouping on top and start to see that we have beers within the same style as well.
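The flavor-to-review-to-beer hops described above can be sketched in a few lines. This is a toy in-memory version, not the actual Hume schema or a Cypher query – the node names, edge shapes, and sample data are all made up for illustration:

```python
# Toy sketch of the traversal: flavor -> reviews that mention it -> beers.
# Edge data is illustrative only, not the real beer-review dataset.
mentions = {  # review id -> flavors extracted from that review
    "r1": {"fruity", "malty"},
    "r2": {"fruity", "citrus"},
    "r3": {"malty", "bready"},
}
about = {  # review id -> the beer the review is about
    "r1": "Blue Moon Belgian White",
    "r2": "Brooklyn East India Pale Ale",
    "r3": "Oatmeal Stout",
}

def beers_sharing_flavor(flavor):
    """Hop from a flavor to its reviews, then to the beers those reviews cover."""
    reviews = [r for r, flavors in mentions.items() if flavor in flavors]
    return {about[r] for r in reviews}

print(beers_sharing_flavor("fruity"))
```

In a real graph database the same two hops would be a single pattern-match query, and the point of the demo is that its cost scales with the neighborhood you touch, not the size of the whole dataset.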

This is really the power of our graph. This backend design isn’t for – you don’t expect end users to actually come in and click around in your graph and navigate. This might just be driving beer recommendations. We might have an application where we have a user who really likes Blue Moon, and we want to make recommendations to them. And we can. We can recommend similar beers based on those NLP keywords that we’ve extracted.

This is really, really important. I want to highlight it for a second. It’s super important because not only can we make these recommendations, but we can then explain them. So as we run this query, we can think about, if I wanted to make good recommendations to my end users, what would I need on top of that to understand whether or not that was a good recommendation?

Being able to leverage a graph makes that explainable. That’s one of the biggest issues we have with a lot of data science. It’s just another issue that graphs can help solve – the graph actually allows you to explain why something was recommended.

So as we come in here, I’ve run my query, and it’s now returning results explaining the similarity. What you’ll see up on the screen is we have two beers here that on the surface are not super similar. You’ve got a Blue Moon Belgian White, which, for those of you who know, is a pretty light wheat-flavored beer, and then a Brooklyn East India Pale Ale. An IPA is typically a little bit more tart, a lot less sweet and wheaty than a Blue Moon is.

But we can see based on the explainability of our recommendation that both of these beers, from our reviewers’ reviews, actually have a strong intersection of sweet flavors, orange and citrus, as well as being light and smooth. So for someone who likes a Blue Moon Belgian White, this Brooklyn East India Pale Ale might be a really good recommendation.
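The explainable-recommendation idea above amounts to scoring candidates by shared review keywords and returning the shared set itself as the explanation. Here is a minimal sketch with made-up keyword sets (the real keywords come from Hume's NLP extraction over 1.6 million reviews):

```python
# Hedged sketch of explainable recommendation: rank candidate beers by how
# many review-extracted keywords they share with the beer the user likes,
# and surface the shared keywords as the "why". Keyword sets are invented.
beer_keywords = {
    "Blue Moon Belgian White":      {"sweet", "orange", "citrus", "light", "smooth", "wheat"},
    "Brooklyn East India Pale Ale": {"sweet", "orange", "citrus", "light", "smooth", "hoppy"},
    "Imperial Stout":               {"roasted", "coffee", "chocolate"},
}

def recommend(liked, candidates):
    """Return (beer, shared_keywords) pairs, best overlap first."""
    liked_kw = beer_keywords[liked]
    scored = [(b, beer_keywords[b] & liked_kw) for b in candidates if b != liked]
    return sorted(scored, key=lambda pair: len(pair[1]), reverse=True)

for beer, shared in recommend("Blue Moon Belgian White", beer_keywords):
    print(beer, sorted(shared))
```

The explanation comes for free: the intersection that produced the score is exactly the set of flavors you show the user to justify the recommendation.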

The flexibility and ease of being able to do this comes from our schema and being able to navigate from beers to reviews to glass types to reviews back to beers, and be able to do that across our entire graph of 1.6 million reviews and make incredibly relevant recommendations to our users.

To wrap up at the end here, we’re going to dive into this schema design in more detail next week. As you saw on that schedule, we’ll be diving into building up our application from the bottom to the top: looking at the backend, moving up into our serverless architecture design, moving into our frontend, moving into our application design and our UX and UI, and then eventually looking at that as a whole picture in terms of lessons learned and doing some deployments.

As I mentioned, we know that people are a little bit tuned out on webinars, or they can be, so we’re working internally to look at making these a lot more interactive and interesting and not just people reading PowerPoints. So hopefully this was a good step in that direction. From both Lee and me, and everyone at Graphable, we wanted to thank you all for your time. I think that we’ve got just a couple questions here.

[00:40:00] Let me just look at these quickly before we stop here. Feel free to send any other questions in. We’ve got one question: “Have we covered everything for making us future-proof?” We certainly hope so. That’s our point.

Levine mentioned, “One thing I don’t see is domain-driven design event sourcing, how events modify graphs.” It depends on the schema. One of the things is being able to have different pipelines.

One of the things we like about serverless architecture is being able to have that event-driven design and connect into a graph to modify it. Within Hume, which we leverage for a lot of our builds, there’s also a full orchestration pipeline that works with an event-driven design to modify your graph and keep it up to date. Specifically from a knowledge graph standpoint, keeping your graph up to date is hugely important – we see it as one of the main flaws in most knowledge graph designs.
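The event-driven update pattern described above boils down to translating each incoming event into an idempotent upsert against the graph. A minimal sketch, with an invented event shape and a plain dict standing in for the graph store (a real pipeline would issue a MERGE-style upsert against the database instead):

```python
# Hedged sketch of event-driven graph updates: each event upserts a node,
# creating it if new and merging properties if it already exists, so the
# knowledge graph stays current as source systems change. Event shape is
# an assumption, not Hume's actual event format.
graph = {}  # node_id -> properties (stand-in for the real graph store)

def handle_event(event):
    """MERGE-style upsert: create the node if new, update properties if not."""
    node = graph.setdefault(event["id"], {})
    node.update(event["properties"])

handle_event({"id": "beer-1", "properties": {"name": "Blue Moon Belgian White"}})
handle_event({"id": "beer-1", "properties": {"abv": 5.4}})
print(graph["beer-1"])  # both events merged onto one node
```

Because the upsert is idempotent, the same handler can sit behind a serverless trigger and safely replay or receive duplicate events without corrupting the graph.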

One of the other questions, “Is it not true that both frontend and backend pipelines should match up so there’s less friction between frontend and backend developers?” That’s really something that we agree with. I think there’s some variety there in terms of exactly how much overlap there should be versus how much decoupling you do versus how tightly coupled you are.

One of the things we’ve found is that being a little bit more tightly coupled is okay as long as that coupling is flexible. If you have a graph database at the core, you can make a middleware API that exposes that flexibility to the frontend in a secure and safe way. The fact that the frontend has to have some knowledge of the backend then isn’t a limitation; it’s actually a real boon for efficiency, especially when you have a semantically informative graph database design, where the frontend can understand what’s going to be returned by making certain calls. You almost move toward a GraphQL-type structure.

We actually haven’t done – we don’t really recommend leveraging GraphQL in frontend-type applications. It’s really powerful in a BI-type API where you want to leverage a lot of data. But moving a little bit in that direction, where your API has some semantic knowledge of the graph itself, is something we find really impactful.
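One way to picture a “semantically informative” middleware layer is an API that can tell the frontend, up front, which fields a given call will return. This is a sketch under assumptions – the schema registry, queries, and field names here are invented, not Hume’s or Neo4j’s actual API:

```python
# Hypothetical middleware schema registry: each entity type maps to its
# backing graph query and the shape of what the call returns, so the
# frontend can introspect results without knowing the database directly.
SCHEMA = {
    "beer":   {"query": "MATCH (b:Beer {name: $name}) RETURN b", "returns": ["name", "style"]},
    "review": {"query": "MATCH (r:Review {id: $id}) RETURN r",   "returns": ["text", "rating"]},
}

def describe(entity):
    """Frontend asks what fields a call for this entity will return."""
    return SCHEMA[entity]["returns"]

print(describe("beer"))
```

This is the flexible coupling discussed above: the frontend knows the shape of the data, but only through the middleware’s published schema, so the database itself stays safely behind the API.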

The other thing I wanted to mention is that if you want to connect with us, please feel free to reach out to sales@graphable.ai. I’m actually going to put that on the PowerPoint just briefly as well, for those of you who want to see it in the recording. We’re more than happy to chat.

As a takeaway from this, we do a lot of interacting with clients, potential clients, to talk about their use cases. We’re happy to hop on as an expert and do a 30-minute or hour-long call to go over your use case and make some recommendations and talk about how this paradigm might help. So please feel free to reach out. If you want to sign up for more of those webinars and see other events and blogs and things that we do, that’s at graphable.ai/events. Thank you all very much for your time.


Graphable delivers insightful graph database (e.g. Neo4j consulting) / machine learning (ml) / natural language processing (nlp) projects as well as graph and Domo consulting for BI/analytics, with measurable impact. We are known for operating ethically, communicating well, and delivering on time. With hundreds of successful projects across most industries, we thrive in the most challenging data integration and data science contexts, driving analytics success.

Want to find out more about our Hume consulting on the Hume knowledge graph / insights platform? As the Americas principal reseller, we are happy to connect and tell you more. Book a demo by contacting us here.

Check out our article, What is a Graph Database? for more info.

Contact us for more information: