Neo4j Cluster with Docker Swarm – Simple Steps

By Benjamin Guegan / Consultant

June 9, 2023

What is the best way to deploy a Neo4j cluster with the container orchestration tool Docker Swarm? Find out in this article. The Neo4j graph database functions well in a range of architectures, and adding containerization to your Neo4j cluster project, then going deeper with container orchestration, can significantly increase the efficiency of your development, deployment and admin operations.

What is a Neo4j Cluster?

A Neo4j cluster is a highly available, horizontally scalable system that groups database servers together as a single unit of work. It enables different instances to take responsibility for specific operations (e.g., reads or writes) on the same data set, which improves performance. A Neo4j cluster provides scalability and fault tolerance by allowing easy addition or removal of nodes, replication of data and automatic failover. This lets applications take advantage of the performance benefits of a clustered database while still maintaining data integrity and security. The result is an efficient, highly available system that can be flexibly adapted to changing workloads and user requirements.

What is Docker and Docker Swarm?

Docker is software that enables the creation of lightweight, portable containers that run applications in isolation. Docker Swarm is an orchestration capability used to manage multiple Docker containers as a single unit. It enables administrators to deploy and maintain large numbers of containers with ease and flexibility. With Docker Swarm, users can easily scale their applications, define network topologies, and perform rolling updates of application deployments with little effort. See more detail below.
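To make that concrete, here is a minimal sketch of the kind of commands Swarm exposes once a node is in swarm mode (the web service and the nginx image are only illustrative; the rest of this article uses Neo4j):

# create a service with three replicas of an illustrative image
$ docker service create --name web --replicas 3 nginx:alpine

# scale it, then perform a rolling update to a newer image tag
$ docker service scale web=5
$ docker service update --image nginx:1.25-alpine web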

How Does Containerization Work?

Compared to virtualization (which uses a hypervisor to mount a full-blown operating system with dedicated resources, e.g., CPU, RAM and storage), containerization reuses the host OS. This means shared resources, fewer OSs to maintain and fewer licenses to buy. It is only possible now after many years of valuable contributions to the Linux kernel, specifically Linux containers, from important corporate contributors like Google, made widely accessible by Docker.
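One quick way to see that sharing in action, assuming Docker is installed locally: the kernel version reported from inside a container is the host's own, because there is no separate guest OS.

$ docker run --rm alpine:3 uname -r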

Container Portability and Compatibility Basics

There are various ways to understand the concept of an image and a container. Parallels that have been drawn include class and instance in an OOP context, build-time and run-time constructs when using a compiler, and a virtual machine (VM) template versus a VM instance. But note that even if these help us build a mental model, they are incomplete comparisons.

  1. Portability
    • Unlike its VM counterpart, which contains its own OS, a container sits on top of the host OS. The benefit of sharing the OS across containers is not having to boot an OS for every container, which significantly reduces start-up time: where it takes minutes to launch a VM, you only need seconds with a container.
    • Moreover, container images are truly platform portable in the sense that they can be distributed to any registry server or container host, then uncompressed and mounted there; importantly, though, this does not guarantee they can actually run, which is discussed below.
    • This means you can move an image anywhere with ease, e.g., a local machine, cloud platforms, on-premises servers or even virtual machines.
    • This is what is called host portability.
  2. Compatibility
    • It’s important not to confuse host portability with OS portability. We still need virtualization such as a JVM or an Erlang VM to run a program cross-platform. Because they use the host OS, container images are also tied to it – including the packaged OS libraries, files and other dependencies – and will not necessarily run on other hardware architectures, OSs (e.g., Linux vs. Windows) or Linux distributions. ARM Linux systems run ARM Linux containers, x86 Linux systems run x86 Linux containers, and x86 Windows systems run x86 Windows containers (the inspection sketch after this list shows how to check an image's platform).
    • Anyone without container experience might be tempted to believe that a container will run anywhere – for example with a container Linux distribution different from the host’s – since the Linux kernel is in fact built around a fairly strict set of common API calls. But mixing and matching user space and kernel space can break at times, even if it often works, especially when playing with syscalls.
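As a quick illustration of the compatibility point above, the following commands compare the platform an image was built for with the platform of your Docker host (the Neo4j image is just an example; any image you have pulled will do):

$ docker image inspect --format '{{.Os}}/{{.Architecture}}' neo4j:4.4-enterprise
$ docker info --format '{{.OSType}}/{{.Architecture}}'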
Docker Components – Detailed Overview

Docker’s primary value-add is building and running containers and automating their deployment, all as part of a set of capabilities that are modular by design.

It is a constellation of tools, e.g., Docker Compose (previously Fig), Docker Stack, Docker Network, Docker Volume and Docker Swarm. When we talk about just the name Docker, we generally mean the Docker Engine itself. But even then, it is composed of multiple layers of tools: the Docker daemon, containerd and runc.
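If you want to see those layers on your own machine, the standard Docker CLI reports them; on a typical Docker Engine install, the server section of docker version lists containerd and runc alongside the Engine, and docker info shows the default low-level runtime:

$ docker version
$ docker info --format '{{.DefaultRuntime}}'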

It is also an open-source project and, following that philosophy, its tools are often based on open standards hosted by various third parties.

Now that we have covered the various elements of the Docker landscape, let’s dive into some code and see how we could use it to deploy a Neo4j cluster.

Deploying a Neo4j Cluster with the Docker Stack

If you have already used Docker, you have likely used Docker Compose, which is best suited for development scenarios; Docker Stack is better suited for production. While Stack cannot build new images, it is included with the Docker CLI without needing to install additional packages, and, importantly, it is deployed as part of swarm mode. It is of particular interest for deploying Neo4j clusters because it unlocks the swarm-related properties of the shared Compose specification.

There are many reasons to choose Docker Stack for production, even in a single-engine scenario, including the following (a couple of these are illustrated in the commands after the list):

  • integrated secrets,
  • auto-recovery,
  • rollback,
  • updates,
  • scaling,
  • health checks
  • and much more.
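For example, secrets and rollbacks become one-liners once you are in swarm mode. A minimal sketch follows; the secret and service names are illustrative, and note that the stack file later in this article passes the Neo4j password via NEO4J_AUTH rather than a secret:

# store a password once as a swarm secret
$ echo "changeme" | docker secret create neo4j_password -

# revert a service to its previous definition after a bad update
$ docker service update --rollback <service-name>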

Docker is built with many layers, and Docker Swarm leverages them to support the best practice of layered security, providing a secure solution by default, even before you take extra security steps.

Hands-on Ways to Follow Along with Our Example

If you would like to follow along hands-on, you can use Docker Desktop on a single swarm node as we walk through the steps. That being said, to truly test a multi-server swarm, you would ultimately need multiple servers. For that, you could use Docker Playground (Play with Docker), a VM with Vagrant (which comes with integrated support for Docker), or a cloud computing platform (e.g., AWS, Microsoft Azure, DigitalOcean).

Recipe Steps to Create and Deploy a Neo4j Cluster with Docker Swarm

Along with the code examples, we have attached explanations of the key configuration decisions we thought would be helpful to understand.

1. Prepare your Compose file

The Compose file includes the configuration for your entire stack. It spans your infrastructure needs, e.g., volumes, networks and security settings.

By convention, we have named our file docker-stack.yml.

version: '3.9'

services:
  core:
    image: neo4j:4.4-enterprise
    networks:
      neo4j-net: 
        aliases: 
          - lan
    ports:
      - 80:7474
      - 6477:6477
      - 7687:7687
    volumes:
       - ./neo4j.conf:/conf/neo4j.conf
    environment:
      - NEO4J_AUTH=neo4j/changeme
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - EXTENDED_CONF=yes
      - NEO4J_dbms_mode=CORE
    user: ${USER_ID}:${GROUP_ID}
    deploy:
      replicas: 3

  replica:
    image: neo4j:4.4-enterprise
    networks:
      neo4j-net:
        aliases: 
          - lan
    ports:
      - 7475:7475
      - 6478:6478
      - 7688:7688
    volumes:
       - ./neo4j.conf:/conf/neo4j.conf
    environment:
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - NEO4J_AUTH=neo4j/changeme
      - EXTENDED_CONF=yes
      - NEO4J_dbms_mode=READ_REPLICA
    user: ${USER_ID}:${GROUP_ID}
    deploy:
      replicas: 3

networks:
  neo4j-net:
  • (line 5) image directs Docker to pull the referenced image from Docker Hub and start the container. We picked Neo4j Enterprise Edition because the clustering feature is only available in it.
  • (line 6) networks in swarm mode creates an overlay network by default and integrates a virtual IP (VIP), a load balancer and an embedded DNS for its ingress routing mesh. Docker Compose, on the other hand, uses a bridge network (a layer-2 switch) by default. This can lead to discrepancies when executing the same Compose file with Docker Compose and Docker Stack.
  • (line 8) aliases sets the alias lan on both services. It allows us to reference them on the overlay network they are part of through a shared name.
  • (lines 14-15) volumes tells Docker to mount a source location on the host (./neo4j.conf) to a target (/conf/neo4j.conf) inside the container. Because volumes are mounted into the container’s filesystem, their lifecycle is detached from the container. The mount point (target) is a directory logically connected to a storage location (source) that could even reside on an external storage system and be shared between containers if you wish. For this reason, the source needs to exist before it is mounted (see the short prep sketch after this list).
  • (line 19) EXTENDED_CONF is a Neo4j-specific environment variable that allows us to execute embedded commands inside the neo4j.conf file. It enables shell expansion in neo4j.conf.
  • (line 20) NEO4J_dbms_mode is set to CORE. Neo4j Core instances are the voting members in Raft terminology (one of them is elected leader at any time), and they are the instances that can service writes as well as reads.
    Note: Neo4j and Docker Swarm both use the consensus algorithm Raft.
  • (line 21) user forces the user that the container runs as. Accordingly, that user needs read and write permission on the neo4j.conf file.
  • (line 23) replicas are set to 3. This is the number of containers for the service we want to run.
  • (line 41) NEO4J_dbms_mode is set to READ_REPLICA. Neo4j Read Replicas do not vote in the Raft consensus; they are fed with data from the Cores and allow only reads from the database.
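Before deploying, make sure the bind-mount source exists and is readable by the user the containers run as. A minimal prep sketch, assuming neo4j.conf (created in the next step) sits next to docker-stack.yml:

$ chown "$(id -u)":"$(id -g)" neo4j.conf   # match the user: setting in the stack file
$ chmod 640 neo4j.conf                     # that user can read and write it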
2. Prepare your Neo4j configuration file

By convention, we have named our file neo4j.conf.

# Setting that specifies how much memory Neo4j is allowed to use for the page cache.
dbms.memory.pagecache.size=100M

# Setting that specifies the initial JVM heap size.
dbms.memory.heap.initial_size=100M

# Strategy that the instance will use to determine the addresses of other members.
causal_clustering.discovery_type=DNS

# The network addresses of an initial set of Core cluster members that are available to bootstrap this Core or Read Replica instance.
# If the DNS strategy is used, the addresses are fetched using the DNS A records.
causal_clustering.initial_discovery_members=tasks.lan:5000

# Address (the public hostname/IP address of the machine)
# and port setting that specifies where this instance advertises for discovery protocol messages from other members of the cluster.
causal_clustering.discovery_advertised_address=$(hostname -i)

# Address (the public hostname/IP address of the machine)
# and port setting that specifies where this instance advertises for Raft messages within the Core cluster.
causal_clustering.raft_advertised_address=$(hostname)

 # Address (the public hostname/IP address of the machine)
 # and port setting that specifies where this instance advertises for requests for transactions in the transaction-shipping catchup protocol.
causal_clustering.transaction_advertised_address=$(hostname)

# Enable server side routing
dbms.routing.enabled=true

# Use server side routing for neo4j:// protocol connections.
dbms.routing.default_router=SERVER

# The advertised address for the intra-cluster routing connector.
dbms.routing.advertised_address=$(hostname)

  • (line 8) Set the causal_clustering.discovery_type property to DNS. By default, Neo4j sets it to LIST, which lets you hard-code a list of container addresses. In production we would undoubtedly prefer a dynamic way to accommodate our workloads, especially when using a service instead of individual containers. So, we ask Neo4j to resolve the addresses of a given domain name and return their A records.
  • (line 12) Set causal_clustering.initial_discovery_members to tasks.lan:5000. This property holds the domain name we want Neo4j to resolve to obtain the IP addresses and discovery ports of the cluster members. It is built from the alias lan that we gave both services on the overlay network (i.e., neo4j-net). But if we provided Neo4j with only that alias, it would resolve the service address, i.e., the virtual IP (VIP) that Swarm uses for load balancing. We want the individual container (“task”) addresses instead. To achieve this, you add the tasks prefix to tell the internal DNS resolver to return the container IP addresses (see the verification sketch after this list).
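Once the stack is up (step 4 below), you can check what tasks.lan resolves to from inside one of the Neo4j containers. A small sketch, assuming getent is available in the image and that at least one core container runs on the node you are logged into:

$ docker exec -it "$(docker ps -q -f name=core | head -n 1)" getent hosts tasks.lan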

We are now ready to create our Swarm and deploy our application!

3. Creating a Swarm
  1. Execute the following command on one server to initialize the swarm: 
    $ docker swarm init
  2. Execute the following command on a manager to retrieve the join command for adding two more manager nodes to the swarm:
    $ docker swarm join-token manager
  3. And paste the printed join command on each node you want to attach as a manager, e.g.
    $ docker swarm join --token SWMTKN-1-5gxscjx4nolv268x8s87hrydnta9itsbz7tgnq7p31riwha7cs-6cusyz5d82d3qkxns0cry3cy2 192.168.99.103:2377

    Note: To reduce the probability of a split-brain – a case where a network partition leaves the cluster unable to reach quorum and make decisions – it is good practice to have an odd number of consensus members (in Raft terminology). These members are called managers in Swarm and Cores in Neo4j.
  4. Execute the following command to retrieve the join command for adding three worker nodes to the swarm:
    $ docker swarm join-token worker
  5. And paste the printed join command on each node you want to attach as a worker, e.g.
    $ docker swarm join --token SWMTKN-1-5kkw2t9nmgsu2nudpagczdsd1i5xyhoa02xg4ir5p7z9o90fml-dnz96zjdbb2chqkzh4ferhzm0 192.168.0.17:2377
  6. Verify that your nodes are attached to the swarm by executing the following command (see also the quick check after this list):
    $ docker node ls
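Optionally, you can confirm the swarm state from any node. A quick check, which should print active, plus true on manager nodes:

$ docker info --format '{{.Swarm.LocalNodeState}} {{.Swarm.ControlAvailable}}'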
4. Deploying a Neo4j Cluster at Scale

To deploy your application, follow the steps below:

  1. Before deploying your cluster, set the following environment variables: 
    export USER_ID="$(id -u)"
    export GROUP_ID="$(id -g)"
  2. Execute the following command to deploy our Neo4j Cluster:
    $ docker stack deploy -c docker-stack.yml neo4j
  3. Verify that your service is running by executing the following command: 
    $ docker service ls
  4. Verify that your cluster is running by opening Neo4j Browser (e.g., http://localhost, since the stack maps host port 80 to the HTTP port 7474) and executing one of the following commands:
    CALL dbms.cluster.overview() or :sysinfo
  5. You should see a similar result as confirmation that your Neo4j cluster is running:
+-------------------------------------------------------------------------------------------------------------------------------------------------
| id                                     | addresses                                          | databases                                       | 
+-------------------------------------------------------------------------------------------------------------------------------------------------
| "910d22fb-c2f5-4578-8d0f-87eeb6f30d1f" | ["bolt://localhost:7687", "http://localhost:7474"] | {neo4j: "FOLLOWER", system: "LEADER"}           | 
| "b5cf251c-5b15-4138-b5f9-1ea6b6b85add" | ["bolt://localhost:7687", "http://localhost:7474"] | {neo4j: "LEADER", system: "FOLLOWER"}           | 
| "c6e67e79-6490-41b6-a92a-a9ca0061518b" | ["bolt://localhost:7687", "http://localhost:7474"] | {neo4j: "READ_REPLICA", system: "READ_REPLICA"} | 
| "04a197b8-5671-43af-a8d7-53f1784a43e4" | ["bolt://localhost:7687", "http://localhost:7474"] | {neo4j: "FOLLOWER", system: "FOLLOWER"}         | 
| "6c5ce972-0ad1-46bc-a394-f16e29363824" | ["bolt://localhost:7687", "http://localhost:7474"] | {neo4j: "READ_REPLICA", system: "READ_REPLICA"} | 
| "6d885dae-e570-4de9-b3f8-d36d89f5b781" | ["bolt://localhost:7687", "http://localhost:7474"] | {neo4j: "READ_REPLICA", system: "READ_REPLICA"} | 
+-------------------------------------------------------------------------------------------------------------------------------------------------
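You can also watch the cluster from the Docker side. The service names below assume the stack was deployed as neo4j in step 2, so Swarm prefixes each service with the stack name:

$ docker service ps neo4j_core              # list the tasks (containers) backing the core service
$ docker service logs --follow neo4j_core   # stream their logs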
5. Managing a Neo4j Cluster
  • To scale your core service up to 5 replicas, execute the following command (the deployed service name is prefixed with the stack name):
    $ docker service scale neo4j_core=5
  • To scale your core service back down to 3 replicas, execute the following command:
    $ docker service scale neo4j_core=3
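The same command family covers rolling image updates. A sketch, where the patch tag is illustrative (check Docker Hub for current 4.4 tags):

$ docker service update --image neo4j:4.4.26-enterprise --update-parallelism 1 neo4j_core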
6. Removing your Neo4j Cluster
  • Execute the following command, referencing the stack you want to remove (we previously named our stack neo4j):
    $ docker stack rm neo4j
Conclusion – Neo4j Cluster with Docker Swarm

Hopefully, this overview and recipe have made the task of understanding how to deploy a Neo4j cluster using Docker Swarm much simpler. Graphable is always here to help as your organization looks to adopt this powerful deployment approach; contact our team for a free consult.


Graphable helps you make sense of your data by delivering expert data analytics consulting, data engineering, custom dev and applied data science services.

We are known for operating ethically, communicating well, and delivering on-time. With hundreds of successful projects across most industries, we have deep expertise in Financial Services, Life Sciences, Security/Intelligence, Transportation/Logistics, HighTech, and many others.

Thriving in the most challenging data integration and data science contexts, Graphable drives your analytics, data engineering, custom dev and applied data science success. Contact us to learn more about how we can help, or book a demo today.
