Initializing VMs for the Syntropy Stack


  • If you'd like to have a look at our Medium article covering this, feel free to check it out here. Special thanks to our contributor Craig for such a helpful walkthrough.

For this tutorial, we will be going through how to spin up Virtual Machines that are configured to work with the Syntropy Stack. You will need the following prerequisites for the build:

  • A Syntropy Stack account, as well as an active Agent Token
  • Three separate Virtual Machines (VMs), each running on a different cloud provider
  • No ports on the VMs should be open and exposed to the internet
  • Wireguard and Docker need to be installed on each VM

Infrastructure Breakdown

Each VM represents an endpoint. An endpoint represents a node within your Syntropy Network. We will connect endpoints together using the Syntropy UI to create a network.

Our MQTT network that we’re building will have 3 nodes (endpoints). Each node will have its own Syntropy Agent running inside a Docker Container.

The Mosquitto Broker and NodeJS Pub/Sub apps are all running inside their own docker containers alongside their respective Syntropy Agents (see the diagram below). We’ll refer to the Broker, Publisher and Subscriber containers as the “services”.

Each node has its own Docker network and therefore lives on its own subnet — we connect these disparate Docker networks (or subnets) together using Wireguard VPN tunnels to form a single Syntropy network that spans your three separate cloud providers.

What is MQTT?

Unlike HTTP, which is based on a request-response model, MQTT is a messaging protocol that uses a publisher-subscriber (often shortened to Pub/Sub) model. In a Pub/Sub model, a central message Broker is required and clients aren’t assigned addresses.

Clients can be Publishers (sending messages), Subscribers (receiving messages), or both. A subscriber “subscribes” to one or more topics, and will receive any message published to those topics. In our example, we’ll be setting up a separate Broker, Publisher and Subscriber, each running on a different VM.

For our Broker service, we’ll be utilizing Eclipse Mosquitto, which comes with a prebuilt docker image, making it perfect for integrating with the Syntropy Stack.

Setting up our Virtual Machines

Each VM should be running on a separate cloud provider’s infrastructure. This isn’t an absolute requirement, but we do this to help illustrate the flexibility, interoperability, and ease of use when working with the Syntropy Stack across the expanse of the internet.

The most important thing is that your VMs don’t share a public IP address. So, if you don’t want to use three separate cloud providers, try placing your VMs in different geographic regions. For the purposes of this tutorial, we chose to set up VMs as follows:

Broker: Digital Ocean, Type: Basic, 1vCPU, 1GB memory, OS: Ubuntu 20.04
Publisher: Google Cloud Platform, Type: e2-micro, 2vCPU, 1GB memory, OS: Ubuntu 20.04
Subscriber: AWS, Type: t2.micro, 1vCPU, 1GB memory, OS: Ubuntu 18.04

Using Docker with our Virtual Machines

We’ll be utilizing docker-compose to bring our services online. Before we copy our files across to our VMs, we need to add our Agent Tokens to the docker-compose files for each of the Broker, Subscriber and Publisher.
You’ll need to replace <YOUR_API_KEY> with your own Agent Token in the docker-compose.yaml file for each node (found in the broker/, publisher/ and subscriber/ directories in your local copy of the repo). You can change the provider to match each server’s cloud provider; a reference list of providers can be found here.

  services:
    syntropynet-agent:
      image: syntropynet/agent:stable
      hostname: syntropynet-agent
      container_name: syntropynet-agent
      environment:
        - SYNTROPY_API_KEY=<YOUR_API_KEY> # <==== Your token goes here
        - SYNTROPY_PROVIDER=<PROVIDER_VALUE> # <==== change this

Copy each of the service directories (broker | publisher | subscriber) to a different VM using scp or your favorite sftp client.

Example copying the broker folder to the Broker VM:

scp -r /path/to/broker <user_name>@<broker_remote_ip>:/broker

Do this for each of your three nodes, copying a separate service folder to each.

You can split your terminal so you have visibility into all three VMs at the same time.

Now that we have all the files in the right place, we’re ready to provision the VMs!

Provisioning your Virtual Machines

Perform each of these steps on each of your VMs.

  1. SSH into each VM
  2. Ensure Docker is installed

Check if Docker is installed using: docker -v

If it’s not, you should see output like this:

root@ubuntu-s-1vcpu-1gb-nyc3-01:~# docker -v
Command 'docker' not found, but can be installed with:
snap install docker     # version 19.03.11, or
apt  install docker.io  # version 19.03.8-0ubuntu1.20.04.1
See 'snap info docker' for additional versions.

A snap is a bundle of an app and its dependencies that works without modification across Linux distributions. Install Docker using the command provided: sudo snap install docker

  3. Install Wireguard
    Update the apt package index and upgrade your distribution's existing packages (this could take a minute or two).
sudo apt -y update && sudo apt -y upgrade

Next, install Wireguard and Wireguard tools.

sudo apt install -y wireguard wireguard-tools

Check that Wireguard is correctly configured using: dpkg -s wireguard

Finally, load the Wireguard kernel module.

sudo modprobe wireguard  
echo wireguard | sudo tee -a /etc/modules-load.d/wireguard.conf

That’s it, the Syntropy Agent will take care of creating and configuring the tunnels when the time comes.

Starting VM Services

Start your services in the following order:

  1. Broker
  2. Subscriber
  3. Publisher

While SSH’d into each VM, navigate to the service’s respective folder (either broker/, publisher/, or subscriber/).

Start the containers using docker-compose: sudo docker-compose up -d

Once the images have been pulled and the containers have been started, check that the containers are running with: sudo docker ps

You should see two containers listed on each VM: the syntropynet-agent and your service container.

You’ll want to view the logs for each container, where <container_name> is either mosquitto, nodejs-subscriber, or nodejs-publisher: sudo docker logs --follow <container_name>

The --follow flag keeps the log output open for the container so you can watch it update. The outputs for each container should look like the following:

// Mosquitto
1610058312: mosquitto version 1.6.12 starting
// Subscriber
Initializing Subscriber
// Publisher
Initializing Publisher

That’s it for the command line: your three services are up and running. All that’s left is to create your network and connect the endpoints using the Syntropy UI. Before we get there, though, let’s take a quick look under the hood of one of our NodeJS apps, since we haven’t had to write any code ourselves.

Here’s the docker-compose.yaml file for the Publisher:

version: "2"
services:
  syntropynet-agent:
    image: syntropynet/agent:stable
    hostname: syntropynet-agent
    container_name: syntropynet-agent
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    environment:
      - SYNTROPY_API_KEY=<YOUR_API_KEY> # <==== Your token goes here
      - SYNTROPY_AGENT_NAME="mqt_1_publisher"
      - SYNTROPY_PROVIDER=<PROVIDER_VALUE> # <==== change this
      - SYNTROPY_TAGS="mqtt"
      - SYNTROPY_SERVICES_STATUS=false # default is false
    restart: always
    network_mode: "host"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - "/dev/net/tun:/dev/net/tun"

  nodejs-publisher:
    container_name: nodejs-publisher
    build: .
    depends_on:
      - syntropynet-agent

networks:
  default: # subnet will be
    driver: bridge
    ipam:
      config:
        - subnet:
We can see that we have two services: syntropynet-agent and nodejs-publisher. We won’t go into all the details of what’s going on, but we can see that we’re passing some important information to the syntropynet-agent via the environment variables, such as our Agent Token, which authorizes our agent to connect to the Syntropy Network. The image property tells Docker to pull down the prebuilt syntropynet/agent:stable image and use that to run the Syntropy Agent container.

The nodejs-publisher doesn’t have an image defined, so instead, we use the build: . property to indicate there is a Dockerfile in the same folder and we should build the image using that. It’s important to note that we’re passing the syntropynet-agent to the depends_on property. By doing this, we’re telling the publisher to only start once the agent is up and running because we need it in order to connect to the network.

Finally, we’re setting the default Docker network’s subnet using the network’s ipam (IP Address Management) property.

Publisher's Dockerfile:

FROM node:12

# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./

RUN npm install
# If you are building your code for production
# RUN npm ci --only=production

# Bundle app source
COPY . .

CMD [ "node", "publisher.js" ]

The Dockerfile is pretty minimal. It basically says:

  • Pull down the node:12 Docker image
  • Set the working directory to /usr/src/app
  • Copy the package.json file into the container
  • Install the npm modules
  • Copy the app source (i.e. publisher.js) into the container. Our .dockerignore file makes sure any local node_modules folder isn’t copied in to overwrite our install
  • Run the command node publisher.js
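The repo includes its own .dockerignore; if you're recreating the project from scratch, a minimal version that keeps a local node_modules folder (and npm debug logs) out of the image might look like this (hypothetical contents, so check the repo's actual file):

```
node_modules
npm-debug.log
```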

And last, but not least, our Publisher NodeJS app:

const mqtt = require("mqtt");
const moment = require("moment");
const client = mqtt.connect("mqtt://");

const CronJob = require("cron").CronJob;

// publish a message every minute, at the 5-second mark
const job = new CronJob(
  "5 * * * * *",
  function () {
    const time = moment().format("MMMM Do YYYY, h:mm:ss a");
    console.log(`[sending] ${time}`);
    client.publish("hello_syntropy", `Powered by SyntropyStack: ${time}`);
  },
  null,
  true // start the job immediately
);

console.log("Initializing Publisher");

client.on("connect", function () {
  console.log("Established connection with Broker");
  client.publish("init", `Powered by SyntropyStack`);
});

client.on("message", function (topic, message) {
  // message is Buffer, so convert to string
  console.log(message.toString());
});
We create an MQTT client and have it connect to our Broker service running on the subnet. It’s important to note that we won’t see the publisher connect until we’ve connected the endpoints in the Syntropy UI.

When the client connects to the Broker, we publish a message to the init topic that says “Powered by SyntropyStack”. Our CronJob then publishes a message to the hello_syntropy topic at the predefined interval, along with a human-readable timestamp.
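Note that in the cron package's six-field syntax, "5 * * * * *" means "at second 5 of every minute", which is why the logs further down show one message per minute. If you want to experiment without the cron and moment dependencies, a similar timestamp and schedule can be approximated with the standard library alone. A sketch, where humanTimestamp is a hypothetical stand-in for moment's "MMMM Do YYYY, h:mm:ss a" format:

```javascript
// Format a Date similarly to moment's "MMMM Do YYYY, h:mm:ss a",
// using only built-in locale-aware formatting.
function humanTimestamp(date) {
  const datePart = date.toLocaleDateString("en-US", {
    month: "long",
    day: "numeric",
    year: "numeric",
  });
  const timePart = date.toLocaleTimeString("en-US", {
    hour: "numeric",
    minute: "2-digit",
    second: "2-digit",
    hour12: true,
  });
  return `${datePart}, ${timePart}`;
}

console.log(humanTimestamp(new Date(2021, 0, 7, 22, 53, 5)));
// e.g. "January 7, 2021, 10:53:05 PM" (no ordinal "7th", unlike moment)

// A setInterval loop could then stand in for the CronJob, e.g.:
// setInterval(() => {
//   client.publish("hello_syntropy",
//     `Powered by SyntropyStack: ${humanTimestamp(new Date())}`);
// }, 60 * 1000);
```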

Something that catches first-time users out is exposing the MQTT ports via docker-compose and the Dockerfile.

It turns out that's not necessary: using a user-defined bridge driver in the default network is sufficient, since the containers can reach each other over this user-defined bridge without any exposed ports. This Stack Overflow post is helpful for understanding why this is the case.
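As a single-host illustration of that point, a compose file like the following (hypothetical service names) lets an app container reach a broker on port 1883 over the default user-defined bridge, with no ports: or expose: entries publishing anything to the internet:

```
version: "2"
services:
  mosquitto:
    image: eclipse-mosquitto
    # no "ports:" section: 1883 is reachable from other containers
    # on this compose network, but not from outside the host
  app:
    build: .
    depends_on:
      - mosquitto
    # can connect to mqtt://mosquitto:1883 by service name
```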

Login to Syntropy UI

You can log in here if you haven’t already.

Navigate to the End-points section, and you should see your services online:

Once you’ve confirmed your endpoints are online, it’s time to create your network. Navigate to the Networks section of the site, click Create a new Network and give your network a name:

While still in the Networks section, you’ll want to click Add End-points. Select your three endpoints and click Add selected to add them to your network. You’ll see your three endpoints appear as nodes on the graph. To create the connections, select the mqtt_1_broker node and select the other two nodes in the modal that appears. There’s no need for the Publisher to connect to the Subscriber, as they communicate via the Broker.

Finally, you need to make sure that the services running on each node are connected. In the Connections section, under the node graph, select each service and click Apply Changes. All the green squares next to the service IPs should be solid green.

After creating your connections in the Syntropy UI, return to your terminal and you’ll see that the Subscriber is receiving a timestamped message being sent by the Publisher!

Publisher log output:

Initializing Publisher
Established connection with Broker
[sending] January 7th 2021, 10:53:05 pm
[sending] January 7th 2021, 10:54:05 pm
[sending] January 7th 2021, 10:55:05 pm

Subscriber log output:

Initializing Subscriber
Established connection with Broker
[subscribed] topic: hello_syntropy
[subscribed] topic: init
[received][hello_syntropy] Powered by SyntropyStack: January 7th 2021, 10:53:05 pm
[received][hello_syntropy] Powered by SyntropyStack: January 7th 2021, 10:54:05 pm
[received][hello_syntropy] Powered by SyntropyStack: January 7th 2021, 10:55:05 pm

Congratulations, you’ve created your very own secure, optimized network between cloud providers!


The Internet of Things and MQTT are certainly not new technologies; however, when combined with the Syntropy Stack they become more efficient, more reliable, and more secure. Think about a service that relies on real-time notifications, where every millisecond counts and your edge servers are spread out across the globe. And what if you had an IoT-enabled glucose monitor sending data to your medical provider? Wouldn't you want that data to be as secure as it possibly could be?

There are so many ways this technology can be used to help build a better internet. The best part is that it fits right into existing infrastructure and can be adopted by anyone, anywhere. You could improve the quality of VoIP, reduce latency across a limitless range of applications, reduce lag for gaming, and make the internet secure by default and privacy ubiquitous!

What would you build?

Helpful Docker Commands

Here are a few helpful Docker commands if you find yourself playing around. Remember to run them with sudo:

  • list the running processes: docker ps
  • check the networks: docker network ls
  • inspect a specific network: docker network inspect <network_name>
  • view a container’s logs: docker logs --follow <container_name>
  • shell into a running container: docker exec -it <container_name> /bin/bash
  • stop the running containers (if started with docker-compose): docker-compose stop
  • remove all dangling containers and networks: docker system prune
  • delete all downloaded images: docker rmi $(sudo docker images -a -q)
  • If you want to inspect the connection created through Wireguard: sudo wg show

and the output will look something like this:

craigpick@mqt-1-publisher:~/publisher$ sudo wg show
interface: 0000000213p0pNO
  public key: AVdguV3f2YJYv81cx+9dZa08iSFO1pkCGk5jdV6cyXE=
  private key: (hidden)
  listening port: 47939
peer: KzhHKS5ozGTtgzx8oXEoqWaeDBs/rTg+ICI4ge8kj0c=
  allowed ips:,
  latest handshake: 1 minute, 52 seconds ago
  transfer: 516.98 KiB received, 517.79 KiB sent
  persistent keepalive: every 15 seconds