I thought I would write a quick one-month update to my Nomad setup post.

So far, everything is working as expected. Here is a list of things that have changed over time.

Reverse Proxy

The setup has always relied on a reverse proxy in front of Nomad to manage incoming traffic and route it to the right back-end service.

I switched from Caddy to Traefik as the reverse proxy in front of the server. Traefik integrates with Consul, so I can add routes using tags directly in the Nomad job file instead of editing Caddy’s configuration file.

Edit: I actually went back to Caddy because Traefik had too much overhead and kept crashing every so often. Caddy works with fewer resources.
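For reference, while I was on Traefik, the tag-based routing looked roughly like this. A minimal sketch of a service stanza in a Nomad job file; the service name and hostname are made up:

```hcl
service {
  name = "blog"
  port = "http"

  # Traefik's Consul Catalog provider picks these tags up automatically,
  # so no separate proxy configuration file is needed.
  tags = [
    "traefik.enable=true",
    "traefik.http.routers.blog.rule=Host(`blog.example.com`)",
  ]
}
```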

A Mini Cluster

One of the biggest strengths of Nomad is the orchestration of multiple clients. I took advantage of Oracle Cloud’s offer of two free compute instances to try out a multi-client cluster.

So, I added a second client to the Nomad cluster. Pushing jobs across the two clients is seamless enough. I still keep some services on specific clients because I haven’t figured out how to set up SSH routing; once that is done, all services should run on any client.
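Keeping a service on a specific client can be done with a constraint stanza in the job file. A sketch, with a hypothetical job and node name:

```hcl
job "myservice" {
  datacenters = ["dc1"]

  # Keep this job on one specific client until SSH routing
  # works everywhere.
  constraint {
    attribute = "${node.unique.name}"
    operator  = "="
    value     = "client-1"   # hypothetical client name
  }

  # ... group and task definitions ...
}
```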

All data is shared between the two clients via an NFS drive, and I haven’t noticed any issues so far with this setup. File access has been slow on one service, but I don’t know whether that is the NFS drive or something specific to that service. It is barely noticeable anyway, and the client-side app keeps a good cache, so it only comes up every so often.
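One way to wire a shared NFS mount into jobs is Nomad’s host volumes. A sketch of the client configuration, assuming the NFS export is mounted at /mnt/nfs on each client:

```hcl
# Nomad client config: expose the NFS mount as a host volume
# that jobs can request with a volume block in their group.
client {
  enabled = true

  host_volume "shared" {
    path      = "/mnt/nfs"   # assumed NFS mount point
    read_only = false
  }
}
```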

IPFS Pinning

One last thing: I built a Docker image to push this blog to the IPFS network, so the whole process is automated through Drone CI. It’s based on ipfs-deploy and uses a similar shell script to process and publish the public folder as the one I described here.
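The pipeline is roughly shaped like this. The image names and the pin step are illustrative assumptions, not the exact setup:

```yaml
kind: pipeline
type: docker
name: publish

steps:
  - name: build
    image: klakegg/hugo       # hypothetical static-site builder image
    commands:
      - hugo --minify         # writes the site to ./public

  - name: pin
    image: ipfs-deploy-image  # hypothetical image wrapping ipfs-deploy
    commands:
      - ipd -p pinata public  # ipfs-deploy's CLI; pinning service assumed
```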