Better isolation
A few thoughts on standing up web services with better isolation (i.e., with less chance that I’ll shoot myself in the foot).
This post started out as a digression in my Mastodon post.
What happened was, I wanted to spin up a test version of my weblog. It needed to be on the public internet because it had to be HTTPS, because reasons.
I run a few different applications on a single VPS (hosted by the fine folks at Positive Internet). Nginx does SSL termination and serves up static files or forwards requests to Node.js servers where I run some web infrastructure and SaxonJS. Some of those applications talk to PostgreSQL databases. All pretty boring stuff.
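For the curious, that arrangement boils down to nginx server blocks roughly like this one; the hostname, certificate paths, and port here are placeholders, not my actual configuration:

```nginx
# Roughly the shape of each server block; names, paths, and ports are placeholders.
server {
    listen 443 ssl;
    server_name something.example.org;

    ssl_certificate     /etc/letsencrypt/live/something.example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/something.example.org/privkey.pem;

    # Static files are served directly from disk…
    root /var/www/something;

    # …anything dynamic gets forwarded to a Node.js server listening on a local port.
    location /app/ {
        proxy_pass http://127.0.0.1:3000/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```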
In principle, spinning up a test version is easy:
- Create a CNAME record for sotest.nwalsh.com, create a new nginx configuration for it, and let CertBot work its magic to get me a new SSL certificate (roughly the steps sketched below).
- Deploy the testing code to a new location, edit the database and server details, fire it up, and initialize everything.
- Optionally, back up the current “So” database and restore it into the testing database.
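Spelled out as commands, and with the caveat that the paths and database names here are stand-ins rather than the real ones, those steps look something like this:

```sh
# Placeholder paths and database names throughout.

# 1. Once the sotest CNAME exists and nginx has a server block for it,
#    CertBot can fetch a certificate and wire it into the nginx config.
sudo certbot --nginx -d sotest.nwalsh.com

# 2. Deploy the testing code to its own location, then edit the database
#    and server details in its configuration before starting it.
rsync -a ~/weblog/ /srv/sotest/

# 3. Optionally, copy the current "So" database into the testing database.
pg_dump -Fc so > so.dump
createdb sotest
pg_restore -d sotest so.dump
```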
And here’s where the voice of bitter experience interjects: what if you fuck up? What if you accidentally leave some database connection or Node.js configuration, or some other thing you haven’t even thought of, pointing at the “production” database (running, as it is, in the same PostgreSQL server)? What if you fire up the testing server and manage to completely mangle the real server?
I really want better isolation. In fact, I really want better isolation anyway because it makes upgrading so much simpler. A while back, I upgraded the version of Node.js I’m using (gotta stay on top of those security patches). I tested everything locally, so I was confident it would work, but the actual upgrade was a half hour of downtime with me typing madly into a shell window.
That worked fine. I had an even more stressful time of it one evening last week when I installed a new Node.js package (to improve some of the browser security headers), and everything fell over. (I ended up uninstalling and reinstalling Node.js entirely.)
What I do locally is run a few little Docker containers. And I think that’s what I want to do on the server as well. It took a bit of effort to sort out the firewall configuration that my ISP provided, but I got there in the end. (Initially, localhost couldn’t talk to any of the networks created by Docker Compose; you have to set up some wildcard interface names.)
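The wrinkle is that the bridge interfaces Docker Compose creates have unpredictable names like br-1a2b3c4d, so the rules have to match them with a wildcard. In plain iptables terms the idea is something like this, though however your host’s firewall is actually expressed, the details will differ:

```sh
# Docker Compose names its bridge interfaces br-<random id>; iptables accepts
# a trailing "+" as a wildcard, so one rule can cover all of them.
iptables -I INPUT  -i br-+ -j ACCEPT    # traffic arriving from the Compose networks
iptables -I OUTPUT -o br-+ -j ACCEPT    # traffic from the host out to them
```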
I’m testing out a new approach where the Node.js and PostgreSQL servers for each service run in containers. The system version of nginx still does SSL termination and can still serve up static files. If it needs to talk to Node.js, it forwards to the port exposed on the appropriate Node.js container. Node.js in that container talks to an entirely isolated version of PostgreSQL in the other container.
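As a sketch of what one of those pairs looks like in a docker-compose.yml; the image versions, port, and credentials are placeholders (and the password would really come from an env file or secret):

```yaml
# One service's pair of containers; names, versions, and credentials are illustrative.
services:
  app:
    image: node:20
    working_dir: /srv/app
    volumes:
      - ./app:/srv/app
    command: ["node", "server.js"]
    environment:
      DATABASE_URL: postgres://app:changeme@db:5432/app
    ports:
      - "127.0.0.1:3000:3000"    # only nginx on the host needs to reach this
    depends_on:
      - db

  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: changeme
      POSTGRES_DB: app
    volumes:
      - dbdata:/var/lib/postgresql/data    # keep the data out of the container image

volumes:
  dbdata:
```

Each service gets its own Compose project and therefore its own private network, so the Node.js in one project can only see its own PostgreSQL; it has no route to anybody else’s database.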
Is this a good idea?
As a test, I’ve set up drinks.nwalsh.com that way.
I also set up the sotest server that way. Shame it was a complete fail.