Some platforms are more delicate than others. Once, in college, I ended up with a Scala installation that used a different version of Java than my version of Eclipse. It was a lot like trying to maintain a particularly aggressive aquarium: how do I keep these feisty little guys from trying to murder each other? Headaches like this are one reason I try to keep my dev environment as clean as possible, with a stock installation of Ubuntu, a handful of programs, and precious few customizations. It makes it a little harder to foul up and a lot easier to nuke and pave with a fresh installation if needed. Recently, I've decided to embrace the logical conclusion of this approach by avoiding installing any extra command line tools at all. No node, no javac, no competing for my $PATH; just me, emacs, and docker.
Why not just install things on the host machine?
Developers often wax poetic about the state of flow that comes from a smooth development experience. It's as addictive as it is productive. If this sounds familiar to you, you probably also know the opposite is true: anything that takes you out of that state feels worse than disruptive. This is why tooling problems feel like such a nightmare: you have to drop what you're doing and decide whether to spend hours hunting down reinstallation instructions and configuration tips or just nuke the machine and pave it with a fresh operating system installation.
The thing is, I don't want to have to be super careful with my environment, and sometimes I can't afford to be. Have you ever needed to downgrade a tool and ended up with two half-installed versions? And even if you're really good at managing your own system (I'm not, tbh), you still have to consider the risk that a solution that works on your machine won't deploy smoothly in a container.
Sometimes these can be fun puzzles, but if you're a developer, this probably isn't what you want to spend your time doing.
Example case: because npm is being a jerk
I don't know why, but my node snap package installation is not at all playing nice with ReScript, so I've uninstalled it and will not install it again. However, I still want to be able to use node commands (node, npm, and npx). I can do this using Docker.
It's a good idea to go ahead and pull the image appropriate to the tool you want to replace because it may take a while to download.
docker pull node # latest because whatever
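If you'd rather pin the toolchain than ride latest, the same trick works with a specific tag; just use that same tag everywhere node:latest appears below. A quick sketch, assuming node:20 happens to be the version you want:
docker pull node:20 # or whatever tag you actually need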
Building out an appropriate run command
You probably know that you can run commands inside a container by appending the command and its arguments to the run command, like so:
docker run --rm node:latest sh -c 'echo "hello, world"'
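Or, since this is the node image, you can skip the shell entirely and hand a one-liner straight to node; a minimal sanity check that the containerized interpreter works:
docker run --rm node:latest node -e "console.log('hello, world')"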
But we need a bit more than that. Let's start by making sure we can properly interact with our run command by running it as an interactive TTY (-it).
docker run \
-it \
--rm \
node:latest
This will allow us to interact with our node commands, or interrupt them from the keyboard if needed.
Next, let's give our run command access to the network, so that npm can download things, for example:
docker run -it \
--network host \
--rm \
node:latest
To avoid permissions issues, let's run the container as our current user by passing in our user id and group id and giving it read-only access to the relevant user and group information from our system.
docker run \
-it \
--network host \
--rm \
--user $(id -u):$(id -g) \
-v "/etc/passwd:/etc/passwd:ro" \
-v "/etc/group:/etc/group:ro" \
node:latest
Our run command will also need access to our files and a working directory to start in. At a minimum, the command needs our current working directory so it can operate on the files there. For my purposes, I use some locally installed packages that may live anywhere in my home directory, so I'm going to mount my whole home directory into the container, but also tell it to start working in the current directory:
docker run \
-it \
--network host \
--rm \
--user $(id -u):$(id -g) \
-v "/home:/home" \
-v "/etc/passwd:/etc/passwd:ro" \
-v "/etc/group:/etc/group:ro" \
-w "$PWD" node:latest
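Before wrapping this up, it's worth sanity-checking the whole thing once by appending a real command, for example asking the containerized node for its version:
docker run \
-it \
--network host \
--rm \
--user $(id -u):$(id -g) \
-v "/home:/home" \
-v "/etc/passwd:/etc/passwd:ro" \
-v "/etc/group:/etc/group:ro" \
-w "$PWD" node:latest node --version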
Okay, that was a lot of typing. Let's make sure we never have to do that again.
Replacing commands using exported shell functions
If you don't already have a .bash_functions file, you're going to want to create one and reference it in your .bashrc. (You could also just put everything in the .bashrc, but it's better to have things separated out.)
Open your .bashrc file and insert the following lines:
if [ -f ~/.bash_functions ];
then . ~/.bash_functions
fi
A standard Ubuntu 20.04 installation includes a similar if statement for .bash_aliases, so I put mine right below that, but it doesn't really matter much where it goes.
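For reference, the stock snippet it sits next to looks roughly like this (copied from a default Ubuntu ~/.bashrc, so yours may differ slightly):
if [ -f ~/.bash_aliases ]; then
. ~/.bash_aliases
fi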
Now open (or create) your ~/.bash_functions file, and create a function we can reuse for each of our node-based commands. I'm going to call mine container-node.
~/.bash_functions
function container-node
{
docker run \
-it \
--network host \
--rm \
--user $(id -u):$(id -g) \
-v "/home:/home" \
-v "/etc/passwd:/etc/passwd:ro" \
-v "/etc/group:/etc/group:ro" \
-w "$PWD" node:latest
}
Now we need container-node to collect the command and arguments we give it. "$@" represents that arbitrarily long list, so let's append it to the end.
function container-node
{
docker run \
-it \
--network host \
--rm \
--user $(id -u):$(id -g) \
-v "/home:/home" \
-v "/etc/passwd:/etc/passwd:ro" \
-v "/etc/group:/etc/group:ro" \
-w "$PWD" node:latest "$@"
}
So, now, for example, if we call this function like
container-node npm run build
the npm run build command will run inside the Docker container we've defined.
Now it should be easy to see how we can build on this to simulate a real node installation, by defining three more functions and again using "$@" to pass arguments:
function container-node
{
docker run \
-it \
--network host \
--rm \
--user $(id -u):$(id -g) \
-v "/home:/home" \
-v "/etc/passwd:/etc/passwd:ro" \
-v "/etc/group:/etc/group:ro" \
-w "$PWD" node:latest "$@"
}
function node
{
container-node node "$@"
}
function npm
{
container-node npm "$@"
}
function npx
{
container-node npx "$@"
}
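The same pattern extends to anything else that happens to ship in the image. For example, the official node image has typically bundled yarn, so wrapping it is one more three-line function; treat this as a sketch and check that your image actually includes the tool first:
function yarn
{
container-node yarn "$@"
}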
Now if we source this file, we'll have our commands available in the current bash window.
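Concretely, that looks like this; the version numbers you get back will be whatever the node:latest image currently ships:
source ~/.bash_functions
node --version
npm --version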
Exporting to scripts
We have one small remaining problem: other shells spun off of our main shell, such as the ones that run our scripts, won't be able to see our functions. If we want to truly be able to run our node commands like we would with a real node installation, we should export our functions.
~/.bash_functions
function container-node
{
docker run \
-it \
--network host \
--rm \
--user $(id -u):$(id -g) \
-v "/home:/home" \
-v "/etc/passwd:/etc/passwd:ro" \
-v "/etc/group:/etc/group:ro" \
-w "$PWD" node:latest "$@"
}
function node
{
container-node node "$@"
}
function npm
{
container-node npm "$@"
}
function npx
{
container-node npx "$@"
}
export -f container-node
export -f node
export -f npm
export -f npx
Our changes will take effect as soon as we source our ~/.bashrc, which references our ~/.bash_functions file.
source ~/.bashrc
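As a quick check that the export worked, you can call the commands from a throwaway script running in a child shell. (check-node.sh is just a hypothetical name here, and since exported functions are a bash feature, the script has to run under bash.)
cat > /tmp/check-node.sh <<'EOF'
#!/usr/bin/env bash
# These resolve to our exported functions, so they run in the container.
node --version
npm --version
EOF
bash /tmp/check-node.sh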
Celebrating the end of environment struggles
I realize this sort of setup is a bit extreme, and spinning up containers all the time does carry a small amount of overhead, but I do think I'm going to stick with this method for a while and see how far I can take it.
Certainly, I will follow up sometime with an explainer on doing Drupal development on Windows using a container-based setup (because trying to get XAMPP to work is literally the hardest thing I've ever done in my career and I don't ever want to think about it again).
I also intend to add this script to version control so that I can grab-and-go whenever I do need to set up a new environment, although, for the moment, this article itself will suffice.