If you've been struggling with this issue and just want an answer, skip to the bottom for the TL;DR. I won't fault you for it.
Docker is a great tool for deploying web services, but one of my favorite uses for it is standardizing toolchains. Instead of everyone on the team needing to set up their development environments identically, and then keep them in sync, you define the build tools in a single place: a Docker image. Then everyone, including your build server, uses that single pre-packaged image.
Not only is this great for teams, but it's also fantastic when you have side projects that you only periodically revisit. Without fail, when I come back to some old project, I've since updated my tools for something newer, and I can't build that old project without either upgrading the project or degrading my tools. Leaving a build tools image behind means I can just pick it up and work on it without spending a day getting back up and running.
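To make that concrete, using such an image looks roughly like this — the image name avr-toolchain and the compile command are placeholders standing in for whatever your project actually needs:
# Mount the project into the container and run the containerized compiler
docker run --rm -it -v "$(pwd)":/src -w /src avr-toolchain \
    avr-gcc -mmcu=atmega328p -Os -o blink.elf blink.c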
It's not all sunshine and roses though. I went on quite an adventure today. Last year I put on a TDD for Arduino workshop. I had started to create a Docker image for AVR development, but ran into problems when it came time to flash the program to the board. Exposing a USB port to a Docker container on a Mac isn't exactly a trivial task (until you know how, at least!). For that session we mobbed, so I only had to set up one machine. I just stopped fighting with it and went with a regular install of the tools on my machine.
Recently though, I've taken a renewed interest in getting this to work properly. First, I've been playing with ARM development, but there's a bug in AVR and ARM's compiler packaging that means you can't have both toolchains installed at the same time. Having these toolchains containerized means I can easily keep both readily available. Secondly, I'm now beginning to build on that workshop to turn it into an "Intro to Bare Metal Programming" course. For that, I really need to be able to hand folks an environment I know works, so we're not spending more time working kinks out of dev setups than learning. Also, in order to standardize embedded toolchains for a team or client at work, I really need to know how to get USB working on Mac.
If you're running Linux, this is as simple as adding --device /dev/ttyUSBx to the docker run command.
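For example, assuming your serial adapter shows up as /dev/ttyUSB0 and using a placeholder image name, it looks something like this:
# Pass the host's serial device straight through to the container (Linux only)
docker run --rm -it --device /dev/ttyUSB0 avr-toolchain bash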
Needless to say, it's not that simple on OSX or Windows. That's because the docker daemon only runs natively on Linux. On other operating systems it runs inside a hypervisor or virtual machine. In order to expose the port to the container, you first have to expose it to the virtual machine where Docker is running. Unfortunately, hyperkit, the hypervisor that Docker for Mac uses, doesn't support USB forwarding.
Since we can't expose a USB port to the native Mac Docker hypervisor, we have to fall back on docker-machine, which uses a VirtualBox VM to host the dockerd daemon. There are great instructions for setting up docker-machine with a USB filter, but I was getting a lot of mysterious segfaults from docker-machine that would leave my VM running but unable to recover a connection to it through docker-machine. It turns out that several versions of VirtualBox had a bug causing the segfaults. Upgrading to v6.0.6 solved that problem, but I still couldn't see the device. It took me too long to remember that I'd had the same trouble with USB 3.0 a few months ago with a Windows guest OS. Dropping down to USB 2.0 fixed that issue.
Okay, let's get down to business and get a fully containerized embedded toolchain running on Mac.
First, download and install VirtualBox version 6.0.6 or greater. Again, this must be 6.0.6 or greater, or you'll see segfaults when trying to create a USB filter later. You can optionally install the VirtualBox Extension Pack. I recommend it, though, because it enables USB 2.0, which results in faster programming times.
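If you use Homebrew, something like the following should take care of both (the cask names here are assumptions — check what Homebrew actually offers — and the version check just confirms you're past the buggy releases):
# Install VirtualBox and the extension pack via Homebrew Cask
brew cask install virtualbox virtualbox-extension-pack
# Confirm you ended up with 6.0.6 or newer
vboxmanage --version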
Next, we can create and set up our docker-machine (virtual machine).
#! /bin/bash
# Create and start the machine
docker-machine create -d virtualbox default
# We must stop the machine in order to modify some settings
docker-machine stop
# Enable USB
vboxmanage modifyvm default --usb on
# OR, if you installed the extension pack, use USB 2.0
vboxmanage modifyvm default --usbehci on
# Go ahead and start the VM back up
docker-machine start
# Official Arduinos and many clones use an FTDI chip.
# If you're using a clone that doesn't,
# or are setting this up for some other purpose
# run this to find the vendor & product id for your device.
# vboxmanage list usbhost
# Set up a USB filter so your device automatically gets connected to the VirtualBox VM.
vboxmanage usbfilter add 0 --target default --name ftdi --vendorid 0x0403 --productid 0x6015
# Set up your terminal to use your new docker-machine
# (You must do this every time you want to use this docker-machine, or add it to your bash profile)
eval $(docker-machine env default)
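If you don't want to run that last eval by hand every time, one option is to append it to your shell profile (assuming bash):
# Add to ~/.bash_profile so new shells automatically target the docker-machine VM
echo 'eval $(docker-machine env default)' >> ~/.bash_profile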
Now go ahead and plug in your device and run this command to verify that containers can see it.
docker run --rm -it --device /dev/ttyUSB0 ubuntu:18.04 bash -c "ls /dev/ttyUSB0"
If the command fails, make sure your device is plugged in and visible to the VM. You may have mistyped the vendor and product IDs, or the tty may be attached under a different number. You can check what the VM sees with:
docker-machine ssh default "ls /dev/tty*"
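If nothing shows up there, it's worth confirming that VirtualBox can see the device on the host at all and that the filter actually got created:
# List the USB devices VirtualBox can see on the host (verify the vendor/product IDs here)
vboxmanage list usbhost
# Dump the VM's settings and look for the USB and USB Device Filters sections
vboxmanage showvminfo default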
That's it. Like I said, it's really easy once you know how. Unfortunately, there's no official documentation and, considering that both docker-machine and boot2docker are in maintenance mode, I'm hoping we get official support for USB on hyperkit in the future. Containerizing build tools is a great way for teams to take advantage of the technology even if you're not using it to deploy services. Now, if you'll excuse me, I need to go update my AVR toolchain image with a script to do this and add in avrdude for uploading programs.
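For the curious, that upload step from inside the container ends up looking roughly like this (the image name, MCU, programmer type, baud rate, and hex file are all placeholders to adjust for your board):
# Flash a compiled hex file through the forwarded serial port
docker run --rm -it --device /dev/ttyUSB0 avr-toolchain \
    avrdude -c arduino -p m328p -P /dev/ttyUSB0 -b 57600 -U flash:w:blink.hex:i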
Until next time,
Semper Cogitet