Updated April 10, 2022, with current Alpine instructions, Debian/Ubuntu package signing tweaks (no more apt-key), and better guidance for handling...
I'll never understand why developers who write code to run in linux fight with windows. Just run linux native. This isn't the 90's anymore, it is really super easy to run linux on your local dev machine and every program you would want for dev that is worth running already runs on linux.
Usually because I need
But yes, I used WSL2 enough that I moved to a second PC with native Linux.
I am familiar with those circumstances.
For me, using WSL isn't a choice against Linux, but a choice to use Linux everywhere. A Linux dev machine is quite desirable. Even with that, I will still run WSL on any Windows machine I can. Because I do a lot from the command line, and I often want that command line to be Linux, no matter the location or network connectivity.
Corporate.
We tried. But I have other things to do than spend my time trying to argue with people that we should be allowed to get Linux machines on our corporate network.
The only option that we had was to run a corporate-managed VM on Azure, with their own "linux", which is a special build from Oracle that I had never heard of before they mentioned it, and where no open source tools seem to offer any kind of support.
WSL is the only option that I have. Hyper-V is not stable enough for running Linux, and VirtualBox is blocked by corporate rules.
Windows can do a lot of things Linux can't, and it has a lot of cutting-edge hardware support. Sometimes you need this, simple as that.
However, I agree that developing Linux apps with Docker on Windows can be a pain; I'd recommend just installing Linux on a dedicated machine for that purpose if you can.
Here are the problems I had on Ubuntu (note that I really wanted to work on linux since our servers run on linux) :
^^ This.
Stop running Windows unless you really have to.
I work on client/server software. The client is Windows; the server is not.
I will readily admit to being a Linux newbie despite having installed Slackware with Linux 0.99pl15 for the first time from a stack of floppies in early 1994. I am still running Linux on servers to this day. I ran Linux dual boot from 2000-2004 and then as a daily driver from 2004-2017. It was a miserable experience. The choices are running Ubuntu, where upgrading every six months shatters your OS so badly you can't work for days, or Arch, where upgrades often break one of your printer/scanner/Bluetooth. Connecting to any sort of enterprise-y VPN or WiFi just doesn't work. I don't care whether it's the fault of F5 or the community for not working -- if I can't VPN in, I can't work.
So the reason I use Windows is because that's where the driver support is. Plain and simple. And I use WSL2 because Linux excels at CLI and daemons.
Todd, many people are provisioned corporate laptops. In many regions also you cannot get a linux or Mac from the company, only Windows. WSL2 is, generally, better than the old cygwin many of us used decades ago. So no, it's not an automatic that developers get to use the best OS ever made.
I have tried with multiple laptops (and multiple distros) and even with so many customisations, laptops keep heating up on idle.
Not so ideal for development with that heat on my hands. I do wish it'd change some day. Been waiting for years now.
That sounds odd. I've been running WSL on potato laptops and now a high-end one with no heat issues at all.
Fight? There's no fight between Windows and Linux since wsl2. It's a peaceful symbiosis.
somewhat peaceful, but following DOCKER0 from WSL2 out to the host network has issues in some configurations with Docker Desktop where installing docker directly into WSL works.
I have been pulling my hair out for 8 days trying to run Docker on WSL2 without too many problems arising from nowhere, with limited Linux experience. This article really helped me pull through it all. Kudos to you, Jonathan! Great, detailed, and crystal-clear article!
So glad you found it helpful!
Hi Jonathan, great article, thank you!
A couple of updates when running in Windows 11H2 (and Ubuntu 22.04 in my case):
1) systemd is now native in Windows 11H2, BUT needs an updated WSL2 install (I was using WSL v0.63 and I believe native systemd support is in v0.68 onwards) - otherwise you get an error. Upgrading WSL to the latest version means that updating /etc/wsl.conf with the systemd setting (see the snippet below) then works as desired...
2) We also need containerd installed - I used the manual steps from here and that worked for me: howtoforge.com/how-to-install-cont...
Those two steps joined the dots and now docker is running without Docker Desktop :)
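For reference, the /etc/wsl.conf addition mentioned in step 1 is just the two-line boot section documented by Microsoft, followed by a wsl --shutdown so the distro restarts with systemd:

[boot]
systemd=true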
Russ
What!??? When did this happen? No one tells me these things. Except for you, of course, for which I am extremely grateful. Well, this is a game changer. I will work on updating the instructions for systemd, then!
Weird -- containerd is already installed on mine; I can update the instructions accordingly. Thank you so much!
I'm running things with systemd now, I'm just struggling on how to launch that script which ensures the docker.sock is created before launching the actual docker service. So if you could include that in a rewrite that would be nice :)
Brilliant article - thanks for the thorough write up @bowmanjd!
I'm curious why you'd use a custom script to start dockerd rather than just using service docker start?
Is it just to control the shared docker socket location, or are there other reasons?
(I'm running Ubuntu-22.04)
xref: docs.microsoft.com/en-us/windows/w...
Great point. If using only one distro, and that distro is Ubuntu, service docker start should work well. It could be embedded in a script, I suppose, and launched from other distros or Powershell. But I wanted something truly distro-agnostic. Also note that a boot command in /etc/wsl.conf is only available on Windows 11.
It might be worth mentioning that as of a few months ago, the default WSL2 install (Ubuntu) can be configured to support systemd with a two-line config file. I only just finished the install so I can't confirm that everything works 100% out of the box, but after rebooting the VM, dockerd was running as expected.
According to this article from Microsoft, systemd is now enabled by default for the default Ubuntu WSL distribution installed with WSL and installable from the Microsoft Store.
Installing the distribution-maintained version of Docker is now as simple as:
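(The commands didn't survive the copy; presumably something along these lines, using Ubuntu's distribution-maintained docker.io package:)

sudo apt update
sudo apt install -y docker.io
sudo usermod -aG docker $USER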
Close the Ubuntu Terminal and re-open it (to make the group add take effect) and test that it works:
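(Again reconstructing the missing snippet; the usual smoke test:)

docker run --rm hello-world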
and Bob's your uncle. All done in less than a minute's work.
Beautiful!
Hi, I followed everything, but on running sudo dockerd I'm getting this error.
WARN[2021-10-24T16:24:00.993150800+05:30] grpc: addrConn.createTransport failed to connect to {unix:///var/run/docker/containerd/containerd.sock 0 }. Err :connection error: desc = "transport: Error while dialing dial unix:///var/run/docker/containerd/containerd.sock: timeout". Reconnecting... module=grpc
failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables -t nat -N DOCKER: iptables v1.8.4 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgrade
What helped me in a similar situation:
dockerd
I got this error too, so I just added "iptables": false to my daemon.json and the error was averted. The original post says you only need to do this for Debian but not Ubuntu, and since I'm using Ubuntu I skipped that step originally. But in the end, it turned out it was required.
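For anyone following along, that daemon.json tweak amounts to something like this (other settings from the article, such as hosts, omitted; note that later comments ended up preferring a switch to iptables-legacy instead):

{
  "iptables": false
}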
I did that but it did not work for me. Now I have started using docker desktop again.
Hey, great stuff! It is actually possible to expose docker.sock from WSL so that it is accessible by Windows applications. In fact this is what Docker Desktop is doing, allowing all Windows native applications to use npipe docker context. It requires a small proxy application to make it work though. To make it easy to use I have packaged it into a container, so it is easy to deploy with a single docker run.
Read more about this here
So I wonder if Windows 10 wsl Debian changed - I can't use the update-alternatives --config iptables. I know I did before, I'm not sure what I left out - but the iptables-legacy isn't set-able now.
Interesting; I just did this successfully last weekend. I wonder what is different. Do you have iptables installed?
I did. I even uninstalled and installed it back. Still had no "update-alternatives" for iptables which I believe is part of the problem I was having with Docker trying to run the "Computer Language Drag Racing" suite.
I removed the Debian WSL for now. Ubuntu works correctly, I think because they still use iptables and not the nftables in Debian that Docker apparently doesn't really understand unless you configure nftables just right.
I reinstalled the Debian WSL. Unless I missed a step above, when I got to "update-alternatives --config iptables" it's still broke on my system. Here is what I get:
$ update-alternatives --config iptables
update-alternatives: error: no alternatives for iptables
I'm pretty sure using the nftable subsystem is eventually what is making things not work - if I could get iptables-legacy it might be different.
Just double-checking: are you sure you have iptables installed?
Yes. I did "sudo apt-get install iptables" to be sure. It seems like there is another package that adds the iptables-legacy links. I even removed and installed fresh wsl. Either Windows is remembering somewhere that it doesn't add the iptables-legacy rules, or I'm missing a package (or more than one) somewhere.
Well, let's check. On your Debian install, what is the result of dpkg -S /usr/sbin/iptables-legacy?
dpkg shows:
$ dpkg -S /usr/sbin/iptables-legacy
dpkg-query: no path found matching pattern /usr/sbin/iptables-legacy
iptables is installed:
$ iptables --version
iptables v1.6.0
I'm not sure what happened to the previous reply:
$ dpkg -S /usr/sbin/iptables-legacy
dpkg-query: no path found matching pattern /usr/sbin/iptables-legacy
$ iptables --version
iptables v1.6.0
I think iptables installs when Debian itself is installed. It just isn't setting up the legacy rules. Searching around google, the answer that keeps popping up is to use the update-alternatives, which is the whole problem
I probably sound like I am quite fixated on the iptables package, but would you try reinstalling it? Using apt install --reinstall iptables
So I looked in /usr/sbin... I only have one entry if I look for iptables:
$ ls /usr/sbin/iptable*
/usr/sbin/iptables-apply
I believe there should be nearly a dozen links to other objects there.
$ sudo apt install --reinstall iptables
Reading package lists... Done
Building dependency tree
Reading state information... Done
0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 0 not upgraded.
Need to get 288 kB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 deb.debian.org/debian stretch/main amd64 iptables amd64 1.6.0+snapshot20161117-6 [288 kB]
Fetched 288 kB in 0s (2,349 kB/s)
(Reading database ... 36399 files and directories currently installed.)
Preparing to unpack .../iptables_1.6.0+snapshot20161117-6_amd64.deb ...
Unpacking iptables (1.6.0+snapshot20161117-6) over (1.6.0+snapshot20161117-6) ...
Setting up iptables (1.6.0+snapshot20161117-6) ...
$ update-alternatives --config iptables
update-alternatives: error: no alternatives for iptables
I agree it must be something in iptables too. It just doesn't set the default links in the install process to be able to switch to the legacy rules.
Debian 9, I see. I honestly haven't tried this with older versions of Debian. Did 9 even use nftables? Pretty sure there is no legacy version because iptables wasn't legacy then. Does dockerd work?
I didn't notice the 9. It is the latest from Microsoft - or so I thought. Dockerd does work. Maybe the project I'm trying to compile doesn't like Debian 9! Here I thought it was because the iptables didn't follow the instructions. Strange my Debian is so far behind.
For some reason I can't get internet connection inside the container.
Hi, I have exactly the same issue... @bowmanjd can you share any hint about how to get Internet connection working on docker containers running on WSL2?
Interesting... What sort of errors are you seeing? Is it all internet connectivity, or just DNS?
It is all internet connectivity: I cannot ping 1.1.1.1 but I can ping the docker host from a container.
BTW I solved this issue switching from Debian to Ubuntu as WSL2 distro.
I'm having the same issue, using Debian 11 on WSL2. With a Dockerfile containing only:
FROM centos:7
RUN yum -y install httpd
I was getting yum errors not resolving the name of the mirror server:
Determining fastest mirrors
Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=container error was
14: curl#6 - "Could not resolve host: mirrorlist.centos.org; Unknown error"
Since I could resolve the name of the server from Debian WSL2 with no issue, I knew my DNS was working there. So I added some sleuthing to the Dockerfile:
FROM centos:7
RUN cat /etc/resolv.conf && ping -v -c2 host.docker.internal && ping -v -c2 1.1.1.1 && ping -v google.com && ping -v mirrorlist.centos.org
RUN echo "timeout=30" >> /etc/yum.conf && cat /etc/yum.conf && yum -y install httpd
and run docker build with --add-host=host.docker.internal:host-gateway, I can see that I can ping the host from the container, but the container cannot seem to ping any external IP, even the Cloudflare DNS 1.1.1.1 or Google's 8.8.8.8.
I've played around with setting DNS in the container explicitly using the /etc/docker/daemon.json with things like "dns": ["1.1.1.1", "8.8.8.8"], but if the container can't even get connectivity to these IPs that's not going to work.
My Debian environment does not have any iptables configured. I'm flummoxed.
I found my debian environment is configured to use iptables-nft:
$> sudo update-alternatives --config iptables
There are 2 choices for the alternative iptables (providing /usr/sbin/iptables).
But I was getting no rules generated by iptables-nft-save, and several rules generated by iptables-legacy-save, so I explicitly update-alternatives to iptables-legacy and rebooted (host and wsl2/debian). (Will report back with results..)
Still same error after switching explicitly to iptables-legacy in debian 11. FWIW, I'm also passing the following dns servers to my containers via docker daemon.json:
I've tried putting the google and cloudflare dns first in this order, to no avail.
The issue is more easily reproduced on my system by just running ping commands inside the latest alpine image:
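(The commands didn't copy over; a sketch of that kind of reproduction:)

docker run --rm alpine:latest ping -c 2 1.1.1.1
docker run --rm alpine:latest ping -c 2 google.com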
The problem was that even though I had reverted to iptables-legacy in Debian, I still had iptables: "false" in my docker daemon.json. On removing that, docker can use its default iptables impl and work with Debian Bullseye. Now, my containers can access "the internet".
I realize that your post indicated to use iptables: false as a way to get debian wsl2 instances to work with docker. But that never worked for me for some reason.
Yeah, I have actually changed the instructions, removing the iptables:false, as using iptables-legacy seems like the right way to do it.
I had the same issue with Ubuntu in WSL2. Removing iptables: "false" from the daemon.json and switching to iptables-legacy did the trick. No full reboot was necessary, running "wsl --shutdown" in powershell + reopening the ubuntu shell did the trick. Thanks!
Hi,
Thanks for this post, it was very useful previously. For information, we can now install Podman Desktop (and podman with an MSI file); it's experimental but interesting.
I will definitely try that, and update the article. Thank you!
I got this error, and I solved it by running WSL itself with admin privileges when opening the WSL window to run sudo dockerd.
EDIT: It turned out that the eventual root cause of my issue was that my distribution was still on WSL1. Even after upgrading WSL to 2 and running wsl --set-default-version 2, my distribution was still WSL1 as it was created before the upgrade. So I had to run wsl --set-version Ubuntu 2 (where my distribution was called "Ubuntu") and this converted the distro to WSL2. Then this issue just went away, regardless of whether I ran WSL as admin.
Thanks so much for this @jonathan Bowman, it was really helpful. Don't forget to do another article on installing docker-compose on a WSL distro without going through Docker Desktop; it might be minimal but it would be a decent supplement to this awesome article of yours.
Thank you for the encouragement!
Awesome post! Thanks!
What an excellent write-up. Thank you!
I do have one question though. My understanding of the inner workings of WSL is still rudimentary. Why do we place the docker socket in the /mnt/wsl folder? What is the significance of /mnt/wsl?
Hey Derek, I believe the /mnt/wsl location is chosen so multiple Linux installations can share the same docker daemon. If you only run one it doesn't hurt, but you could use Docker's default location, /var/run/docker/containerd/containerd.sock
Exactly!
Jonathan, thank you for the incredibly detailed description of setting up Docker for use in WSL2 without Desktop. I'm sure a lot more people will be visiting this page now that Docker has changed their license terms.
For anyone struggling with using this behind a proxy, I found the only configuration file that dockerd looks at is /etc/environment, so set the likes of HTTP_PROXY, HTTPS_PROXY, and NO_PROXY in there before starting Docker.
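For reference, the /etc/environment entries would look something like this (proxy host and port are placeholders):

HTTP_PROXY=http://proxy.example.com:8080
HTTPS_PROXY=http://proxy.example.com:8080
NO_PROXY=localhost,127.0.0.1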
Additionally, I found this to be helpful for configuring dockerd to start when opening a new terminal (if it hasn't already been started).
Lastly, if you are working behind a proxy and need access to a private container registry, and get an x.509 certificate error with docker login, grab the root certificate of the proxy from your browser (export as base-64) and drop it into the docker certs directory related to your private registry: /etc/docker/certs.d/{private_reg_name}:{private_reg_port}/ca.crt (private_reg_port is optional if you're using a standard port). The next time you do docker login, the auth section of ~/.docker/config.json will be updated.
I love you. WSL2 Ubuntu+Windows10.
Goodbye Docker Desktop.
Thanks for putting this together. After spending 20+ hours trying to get Docker Desktop to work with flakey results at best, I thought I'd give this a try. The instructions are fantastic. However, when I execute docker run, I'm getting a toomanyrequests error.
When I try this on a Linux machine, I have no problems.
I've tried both Docker Desktop and WSL2 Docker on another laptop and had no issues.
Time to reinstall windows I guess :)
It sounds like you have a working docker setup; however, you have performed too many requests to the Docker Hub: docs.docker.com/docker-hub/downloa...
Incredibly detailed and helpful post: thank you!!
I stumbled on the same issue as df-seagate:
"$ docker -H unix:///mnt/wsl/shared-docker/docker.sock run hello-world
Unable to find image 'hello-world:latest' locally
docker: Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: docker.com/increase-rate-limit."
I have rarely used docker, and this is a clean install on a machine on which I've never done anything with docker.
Shutting down my corporate VPN fixed the issue. I have no idea why. Also worth noting that I'm running wsl-vpnkit: github.com/sakai135/wsl-vpnkit
I just swapped out Docker Desktop for the native Docker engine on WSL2 and my only question to myself is why I didn't do this sooner. My guess is the way the Docker docs are structured influenced my original decisions (their section on "Get Docker" primarily focuses on "Docker Desktop", with a tiny sentence for "Docker Engine" pushed all the way to the bottom). Anyhoo, my life just got seriously upgraded.
So glad you found a good path. Any tips for others, or was it fairly straightforward?
It was pretty straightforward.
After using this with my WSL for a few weeks, I would like to bring up that I found issues using the k3d tool to run k3s clusters in WSL. Unfortunately it looks like k3d does not support moving the docker socket to a custom location.
Just leaving this info here for anyone considering setting up a shared docker socket!
github.com/rancher/k3d/issues/762
Thanks for the article. It really helps.
My need was to have a docker command recognized by the Windows "system". Indeed, I run programs that run docker, etc., but sometimes they tell me something like "you don't have docker installed" because they don't manage to "find" and run your docker powershell "function".
So installing the Windows docker CLI (docker.exe) is one of the solutions, but:
Is it possible to use the Windows docker CLI (docker.exe) to connect to that dockerd using the socket?
I followed all your instructions but just changed the path of docker.sock to unix:///mnt/c/path/docker.sock, but dockerd doesn't start with this folder. Any thoughts?
Yes, it is indeed possible using TCP, but I have yet to work through that and add an article...
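A rough sketch of the TCP approach, for anyone who wants to experiment (untested here; bind to localhost only, since this exposes the daemon without TLS):

# inside WSL: listen on a local TCP port in addition to the shared socket
sudo dockerd -H unix:///mnt/wsl/shared-docker/docker.sock -H tcp://127.0.0.1:2375
# in PowerShell: point the Windows docker.exe at that port
$env:DOCKER_HOST = "tcp://127.0.0.1:2375"
docker.exe version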
After my question, I found an article based on yours on that topic at dev.to/_nicolas_louis_/how-to-run-.... And indeed, it works (I also needed to disable my VPN; I don't know exactly why).
Thanks!
Oh my god.. I now get why I should definitely try a tutorial from the beginning. I simply dismissed the warning about having to ensure WSL 2 is used... instead WSL 1 was used - and docker did not want to work with Ubuntu 20.04 installed in WSL 1 (various errors regarding not having permissions for "bridge" or missing some "iptables").
After converting to WSL 2 (yes - converting), it finally seems to work. Thank you for this brilliant article!
I've looked this up a few times now and have had to pick out what I want.
Here is the short version for those who only use the default Ubuntu distro.
Follow this: docs.docker.com/engine/install/ubu...
Then run this:
sudo usermod -aG docker $USER
And then install docker compose:
sudo apt install docker-compose
Run docker:
start service:
sudo dockerd
Test it:
docker run hello-world
I have done everything.
At first it did not run, but when I disabled the VPN it started working, though listening on something else.
Then I stopped dockerd and tried to restart it but failed, so after an hour of working out what was going on I simply ran killall -9 docker and started dockerd again.
And it was working.
BUT
When I started powershell and Ubuntu in another cmd and ran
docker -H unix:///mnt/wsl/shared-docker/docker.sock run --rm hello-world
I got an error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:722: waiting for init preliminary setup caused: EOF: unknown
???
In my case this was because my Linux distro was still on WSL version 1. You have to
Yes! Worked for me too. I had upgraded my system-wide WSL to 2 and set the default version to 2, but what I hadn't done was set the version for my specific installed distribution using --set-version.
After setting up Docker on Debian WSL2, I couldn't connect to the Internet from inside containers.
Switching back to legacy iptables (and removing "iptables": false from daemon.json) solved the issue for me.
The commands are listed here: wiki.debian.org/nftables#Reverting...
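For anyone who wants them inline, the switch described there is roughly this (the wiki page also covers arptables and ebtables):

sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy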
I followed the instructions on your page, when I ran
I got an error saying arp and eb were not registered and weren't going to be set.
I still don't have any communication from the container out to the internet which is an issue since I need them to run containers for vscode which installs stuff from special scripts during container creation.
I don't know if arp works on wsl... github.com/Microsoft/WSL/issues/2279. Curious if you find any solutions, though
After I reboot Windows it started working. So no entry with iptables and setting the iptables and ip6tables to legacy worked. It just needed to restart everything.
What are some elegant ways to restart your WSL docker daemon (e.g. after changing your daemon.json)? I'm not interested in "reboot windows". I'm currently using a script to run dockerd as a shell command passed to wsl.exe, run on user login, as described in this (excellent) article. I'm pretty novice at powershell and managing windows processes, and have pulled out a lot of hair just trying to get the equivalent of ps aux | grep dockerd and kill <pid> in windows. The output of tasklist (and even things like sysinternals ProcessExplorer, amazingly) doesn't show the arguments passed to the command running in the process.
What I'm thinking now is, how about wrapping the dockerd wsl script in a windows service or scheduled task? Anyone doing this and have any recommendations?
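Not exactly elegant, but a sketch of what seems workable from PowerShell, assuming the distro is named Ubuntu and dockerd was launched per the article:

# stop the daemon inside the distro...
wsl.exe -d Ubuntu -u root pkill dockerd
# ...or tear down the whole WSL VM (affects every distro) and let the login script relaunch dockerd
wsl.exe --shutdown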
Thanks for the effort you put into this article. In my case, when I modified /etc/docker/daemon.json, the docker service was unable to restart because of this error I found in the journal:
"unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file: hosts: (from flag: [fd://], from file: [unix:///mnt/wsl/shared-docker/docker.sock])"
What helped here was this post, which suggested removing the duplicate hosts flag from the start command.
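For anyone hitting the same error with a systemd-managed docker service, the usual fix is an override that blanks out the -H fd:// flag, roughly like this (dockerd path assumed):

sudo systemctl edit docker.service
# in the override that opens, add:
#   [Service]
#   ExecStart=
#   ExecStart=/usr/bin/dockerd
sudo systemctl restart docker.service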
Thanks for asking! This is an interesting problem. Curious what is in your /etc/docker/daemon.json?
Hey, thank you so much for writing this, it's been very helpful!
What I can't figure out at the moment is how to run docker-compose from Windows in the same way as docker, could anyone offer some direction?
Good question. My hunch is that this would require sharing the docker daemon over TCP (or using named pipes in Windows, as one commenter pointed out).
From the step "Set default user", I am still logged in as the root user.
My OS build version is 19042.2486, and I tried the first step of adding a [user] section in /etc/wsl.conf.
I tried another approach of changing the registry key, but got an error for the command:
Then I tried to view the registry key manually and found that DefaultUid is already present with the correct value. But somehow after this I am able to log in with my defaultUser in WSL.
Glad it is working for you now! It could be that WSL hadn't completely terminated and restarted when you tried it at first? Hard to tell, but good job working through it.
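For anyone else landing on this step, the /etc/wsl.conf addition for the default user is typically just the following (username is a placeholder), followed by a wsl --shutdown from PowerShell so it takes effect:

[user]
default=yourusername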
I found this document on github:
WSL2 distributions run as containers
It hasn't been updated in a while though, so I'm wondering if anyone knows whether or not Microsoft has published a technical document yet that describes the presumed implementation of WSL2 distributions as containers in more detail.
Thank you very much for the detailed instructions!
Let me ask you some questions.
Did I understand correctly that installing docker directly into wsl is not supposed to bring any benefits or improvements, but is just a way to avoid installing the heavy Docker Desktop application in Windows, which consumes a lot of RAM?
The Ubuntu development team recently announced the implementation of systemd support in Ubuntu 22.04 - bugs.launchpad.net/ubuntu/+source/....
Some enthusiasts have already tried this and it seems to work: github.com/microsoft/WSL/issues/51...
The developers of Fedora Remix for WSL also announced a new version of the distribution with systemd support a couple of weeks ago — whitewaterfoundry.com/blog/2022/4/...
Can you tell me, please, whether activating this feature (wsl-systemd) will require changes to this tutorial of yours?
And in general, will it change anything in the way docker was used in wsl before? Would it be a game changer?
You are correct. In my opinion, installing Docker this way is appealing for its lightweight and flexible (pick-your-distro) approach. It is also a good learning experience!
Thanks for the links about Ubuntu and systemd! What this will change: the startup script can be replaced with a simple systemctl enable docker
Thanks a lot for this post; I got docker up and running on my Windows laptop.
We also use docker-compose a lot; do you think it's possible to configure it like docker?
Are you planning to write an article on how to use docker-compose on wsl?
Best, Max
I didn't quite understand the docker-service service/file part. My solution works for me:
sudo mkdir ~/bin
sudo nano ~/bin/docker-service.sh
insert the contents of this file: github.com/bowmanjd/docker-wsl/blo...
. ~/bin/docker-service.sh
But I haven't found a way to stop it. I would be glad if you can tell me how, and whether this was useful for you.
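Not sure it is the cleanest way, but a sketch for stopping a daemon started like that, assuming dockerd wrote its default pid file:

# stop the daemon via its pid file...
sudo kill "$(cat /var/run/docker.pid)"
# ...or more bluntly
sudo pkill dockerd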
Hi,
How can I use --password-stdin along with wsl? I mean, I can log in with docker using the below command:
wsl -d Ubuntu-18.04 docker -H unix:///mnt/wsl/shared-docker/docker.sock login -u AWS -p "" .dkr.ecr..amazonaws.com
But with the above example I am required to set the token on the command line, and it gets stored for future use. So I need to use --password-stdin instead, but I'm not sure how to do that. Any suggestion?
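One approach that should work is piping the token from the AWS CLI on the Windows side straight into wsl.exe; a sketch, with region and account ID as placeholders:

aws ecr get-login-password --region us-east-1 | wsl -d Ubuntu-18.04 docker -H unix:///mnt/wsl/shared-docker/docker.sock login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com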
Hi, thanks for this great article!!
Is there a way of accessing the docker container from the Windows host?
From the wsl distro I can ping the container.
From Windows I cannot ping or see the container.
I can ping the wsl distro with no problem.
I have tried the web app from docker but cannot access it from the Windows host.
Thanks again
Thanks for this. I skipped all the shared mount stuff though, as I was trying to avoid the setup that Docker Desktop normally manages for us, to see if there are any performance gains.
Without the MySQL data and project volume mounts operating in a separate distro behind the scenes, my database-heavy unit tests in a monolith-sized project finished in almost half the time, which is now starting to reflect native Linux performance.
I'll try podman next to see if it gets any faster 😉.
Excellent how-to Jonathan.
You can expect a surge of traffic with the Docker Desktop license change yesterday.
Hi Jonathan, thanks for your article.
Did you manage to get VSCode + Remote Containers working?
I assigned the path of docker.bat to the remote extension but it fails building the devcontainer with the error message: "unable to prepare context: path "d:foobar.devcontainer" not found".
I guess there is something wrong with the translation of the Windows full path to the wsl path.
If I execute the docker build command outside of VSCode with relative paths, the container is built successfully.
Answering myself: yes, it works, but the files which should be opened inside the devcontainer must already lie in WSL space. Also see github.com/microsoft/vscode-remote...
@jonathan Bowman
I am lucky to have found your articles, but I still need time to digest the knowledge you shared with us.
I am not sure if I am exactly tired of the whale in the system tray, but let's say I can imagine it might have been a perfect solution for me if only the systray part of Docker Desktop wasn't so deeply integrated.
In the post on the Docker forum, I try to describe what I would like to achieve.
forums.docker.com/t/docker-desktop...
I still haven't lost hope that it is achievable to run both Windows and wsl2 containers on one machine and share the resources.
It would be great if you could find some time to look at it and post what you think about it.
I'm pretty sure I am just missing some small thing to make it work :)
It might be that a lack of knowledge is hiding some workaround from me.
thanks in advance
Lubomir
Interesting use case. Can you run Docker Desktop for windows containers, then use the methods described in this article for linux containers?
I received this information during installation:
Some system requirements are missing
✅Windows 64bit
✅Windows Version
✅RAM
✅Virtual Machine Platform Enabled
❌WSL version should be >= 1.2.5. Call 'wsl --version' in a terminal to check your wsl version.
✅WSL2 Installed
cmd: wsl -l -v
NAME STATE VERSION
I don't understand what's wrong?
Thanks for this article!
For our application, we want to include several docker containers as part of our Windows installation package. Since we are not including Docker Desktop, we'll include a script using WSL (as you have mapped out here.)
Are there any any other licensing issues that we should be concerned about?
Works well, but: am I doing something wrong, or is it a common WSL-distro "feature" that one cannot log in to their docker-hub account from the distro? (Error response from daemon: Get "registry-1.docker.io/v2/": unauthorized: incorrect username or password)
Is this still an issue after you docker logout and docker login again?
@bowmanjd nice article! But what is the reason for using a custom name, unix:///mnt/wsl/shared-docker/docker.sock, instead of the default one, unix:///var/run/docker.sock?
For example, testcontainers (and many other apps) expect it at the default location.
Good question. As noted above, "If sharing the Docker daemon between WSL instances is desired, configure it to use a socket stored in the shared /mnt/wsl directory."
Of course, if you only have one WSL instance (such as Ubuntu alone, and no other distros), then sharing may not be needed, and you can leave the default socket alone.
I hope this helps!
Thanks for the article. I had given up on docker from the very beginning. Maybe I'll try it tonight... ** Off to find out how to install alpine **
Thanks for this great article, it works like a charm!
Except for one thing: Docker Desktop provides the host.docker.internal alias pointing to the host IP, which allows me to tell Xdebug (inside the docker container) that PhpStorm is listening for Xdebug requests. I haven't found a way to get the right host IP (I'm using generateResolvConf = false, as you suggested). I wonder whether there are some Windows Defender rules or something like that blocking this kind of communication.
Any ideas?
Hmmm... that is a good point, and I haven't dwelt on that question very much. So, the internal WSL address does not work for this purpose?
Interesting discussion of this at StackOverflow: stackoverflow.com/questions/638984...
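One option that comes up there (and earlier in these comments) is adding the alias yourself at container start; a sketch, assuming Docker Engine 20.10+ for host-gateway support (note this points at the WSL side of things, so a service listening only on Windows may still need the Windows host's IP instead):

docker run --rm --add-host=host.docker.internal:host-gateway alpine:latest ping -c 2 host.docker.internal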
I followed the steps (I used Ubuntu 22.04 LTS) but got this error when trying to launch dockerd (sudo dockerd):
error obtaining controller instance: unable to add return rule in DOCKER-ISOLATION-STAGE-1 chain: (iptables failed: iptables --wait -A DOCKER-ISOLATION-STAGE-1 -j RETURN: iptables v1.8.7 (nf_tables): RULE_APPEND failed (No such file or directory): rule in chain DOCKER-ISOLATION-STAGE-1
Very well explained article by the way...
Edit:
I found that the error is related to the Ubuntu version I use; more details here: askubuntu.com/questions/1402272/ca...
I just updated the iptables alternative used by the system with the following command:
$ update-alternatives --config iptables
then chose the legacy one (choice 1 for me)
Thanks for the instructions.
If you could add the instructions to enable wasm/WASI it would be great, since this feature is only documented for the Desktop version.
Interesting idea. I have not experimented with this yet, but I am curious if you have. Have you tried enabling it in the dockerd config? docs.docker.com/storage/containerd/
Thank you for this awesome and detailed description. What I liked about Docker Desktop is the Kubernetes integration. How can I get Kubernetes running without Docker Desktop on WSL2? Is it much more complicated?
Thanks for this, it worked almost perfectly. The one thing that didn't work for me: I couldn't get the docker container to connect to the WSL host's SSH tunnels; I'm running an SSH tunnel to a remote network with a mysql server. With Docker Desktop, I could just connect to host.docker.internal; with docker engine in WSL I need to add 0.0.0.0 as the listening device in the SSH tunnel setup (e.g. -L 0.0.0.0:3306:[remote server]:3306), otherwise it won't connect. Might be useful for others facing the same issue; local services on the docker host seem to get exposed in a different way with this engine.
-bash: /mnt/c/Windows/System32/wsl.exe: Permission denied when launched from ~/bin/docker-service
Problem solved. In wsl.conf (the automount options):
options = metadata,uid=1000,gid=1000,umask=000,fmask=000,case=off
Thank you kind sir
⭐⭐⭐⭐⭐
Thank you! A quite useful article now, after the new Docker Desktop terms for the enterprise =)
Check your wsl version:
wsl -l -v
and if it is not 2, change the version to 2 with this command:
wsl --set-version <distro name> 2
I was sure I was on WSL version 2 because it's already 2022.
I was wrong!!!
Thank you for great article!
Unfortunately, there is an issue - you can't forward UDP ports, for instance for a local DNS server in docker :(
I switched back to Docker Desktop due to this issue.
excellent post! very clear
I can pull images from docker hub but my containers cannot connect to the internet. Do you have any idea why? @bowmanjd
This article will be really helpful
Thanks! This post saved my life in the first paragraph...just run Docker desktop, and I was able to run Docker Daemon.
Thanks for the great article! I'm curious, if we run docker's daemon process in the background, will it be able to detect the changes made to daemon.json automatically?
I am pretty sure it would require a restart, but I would love to be wrong! What does your experimentation suggest?
Just followed the guide, still works.
However, on more recent versions of WSL2/Ubuntu, the docker daemon seems to be started for you, so there is no need for any scripts to start it, or to start it manually.
Correct; I certainly need to update the article to adapt to the addition of systemd. Thank you!