If you’re a software developer, you know the deal: we’re always installing tools. Databases, message brokers, Redis—whatever the project calls for, we grab it. But here’s the thing: a lot of these tools only get used occasionally. Maybe I need RabbitMQ for a project one month, then it sits there unused until the next time I need it. Or maybe I’m just following a tutorial that uses a database I rarely touch, like PostgreSQL.
Now, if all of these tools are installed directly on my laptop, they start to hog resources, slowing things down even when I’m not using them. It’s annoying to have my laptop feeling sluggish just because I’ve got background services running for stuff I don’t even need every day.
Enter Docker: My On-Demand Toolbox
To deal with this, I started using Docker to handle the tools I don’t need constantly running. With Docker, I can spin up a tool only when I need it and shut it down when I’m done. Super easy and way more efficient!
I also set up a few Docker Compose files for my go-to tools, so whenever I need, say, RabbitMQ or MongoDB, I just type docker compose up, and boom—my tool’s ready. And when I’m done, I just stop the container. I can even delete the image if I want to. Clean and simple.
Docker Compose Files for Common Tools
Here are the Docker Compose files I put together for a few popular tools. Just create a folder for a specific tool, like rabbitmq, then create a docker-compose.yaml in that folder. Whenever you want to use the tool, go to that directory and run docker compose up. When you’re finished, just press Ctrl + C to stop the container. Feel free to use them if you’re looking to keep your dev environment streamlined!
1. RabbitMQ
Notes:
- For the management GUI, use guest:guest to log in.
services:
  rabbitmq:
    image: rabbitmq:3-management
    container_name: rabbitmq
    ports:
      - 5672:5672 # for sender and consumer connections
      - 15672:15672 # serves the RabbitMQ management GUI
    volumes:
      - ${HOME}/Development/docker/rabbitmq/data/:/var/lib/rabbitmq
      - ${HOME}/Development/docker/rabbitmq/log/:/var/log/rabbitmq
    networks:
      - rabbitmq-network
networks:
  rabbitmq-network:
    driver: bridge
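Once the container is up, the management GUI is on http://localhost:15672. For a quick sanity check from the terminal (the container name comes from the compose file above; the AMQP URL is just a sketch for apps running on the host):

# ping the broker from inside the container
docker exec rabbitmq rabbitmq-diagnostics -q ping
# example AMQP URL for producers/consumers on the host:
#   amqp://guest:guest@localhost:5672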
2. MongoDB
services:
  mongodb:
    container_name: mongodb
    image: mongo:latest
    ports:
      - 27017:27017
    volumes:
      - ${HOME}/Development/docker/mongodb/data/:/data/db
    environment:
      MONGO_INITDB_ROOT_USERNAME: your_username
      MONGO_INITDB_ROOT_PASSWORD: your_password
    networks:
      - mongodb-network
networks:
  mongodb-network:
    driver: bridge
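To verify MongoDB is reachable, something like this should work, assuming a recent mongo image that bundles the mongosh shell (the connection string is just a sketch for apps running on the host):

# open a shell inside the running container
docker exec -it mongodb mongosh -u your_username -p your_password --authenticationDatabase admin
# example connection string for apps on the host:
#   mongodb://your_username:your_password@localhost:27017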
3. PostgreSQL
Notes:
- From the pgadmin GUI, use host.docker.internal as the host.
- You can remove the pgadmin service if you don't need it.
services:
  postgres:
    image: postgres
    container_name: postgres
    environment:
      POSTGRES_PASSWORD: your_password
    volumes:
      - ${HOME}/Development/docker/postgresql/postgres/data/:/var/lib/postgresql/data
    ports:
      - 5432:5432
    networks:
      - postgres-network
  pgadmin:
    image: dpage/pgadmin4
    container_name: pgadmin
    depends_on:
      postgres:
        condition: service_started # postgres never exits, so don't wait for it to "complete"
    ports:
      - 8888:80
    environment:
      PGADMIN_DEFAULT_EMAIL: your_email@gmail.com
      PGADMIN_DEFAULT_PASSWORD: your_password
    volumes:
      - ${HOME}/Development/docker/postgresql/pgadmin/data/:/var/lib/pgadmin
    networks:
      - postgres-network
networks:
  postgres-network:
    driver: bridge
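Once it's up, pgAdmin is served on http://localhost:8888 (log in with the email and password from the compose file). For a quick check without pgAdmin, the psql client bundled in the postgres image works too; the connection string is just a sketch for apps running on the host:

# open a psql session inside the running container (default user and database: postgres)
docker exec -it postgres psql -U postgres
# example connection string for apps on the host:
#   postgresql://postgres:your_password@localhost:5432/postgres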
4. MySQL
Notes:
- You can remove the phpmyadmin service if you don't need it.
- From the phpmyadmin GUI, use host.docker.internal as the host/server.
services:
  mysql:
    image: mysql:latest
    container_name: mysql
    environment:
      MYSQL_ROOT_PASSWORD: your_password
    volumes:
      - ${HOME}/Development/docker/mysql/data/:/var/lib/mysql
    ports:
      - 3306:3306
    networks:
      - mysql-network
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    container_name: phpmyadmin
    depends_on:
      mysql:
        condition: service_started # mysql never exits, so don't wait for it to "complete"
    environment:
      MYSQL_ROOT_PASSWORD: your_password
      PMA_HOST: mysql
      PMA_PORT: 3306 # MySQL listens on 3306 inside the container
      PMA_ARBITRARY: 1
    ports:
      - 8080:80
    networks:
      - mysql-network
networks:
  mysql-network:
    driver: bridge
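phpMyAdmin ends up on http://localhost:8080. For a quick check from the terminal, the mysql client inside the container is enough:

# open a mysql session inside the running container
docker exec -it mysql mysql -uroot -pyour_password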
5. Redis
Notes:
- From the redisinsight GUI, use host.docker.internal as the host.
- You can remove the redisinsight service if you don't need it.
services:
  redis:
    image: redis:latest
    container_name: redis
    volumes:
      - ${HOME}/Development/docker/redis/redis/data/:/data
    ports:
      - 6379:6379
    networks:
      - redis-network
    entrypoint: redis-server --appendonly yes --requirepass your_password
  redisinsight:
    image: redis/redisinsight:latest
    container_name: redisinsight
    depends_on:
      redis:
        condition: service_started # redis never exits, so don't wait for it to "complete"
    volumes:
      - ${HOME}/Development/docker/redis/redisinsight/data/:/data
    ports:
      - 5540:5540
    networks:
      - redis-network
networks:
  redis-network:
    driver: bridge
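RedisInsight is served on http://localhost:5540. A quick ping with redis-cli, using the password set via --requirepass above, looks like this:

# should print PONG if the server is up and the password matches
docker exec -it redis redis-cli -a your_password ping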
6. SonarQube
services:
  sonarqube:
    image: sonarqube:community
    container_name: sonarqube
    environment:
      SONAR_ES_BOOTSTRAP_CHECKS_DISABLE: "true"
    volumes:
      - ${HOME}/Development/docker/sonarqube/data/:/opt/sonarqube/data
      - ${HOME}/Development/docker/sonarqube/extensions/:/opt/sonarqube/extensions
      - ${HOME}/Development/docker/sonarqube/logs/:/opt/sonarqube/logs
    ports:
      - 9000:9000
    networks:
      - sonarqube-network
networks:
  # Create a new Docker network.
  sonarqube-network:
    driver: bridge
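SonarQube takes a little while to boot; the UI then lives on http://localhost:9000 (the default credentials are admin/admin, and you are asked to change them on first login). One way to poll readiness from the terminal:

# reports "UP" once SonarQube has finished starting
curl http://localhost:9000/api/system/status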
Wrapping Up
Using Docker for these tools has made my life so much easier. Now, instead of bogging down my machine with stuff I only need once in a while, I just spin things up as needed. No more unnecessary background processes, and no more cluttered dev environment.
So if your machine’s feeling the weight of too many tools, give this Docker approach a try. Hopefully, these Docker Compose files can help make your setup a little lighter and faster. Cheers!
Side Notes
- Please change the volume paths to whatever you prefer. In my case, each tool's volume lives under /Users/granitebps/Development/docker/{tools_folder}/. If you want to keep it as is, that's fine; just make sure the folder Development/docker exists in your $HOME folder.
- If you want to connect to a service from another Docker container, change localhost or 127.0.0.1 to host.docker.internal. A Docker container cannot reach the host machine's localhost directly, so to point a container at the host machine's localhost, we use host.docker.internal instead (see the example after this list).
- If you want a lighter version of a tool, you can append alpine to the image tag. Just make sure the image actually has an alpine variant on Docker Hub, and note that the alpine version may have less functionality than the regular one.
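For instance, a hypothetical app container that needs the MongoDB instance from section 2 would point at the host like this (host.docker.internal works out of the box with Docker Desktop; on Linux you may need to add it yourself via extra_hosts: "host.docker.internal:host-gateway"):

# Hypothetical example: my-app-image and MONGO_URI are placeholders, not from this article.
# Inside a container, localhost refers to the container itself, so target the host instead:
docker run --rm -e MONGO_URI="mongodb://your_username:your_password@host.docker.internal:27017" my-app-image
# Lighter image variant (only if an alpine tag exists on Docker Hub), e.g. in a compose file:
#   image: redis:alpine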