Today I’m going to show you how I managed to migrate from Heroku to Kamal, with a focus on Postgres. Here is what we want:
- A Preprod application, on its own host and its own Postgres instance.
- A Production application, with its own Postgres as well. Production also needs an extra feature: saving DB backups to an S3 bucket.
We will use my previous article as a base for today; you can find all the details here.
Table of Contents
1. Add Postgres to Production
1.1. Edit deploy.yml
1.2. Edit secrets
1.3. Edit database.yml
2. Setup Postgres for Preprod
2.1. Edit deploy.preprod.yml
2.2. Edit secrets
2.3. Edit database.yml
3. Migrate the database
3.1. Download the backup
3.2. Upload the backup to S3
3.3. Run the migration script
4. Conclusion
Add Postgres to Production
We already have Production and Preprod applications working perfectly without any additional services. Let’s add and configure our first accessories: Postgres and backups!
Edit deploy.yml
# config/deploy.yml
[...]
service: my-app

env:
  clear:
    POSTGRES_USER: postgres
    POSTGRES_DB: my_app_production
    POSTGRES_HOST: my-app-db
  secret:
    - RAILS_MASTER_KEY
    - POSTGRES_PASSWORD

accessories:
  db:
    image: postgres:16.4
    host: 190.0.0.0
    port: 5432:5432
    env:
      clear:
        POSTGRES_USER: postgres
        POSTGRES_DB: my_app_production
      secret:
        - POSTGRES_PASSWORD
    directories:
      - data:/var/lib/postgresql/data

  db-backup:
    image: eeshugerman/postgres-backup-s3:16
    host: 190.0.0.0
    env:
      clear:
        SCHEDULE: '@daily' # I want a backup every day for the last 30 days
        BACKUP_KEEP_DAYS: 30
        S3_REGION: eu-west-3
        S3_PREFIX: backups
        S3_BUCKET: my-app-backups
        POSTGRES_HOST: my-app-db
        POSTGRES_DATABASE: my_app_production
        POSTGRES_USER: postgres
      secret:
        - POSTGRES_PASSWORD
        - S3_ACCESS_KEY_ID
        - S3_SECRET_ACCESS_KEY
As my service name is my-app, Kamal automatically prefixes accessory containers with it. This is why I can freely reference my-app-db as the host: Kamal understands that you’re talking about the accessory db linked to the service my-app. This mechanism is very powerful!
For the database backups, I’ll use a marvelous image that does all the work for you: https://github.com/eeshugerman/postgres-backup-s3
Basically, you just need to set up an S3 bucket with the correct permissions, and every day you’ll find your backups there. postgres-backup-s3 also provides a useful feature that lets you restore an old backup. We will use it later on!
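As a quick preview, and if I read the image’s README correctly, the restore is driven by a restore.sh script shipped inside the container; double-check the README for your image version, but it should look roughly like this:
$ kamal accessory exec db-backup "sh restore.sh"             # restore the latest backup (per the image docs)
$ kamal accessory exec db-backup "sh restore.sh <timestamp>" # restore a specific backup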
Edit secrets
We now need to ensure that the new secrets are stored in the secrets file:
# .kamal/secrets
[...]
POSTGRES_PASSWORD=fakepassword
S3_ACCESS_KEY_ID=xxxxxxxxxxxxxxx
S3_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxx
Edit database.yml
You also need to help Rails connect to your database by providing the same environment variables:
# config/database.yml
default: &default
  adapter: postgresql
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  timeout: 5000
  host: <%= ENV["POSTGRES_HOST"] %>
  username: <%= ENV["POSTGRES_USER"] %>
  password: <%= ENV["POSTGRES_PASSWORD"] %>

[...]

production:
  <<: *default
  database: <%= ENV.fetch("POSTGRES_DB") { "my_app_production" } %>
Once it’s done, you can run:
$ kamal setup
This will install the Postgres container in Production. You can confirm that everything works by accessing the dbconsole with: kamal app exec -i 'bin/rails db'
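You can also double-check the accessory itself; Kamal lets you inspect it directly (db here resolves to the my-app-db container):
$ kamal accessory details db   # show the running Postgres container
$ kamal accessory logs db      # print its logs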
Now that we have Postgres working in Production, let’s do the same for Preprod.
Setup Postgres for Preprod
Well, the steps are exactly the same. We will just provide different env variables. Let’s do it!
Edit deploy.preprod.yml
# config/deploy.preprod.yml
[...]
service: my-app-preprod

[...]

env:
  clear:
    POSTGRES_USER: postgres
    POSTGRES_DB: my_app_preprod
    POSTGRES_HOST: my-app-preprod-db
  secret:
    - RAILS_MASTER_KEY
    - POSTGRES_PASSWORD

accessories:
  db:
    image: postgres:16.4
    host: 190.0.0.1
    port: 5432:5432
    env:
      clear:
        POSTGRES_USER: postgres
        POSTGRES_DB: my_app_preprod
      secret:
        - POSTGRES_PASSWORD
    directories:
      - data:/var/lib/postgresql/data
The only changes from Production are:
- the service property,
- the POSTGRES_DB and POSTGRES_HOST env variables,
- the host for the accessories.
Edit secrets
We still need to update .kamal/secrets.preprod to provide a POSTGRES_PASSWORD value:
# .kamal/secrets.preprod
[...]
POSTGRES_PASSWORD=fakepassword
Edit database.yml
We need to add the preprod environment and make sure it reads the right environment variable.
# config/database.yml
[...]
preprod:
  <<: *default
  database: <%= ENV.fetch("POSTGRES_DB") { "my_app_preprod" } %>
And we’re good to run:
$ kamal setup -d preprod
We now have Preprod and Prod applications working, each with its own database 🎉
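As with Production, you can peek at the Preprod accessory, just remember the -d preprod destination flag:
$ kamal accessory details db -d preprod   # the container is named my-app-preprod-db
$ kamal accessory logs db -d preprod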
Now that everything is set up, let’s say adios to Heroku and migrate our database once and for all!
Migrate the database
The first thing you want to do is cut access to your application to avoid losing any data.
To do so, you can enable “Maintenance Mode” in the settings of your Heroku application.
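You can also flip it from the Heroku CLI (replace my-app with your actual Heroku app name):
$ heroku maintenance:on --app my-app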
Once that’s done, we can get back to work!
Download the backup
You’ll need your Postgres public URL to download a backup locally. To find it, go to your application’s Dashboard and follow:
“Resources” > “Heroku Postgres” > “Credentials” > “Default” > “URI”
The URI will normally look like this: postgres://{random-letters-and-numbers}ec2[...].eu-west-1.compute.amazonaws.com:5432/[...]
Once you’ve found it, let’s dump the database locally using pg_dump:
pg_dump --format=c --no-acl --no-owner {paste URI here} > database.dump
You should now see a database.dump file in your directory.
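Optionally, you can sanity-check that the dump is a readable custom-format archive before going further:
$ pg_restore --list database.dump | head   # prints the archive’s table of contents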
Upload the backup to S3
Now, you need to upload that file to the S3 bucket you use to store your backups, in the exact same prefix (directory) the automated backups go to.
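With the AWS CLI, and assuming the bucket and prefix from the deploy.yml above (my-app-backups and backups), the upload looks like this:
$ aws s3 cp database.dump s3://my-app-backups/backups/database.dump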
Run the migration script
Once your file is uploaded, you can run this script:
# 1. Open a shell inside the db-backup container
kamal accessory exec db-backup -i "sh"

# 2. Then, inside the container:
source ./env.sh
s3_uri_base="s3://${S3_BUCKET}/${S3_PREFIX}"
aws $aws_args s3 cp "${s3_uri_base}/database.dump" "database.dump"
conn_opts="-h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -d $POSTGRES_DATABASE"
pg_restore $conn_opts --clean --if-exists --no-acl --no-owner database.dump
rm -f database.dump
This script will:
- open a shell (sh) in the database backup container,
- download your database.dump file using the AWS CLI,
- restore the database from database.dump,
- remove database.dump to free up space.
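Before pointing users at the new setup, it’s worth a quick check that the data made it across. Here is a minimal sketch, assuming your app has a User model (adapt it to your own schema):
$ kamal app exec 'bin/rails runner "puts User.count"'   # hypothetical model; compare with the count on Heroku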
That’s it! Your database is now completely self-hosted. Congratulations!
Conclusion
Thanks for reading this far!
If you need help with your Kamal configuration, don’t hesitate to leave a comment or reach out by DM 👋