I lead a team that is responsible for anywhere from 15-25 Go microservices at any given time. We are constantly creating new services and libraries, so it’s become important to streamline the boilerplating process. I’m mostly writing this article for myself as a kind of self-documentation, but hopefully, you’ll find it useful as you create new Go projects for yourself.
I’ve certainly found it useful in spinning up side projects like the Qvault app.
Step 1 – Create a Remote Repo and Clone It
I’m a sucker for the GOPATH. Even though it’s no longer necessary, I find the single workspace principle neat and orderly. As such, I keep all my projects in:
```
~/go/src/REMOTE/NAMESPACE/PROJECT
```
The fastest way to start a new project is to hop into GitHub, GitLab, or wherever my source control is located and create a new project with the default README. Then I can simply pull it down using `go get`, and my remote Git connection is already set up. For example:

```shell
go get github.com/qvault/webapp
```

As long as that command is run outside of an existing Go module, the source code is cloned to the proper location in your GOPATH.
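To make the layout concrete, here is the substitution spelled out for the example repo above (a quick sketch; the variable names are mine, not a convention):

```shell
# The three parts of the GOPATH layout, using this article's example repo
REMOTE=github.com
NAMESPACE=qvault
PROJECT=webapp

# Where "go get" places the clone under a default GOPATH of ~/go
echo "$HOME/go/src/$REMOTE/$NAMESPACE/$PROJECT"
```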
Step 2 – Folder Structure
After running `cd` into our new project, we create an `internal` directory that will store packages meant to be scoped to this project. It will usually contain packages like `database`, `kafka`, `bayesian`, etc. If you didn't know, the `internal` directory is a Go convention: the compiler refuses to import its packages from an external project, so they can't be used accidentally.
Next, we create a `cmd` folder where we will store all the executable `main` packages that will be built at compile time. Each executable gets its own directory in `cmd`.
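For illustration, a minimal `cmd/api/main.go` could start out like this; the `run` helper pattern is my own sketch, not something this layout requires:

```go
package main

import (
	"fmt"
	"os"
)

// run holds the program's real logic; keeping main thin and returning an
// error makes the entry point easy to test.
func run(args []string) error {
	fmt.Println("api starting")
	return nil
}

func main() {
	if err := run(os.Args[1:]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```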
We also need a `Makefile` that will automatically build all the executables in `cmd`, run tests, do linting, etc. It looks something like this:
```makefile
.PHONY: test vet lint build clean

test:
	go test ./...

vet:
	go vet ./...

lint:
	test -z $$(gofmt -l internal cmd)

build:
	go build ./cmd/api

clean:
	rm -f api
```

Note the `$$` in the `lint` recipe: a single `$(...)` would be expanded by `make` itself (to nothing), and the check would always pass.
Now we have simple commands to run from the repo's root that will do most of the common tasks we care about. (The above assumes a single executable named `api` in the `cmd` directory.)
```shell
make build # build the code
make test  # test the code
make vet   # vet the code
make lint  # lint the code
make clean # remove old build
```
All of our CI/CD and infrastructure files should reference this `Makefile`. That way, if the way we build and test our code changes in the future, we only need to change configurations in one place.
Step 3 – Go Mod and Vendoring Dependencies
We always vendor dependencies on my team. This means we use the `go mod` tool to download all of our dependencies and commit them to source control alongside our code. There is debate about whether or not this is considered best practice, but we've found comfort in the fact that we have copies of the remote source in case something happens to it.

And it's not like `vendor` is on the same order of magnitude as `node_modules`…
In order to get started, and assuming you actually have a program in your `cmd` folder, just run:
```shell
go mod init
go mod tidy # unnecessary the first time, but a good habit
go mod vendor
```
This will create three new things in your project root:

- `go.mod` – a list of dependencies and their versions
- `go.sum` – hashes of dependencies for security reasons
- `vendor` directory – the source code of all dependencies
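For reference, the resulting `go.mod` stays small. The module path and the single dependency/version below are purely illustrative:

```
module github.com/qvault/webapp

go 1.14

require github.com/lib/pq v1.3.0
```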
Step 4 – .gitignore
I don't like prebuilt `.gitignore` files; I prefer project-specific ones, so I start from scratch. This is my typical starting point:

```gitignore
# ignore the local environment variable file
.env

# ignore built executables
/api
```

Note that `.gitignore` comments must be on their own lines – a trailing `# comment` after a pattern would become part of the pattern.
Step 5 – Dockerfile (Optional)
If you are containerizing your apps (you probably should be), then you will need a Dockerfile. From here on out, everything we do assumes you are using Kubernetes or something similar to deploy your apps.
A minimal `Dockerfile` could look something like this:
```dockerfile
FROM debian:stretch-slim

WORKDIR /

# Go binaries don't need a runtime, but outbound HTTPS calls
# still need the root CA certificates
RUN apt-get update && apt-get install -y ca-certificates

# The built executables are expected in a local bin directory
ADD bin /bin/

CMD ["/bin/sh"]
```
We use a lightweight OS, Debian stretch-slim. Remember, we are deploying pre-built executables! Go is a compiled language, so we don't need to ship the whole Go toolchain in our deploys.
After that, we just add our executables to the `/bin` directory and launch a shell.
Step 6 – Github Actions
So I'm going to get a bit sloppy here, and the reason for that is that GitHub Actions configurations are VERY dependent on the exact tools you use to deploy your apps. We use Kubernetes, Helm, Helm Secrets, and AWS ECR.
That said, the test file is universally useful. In `.github/workflows/tests.yml`:
```yaml
name: Tests

on:
  pull_request:
    branches: [master]

jobs:
  test:
    name: Test
    runs-on: ubuntu-latest
    env:
      GOFLAGS: -mod=vendor
      GOPROXY: "off"
    steps:
      - name: Set up Go 1.14
        uses: actions/setup-go@v1
        with:
          go-version: 1.14
        id: go

      - name: Check out code into the Go module directory
        uses: actions/checkout@v1

      - name: Format
        run: make lint

      - name: Vet
        run: make vet

      - name: Test
        run: make test

      - name: Build
        run: make build
```
Then, for your deploys, you might use something like this in `.github/workflows/deploy.yml`:
```yaml
name: Deploy

on:
  push:
    branches: [master]

jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    env:
      GOFLAGS: -mod=vendor
      GOPROXY: "off"
    steps:
      - name: Print
        run: echo "Deploying to $DEPLOY_ENV"

      - name: Set up Go 1.14
        uses: actions/setup-go@v1
        with:
          go-version: 1.14
        id: go

      - name: Check out code into the Go module directory
        uses: actions/checkout@v1

      # Only necessary because we are using Helm and need to pull template files
      # from a centrally controlled remote repository
      - name: Check out helm templates
        uses: actions/checkout@v2
        with:
          repository: {NAMESPACE}/{PROJECT}
          path: helm-templates
          ref: v1.3.1
          token: ${{ secrets.REPO_PAT }}

      # The templates repo was checked out into helm-templates above
      - name: Move templates to local project directory
        run: mv helm-templates/templates helm/templates

      - name: Build Go code
        run: make build

      # We use AWS ECR to store built images, then Helm instructs Kubernetes to pull them down
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Build, tag, and push image to Amazon ECR
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -t $ECR_REGISTRY/${GITHUB_REPOSITORY#*/}:$IMAGE_TAG .
          docker push $ECR_REGISTRY/${GITHUB_REPOSITORY#*/}:$IMAGE_TAG

      - name: Logout of Amazon ECR
        if: always()
        run: docker logout ${{ steps.login-ecr.outputs.registry }}

      - name: Install aws-iam-authenticator
        uses: prepor/action-aws-iam-authenticator@master

      - name: Deploy Helm
        run: |
          curl -LO https://git.io/get_helm.sh
          chmod 700 get_helm.sh
          ./get_helm.sh
          mkdir -p ~/.helm/plugins
          helm plugin install https://github.com/futuresimple/helm-secrets
          helm secrets upgrade --install ${GITHUB_REPOSITORY#*/} ./helm --namespace ${GITHUB_REPOSITORY#*/} -f helm/values.yaml -f helm/secrets.yaml --debug --set deployment.image.tag=$GITHUB_SHA
          kubectl get deploy -n ${GITHUB_REPOSITORY#*/} --no-headers -o custom-columns=":metadata.name" | xargs -n 1 kubectl rollout status -n ${GITHUB_REPOSITORY#*/} deploy
```
I've totally left out all the Helm and ECR details in this tutorial, mostly because you either:

- are already familiar with them, so you don't care,
- are using something else, so you don't care, or
- can easily find resources dedicated to those larger topics. Here is a quickstart for you if you need it.
Example Project Directory
If you've stuck with it, you'll have a project directory that loosely resembles this one:
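Pulling the steps together (and assuming the single `api` executable plus the Helm setup from Step 6), the layout is roughly:

```
webapp/
├── .github/
│   └── workflows/
│       ├── tests.yml
│       └── deploy.yml
├── .gitignore
├── Dockerfile
├── Makefile
├── cmd/
│   └── api/
│       └── main.go
├── go.mod
├── go.sum
├── helm/
├── internal/
│   └── database/
└── vendor/
```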
Thanks For Reading!
Follow us on Twitter @q_vault if you have any questions or comments
Take some coding courses on our new platform
Subscribe to our Newsletter for more programming articles