I have done quite a few website migrations over the years. Like many, my blog first started out on WordPress. After several attempts at optimisation, I ended up generating a static version of my WordPress website.
On my static site generation journey, I discovered Gatsby.js and completely redesigned my website around it. However, writing posts locally in Markdown has its own downsides and lacks the discoverability you get from blogging on Hashnode.
So I decided to move my blog posts over to Hashnode, but I still wanted my posts to appear on my main blog.
Migrating the Blog Posts
The first step was to import all my blog posts to Hashnode. There are a few methods you can use to import your blog posts.
I decided to use the Dev.to importer, as I repost all my posts on dev.to anyway. Unfortunately, the bulk import didn’t work for me, so I had to import my posts one at a time.
This turned out to be handy, though, as you need to set the following manually for each post:
- Post slug
- Tags
- Canonical URL
Once all my posts were imported it was time to edit my Gatsby website to start pulling down the posts.
Gatsby plugins
If you do a search for `gatsby-source-hashnode` you will find a couple of plugins on the Gatsby website.
I decided to go with the first one (`gatsby-source-hashnode`) as it has a lot more downloads and still appears to be maintained.
However, when testing out the plugin I quickly noticed that it was only pulling down the first 6 posts from my Hashnode blog. This might be a recent change to the API that hasn’t yet been picked up by the plugins.
Luckily, the plugin is open source so I raised a PR for the plugin owner to review.
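The essence of the fix is to keep requesting pages from the API until an empty page comes back, rather than stopping after the first page. Stripped of the plugin's specifics, the loop looks something like this — `fetchPage` here is a hypothetical stand-in for the plugin's real API call, taking a zero-based page number and resolving to an array of posts:

```javascript
// Sketch of fetching every page of posts rather than just the first.
// `fetchPage` is a stand-in for the plugin's actual Hashnode API request.
async function fetchAllPosts(fetchPage) {
  const allPosts = [];
  let page = 0;
  while (true) {
    const posts = await fetchPage(page);
    if (posts.length === 0) break; // an empty page means we have everything
    allPosts.push(...posts);
    page += 1;
  }
  return allPosts;
}
```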
In the meantime, I copied my modified code to a folder in my Gatsby repository called `plugins/gatsby-source-hashnode`. I then used the plugin as described in the documentation.
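Gatsby picks up local plugins from the `plugins` folder automatically, so the config entry is the same as for the published package. A minimal sketch, assuming the `username` option from the plugin's README:

```javascript
// gatsby-config.js — a sketch, not my exact config
module.exports = {
  plugins: [
    {
      resolve: "gatsby-source-hashnode",
      options: {
        // Your Hashnode username; the plugin sources posts for this account
        username: "YOUR_HASHNODE_USERNAME",
      },
    },
    // ...the rest of your plugins
  ],
};
```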
GraphQL queries
I am using GraphQL queries in a few places to pull down posts:
- Blog Feed
- Blog Post
- Latest Posts
- RSS Feed
For reference, this is what a couple of my queries look like for pulling from Hashnode.
Blog Feed
```graphql
query pageQuery($skip: Int!, $limit: Int!) {
  site {
    siteMetadata {
      title
      description
    }
  }
  allHashNodePost(
    sort: { fields: [dateAdded], order: DESC }
    limit: $limit
    skip: $skip
  ) {
    edges {
      node {
        brief
        slug
        title
        coverImage {
          childImageSharp {
            gatsbyImageData(
              width: 920
              height: 483
              layout: CONSTRAINED
              transformOptions: { cropFocus: CENTER }
            )
          }
        }
      }
    }
  }
}
```
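The `$skip` and `$limit` variables are supplied by the page-creation logic in `gatsby-node.js`. The arithmetic behind them is simple enough to sketch on its own — the helper name here is my own, not a Gatsby API:

```javascript
// Compute the page contexts that would be passed to Gatsby's createPage
// for a paginated blog feed: one { skip, limit } pair per feed page.
function buildPageContexts(totalPosts, postsPerPage) {
  const numPages = Math.ceil(totalPosts / postsPerPage);
  return Array.from({ length: numPages }, (_, i) => ({
    skip: i * postsPerPage,
    limit: postsPerPage,
  }));
}
```

In `gatsby-node.js` you would loop over these contexts, calling `createPage` with the feed template and each `{ skip, limit }` pair as the page context.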
Blog Post
```graphql
query BlogPostBySlug($slug: String!) {
  site {
    siteMetadata {
      title
      author
      siteUrl
    }
  }
  hashNodePost(slug: { eq: $slug }) {
    _id
    brief
    childMarkdownRemark {
      htmlAst
    }
    slug
    title
    dateAdded
    coverImage {
      publicURL
      childImageSharp {
        gatsbyImageData(
          width: 920
          height: 483
          layout: CONSTRAINED
          transformOptions: { cropFocus: CENTER }
        )
      }
    }
  }
}
```
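The RSS feed query follows the same pattern: with gatsby-plugin-feed, the serialize step maps the Hashnode nodes into feed items. The field names below match the queries above, but the rest is a hedged sketch rather than my exact config:

```javascript
// gatsby-config.js (excerpt) — illustrative gatsby-plugin-feed setup
{
  resolve: "gatsby-plugin-feed",
  options: {
    feeds: [
      {
        output: "/rss.xml",
        title: "Blog RSS Feed",
        query: `
          {
            site {
              siteMetadata {
                siteUrl
              }
            }
            allHashNodePost(sort: { fields: [dateAdded], order: DESC }) {
              edges {
                node {
                  brief
                  slug
                  title
                  dateAdded
                }
              }
            }
          }
        `,
        serialize: ({ query: { site, allHashNodePost } }) =>
          allHashNodePost.edges.map(({ node }) => ({
            title: node.title,
            description: node.brief,
            date: node.dateAdded,
            url: `${site.siteMetadata.siteUrl}/${node.slug}`,
            guid: `${site.siteMetadata.siteUrl}/${node.slug}`,
          })),
      },
    ],
  },
}
```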
I am not yet pulling down tags or reading time from the API. That is something I will try out later.
Automation with GitHub actions
I already have GitHub Actions set up for my website so that any new commit to the `main` branch triggers a website build and push to S3. I wrote a post about it if you want to find out how.
However, now that I am not committing articles to that repo I needed a way to automatically trigger builds when I publish a new post on Hashnode.
Luckily, Hashnode has an option to backup your blog posts to GitHub which we can use as the trigger point.
Step 1: Add workflow_dispatch to your GitHub action
To be able to trigger a build on GitHub remotely, you need to add `workflow_dispatch` to the `on:` section of your workflow file.
This is what the top of mine looks like:
```yaml
name: Deploy Blog

on:
  workflow_dispatch:
  push:
    branches:
      - main
```
Step 2: Create a personal access token
Next, we need to create a personal access token that we will use as the API Key for triggering the remote build.
You can do this in GitHub under Settings > Developer Settings > Personal Access Tokens.
I set my access token with the following permissions.
You will also need to set an expiry date, so set yourself a reminder to generate a new token before it expires.
Make sure you copy the access token as you won’t get the chance again!
Step 3: Add the key as a secret to your backup repository
To be able to use this access token, we need to add it as a secret to the repository where Hashnode is backing up your posts.
On the backup repository, go to Settings > Secrets and add a new repository secret called `ACCESS_TOKEN`, putting in the token from the previous step.
Step 4: Add a workflow file
Next, we need to add a workflow that gets triggered every time a commit is added to this repository. In our case, this will be whenever we publish a post on Hashnode (Note: Drafts aren’t saved in GitHub).
Add the following file to your backup repository as `.github/workflows/workflow.yml`:
```yaml
---
name: Trigger Deploy

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - run: |
          curl -X POST \
            -H "Authorization: Bearer ${{secrets.ACCESS_TOKEN}}" \
            -H "Accept: application/vnd.github.v3+json" \
            https://api.github.com/repos/GITHUB_USERNAME/WEBSITE_REPO/actions/workflows/workflow.yml/dispatches \
            -d '{"ref": "main"}'
```
Make sure you change `GITHUB_USERNAME` to match your GitHub username, `WEBSITE_REPO` to the repository with your Gatsby.js website, and `workflow.yml` to match the name of your deploy workflow file.
To test, try updating one of your posts on Hashnode; you should see this workflow get triggered, followed by your website workflow.