My experience with the AWS Cloud Resume Challenge was challenging but fun. I learned more than I thought I would and overall I'm very pleased that I took it on. I'd like to give a big thanks to Forrest Brazeal for giving back to the community by creating this challenge.
Certification
Getting certified was my first objective. I mainly relied on freeCodeCamp's free YouTube series covering the AWS Certified Cloud Practitioner, plus some free and paid practice tests. Mostly this felt like getting familiar with the broad array of terms, services, and capabilities of the AWS Cloud. I came away from my studies impressed with just how much AWS offers. From renting time on satellites, to AI development and testing, to data warehousing, I gained a greater appreciation for just how far cloud computing has come. As this was mostly a "1,000-foot view" of AWS, I scheduled the test for a month out and studied after work and on weekends. It was a straightforward exam and I passed by a comfortable margin.
HTML & CSS
The last time I seriously used HTML or CSS was editing the background on my Myspace page. So I followed some quick refresher videos and built a bare-bones webpage. Building it myself was more of a moral victory, because I knew that once I finished my simple version I would just find a nice template and edit that instead. I didn't see a point in reinventing the wheel here. The extent of my CSS was different header levels, playing with the background color, and centering my photo.
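For anyone curious, the kind of markup and CSS I'm talking about looks roughly like this (the class name and image file are placeholders, not my actual page):

```html
<!-- Minimal sketch: header levels, a background color, and a centered photo. -->
<style>
  body { background-color: #f5f5f5; font-family: sans-serif; }
  h1, h2 { text-align: center; }
  .profile-photo { display: block; margin: 0 auto; width: 200px; }
</style>

<h1>Andrew's Cloud Resume</h1>
<img class="profile-photo" src="photo.jpg" alt="Profile photo">
```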
Static Website
Hosting the site in an Amazon S3 bucket was pretty straightforward. The important learning here for me was setting up proper access controls. Because I knew I would be setting up a CloudFront distribution, I blocked all public access to the bucket and disabled static website hosting. Then it was just a matter of uploading my HTML and any other relevant resources, such as images and JavaScript files, to the bucket.
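The upload can be done from the console or the AWS CLI; a CLI version would look something like this (the bucket name is a placeholder):

```bash
# Upload the site files to the (placeholder) bucket.
aws s3 cp index.html s3://andrewscloudres-site/
aws s3 cp ./images/ s3://andrewscloudres-site/images/ --recursive
```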
HTTPS
Configuring the CloudFront distribution for HTTPS was more involved. I first needed to specify the domain I had purchased through Amazon Route 53. Next, I needed to request a custom SSL/TLS certificate using AWS Certificate Manager; this certificate would be needed to establish secure connections to the site. I used the latest TLS version for the security policy. I set the origin to my S3 bucket and set the viewer protocol policy to redirect HTTP to HTTPS.
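Since I later recreated the distribution in Terraform (more on that below), here's a trimmed-down sketch of the relevant settings. The resource references and IDs are placeholders, and the origin access configuration that lets CloudFront read the private bucket is only noted in a comment:

```hcl
# Sketch of the CloudFront distribution; resource references are placeholders.
resource "aws_cloudfront_distribution" "site" {
  enabled             = true
  default_root_object = "index.html"
  aliases             = ["andrewscloudres.com", "www.andrewscloudres.com"]

  origin {
    domain_name = aws_s3_bucket.site.bucket_regional_domain_name
    origin_id   = "s3-site"
    # An origin access control (not shown) is what lets CloudFront
    # read the bucket while public access stays blocked.
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "s3-site"
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    acm_certificate_arn      = aws_acm_certificate.site.arn
    ssl_support_method       = "sni-only"
    minimum_protocol_version = "TLSv1.2_2021"
  }
}
```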
DNS
Wanting to keep as much as possible within the AWS ecosystem, I opted to purchase my domain through Amazon Route 53. The user experience here was pretty polished. All I had to do was enter the domain name I wanted and AWS automatically checked its availability, along with pricing for alternative domain endings. I purchased the www.andrewscloudres.com domain for $14. Then I created two A records to be used as aliases for the CloudFront distribution. This way, users can reach the page from both www.andrewscloudres.com and andrewscloudres.com.
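In Terraform terms, an alias A record pointing at the distribution looks roughly like the sketch below (the hosted zone and distribution references are placeholders; the www record follows the same pattern with a different name):

```hcl
# Alias A record for the apex domain; the www record is the same pattern.
resource "aws_route53_record" "apex" {
  zone_id = aws_route53_zone.site.zone_id
  name    = "andrewscloudres.com"
  type    = "A"

  alias {
    name                   = aws_cloudfront_distribution.site.domain_name
    zone_id                = aws_cloudfront_distribution.site.hosted_zone_id
    evaluate_target_health = false
  }
}
```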
Javascript
Part of the challenge called for implementing a visit counter that would be displayed on the webpage and increment every time a new visitor came to the site. For this I created a "counter update" function that is triggered every time someone loads the webpage. The function makes a fetch request (HTTP GET) to a Lambda function (discussed later) and displays the JSON response on the site.
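The function itself is only a few lines. Here's a sketch, assuming the Lambda is exposed through an HTTP endpoint (the URL and the element id are placeholders for my real values):

```javascript
// Sketch of the counter update function; endpoint URL and element id
// are placeholders.
async function updateCounter() {
  try {
    const response = await fetch("https://example-lambda-endpoint.example.com/count");
    const data = await response.json();
    document.getElementById("visit-count").innerText = data.count;
  } catch (err) {
    console.error("Visit counter unavailable:", err);
  }
}

// Run on page load.
updateCounter();
```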
Python
Of course there are several APIs already available for visitor counters, but I thought it would be more fun to actually write some code here. I used AWS Lambda as a lightweight way to implement the API. The Lambda function simply reads the DynamoDB table, increments the value, then updates the table with the new value. This called for importing the boto3 library, which is the AWS SDK for Python. It gave me access to the DynamoDB objects I would need for the function: I only had to define the resource type (dynamodb) and specify the name of the table. Then, using 'get_item', the function reads the current value from the table, increments it by one, writes it back with 'put_item', and returns the new value. So the entire workflow is: USER > andrewscloudres.com > JavaScript function makes a fetch request to Lambda > Lambda function updates the DynamoDB table > JavaScript receives the updated value and updates the site.
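Here's a sketch of the handler, with the table name and key as placeholders for my real values:

```python
import json
import boto3

# Placeholder table name; the table holds a single item with the count.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("visit-counter")

def lambda_handler(event, context):
    # Read the current count from the table.
    item = table.get_item(Key={"id": "visits"})["Item"]
    new_count = int(item["count"]) + 1

    # Write the incremented value back and return it to the caller.
    table.put_item(Item={"id": "visits", "count": new_count})
    return {
        "statusCode": 200,
        "body": json.dumps({"count": new_count}),
    }
```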
Database
For my visit counter I would need a database to store the count. For this I used DynamoDB with the default read/write settings, which, at the scale of this project, are essentially free. I only needed to create a table with a single value representing the hits to the site; this is the same table and value referenced in my Python code. DynamoDB was the clear choice here, as the simplicity of the table called for a NoSQL implementation.
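Defined in Terraform (more on that in the next section), the whole table is just a few lines. This is a sketch with placeholder names, shown with on-demand billing as an example:

```hcl
# Sketch of the table definition; name, key, and billing mode are examples.
resource "aws_dynamodb_table" "visits" {
  name         = "visit-counter"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"

  attribute {
    name = "id"
    type = "S"
  }
}
```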
Infrastructure-as-Code
The next part of the challenge was to implement my AWS resources as infrastructure-as-code using either AWS SAM or Terraform. I chose Terraform because I wanted to learn a tool that is vendor-agnostic. My strategy was to first get familiar with a resource using the AWS Console, and once I had a good idea of the important parameters, re-create the same resource in Terraform. Having used Ansible a bit in college, I found the Terraform syntax very similar to an Ansible playbook. There's also extensive documentation on how to implement each AWS resource in Terraform, so this part wasn't as difficult as I thought it would be. After configuring the AWS CLI on my host machine to interact with my AWS account, I was able to apply all of my changes without clicking around the AWS Console. Infrastructure-as-code is also great because not only is it fast, everything I create becomes a template for the next time I need a similar resource: I can just copy the code, adjust the parameters, and create the resource. I did this for the DynamoDB table, the S3 bucket, and the CloudFront distribution.
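As an example of what that looks like in practice, here's a simplified sketch of the S3 bucket and its public access block (the bucket name is a placeholder):

```hcl
# Placeholder bucket name; content is served through CloudFront, not directly.
resource "aws_s3_bucket" "site" {
  bucket = "andrewscloudres-site"
}

# Block every form of public access to the bucket.
resource "aws_s3_bucket_public_access_block" "site" {
  bucket                  = aws_s3_bucket.site.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```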
Source Control & CI/CD
The final step for me was implementing a CI/CD pipeline for ease of updating. This was one of the more challenging parts. I opted for a GitHub repository, as I already have some other projects there. The first step was creating a new repository and establishing connectivity between my IDE (Visual Studio Code) and GitHub so I could push changes to the repo. This is where the trouble started, though I eventually realized I was making it more difficult than it needed to be. In short, to access my repository remotely via SSH, I needed to create an SSH key pair on my local host and share the public key with GitHub. Easy enough, but for some reason this approach kept failing. After some frustrated Google, Reddit, and ChatGPT sessions, I came to realize that because I was using WSL on my local host (I prefer Linux for navigating directories), the file path to my SSH key was different, so the ssh-agent on my host machine was looking for the key in another directory. I switched away from WSL for the key generation and resolved that error, but then came the problem of setting permissions on the SSH keys. I learned that if the permissions aren't restrictive enough, ssh will ignore the keys and refuse to use them. So finally, I was able to set the correct permissions on the .ssh directory and successfully reach GitHub. After more trial and error (I will spare the reader my process of understanding GitHub branches) I was able to push changes to the GitHub repository directly from Visual Studio Code. Woohoo!!
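For anyone hitting the same wall, the general recipe looks something like this (the key filename is the ssh-keygen default and the email is a placeholder):

```bash
# Generate a key pair and add the public key to your GitHub account.
ssh-keygen -t ed25519 -C "you@example.com"

# ssh ignores keys with loose permissions, so lock them down.
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_ed25519

# Load the key and confirm GitHub accepts it.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
ssh -T git@github.com
```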
The second part of the pipeline was pushing updates from GitHub to AWS. This called for creating a GitHub Action. To implement it, I created a special directory in my GitHub repo (.github/workflows) and a YAML file with instructions to update the S3 bucket. Again, having done some Ansible before, I was happy to see that I was essentially writing a simple playbook here. The workflow is triggered whenever I push a change from VS Code to GitHub. The GitHub Action first checks out the code, then configures credentials to access my AWS account. To achieve this I created an IAM role on the AWS side and gave it permission to read and write to my S3 bucket. In the workflow file, I supplied the ARN (Amazon Resource Name) of the IAM role to the aws-actions/configure-aws-credentials action so the GitHub Action would assume that role.
I bounced back and forth between this approach, known as OIDC (the one AWS recommends), and the alternative of storing access keys as GitHub secrets and referencing those secrets in the workflow. Both approaches are good options and keep the credentials out of the code; the important thing to keep in mind with OIDC is to limit the role's permissions to only what is needed. Once the GitHub Action successfully configured credentials, the final step in the workflow was to sync the contents of the GitHub repository with the contents of the S3 bucket and delete anything in the bucket that doesn't match the repo. After a handful of failed deployments I finally got the workflow functioning properly. I tested it with some minor page updates to see whether the site would update automatically once I pushed the changes. Sure enough (after clearing my browser cache) it worked! I could finally breathe a sigh of relief.
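For reference, a stripped-down version of the workflow looks something like this (the role ARN, region, paths, and bucket name are all placeholders):

```yaml
# Sketch of the deploy workflow; role ARN, region, and bucket are placeholders.
name: Deploy site to S3

on:
  push:
    branches: [main]

permissions:
  id-token: write   # needed for OIDC
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repository
        uses: actions/checkout@v4

      - name: Configure AWS credentials via OIDC
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-deploy-role
          aws-region: us-east-1

      - name: Sync repository contents to the S3 bucket
        run: aws s3 sync . s3://andrewscloudres-site --delete --exclude ".git/*"
```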
Final Thoughts
Overall, this was a great challenge. I learned some completely new things (Terraform, JavaScript, CI/CD, AWS) and gained a deeper understanding of some things I was already familiar with (Git, DNS, HTTPS, SSH key management). There were some moments of frustration, but that always comes with learning new things. The feeling of satisfaction at the end was well worth it. If you've stuck around this long, thanks for following me on my journey. I hope you took something useful from it.
-Andrew