Roddy Ezell

Self-taught to JPMorgan Chase: My Cloud Resume Challenge Journey

How It Started

In January 2023, I learned about the Cloud Resume Challenge, a project that asks people considering a career in the cloud to jump into the deep end: build a static resume site and get it up and running with one of the major cloud vendors (AWS, Azure, or GCP). There’s little guidance given, just a spec that, if followed, forces an exploration of the services involved in a cloud deployment and a confrontation with some of the real concerns a DevOps engineer faces.

For some context, my education and job at the time did nothing to officially prepare me for the task. I’d been working as a mechanical engineer for a medical device company for the previous six years, primarily focused on the many ways of testing how a suture anchor might tear out of your shoulder, or whether a sterile pouch can sit on a shelf for a long time and still maintain its aseptic barrier. While there was a felt weight of importance to that work, software development always interested me, and I would spend a lot of my spare time clumsily building web apps with React and Django. It was a hobby I would find time for here and there, hit a roadblock or get too busy with my real job, and then put down.

A few years of this and it became evident that I should seriously consider making the part-time developer thing my full-time thing. I just didn’t want that to involve going back to college or paying for an expensive bootcamp.

Now, imposing those constraints doesn’t leave a ton of options. I’d de facto chosen the self-taught route for which there is no clear path and no one has any reason to give a damn. And as an outsider with a penchant for doomscrolling r/cscareerquestions, it appeared that I may have already missed the golden age of hiring, where even the dregs of the talent pool couldn’t bat away the recruiters. This would be an uphill battle.

I looked for opportunities at my job to get involved with any projects requiring some form of development. That was ultimately limited to building an app with Microsoft PowerApps to capture tribal knowledge and a ServiceNow app to triage requests between the R&D test lab and model shop. Both gave me a proper excuse to code during work hours, but it was obvious that neither would help me land a development role that wasn’t bound to one of those platforms.

So I did an accounting of the various projects I’d toyed with over the years and realized none truly represented a completed work.

That’s where the Cloud Resume Challenge comes in. Its objective became an organizing principle for combining all my disparate knowledge into something tangible, simultaneously offering a way out of tutorial hell and establishing demonstrable proof of my capacity to build something of value. You can check out the site here: https://www.roddyezell.com/

How It's Going

Soon after completing the project, I stumbled across JPMorgan’s Emerging Talent Software Engineering program: a talent pipeline for bringing in people from non-traditional backgrounds. They just so happened to care that their developers have a good footing with AWS at the same time I just so happened to have built my resume site and deployed it to AWS. That led to an invitation to their 12-hour hackathon, which then led to an offer in April 2023.


For the rest of this article, I'll walk you through the overall process – what services were used, a bit of how they were set up, and the lessons I learned along the way.

Getting Started with AWS Identity and Access Management

I started by setting up secure access to my AWS account:

  • Securing the Account:
    I enabled Multi-Factor Authentication (MFA) for the root user to add an extra layer of security.

  • Creating an IAM User:
    Next, I created a separate IAM user with programmatic access. This means the user could interact with AWS using APIs and the AWS CLI, rather than just through the web console. I limited this user’s permissions to only what was needed (like full access to S3) to follow the principle of least privilege.

  • Downloading Credentials:
    AWS generates an access key ID and secret access key for the user. These are essential for the CLI and for automation, and the secret is only shown once at creation, so I saved them to a CSV file right away.
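
For anyone who prefers to script that setup, here’s a rough Boto3 sketch of the same steps. The user name is a made-up example, and the managed AmazonS3FullAccess policy stands in for whatever least-privilege policy your project actually needs:

import boto3

iam = boto3.client("iam")

# Create a dedicated user for programmatic access (name is a placeholder)
iam.create_user(UserName="resume-deployer")

# Grant only what the project needs; here, the managed S3 policy
iam.attach_user_policy(
    UserName="resume-deployer",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
)

# Generate the access key pair; the secret is returned only this one time
key = iam.create_access_key(UserName="resume-deployer")["AccessKey"]
print(key["AccessKeyId"], key["SecretAccessKey"])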

Making Life Easier with AWS Vault

To avoid entering credentials repeatedly, I installed AWS Vault. This tool securely stores your credentials and lets you run AWS CLI commands without hassle. After setting it up, I could simply execute commands like

aws-vault exec my-user -- aws s3 ls

to see my S3 buckets or log in with

aws-vault login my-user

Deploying Infrastructure with AWS SAM

I used the AWS Serverless Application Model (SAM) to define and deploy the resources needed for the project. Think of SAM as a blueprint for your cloud setup, written in a template file (template.yaml). It lets you manage everything from S3 buckets to Lambda functions in one place.

  • Project Initialization: I started with
sam init

to create a new project template and then ran

sam build 

to package everything.

  • Deploying: With AWS Vault handling my credentials, I deployed the project using
sam deploy --guided

which walks you through a few settings. This process created two CloudFormation stacks that housed all the resources defined in the SAM template.
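
If you want to confirm what the deploy actually created, a quick Boto3 check of the CloudFormation stacks looks something like this (just a sketch; sam deploy also prints stack events and outputs as it runs):

import boto3

# List the CloudFormation stacks in the account and their current status
cloudformation = boto3.client("cloudformation")

for stack in cloudformation.describe_stacks()["Stacks"]:
    print(stack["StackName"], stack["StackStatus"])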

Hosting the Website on S3 and Making It Public

For hosting the website:

  • S3 Bucket Setup: I created an S3 bucket (named something like ezell-website to ensure global uniqueness) to store the static files (HTML, CSS, JavaScript). The bucket was configured for website hosting, meaning it directly serves content like a traditional web server.
  • Public Access: I updated the bucket’s policy to allow public read access so anyone on the internet could view the site.
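
Those two settings can also be applied with Boto3. Here’s a rough sketch using the example bucket name from above; the index and error documents are assumptions, and on newer accounts you may first need to relax the bucket’s Block Public Access settings before the public policy will apply:

import json
import boto3

s3 = boto3.client("s3")
bucket = "ezell-website"  # example name from above

# Turn on static website hosting (document names are assumptions)
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Allow anyone to read the objects in the bucket
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))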

Speeding Up Delivery with CloudFront

To improve the performance and reliability of the website:

  • Setting Up CloudFront: CloudFront, AWS’s content delivery network (CDN), was configured to cache the content from the S3 bucket and deliver it to users faster.
  • Cache Invalidation: When I updated the website, I needed to clear out the cached content on CloudFront. This was automated using an AWS CLI command that invalidates the cached files so visitors always see the latest version.
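
The CLI does this with aws cloudfront create-invalidation; the Boto3 equivalent is sketched below, with a placeholder distribution ID. Invalidating /* wipes the whole cache, which is the simplest option for a small site like this:

import time
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_invalidation(
    DistributionId="E1234EXAMPLE",  # placeholder: your distribution's ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),  # must be unique per request
    },
)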

Lambda, API Gateway, and DynamoDB

While the website itself is static, the spec required a visitor counter.

  • Lambda Functions & API Gateway: I set up two Lambda functions—one to read the visitor count from a DynamoDB table (GET) and another to update it (PUT). These functions were triggered via API Gateway, which handles the HTTP requests. (A simplified sketch of the handlers is included after this list.)
  • CORS Configuration: Since the API and website are on different domains, I had to configure CORS (Cross-Origin Resource Sharing) in the Lambda functions so that the website could fetch data from the API without security issues.
  • DynamoDB: A DynamoDB table was created to store the visitor count. DynamoDB is a NoSQL database, which means it handles data without a fixed schema, making it flexible for projects like this.
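
To make that concrete, here’s a simplified sketch of what the two handlers and their CORS headers can look like with Boto3. The table name and key names are placeholders, and the real functions (defined in the SAM template) differ in the details:

import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("visitor-count")  # placeholder table name

# Allow the website's domain to call the API from the browser
CORS_HEADERS = {
    "Access-Control-Allow-Origin": "https://www.roddyezell.com",
    "Access-Control-Allow-Methods": "GET,PUT,OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type",
}

def get_count(event, context):
    """GET: read the current visitor count from DynamoDB."""
    item = table.get_item(Key={"id": "visitors"}).get("Item", {})
    count = int(item.get("count", 0))
    return {
        "statusCode": 200,
        "headers": CORS_HEADERS,
        "body": json.dumps({"count": count}),
    }

def put_count(event, context):
    """PUT: atomically increment the visitor count."""
    result = table.update_item(
        Key={"id": "visitors"},
        UpdateExpression="ADD #c :inc",
        ExpressionAttributeNames={"#c": "count"},  # "count" is a reserved word
        ExpressionAttributeValues={":inc": 1},
        ReturnValues="UPDATED_NEW",
    )
    count = int(result["Attributes"]["count"])
    return {
        "statusCode": 200,
        "headers": CORS_HEADERS,
        "body": json.dumps({"count": count}),
    }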

Automating Deployments with a Makefile and GitHub Actions

Automation played a big part in keeping the project manageable:

  • Makefile: I wrote a Makefile with commands to build and deploy both the infrastructure and the site. For instance,
make deploy-infra

builds and deploys the SAM template and

make deploy-site

syncs the website files to the S3 bucket.

  • GitHub Actions: I set up workflows to automatically deploy changes when I pushed new commits. One workflow handled backend changes (using SAM), and another handled frontend updates. Additionally, an action was configured to invalidate the CloudFront cache whenever new code was deployed.

What I Learned

Here are some of the key takeaways:

  • AWS Vault:
    It’s a lifesaver for securely managing AWS credentials. No more manual entry every time you need to run a command!

  • Boto3:
    This AWS SDK for Python made interacting with DynamoDB straightforward.

  • CloudFront Invalidation:
    Automating cache invalidation ensures users always see the latest content without waiting for the CDN to refresh on its own. This small step makes a big difference in user experience. I used this GitHub Workflow Action for invalidating CloudFront distribution paths.

  • Infrastructure as Code (IaC):
    Defining everything in a SAM template not only provides consistency but also makes it easy to track changes and collaborate. It’s like having your infrastructure in version control — if something goes wrong, you can quickly roll back or redeploy.

  • Automation Tools:
    Using a Makefile for local tasks and GitHub Actions for continuous deployment streamlined the workflow and reduced manual steps. This allowed me to focus more on the project itself rather than the deployment process.


Conclusion

This project wasn’t just about building a website – it was a hands-on lesson in using AWS services together as part of a real-world application. From securely managing credentials with AWS Vault and automating deployments with GitHub Actions, to using AWS SAM for defining a serverless architecture and getting the visitor counter to work with Lambda and DynamoDB, each step taught me something that still impacts my work today.

Hopefully this gives you a clearer picture of how all these AWS services work together and maybe some encouragement to try it yourself!

Top comments (1)

leob

Impressive! So you're employed at JP Morgan now? Well done!