You know how it goes: you start with a simple Python script to automate some AWS tasks, then another one to parse some CloudWatch logs, and before you know it, you're asking Claude to explain what a metaclass is because the documentation might as well be written in Ancient Greek. That was me, three months ago.
How I Got Here
After years of writing Python scripts for AWS automation (and yes, I'm guilty of having one of those "does everything" scripts), I decided to build something proper. Between my own old scripts, Stack Overflow, and Claude helping me understand Python's more esoteric features, I started to feel like I was finally getting this whole "software development" thing.
Screenshot of the app. This is seed data, not my real investments. (I wish.)
You see, I got tired of managing multiple Excel spreadsheets tracking various investment fund portfolios. Every time I bought or sold shares, I had to manually update prices, calculate returns, and track dividends. As an AWS professional, I cringed at the idea of manual updates. We automate infrastructure, why not this?
So I built my first real Python application: an Investment Portfolio Manager. With our AI friends doing the heavy lifting on the front-end, and plenty of trial and error on my part, I learned about proper project structure, SQLAlchemy relationships, and even some unit testing!
The application handled everything: portfolio management, transaction tracking, dividend processing, and even automated price updates. I had it running beautifully in Docker on my home server, with separate containers for the Flask back-end, React front-end (yes, I learned a bit of JavaScript too, thank you Mr. LLM), and a SQLite database.
The "Your hobby is also your job" Dilemma
Being an AWS professional, running this on my home server felt wrong. I mean, I live and breathe AWS at my day job, so why am I managing containers on my own hardware? Plus, every time my home internet hiccuped, my wife would complain that she couldn't check her investment returns. (Not really. She's very happy with the app and understanding of "the process" <3)
The easy way: ECS. I already had the docker-compose file:
services:
  backend:
    build: ./backend
    container_name: investment-portfolio-backend
    environment:
      - DB_DIR=/data/db
      - LOG_DIR=/data/logs
      - DOMAIN=${DOMAIN:-localhost}
    volumes:
      - /path/to/your/data:/data
    networks:
      - app-network

  frontend:
    build:
      context: ./frontend
      args:
        - DOMAIN=${DOMAIN:-localhost}
        - USE_HTTPS=${USE_HTTPS:-false}
    container_name: investment-portfolio-frontend
    environment:
      - DOMAIN=${DOMAIN:-localhost}
      - USE_HTTPS=${USE_HTTPS:-false}
    ports:
      - "80:80"
    depends_on:
      - backend
    networks:
      - app-network
But then I started thinking like an AWS architect (and looking at the AWS pricing calculator):
- The price update function only needs to run once a day (see the sketch just after this list).
- Same goes for accessing it: we check it a few times a week, so why run (and pay for) 24/7 containers?
- The frontend is just static files; sounds like an S3 website to me.
- API Gateway and Lambda to handle the back-end API calls
- Aurora Serverless for the relational data (portfolios, transactions)
- DynamoDB could store the price history (better than my SQLite price table). Spoiler: I never got to this step.
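To give a flavour of that first bullet: a minimal sketch of the daily price update as its own Lambda, triggered by an EventBridge schedule rather than a cron job in a container. This is illustrative only; update_fund_prices() is a stand-in, not the real project code.

import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def update_fund_prices():
    """Placeholder for the real price update logic."""
    return 0

def update_prices_handler(event, context):
    # Invoked once a day by an EventBridge rule, e.g. rate(1 day)
    logger.info('Price update triggered by source: %s', event.get('source', 'manual'))
    updated = update_fund_prices()
    return {'statusCode': 200, 'body': json.dumps({'funds_updated': updated})}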
So, that's when I fell down the serverless rabbit hole.
I've waded in the shallow end of this particular pool before: a small serverless app, a temperature-tracking project that was a collaboration with my wife. It pulls temperature data from KNMI (the Dutch meteorological institute) and generates a color-coded table that my wife then used to create a temperature rug. Literally turning weather data into a crafting project!
| Date       | Min.Temp | Min.Kleur | Max.Temp | Max.Kleur |
|------------|----------|-----------|----------|-----------|
| 2023-03-01 | -4.1°C   | darkblue  | 7.1°C    | lightblue |
| 2023-03-02 | 1.3°C    | blue      | 6.8°C    | lightblue |
| ...        |          |           |          |           |
The app could run either locally or through Lambda via API Gateway, taking parameters for start date, end date, and date ordering. It was a perfect starter project, combining AWS services with a practical (and creative!) real-world use case.
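For flavor, here's a stripped-down version of what that kind of handler looks like. The parameter names are illustrative and the actual KNMI call is stubbed out; this isn't the real project code.

import json
from datetime import datetime

def fetch_knmi_temperatures(start, end):
    """Stub: the real version calls the KNMI API and maps temperatures to colors."""
    return [{'date': '2023-03-01', 'min': -4.1, 'min_kleur': 'darkblue',
             'max': 7.1, 'max_kleur': 'lightblue'}]

def lambda_handler(event, context):
    # API Gateway passes query parameters such as ?start=2023-03-01&end=2023-03-31&order=desc
    params = event.get('queryStringParameters') or {}
    start = datetime.strptime(params.get('start', '2023-01-01'), '%Y-%m-%d')
    end = datetime.strptime(params.get('end', '2023-12-31'), '%Y-%m-%d')
    descending = params.get('order', 'asc') == 'desc'

    rows = fetch_knmi_temperatures(start, end)
    rows.sort(key=lambda r: r['date'], reverse=descending)
    return {'statusCode': 200, 'body': json.dumps(rows)}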
Going from that simple Lambda function to a full Flask application with SQLAlchemy models, background jobs, and complex relationships is like trying to explain IAM policies to a developer: make time in your agenda, because it won't be quick and there will be some confusion along the way.
The Serverless Temptation
So, bringing it back, I couldn't help but look at my beautiful containerized application humming along and think "this could be more... cloudy." I mean, we have all these amazing serverless services, and here I am, managing containers like it's 2015.
The application was working great, don't get me wrong. But every time I was working in the AWS Console, those Lambda and API Gateway services were just sitting there, judging me. "Why aren't you using us?" they seemed to ask. "We could scale automatically, you know. No more container management..."
So I did what any reasonable AWS admin would do: completely re-architect the application into a serverless one. Because why make small, sensible changes when you can do a complete architectural overhaul? The original project only took two months; I'm sure this will be a breeze.
The Database Dilemma
First up was the database. I had the backend use SQLite, which worked great locally, but then reality hit: SQLite and Lambda aren't exactly best friends.
Sure, you could use SQLite in Lambda (and part of me really wanted to try; it's the only way I'm ever getting into Corey Quinn's blog), but let's be reasonable here. Plus, I'd then need to maintain two codebases: one for the Docker setup and another for Lambda.
Pass.
I needed something that would play nice with all the SQLAlchemy knowledge I'd just painfully acquired. After much contemplation (and several cups of coffee), I realized I could actually support both: SQLAlchemy speaks PostgreSQL just as happily as SQLite, and Aurora Serverless seems like a good fit on the Lambda side. So, we make a dual handler.
from contextlib import contextmanager

@contextmanager
def db_session():
    """
    Environment-aware database session manager.

    Why this approach:
    1. Flask App: Uses Flask-SQLAlchemy session management
    2. Lambda: Uses direct AWS database connections
    3. Connection pooling optimization
    4. Automatic cleanup of resources
    """
    if is_flask_context():  # Check if running in Flask
        # Flask benefits:
        # - Integrated with Flask-SQLAlchemy
        # - Handles application context
        # - Uses Flask configuration
        from ..models import db
        try:
            yield db.session
            db.session.commit()
        except Exception:
            db.session.rollback()
            raise
        finally:
            db.session.close()
    else:  # Running in Lambda
        # Lambda benefits:
        # - Direct database connections
        # - Optimized for serverless
        # - No Flask dependency overhead
        session = create_aws_db_session()  # helper not shown here; sketched below
        try:
            yield session
            session.commit()
        except Exception:
            session.rollback()
            raise
        finally:
            session.close()

def is_flask_context():
    """Check if code is running in a Flask context."""
    try:
        from flask import has_app_context
        return has_app_context()
    except ImportError:
        return False
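create_aws_db_session() isn't shown in the snippet. A minimal sketch of what it could look like, assuming plain SQLAlchemy pointed at the Aurora PostgreSQL endpoint with credentials coming from environment variables (the real thing would probably pull them from Secrets Manager):

import os

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

_engine = None  # reused across warm Lambda invocations

def create_aws_db_session():
    """Hypothetical helper: build a SQLAlchemy session against Aurora PostgreSQL."""
    global _engine
    if _engine is None:
        _engine = create_engine(
            'postgresql+psycopg2://{user}:{pw}@{host}:5432/{db}'.format(
                user=os.environ['DB_USER'],
                pw=os.environ['DB_PASSWORD'],
                host=os.environ['DB_HOST'],
                db=os.environ['DB_NAME'],
            ),
            pool_pre_ping=True,  # recycle connections that died between invocations
            pool_size=1,         # one connection is plenty for a single Lambda container
        )
    return sessionmaker(bind=_engine)()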
The Lambda Learning Curve
Next up was converting my Flask application to Lambda functions. How hard could it be, right? I mean, I've written plenty of Lambda functions before:
import json

def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'body': json.dumps({'message': 'Hello World!'})
    }
But converting a full Flask application? That was a different story. My first attempt looked something like this:
# My first attempt at converting a Flask route
@app.route("/portfolios", methods=["GET"])
def get_portfolios():
    portfolios = Portfolio.query.all()
    return jsonify(portfolios)

# So, I'm not even trying to keep the codebase singular and I'm already at this monstrosity
def lambda_handler(event, context):
    try:
        with db_session() as session:
            portfolios = session.query(Portfolio).all()
            return {
                'statusCode': 200,
                'headers': {
                    'Content-Type': 'application/json',
                    'Access-Control-Allow-Origin': '*'  # CORS... *sigh*
                },
                'body': json.dumps([{
                    'id': str(p.id),  # UUID needs string conversion
                    'name': p.name,
                    'description': p.description,
                    'is_archived': p.is_archived
                } for p in portfolios])
            }
    except Exception as e:
        print(f"Error: {str(e)}")  # CloudWatch, my old friend
        return {
            'statusCode': 500,
            'headers': {'Content-Type': 'application/json'},
            'body': json.dumps({'error': str(e)})
        }
So, this is terrible. And not maintainable. Not wanting to repeat myself (the DRY principle; see, I'm learning developer stuff!), I wrote a decorator:
import json
from functools import wraps

def lambda_response(func):
    @wraps(func)
    def wrapper(event, context):
        try:
            result = func(event, context)
            return {
                'statusCode': 200,
                'headers': {
                    'Content-Type': 'application/json',
                    'Access-Control-Allow-Origin': '*'
                },
                'body': json.dumps(result)
            }
        except Exception as e:
            # Log to CloudWatch because that's where I live now
            print(f"Error in {func.__name__}: {str(e)}")
            return {
                'statusCode': 500,
                'headers': {'Content-Type': 'application/json'},
                'body': json.dumps({'error': str(e)})
            }
    return wrapper
Now my Lambda functions started looking much cleaner:
@lambda_response
def get_portfolios(event, context):
    with db_session() as session:
        portfolios = session.query(Portfolio).all()
        return [p.to_dict() for p in portfolios]
But that only works for Lambda; now the original Flask routes are broken. So, I wrote a decorator that lets me use the same routes for both Flask and Lambda:
import functools

from flask import current_app

def dual_handler(route_path, methods=None):
    """
    Decorator that supports both Flask routes and Lambda handlers.

    Benefits:
    1. Single source of truth for route definitions
    2. No conditional code in business logic
    3. Transparent handling of environment differences
    4. Easy testing of both modes
    """
    def decorator(f):
        # Register Flask route if in Flask context
        if current_app:
            f = current_app.route(route_path, methods=methods)(f)

        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            # Detect environment and adapt accordingly
            if len(args) == 2 and isinstance(args[0], dict):
                # Looks like a Lambda invocation: (event, context)
                event, context = args
                with current_app.test_request_context():
                    flask_request = create_flask_request(event)
                    flask_response = f(*args, **kwargs)
                    return create_lambda_response(flask_response)
            return f(*args, **kwargs)
        return wrapper
    return decorator
It leans on the following helper functions to translate between the two formats, returning the correct response no matter the environment:
import json

from werkzeug.test import EnvironBuilder
from werkzeug.wrappers import Request

def create_lambda_response(flask_response):
    """
    Convert Flask response to Lambda response format.

    Why this matters:
    1. Maintains API Gateway integration
    2. Preserves HTTP semantics
    3. Handles binary responses correctly
    """
    return {
        'statusCode': flask_response.status_code,
        'headers': dict(flask_response.headers),
        'body': flask_response.get_data(as_text=True)
    }

def create_flask_request(event):
    """
    Convert Lambda event to Flask request.

    Why this matters:
    1. Allows reuse of Flask route handling code
    2. Maintains compatibility with Flask extensions
    3. Enables gradual migration of functionality
    """
    http_method = event['requestContext']['http']['method']
    path = event['requestContext']['http']['path']
    headers = event.get('headers', {})
    query_string = event.get('queryStringParameters', {})
    body = event.get('body', '')

    # Convert query parameters to string format
    query_string = '&'.join([f"{k}={v}" for k, v in query_string.items()]) if query_string else ''

    builder = EnvironBuilder(
        path=path,
        method=http_method,
        headers=headers,
        query_string=query_string,
        json=json.loads(body) if body else None
    )
    return Request(builder.get_environ())
This makes it possible to use the same routes for both Flask and Lambda:
@dual_handler("/portfolios", methods=["GET"])
def get_portfolios(event, context):
    with db_session() as session:
        portfolios = session.query(Portfolio).all()
        return [p.to_dict() for p in portfolios]
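For reference, create_flask_request assumes the API Gateway HTTP API (payload version 2.0) event format, which is where the requestContext.http block comes from. Trimmed down to only the fields the helper actually reads, the event looks roughly like this (the query parameter is just an example):

# Trimmed sample of the HTTP API v2.0 payload that API Gateway hands to the function
sample_event = {
    'requestContext': {'http': {'method': 'GET', 'path': '/portfolios'}},
    'headers': {'content-type': 'application/json'},
    'queryStringParameters': {'include_archived': 'false'},
    'body': '',
}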
The Static Simplicity
The front-end was actually the easiest part. All dynamic components are in the backend. S3 static website hosting and CloudFront? That's bread and butter.
A simple script like this can upload your frontend to S3 and invalidate the CloudFront cache to force a refresh of the content you just uploaded:
# This part I understand!
# Long cache for the fingerprinted build assets...
aws s3 sync build/ s3://${BUCKET_NAME} \
    --delete \
    --cache-control 'public, max-age=31536000' \
    --exclude 'index.html'

# ...while index.html goes up separately with no caching,
# so new deployments are picked up immediately
aws s3 cp build/index.html s3://${BUCKET_NAME}/index.html \
    --cache-control 'no-cache'

aws cloudfront create-invalidation \
    --distribution-id ${CF_ID} \
    --paths "/*"
The Results
After weeks of learning, coding, and occasionally questioning my life choices, I had successfully transformed my containerized application into a fully serverless architecture. I don't think I'll keep it online as I don't want to build the security around it or suffer a Denial of Wallet attack if the URL ever comes out, but I learned a lot and I'm proud of what I've built.
Here's what I learned:
- Python isn't just for scripts anymore (but my shell scripts still come in handy; see: https://github.com/ndewijer/CodeSnippets)
- The AWS Free Tier is your friend during development
- CloudWatch Logs are still the best debugging tool
- Sometimes the "proper" way isn't the AWS way (and that's okay)
Would I do it again? Oh god no. But I'm glad I did it. My investment portfolio manager runs perfectly fine (and securely) on my private network, but I loved the journey. I learned a ton about both Python and dual-stack development.