In this article, I’ll walk you through the process of simulating an AWS S3 bucket in your local environment — no AWS account required. By the end, you’ll know how to create S3 buckets locally, upload files to them, and download the uploaded files with ease.
Prerequisites
LocalStack Setup
First, set up your LocalStack environment. If you’ve already done this, feel free to skip this section.
LocalStack CLI
The first step is to install the LocalStack CLI (Command Line Interface). The official installation guide walks you through the steps for your machine's OS.
LocalStack account
After successfully installing the CLI, create a LocalStack account here. This will give you access to your own dashboard, where you can manage your account, retrieve your auth token, manage subscriptions, monitor stack insights, and explore other features.
Note: You don't have to actually pay for any subscription if you are not using LocalStack for commercial purposes. LocalStack offers a Hobby Subscription for enthusiasts like myself. If you're not building for commercial purposes, you can select this option through the Subscriptions page in your dashboard. This will give you access to a wide range of open source as well as pro services and features.
LocalStack Desktop
Next, I highly recommend installing the LocalStack Desktop application. It gives you a visual display of your created services, logs, uploaded files, etc., which is really helpful for conducting sanity checks.
LocalStack Authentication Token
You can find your authentication token on the Auth Tokens page of your dashboard. Make sure to set this variable before starting LocalStack:
export LOCALSTACK_AUTH_TOKEN="your-auth-token"
Configure your environment
Next, set the environment variables in your shell:
export AWS_ACCESS_KEY_ID="test"
export AWS_SECRET_ACCESS_KEY="test"
export AWS_DEFAULT_REGION="eu-west-1"
If your region is different (e.g., eu-central-1, us-east-1, etc.), be sure to update the AWS_DEFAULT_REGION to match.
Note: The variables set using this approach will only persist for the duration of the current shell session and will be cleared when the session ends. If you want to avoid repeating this process every time, you can configure a custom profile to use with LocalStack.
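One way to set up such a profile is with the AWS CLI's `aws configure set` command (a sketch, assuming you already have the AWS CLI installed; the profile name `localstack` is my own choice, and you should adjust the region to match yours):

```shell
# Store the placeholder credentials in a named profile
aws configure set aws_access_key_id test --profile localstack
aws configure set aws_secret_access_key test --profile localstack
aws configure set region eu-west-1 --profile localstack

# Tell the AWS CLI (and awslocal) to use that profile for this session
export AWS_PROFILE=localstack
```

The `aws configure set` values persist in `~/.aws/credentials` and `~/.aws/config`, so only the `AWS_PROFILE` export needs repeating (or adding to your shell profile).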
With everything in place, you're good to go! For a quick sanity check, make sure Docker is running on your machine (either using the CLI or Docker Desktop), and then run the following command in your shell to ensure everything is configured correctly:
DEBUG=1 localstack start
Setting DEBUG=1 enables debug-level logging for LocalStack, allowing you to easily monitor and troubleshoot what's happening behind the scenes. While you'll see a lot of verbose output during startup, don't worry about it. Simply scroll to the bottom of the logs and look for this:
Ready.
If you see this message, it means LocalStack has started successfully. Now, open the LocalStack Desktop application, and you should see the new container in the list.
Local Environment Setup
Follow these steps to properly set up your local environment.
1. Create a new directory:
mkdir localstack
2. Move into the new directory:
cd localstack
3. Create a virtual environment:
It’s recommended to create a virtual environment to avoid installing dependencies globally, which might conflict with dependencies for other projects on your machine.
For Mac/Unix users:
python3 -m venv env
For Windows users:
py -m venv env
After creating the environment, activate it by running:
For Mac/Unix users:
source env/bin/activate
For Windows users:
.\env\Scripts\activate
4. Install the LocalStack AWS CLI (awslocal):
pip install 'awscli-local[ver1]'
This will install AWS CLI v1 in your new environment. Using a virtual environment ensures there’s no clash with an existing AWS CLI v2 installation (if you already have it installed, which is likely). The LocalStack documentation mentions certain limitations with AWS CLI v2, so sticking with v1 is recommended for this setup.
5. Verify the installation:
You can verify that the installation was successful by running:
aws --version
# OR
awslocal --version
You should see something similar to this:
aws-cli/1.36.34 Python/3.9.6 Darwin/24.1.0 botocore/1.35.93
If your output is similar, you're on the right track—let’s move on!
Create an S3 bucket in the LocalStack container
Even though we’re using LocalStack, remember that it’s designed to completely mock AWS services. This means we’ll still use AWS CLI commands to interact with our services.
The key difference is that instead of using the aws command, we’ll use the awslocal command. The reason for this is that awslocal is a thin wrapper around aws—it automatically appends the endpoint URL (your LocalStack URL) to every command you run.
If you decide to use the aws prefix instead, you’ll need to either configure the endpoint URL in your AWS profile or append it manually to every command (for example, aws --endpoint-url=http://localhost:4566 s3 ls). This approach is tedious and unnecessary, so I strongly recommend sticking with awslocal.
Create Bucket
Run the command below to create an S3 bucket named my-new-bucket:
awslocal s3 mb s3://my-new-bucket
You should see the following response:
make_bucket: my-new-bucket
Verify the Bucket Creation via the CLI
To ensure the bucket has been created, list all available buckets using this command:
awslocal s3 ls
The response should display the date, time, and name of the bucket, confirming its creation.
Verify Using LocalStack Desktop
For a visual confirmation, you can use LocalStack Desktop. After all, seeing is believing! 😊
- Open the LocalStack Desktop application.
- Look for your active container in the container list.
- Select the container.
- At the top center of the application, you’ll see four buttons. Hover over the last button to reveal its name: Resource Browser.
- Click Resource Browser, and a list of supported AWS services will appear.
- Locate S3 in the list and select it.
- Choose your region from the bottom-right corner of the screen.
- Refresh the view using the button at the top-right corner. You should now see your created bucket listed.
Congratulations! 🎉 You’ve just created an S3 bucket without needing an AWS account — pretty amazing, right? 😊
Next Steps
Now that your bucket is ready, let's test it by uploading a file to verify that it works as expected. We'll create a simple Python script that interacts with our S3 bucket using the boto3 package.
1. Create your script file in the localstack directory where your virtual environment was created:
touch base.py
2. Import required libraries:
import boto3
import logging
from io import BytesIO
from botocore.exceptions import ClientError
3. Configure the logging module:
logging.basicConfig(
level=logging.DEBUG, # Set the minimum log level (DEBUG, INFO, WARNING, etc.)
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s", # Log format
)
logger = logging.getLogger(__name__)
4. Define Configuration Variables:
AWS_ACCESS_KEY_ID = "test"
AWS_SECRET_ACCESS_KEY = "test"
S3_BUCKET_NAME = "my-new-bucket"
LOCALSTACK_HOST = "http://localhost:4566" # Default LocalStack endpoint
The AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables are needed to create the client with boto3, but their values don't really matter in this situation, so you can assign any placeholder to them. The value assigned to the S3_BUCKET_NAME variable should be the actual name of the bucket you created above. If this is wrong, you'll get this error:
botocore.errorfactory.NoSuchBucket: An error occurred (NoSuchBucket) when calling the PutObject operation: The specified bucket does not exist
Finally, ensure the LOCALSTACK_HOST variable is set to your LocalStack endpoint URL.
5. Create the S3 Client:
To run a sanity check, let's create a new boto3 client and use it to list our S3 buckets.
def create_S3_client():
    s3_client = boto3.client(
        "s3",
        aws_access_key_id=AWS_ACCESS_KEY_ID,
        aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
        endpoint_url=LOCALSTACK_HOST,  # Point to LocalStack endpoint
    )
    return s3_client


def main():
    s3_client = create_S3_client()
    response = s3_client.list_buckets()
    print("Buckets:", response["Buckets"])


main()
Run the script using python3 base.py. The response should contain a list of your created buckets; in this case it should contain only one item, an object with the Name and CreationDate of the bucket. Now that we have validated this, let's go ahead and finish the Python script.
6. Create the upload method:
We will be uploading an image, so please move any JPG image of your choice into the localstack directory where the base.py file is also located, and rename the image to file.jpg.
def upload_to_s3(file_bytes, filename, mimetype, object_name=None):
    """
    Uploads a file to an S3 bucket

    :param file_bytes: Bytes object of the file to be uploaded
    :param filename: Name of the file
    :param mimetype: MIME type of the file
    :param object_name: Name of the object in the bucket
    :return: True if the file was uploaded, else False
    """
    s3_client = create_S3_client()
    if object_name is None:
        object_name = filename
    try:
        # Wrap the bytes object in a BytesIO object
        file_obj = BytesIO(file_bytes)
        # Upload the file object to the S3 bucket
        # (upload_fileobj returns None on success)
        s3_client.upload_fileobj(
            file_obj, S3_BUCKET_NAME, object_name, ExtraArgs={"ContentType": mimetype}
        )
        logger.info(f"{object_name} uploaded to {S3_BUCKET_NAME} bucket")
        return True
    except ClientError as e:
        logger.exception(e)
        return False
7. Read File Data:
def read_file():
    with open("file.jpg", "rb") as file:
        file_bytes = file.read()
        filename = f"images/{file.name}"
        mimetype = "image/jpeg"
    return file_bytes, filename, mimetype
Note: Setting the file name as f"images/{file.name}" means our file will be uploaded under the images/ prefix of the S3 bucket. S3 has no real directories; the prefix simply becomes part of the object key, and tools like LocalStack Desktop display it as a folder, creating it on first use.
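If you later want to upload files other than JPGs, you don't have to hardcode the MIME type: the standard-library mimetypes module can guess it from the file name. A small sketch (the helper name and fallback value are my own choices, not part of the original script):

```python
import mimetypes


def detect_mimetype(filename, default="application/octet-stream"):
    """Guess a file's MIME type from its extension, falling back to a default."""
    guessed, _ = mimetypes.guess_type(filename)
    return guessed or default


print(detect_mimetype("file.jpg"))  # image/jpeg
```

You could then replace the hardcoded mimetype = "image/jpeg" in read_file() with detect_mimetype(file.name).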
8. Create the main method:
def main():
    file_bytes, filename, mimetype = read_file()
    status = upload_to_s3(file_bytes, filename, mimetype)
    if status:
        logger.info("File uploaded successfully!")
    else:
        logger.error("File upload failed.")


if __name__ == "__main__":
    main()
The final code should look like this:
import boto3
import logging
from io import BytesIO
from botocore.exceptions import ClientError

logging.basicConfig(
    level=logging.DEBUG,  # Set the minimum log level (DEBUG, INFO, WARNING, etc.)
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",  # Log format
)
logger = logging.getLogger(__name__)

AWS_ACCESS_KEY_ID = "test"
AWS_SECRET_ACCESS_KEY = "test"
S3_BUCKET_NAME = "my-new-bucket"
LOCALSTACK_HOST = "http://localhost:4566"  # Default LocalStack endpoint


def create_S3_client():
    s3_client = boto3.client(
        "s3",
        aws_access_key_id=AWS_ACCESS_KEY_ID,
        aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
        endpoint_url=LOCALSTACK_HOST,  # Point to LocalStack endpoint
    )
    return s3_client


def upload_to_s3(file_bytes, filename, mimetype, object_name=None):
    """
    Uploads a file to an S3 bucket

    :param file_bytes: Bytes object of the file to be uploaded
    :param filename: Name of the file
    :param mimetype: MIME type of the file
    :param object_name: Name of the object in the bucket
    :return: True if the file was uploaded, else False
    """
    s3_client = create_S3_client()
    if object_name is None:
        object_name = filename
    try:
        # Wrap the bytes object in a BytesIO object
        file_obj = BytesIO(file_bytes)
        # Upload the file object to the S3 bucket
        # (upload_fileobj returns None on success)
        s3_client.upload_fileobj(
            file_obj, S3_BUCKET_NAME, object_name, ExtraArgs={"ContentType": mimetype}
        )
        logger.info(f"{object_name} uploaded to {S3_BUCKET_NAME} bucket")
        return True
    except ClientError as e:
        logger.exception(e)
        return False


def read_file():
    with open("file.jpg", "rb") as file:
        file_bytes = file.read()
        filename = f"images/{file.name}"
        mimetype = "image/jpeg"
    return file_bytes, filename, mimetype


def main():
    file_bytes, filename, mimetype = read_file()
    status = upload_to_s3(file_bytes, filename, mimetype)
    if status:
        logger.info("File uploaded successfully!")
    else:
        logger.error("File upload failed.")


if __name__ == "__main__":
    main()
9. Run the script:
Run the script using:
python3 base.py
If your file is uploaded successfully, you should see "File uploaded successfully!" in your logs.
10. Verify Upload:
Now that you’ve uploaded your file to the S3 bucket my-new-bucket, let’s verify that the file exists.
Option 1 - Verify via the CLI:
Run the following command in your terminal:
awslocal s3 ls s3://my-new-bucket/images/
This will return a list of all files in the images/
directory of the my-new-bucket
bucket. You should see file.jpg
in the response, similar to:
2025-01-07 12:00:00 12345 file.jpg
Option 2 - Verify via LocalStack Desktop:
- Open LocalStack Desktop and navigate to the S3 service.
- Select the bucket my-new-bucket.
- Refresh the view, and you should see the images/ directory. Inside, you’ll find your file.jpg file.
- To double-check, click on the file row to download it to your local machine. Open the file to ensure it matches the original image you uploaded.
Congratulations! 🎉
You’ve successfully created an S3 bucket, uploaded a file, and verified its existence—all without needing an AWS account. I hope you’ve experienced your wow moments already! This guide demonstrates how simple, straightforward, and fast it is to work with AWS services on your local machine using LocalStack.
With LocalStack, you can truly keep local development local, saving time and resources while ensuring a smoother development workflow.
In the next part of this series, I’ll show you how to mock AWS Lambda functions locally using the LocalStack platform. Stay tuned — it’s going to be another exciting dive into the world of local AWS development! 😊
If you have any questions, feel free to drop them as a comment or send me a message on LinkedIn and I'll ensure I respond as quickly as I can. Ciao 👋