Joseph Thomas

How to fake AWS locally with LocalStack

If you're anything like me, you prefer to avoid logging into the AWS console as much as possible. Did you set up your IAM root user with 2FA and correctly configure the CORS and ACL settings on your S3 bucket?

🤷‍♂️ nah.

I also prefer to keep my local development environment as close as possible to how it's going to work in production. Additionally, I'm always looking for new ways to fill up my small hard drive. I can't think of a better way to achieve all of the above than putting a bunch of S3 servers inside my computer.

This tutorial will cover setting up Localstack within a node app. Localstack allows you to emulate a number of AWS services on your computer, but we're just going to use S3 in this example. Also, Localstack isn't specific to Node - so even if you aren't working in Node, a good portion of this tutorial will still be relevant. This also covers a little bit about Docker - if you don't really know what you're doing with Docker or how it works, don't worry. Neither do I.

You can see the finished code in the demo repo.

A few benefits of this approach are:

  • You can work offline
  • You don't need a shared 'dev' bucket that everyone on your team uses
  • You can easily wipe & replace your local buckets
  • You don't need to worry about paying for AWS usage
  • You don't need to log into AWS 😛

Initial Setup

First, we'll need to install a few things.

  1. Install Docker if you haven't already.
  2. Install the AWS CLI. Even though we aren't going to be working with "real" AWS, we'll use this to talk to our local docker containers.
  3. Once the AWS CLI is installed, run aws configure to create some credentials. Even though we're talking to our "fake" local service, we still need credentials. You can enter real credentials (as described here), or dummy ones - Localstack requires that these details are present, but doesn't actually validate them (there's an example right after this list). Thanks to @alexiswilke for pointing out in the comments that I missed this step!
  4. Make a few files. Create a new directory for your project, and within it: touch index.js docker-compose.yml .env && mkdir .localstack
  5. Add an image to your project directory and rename it to test-image.jpg (this is the filename the upload script below expects)
  6. npm init to set up a package.json, then npm install aws-sdk dotenv
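If you're not sure what to enter for aws configure, here's roughly what that exchange looks like with dummy values - the key and secret just need to be non-empty (they match the placeholders we'll reuse in .env later), and the region and output format are arbitrary choices for local use:

$ aws configure
AWS Access Key ID [None]: 123
AWS Secret Access Key [None]: xyz
Default region name [None]: us-east-1
Default output format [None]: json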

Docker

(disclaimer: I'm not a docker expert. If anyone has any suggestions on how to improve or better explain any of this, please let me know in the comments!)

Docker Config

You can run Localstack directly from the command line, but I like using Docker because it makes me feel smart. It's also nice because you don't need to worry about installing Localstack on your system. I prefer to use docker-compose to set this up. Here's the config:

docker-compose.yml



version: '3.2'
services:
  localstack:
    image: localstack/localstack:latest
    container_name: localstack_demo
    ports:
      - '4563-4599:4563-4599'
      - '8055:8080'
    environment:
      - SERVICES=s3
      - DEBUG=1
      - DATA_DIR=/tmp/localstack/data
    volumes:
      - './.localstack:/tmp/localstack'
      - '/var/run/docker.sock:/var/run/docker.sock'



(August 10, 2019 edit: LocalStack now uses a larger range of ports; the YAML above has been updated to reflect this. Thanks to @arqez for mentioning this in the comments)

Breaking some of these lines down:

image: localstack/localstack:latest

Use the latest Localstack image from Dockerhub

container_name: localstack_demo:

This gives our container a specific name that we can refer to later in the CLI.

ports: '4563-4599:4563-4599' and '8055:8080':

When your docker container starts, it will open up a few ports. The number on the left binds the port on your localhost to the port within the container, which is the number on the right. In most cases, these two numbers can be the same, i.e. 8080:8080. I often have some other things running on localhost:8080, so here, I've changed the default to 8055:8080. This means that when I connect to http://localhost:8055 within my app, it's going to talk to port 8080 on the container.

The line '4563-4599:4563-4599' does the same thing, but binds a whole range of ports. These particular port numbers are what Localstack uses as endpoints for the various APIs. We'll see more about this in a little bit.

environment

These are environment variables that are supplied to the container. Localstack will use these to set some things up internally:

  • SERVICES=s3: You can define a list of AWS services to emulate. In our case, we're just using S3, but you can include additional APIs, i.e. SERVICES=s3,lambda. There's more on this in the Localstack docs.
  • DEBUG=1: 🧻 Show me all of the logs!
  • DATA_DIR=/tmp/localstack/data: This is the directory where Localstack will save its data internally. More on this next.

volumes

'./.localstack:/tmp/localstack'

Remember when we set up the DATA_DIR to be /tmp/localstack/data about 2 seconds ago? Just like the localhost:container syntax we used for the ports, this allows your container to access a portion of your hard drive. Your computer's directory on the left, the container's on the right.

Here, we're telling the container to use our .localstack directory for its /tmp/localstack. It's like a symlink, or a magical portal, or something.

In our case, this makes sure that any data created by the container will still be present once the container restarts. Note that /tmp is cleared frequently and isn't a good place to store data you care about long-term - if you want to keep it somewhere more permanent, point DATA_DIR and the volume mapping at a different directory.

  • '/var/run/docker.sock:/var/run/docker.sock': This mounts your host's Docker socket into the container. Localstack uses it to spin up additional containers for certain services (if you're only emulating S3, you can likely leave this line out).

Starting our Container

Now that we have our docker-compose.yml in good shape, we can spin up the container: docker-compose up -d.
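If you'd like to confirm that the container actually came up before opening a browser, docker-compose can list the services it's running:

docker-compose ps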

To make sure it's working, we can visit http://localhost:8055 to see Localstack's web UI. Right now it will look pretty empty:

Empty Localstack UI

Similarly, our S3 endpoint http://localhost:4572 will show some basic AWS info:

Empty S3 bucket

(If you don't see something similar to these, check the logs for your docker containers)
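Two equivalent ways to tail those logs, using the service name from docker-compose.yml or the container name we set earlier:

docker-compose logs -f localstack
docker logs -f localstack_demo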

Working with Localstack

AWS is now inside our computer. You might already be feeling a little bit like you are the richest person in the world. (If not, don't worry, just keep reading 😛)

Before we start uploading files, we need to create and configure a bucket. We'll do this using the AWS CLI that we installed earlier, using the --endpoint-url flag to talk to Localstack instead.

  1. Create a bucket: aws --endpoint-url=http://localhost:4572 s3 mb s3://demo-bucket
  2. Attach an ACL to the bucket so it is readable: aws --endpoint-url=http://localhost:4572 s3api put-bucket-acl --bucket demo-bucket --acl public-read
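As an optional sanity check, you can list your local buckets and inspect the ACL we just attached, using the same --endpoint-url trick:

aws --endpoint-url=http://localhost:4572 s3 ls
aws --endpoint-url=http://localhost:4572 s3api get-bucket-acl --bucket demo-bucket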

Now, when we visit the web UI, we will see our bucket:

Localstack UI with S3 Bucket

If you used volumes in your docker settings, let's pause for a moment to look at what's going on in ./.localstack/data.

Localstack S3 JSON

Here, we can see that Localstack is recording all API calls in this JSON file. When the container restarts, it will re-apply these calls - this is how we are able to keep our data between restarts. Once we start uploading, we won't see new files appear in this directory. Instead, our uploads will be recorded in this file as raw data. (You could include this file in your repo if you wanted to share the state of the container with others - but depending on how much you upload, it's going to become a pretty big file)

If you want to be able to "restore" your bucket later, you can make a backup of this file. When you're ready to restore, just remove the updated s3_api_calls.json file, replace it with your backup, and restart your container.
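Here's a minimal sketch of that backup-and-restore flow, assuming the file location from our volume mapping and the container name from docker-compose.yml (the backup filename is just an example):

# back up the recorded API calls
cp .localstack/data/s3_api_calls.json s3_api_calls.backup.json

# later: restore the backup and restart the container
cp s3_api_calls.backup.json .localstack/data/s3_api_calls.json
docker restart localstack_demo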

Uploading from our app

There are a lot of S3 uploading tutorials out there, so this section won't be as in-depth. We'll just make a simple upload function and try uploading an image a few times.

Copy these contents into their files:

.env, our environment variables



AWS_ACCESS_KEY_ID='123'
AWS_SECRET_KEY='xyz'
AWS_BUCKET_NAME='demo-bucket'



Note: it doesn't matter what your AWS key & secret are, as long as they aren't empty.

aws.js, the module for our upload function



const AWS = require('aws-sdk')
require('dotenv').config()

const credentials = {
   accessKeyId: process.env.AWS_ACCESS_KEY_ID,
   secretAccessKey: process.env.AWS_SECRET_KEY,
}

const useLocal = process.env.NODE_ENV !== 'production'

const bucketName = process.env.AWS_BUCKET_NAME

const s3client = new AWS.S3({
   credentials,
   /**
    * When working locally, we'll use the Localstack endpoints. This is the one for S3.
    * A full list of endpoints for each service can be found in the Localstack docs.
    */
   endpoint: useLocal ? 'http://localhost:4572' : undefined,
   /**
     * Including this option gets localstack to more closely match the defaults for
     * live S3. If you omit this, you will need to add the bucketName to the `Key`
     * property in the upload function below.
     *
     * see: https://github.com/localstack/localstack/issues/1180
     */
   s3ForcePathStyle: true,
})


const uploadFile = (data, fileName) =>
   new Promise((resolve, reject) => {
      s3client.upload(
         {
            Bucket: bucketName,
            Key: fileName,
            Body: data,
         },
         (err, response) => {
            // reject (rather than throw) so the caller's .catch() actually receives the error
            if (err) return reject(err)
            resolve(response)
         },
      )
   })

module.exports = uploadFile



Thanks to @mcmule for the hint about the s3ForcePathStyle option above. If you're getting an ECONNREFUSED error, take a look at his comment below.

test-upload.js, which implements the upload function



const fs = require('fs')
const path = require('path')
const uploadFile = require('./aws')

const testUpload = () => {
   const filePath = path.resolve(__dirname, 'test-image.jpg')
   const fileStream = fs.createReadStream(filePath)
   const now = new Date()
   const fileName = `test-image-${now.toISOString()}.jpg`
   uploadFile(fileStream, fileName).then((response) => {
      console.log(":)")
      console.log(response)
   }).catch((err) => {
      console.log(":|")
      console.log(err)
   })
}

testUpload()



The testUpload() function reads the image file, gives it a unique name based on the current time, and uploads it. Let's give it a shot:

node test-upload.js

testing the upload
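The response logged to the console should look roughly like this - a sketch only, since your key, timestamp, and ETag will differ. Because of s3ForcePathStyle, the Location is a path-style URL:

{
  ETag: '"<md5 of the file contents>"',
  Location: 'http://localhost:4572/demo-bucket/test-image-2019-08-10T17:45:09.000Z.jpg',
  Key: 'test-image-2019-08-10T17:45:09.000Z.jpg',
  Bucket: 'demo-bucket'
}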

Copy the URL in the Location property of the response and paste it into your browser. The browser will immediately download the image. If you want to see it in your browser, you can use something like JS Bin:

I love my dog

Then, if you look at .localstack/data/s3_api_calls.json again, you'll see it filled up with the binary data of the image:

Image binary data

Finally, let's restart the container to make sure our uploads still work. To do this, run docker restart localstack_demo. After it has restarted, run docker logs -f localstack_demo. This will show you the logs of the container (the -f flag will "follow" them).

After it initializes Localstack, it will re-apply the API calls found in s3_api_calls.json:

Localstack Logs

When you reload your browser, you should see the image appear just as before.

🎉 That's it! Thanks for sticking around. This is my first tutorial and I'd love to know what you think. If you have any questions or suggestions, let me know in the comments!

Top comments (43)

Carlo Gino Catapang

Great article, kudos to you mate. One issue in my case: the localstack/localstack image did not have the web UI at localhost:8080 during my testing. I needed to use localstack/localstack-full to enable the web UI.

Marc Muller

Many thanks for the article!

I had trouble: my app runs in a docker container and I was getting connection refused (ECONNREFUSED) when trying to reach the localstack container. After trying a lot of config variants, I ended up with this:

AWS.config.update({
    accessKeyId: '123',
    secretAccessKey: 'xyz',
    endpoint: 'host.docker.internal:4572',
    s3ForcePathStyle: true
});

Both endpoint and s3ForcePathStyle are important here. s3ForcePathStyle tells the SDK to use URLs of the form hostname/bucket instead of bucket.hostname.

Besides being the only configuration that worked for me, with this I don't need to prepend bucketName in my params:

Key: `${bucketName}/${name}`

becomes

Key: `${name}`

This was not the intended effect (I was just trying to get a working connection between my app and the s3 container). Hope this can help someone else.

Thanks again for your article!

Joseph Thomas • Edited

Hi Marc, thanks for pointing this out! I've updated the article and the demo repo with these changes.

It's still working for me when I use http://localhost:4572 as the endpoint, so I left that as-is.

Marc Muller

Awesome!

After a bit more experimentation, I'm now using both variants:

  • host.docker.internal when my app is dockerized (which happens when I'm TDD'ing, most of the time)
  • localhost when my app is not dockerized (which happens on my CI testing job)

Marcos Dias

I also got the ECONNREFUSED error, but while running my application outside a container. It turned out that using 127.0.0.1 instead of localhost in the endpoint param was the solution. I'm on macOS.

Javier Marquez • Edited

Thanks Joseph for this article, it really got me started with localstack.

It seems that the number of ports localstack uses has grown since this article was written, and I ran into problems because localhost:4592 wasn't forwarded in the example (STS in localstack's AWS).

It took me a while to realize it was that! Could you update the example to forward all the new ports? Something like:

ports:
      - '4563-4599:4563-4599'

Cheers!

Joseph Thomas

Just updated, thanks for letting me know!

qiang

Nice solution. However, I ran into a problem: localhost:4572/ shows the bucket, but nothing shows up on localhost:8055/



<Owner>
  <ID>bcaf1ffd86f41161ca5fb16fd081034f</ID>
  <DisplayName>webfile</DisplayName>
</Owner>
<Buckets>
  <Bucket>
    <Name>demo-bucket</Name>
    <CreationDate>2006-02-03T16:45:09.000Z</CreationDate>
  </Bucket>
</Buckets>


Another problem: when I test test-upload.js, it reports the error below:
"UnknownEndpoint: Inaccessible host: 'demo-bucket.localhost'. This service may not be available in the 'us-east-1' region."

I'm a novice with S3 and Docker - could you point out how to set the region for the localhost S3 service?

Michael Pablo Gomes da Silva

Hi there!
I know it's 2020, but better late than never. My solution for this problem was:
.env
AWS_ACCESS_KEY_ID='default'
AWS_SECRET_KEY='default'
AWS_BUCKET_NAME='demo-bucket'
AWS_REGION='us-west-2'
AWS_S3_ENDPOINT='localhost:4572'

aws.js
const AWS = require('aws-sdk')
require('dotenv').config()

const credentials = {
   accessKeyId: process.env.AWS_ACCESS_KEY_ID,
   secretAccessKey: process.env.AWS_SECRET_KEY,
}

const useLocal = process.env.NODE_ENV !== 'production'

const bucketName = process.env.AWS_BUCKET_NAME

const s3client = new AWS.S3({
   credentials,
   endpoint: process.env.AWS_S3_ENDPOINT,
   region: process.env.AWS_REGION,
   s3ForcePathStyle: true,
   /**
    * When working locally, we'll use the Localstack endpoints. This is the one for S3.
    * A full list of endpoints for each service can be found in the Localstack docs.
    */
})

const uploadFile = async (data, name) =>
   new Promise((resolve) => {
      s3client.upload(
         {
            Bucket: bucketName,
            /*
             * include the bucket name here. For some reason Localstack needs it.
             * see: github.com/localstack/localstack/i...
             */
            Key: `${bucketName}/${name}`,
            Body: data,
         },
         (err, response) => {
            if (err) throw err
            resolve(response)
         },
      )
   })

module.exports = uploadFile

Hope that might be helpful for someone else in the future.

Salvador Hernandez • Edited

Great doc! I had issues creating the S3 bucket - I had to run aws configure with

accessKeyId: '123'
secretAccessKey: 'xyz'

otherwise it would throw the following error:
make_bucket failed: s3://demo-bucket Unable to locate credentials

RichardDrury

This is brilliant, so well written and completely idiot-proofed (for the likes of me). We won't be using it for the JS app you have created, but more to have a local dev environment where we can spin up one docker container with a database and another with the S3 stack on it, so we can develop data feeds.

You have contributed to my learning.
Thanks

Jeff Martin

Nice article, thanks for the work. To access the web UI you need to use the full version of localstack, i.e.:
image: localstack/localstack-full:latest
The localstack GitHub also warns that the web UI has been deprecated and may be removed at some point.

rkbadatya • Edited

It's no longer available

Nikola Janicijevic

Nice work on this tutorial. I think the latest update of the localstack image moved all ports to 4566. There's also a note that the dashboard will be removed in one of the upcoming releases.

Thanks for the work

Moji • Edited

Hi friend, thanks for your great article. I also had a problem accessing the LocalStack UI with both Docker images (the one in the post and localstack:latest) as of 2022-10-23, but when I run my container with localstack/localstack-full:0.11.6 it works. Thank you!

Sergio Bilello • Edited

@Joseph great job, but I was wondering why we get a warning - do you have the same message in the log?
I also have to refresh the browser at localhost:8055 2-3 times to see all the resources created.
Do you have the same problem?

localstack_demo_orig | 2019-06-26T17:57:57:DEBUG:localstack.services.s3.s3_listener: Found no <LastModified>(.*)</LastModified> inside response_content: list index out of range
localstack_demo_orig | WARNING: Unable to get details for bucket: [Errno 1] Operation not permitted: '/tmp/cache.fbd52520e8bfc4938a74ed8204e80533.json'
localstack_demo_orig | 2019-06-26T17:57:59:DEBUG:localstack.services.s3.s3_listener: Found no <LastModified>(.*)</LastModified> inside response_content: list index out of range
localstack_demo_orig | 2019-06-26T17:58:04:INFO:werkzeug: 127.0.0.1 - - [26/Jun/2019 17:58:04] "GET / HTTP/1.1" 200 -
localstack_demo_orig | 2019-06-26T17:58:04:DEBUG:localstack.services.s3.s3_listener: Found no <LastModified>(.*)</LastModified> inside response_content: list index out of 
Joseph Thomas

Hi Sergio,

I'm not sure why this is coming up - I don't remember seeing messages like this. It might be that your /tmp/ directory is not readable or writeable? This might be something to ask on Stack Overflow or on the Github page for localstack. Sorry I can't be of help!