
Murphy Randle

Originally published at mrmurphy.dev

On media uploads, and annoying S3 APIs.

While working on Storytime Studio a couple of weeks ago, I decided to upgrade my media-uploading approach. I had started with the naive "I'll just upload everything to the server and put it in a folder on the hard drive" approach, because it was simple. And it worked really well, until I tried to upload 15 minutes of audio (a few hundred megabytes) at once. The server crashed every time, even after I upgraded the box from 1GB of memory to 2GB.

The server is written in AdonisJS right now (an experiment; I'd never used the framework before, let alone in production), and it takes care of parsing the multipart form body I was sending. I assume the body parser streams the content to a temporary file on disk, so it shouldn't be a big memory hit, even with a file that's hundreds of megabytes. But instead of taking the time to figure out why the server wasn't surviving even a moderately heavy upload, I decided it was time to switch to the direct-to-S3 upload model I would eventually be moving to anyway.

Before I move on, let me make sure you know what S3 is. Amazon's S3 is a low-cost, durable option for storing even very large files indefinitely. It's a fantastic place to put media files and raw data. I'm using S3 as a loose term here, because I'm actually hosting my server on Linode, and I'll be using Linode's Object Storage, which implements the same API as S3. Most cloud platforms have some service like this.

What is the direct-to-S3 approach?

Short and sweet: The server generates a special URL for putting data directly in S3 and hands that back to the client. The client can use that URL to upload or download media for a limited time. After that time, the URL expires. The media itself doesn't have to go through the server, and since the URLs are time-limited, it's secure enough for most purposes.
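
To make that concrete, here's a minimal sketch of the client's side of the exchange. The `/stories` endpoint, the response shape, and the `audio/m4a` content type are all assumptions for illustration; the only real requirement is that the client PUTs the bytes to the signed URL before it expires.

```typescript
// Hypothetical client-side flow: endpoint and field names are illustrative.
async function uploadStory(meta: { title: string }, audio: Blob): Promise<void> {
  // 1. Send only the metadata to our own server.
  const res = await fetch("/stories", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(meta),
  });
  const { uploadUrl } = await res.json(); // server responds with a signed S3 URL

  // 2. PUT the audio bytes straight to S3 using the signed URL.
  await fetch(uploadUrl, {
    method: "PUT",
    headers: { "Content-Type": "audio/m4a" }, // must match what the URL was signed for
    body: audio,
  });
}
```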

The requests look like this when uploading audio:

```mermaid
sequenceDiagram
    participant Phone
    participant Server
    participant S3
    Phone->>Server: Upload story meta
    Server->>Phone: Signed URL
    Phone->>S3: Upload story audio file
```

The audio content never actually touches my server; only the metadata does. The audio goes straight to S3. This pattern drastically increases the amount of traffic my little application server can handle, because it doesn't have to buffer, write, or read any of the media data coming from the clients. And even a small server can handle a considerable number of requests for little chunks of metadata. Thus, this is a "scalable" way to handle media uploads.
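
On the server side, generating the signed URL is a single SDK call. Here's a rough sketch using the v2 aws-sdk module pointed at a Linode Object Storage endpoint; the endpoint, bucket name, environment variables, and five-minute expiry are placeholders, not the real values from my app.

```typescript
import { S3 } from "aws-sdk";

// Point the client at Linode's S3-compatible endpoint instead of AWS.
const s3 = new S3({
  endpoint: "https://us-east-1.linodeobjects.com", // placeholder region endpoint
  accessKeyId: process.env.S3_ACCESS_KEY,
  secretAccessKey: process.env.S3_SECRET_KEY,
  signatureVersion: "v4",
});

function signedUploadUrl(key: string, contentType: string): string {
  return s3.getSignedUrl("putObject", {
    Bucket: "storytime-media",
    Key: key,
    ContentType: contentType, // part of the signature -- more on this below
    Expires: 60 * 5,          // URL is only valid for five minutes
  });
}
```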

This is what the requests look like for downloads:

```mermaid
sequenceDiagram
    participant Phone
    participant Server
    participant S3
    Phone->>Server: Get story content
    Server->>Phone: Redirect to signed URL
    Phone->>S3: Follow redirect and download content
```

When the phone asks the server for story content, the server just sends back a 302 Found status code with the Location header set to a freshly generated signed S3 URL. The client then automatically follows that URL and downloads the content. Again, nothing streams through my little inexpensive server, and my scalability goes up.
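
A download handler, then, is just a signed getObject URL plus a redirect. The sketch below uses Node's built-in http module to stay framework-neutral (the real app is in AdonisJS), and the route and object key are made up; note that there's no ContentType parameter this time.

```typescript
import { createServer } from "http";
import { S3 } from "aws-sdk";

// Credentials come from the environment; the endpoint is a placeholder.
const s3 = new S3({
  endpoint: "https://us-east-1.linodeobjects.com",
  signatureVersion: "v4",
});

createServer((req, res) => {
  // Hypothetical route: GET /stories/:id/audio
  const downloadUrl = s3.getSignedUrl("getObject", {
    Bucket: "storytime-media",
    Key: "audio/story-123.m4a", // looked up from the story metadata in practice
    Expires: 60 * 5,
    // Note: no ContentType here. GET URLs must be signed without it.
  });
  res.writeHead(302, { Location: downloadUrl }); // 302 Found + Location header
  res.end();
}).listen(3000);
```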

API Pain

The current version of Amazon's S3 API was designed in 2006, and it shows its age. Doing simple things isn't too uncomfortable, and there's not much drama if you don't make any mistakes, but I've found that debugging mistakes, when they do happen, is a real pain. Switching to this direct-to-S3 approach took me multiple days, when it should have taken an hour or two, because I was busy debugging APIs that didn't feel like they should have been breaking.

Initially I used the "aws-sdk" node module to generate signed URLs for Linode's object storage. I do this all the time at Day One (where we are using the real S3), and I've never had a problem with it. But no matter what I tried, I kept getting a response from Linode that said SignatureDoesNotMatch, and that's pretty much all it said 😡. Yeah, it seems they use that error to cover a number of possible mess-ups. So it took me a lot of experimentation to finally get anything working.

I ditched the aws-sdk module and followed Linode's docs for their HTTP API instead. Those docs taught me two important things:

  • When using Linode's object storage, the content-type of the media being PUT to the URL must be specified when creating the URL, and the upload must match that type.
  • When doing a GET or a DELETE, the content-type must not be specified when creating the URL.

Guess what happens if you mess up one of those things? SignatureDoesNotMatch. Ultimately that error message does make sense once you understand that the content-type is part of the signature, but the information returned in the response does a poor job of actually helping you fix the problem.

Even though I ditched the aws-sdk library, I now think I probably could have stuck with it if I had included and excluded the content-type attribute in the right places.
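
For the record, here's roughly what that would have looked like with the aws-sdk module: include ContentType when signing a PUT, leave it out for GET and DELETE. The bucket, key, and endpoint are placeholders again.

```typescript
import { S3 } from "aws-sdk";

const s3 = new S3({
  endpoint: "https://us-east-1.linodeobjects.com", // placeholder Linode endpoint
  signatureVersion: "v4",
});

// PUT: ContentType is part of the signature, so it must be included here,
// and the eventual upload must send the exact same Content-Type header.
const putUrl = s3.getSignedUrl("putObject", {
  Bucket: "storytime-media",
  Key: "audio/story-123.m4a",
  ContentType: "audio/m4a",
  Expires: 300,
});

// GET and DELETE: sign *without* ContentType, or Linode answers
// with SignatureDoesNotMatch.
const getUrl = s3.getSignedUrl("getObject", {
  Bucket: "storytime-media",
  Key: "audio/story-123.m4a",
  Expires: 300,
});

const deleteUrl = s3.getSignedUrl("deleteObject", {
  Bucket: "storytime-media",
  Key: "audio/story-123.m4a",
  Expires: 300,
});
```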

Peace

It's done now. I probably won't have to touch this part of the code again for a very long time, if ever. So even though the work was frustrating, I can close my eyes, take a deep breath in, let it out, and move on to other frustrating problems 🧘‍♂️.
