Introduction
In this post I'm going to describe how you can upload content to an S3 bucket with the AWS SDK for Rust.
I'm going to assume that you have already installed Rust; if not, you can follow this guide.
Setup
Start by creating a new package with Cargo:
cargo new s3_uploader --bin
This initialises a new package with the following structure:
s3_uploader
├── Cargo.toml
└── src
    └── main.rs
The Cargo.toml is the manifest file for the project and contains all metadata necessary to compile the project. If you are familiar with Node.js, it's the Rust version of the package.json file.
Now that we have a basic project to work with we are ready to start coding.
AWS and Rust
AWS has somewhat recently released an SDK for Rust that is currently in developer preview. It's easy to get started with and can be found on GitHub.
To be able to use the AWS SDK we need to include it under [dependencies] in our Cargo.toml. To easily authenticate against AWS services we will use the aws-config crate, and to be able to execute asynchronous code we will also add Tokio to our dependencies.
Your Cargo.toml should now look something like this:
[package]
name = "s3_uploader"
version = "0.1.0"
edition = "2018"
[dependencies]
aws-config = "0.3.0"
aws-sdk-s3 = "0.3.0"
tokio = { version = "1", features = ["full"] }
Implementation
Start by creating a new file in the src folder and name it s3_uploader.rs. In the newly created file, import the following:
use std::path::Path;
use std::process;
use aws_sdk_s3::{ByteStream, Client, Error};
We are now ready to start implementing the function to upload content.
Create a new async function:
pub async fn upload(path: &str, bucket: &str, key: &str) -> Result<aws_sdk_s3::output::PutObjectOutput, Error> {...}
This function takes a path to the source file, the name of the destination bucket, and the name the file should have when uploaded to S3. The return type is a Result, an enum holding either the output from the PutObject action or an error.
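Result is Rust's standard enum for fallible operations: an Ok variant carries the success value and an Err variant carries the error. A minimal std-only illustration of how a caller pattern-matches on it (the parse_port helper is made up for the example, unrelated to the AWS types):

```rust
// Hypothetical helper: parse a port number. `str::parse` already
// returns a Result, so we just forward it.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    // The caller must handle both variants explicitly.
    match parse_port("8080") {
        Ok(port) => println!("port = {}", port),
        Err(e) => println!("invalid port: {}", e),
    }
}
```

Our upload function returns exactly this shape, just with AWS-specific types in both positions.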
Start by loading the necessary environment variables via aws_config. After that is done we can create a new client and load the source file from the path:
let config = aws_config::load_from_env().await;
let client = Client::new(&config);
let file = ByteStream::from_path(Path::new(path)).await;
We now have everything we need to upload the file to S3. Since match is an expression in Rust, we can bind the response from AWS directly to a variable:
let resp = match file {
    Ok(f) => {
        client
            .put_object()
            .bucket(bucket)
            .key(key)
            .body(f)
            .send()
            .await?
    }
    Err(e) => {
        panic!("Error uploading file: {}", e);
    }
};
Ok(resp)
The match expression handles the result of loading the file: if the file could be read we upload it with put_object(), and if not we panic with the error. The ? after send().await propagates any error from the request itself to the caller. If everything works as expected we return the response from AWS.
The upload function is now finished and we can use it as shown below.
mod s3_uploader;
#[tokio::main]
async fn main() {
let upload = s3_uploader::upload(
"path-to-file",
"S3-bucket",
"filename-in-bucket",
).await;
println!("{:?}", upload);
}
The #[tokio::main] macro is used to make main async.
Conclusion
This small function can of course be extended further, but I hope that this small introduction has sparked some curiosity to continue using and exploring the Rust language.
If you need assistance with the development and implementation of this, our team of video developers is happy to help you out. If you have any questions or comments, just drop a line in the comments section of this post.