In this article, I will discuss how to implement a LiveView file uploader that sends files from the client directly to a DigitalOcean Spaces (S3-compatible) bucket in your Phoenix application (you can also follow this tutorial to upload to AWS S3 or any other S3-compatible bucket).
This article is not endorsed or paid for by DigitalOcean. I chose Spaces because of the lack of resources on integrating with object storage buckets other than AWS S3, and also because Spaces is (disclaimer: in my opinion) a lot easier to use and has more straightforward pricing.
My configs are:
Elixir 1.15.7
OTP 26
LiveView 0.20.1
Phoenix 1.7.10
Ecto 3.11
Dockerized PostgreSQL
All running on my MacBook Air M2 8GB Ram
If you want to upload files from the server, please check out ex_aws, ex_aws_s3, and AWS S3 in Elixir with ExAws. Uploading from the server is much more straightforward than uploading from the client, mainly because ex_aws already includes all the necessary request settings. If you want the user to first upload the file to the server (for example, to modify the file or generate a thumbnail of a picture before storing it), then also consider Waffle, which comes with seamless ex_aws integration.
However, if we want the user to upload files that will be stored in the S3 bucket without any changes, it is more efficient to let users upload the file directly to the S3 bucket (user -> s3) instead of uploading it to our server and having the server upload it to S3 (user -> our server -> s3). That is what this article focuses on.
This article is built entirely on the following resources:
- External uploads guide from LiveView docs
- Phoenix LiveView Uploads Deep Dive — 2020 article by Chris McCord (creator of Phoenix framework)
- Liveview File Uploads to S3 — YouTube video by LiveView Mastery
- Slightly modified S3 uploader (originally coming from Chris McCord) by LiveView Mastery. gist
Uploading a file directly from the client (with LiveView) is challenging because we need to make a multipart HTTP request and add security verification parameters derived from secrets stored in the ENV of our Phoenix application. Luckily, Chris McCord has got us covered.
Let’s do initial setup.
- Create an account on DigitalOcean and create a Spaces Object Storage bucket. You will need to choose the server location and the name of the bucket.
- On the Spaces bucket page, go to “Settings” and add a CORS configuration. For the sake of the tutorial, you can put `*` in the origin field to allow access from everywhere and allow all methods, but you must immediately change the CORS configuration to one that suits your needs best, and NEVER keep `*` in the origin field.
- Then generate an access key and store it somewhere safe. We will need the access key so that our Phoenix application can connect to and access the bucket.
- Add the Spaces (S3) bucket credentials to `config.exs`:

config :your_app,
  access_key_id: "fake",
  secret_access_key: "fake",
  bucket: "fake",
  region: "your_s3_region"
Note: NEVER store your credentials in plain text or variable. You must handle the secrets through ENV variables, but that is not the focus of this article. Please refer to official docs on how to handle secrets in Elixir.
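As a minimal sketch of that advice, you could read the values from environment variables at boot in `config/runtime.exs`. The environment variable names below are assumptions; use whatever names you export in your deployment:

```elixir
# config/runtime.exs — read Spaces credentials from the environment at boot
# instead of hard-coding them. The env var names here are hypothetical.
import Config

config :your_app,
  access_key_id: System.fetch_env!("SPACES_ACCESS_KEY_ID"),
  secret_access_key: System.fetch_env!("SPACES_SECRET_ACCESS_KEY"),
  bucket: System.fetch_env!("SPACES_BUCKET"),
  region: System.fetch_env!("SPACES_REGION")
```

`System.fetch_env!/1` raises at startup if a variable is missing, which fails fast instead of silently connecting with empty credentials.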
Now, let’s talk about the overall architecture. We are going to upload the file to the Spaces (S3) bucket and receive a URL to the file; then we are going to store that URL in the database in a `string` field of the schema. For example, let’s say I am building an online menu for a restaurant. I will have a `menu_items` schema, and I want to upload pictures of the prepared menu items that the customer will get. I will have to add `field :image, :string` (which will store the image URL from my Spaces bucket) to the `menu_items` schema.
When it comes to implementing the file upload to the Spaces (S3) bucket in LiveView, there are several steps we have to take:
- Get secrets from our Phoenix application and pass them to the client
- Implement the file upload to the Spaces (S3) bucket from the client by adding some JavaScript
- Store the file URL in database
Let’s jump into the code.
First, inside the LiveView, we are going to let the user upload a file in the form, and add a function that fetches the secrets from the Phoenix application and prepares the headers required for the request that the client-side JavaScript will make.
In the LiveView, we are going to use `allow_upload` to allow the user to upload. Then, we will pass a function to `allow_upload` that hands our secrets from the Phoenix application over to the client. We are going to call that function `presign_upload`.
def mount(_, _, socket) do
  socket =
    socket
    |> allow_upload(:image,
      accept: ~w(.jpg .jpeg .png),
      max_entries: 1,
      external: &presign_upload/2
    )

  {:ok, socket}
end
defp presign_upload(entry, %{assigns: %{uploads: uploads}} = socket) do
  meta = S3Uploader.meta(entry, uploads)
  {:ok, meta, socket}
end
`presign_upload` will fetch the secrets from our Phoenix application and pass them to the client. It will also prepare the headers that S3 requires.
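For reference, the metadata map returned to the client has roughly the following shape. The values below are illustrative only; the real `key`, `url`, and signed `fields` come from your `S3Uploader` and config:

```elixir
# Illustrative shape of the upload metadata sent to the client.
# `uploader: "S3"` tells LiveView's JS which client-side uploader to run;
# `fields` carries the signed policy form fields the bucket will verify.
meta = %{
  uploader: "S3",
  key: "2d1622d5-328f-4b8b-b41d-44370bafe222.jpg",
  url: "https://tutorial-bucket.ams3.digitaloceanspaces.com",
  fields: %{
    "policy" => "eyJjb25kaXRpb25z...",
    "x-amz-algorithm" => "AWS4-HMAC-SHA256"
  }
}
```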
There is a dependency-free S3 uploader module written by Chris McCord. If you go to the official LiveView docs on uploading directly to S3, you will notice that `presign_upload()` there contains some logic which is not ideal to have in the LiveView. So, we are going to use LiveView Mastery’s `S3Uploader`, which is functionally the same as Chris McCord’s but encapsulates that logic, so that in our LiveView `presign_upload` looks like this:
defp presign_upload(entry, %{assigns: %{uploads: uploads}} = socket) do
  meta = S3Uploader.meta(entry, uploads)
  {:ok, meta, socket}
end
Instead of this:
defp presign_upload(entry, socket) do
  uploads = socket.assigns.uploads
  bucket = "phx-upload-example"
  key = "public/#{entry.client_name}"

  config = %{
    region: "us-east-1",
    access_key_id: System.fetch_env!("AWS_ACCESS_KEY_ID"),
    secret_access_key: System.fetch_env!("AWS_SECRET_ACCESS_KEY")
  }

  {:ok, fields} =
    SimpleS3Upload.sign_form_upload(config, bucket,
      key: key,
      content_type: entry.client_type,
      max_file_size: uploads[entry.upload_config].max_file_size,
      expires_in: :timer.hours(1)
    )

  meta = %{uploader: "S3", key: key, url: "http://#{bucket}.s3-#{config.region}.amazonaws.com", fields: fields}

  {:ok, meta, socket}
end
NOTE: To upload to the DigitalOcean Spaces bucket, you need to change the URLs on lines 95, 113, and 127 of the gist from `"https://#{bucket()}.s3.#{region()}.amazonaws.com"` to `"https://#{bucket()}.#{region()}.digitaloceanspaces.com/"`.
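As a sketch of that change, a small helper could build the Spaces endpoint URL. `SpacesURL` is a hypothetical module name; `bucket/0` and `region/0` would normally read the config added earlier and are hard-coded here only to keep the sketch self-contained:

```elixir
# Hypothetical helper building the DigitalOcean Spaces endpoint URL
# in place of the amazonaws.com one. In a real app, bucket/0 and region/0
# would read Application config instead of returning literals.
defmodule SpacesURL do
  defp bucket, do: "tutorial-bucket"
  defp region, do: "ams3"

  # Base URL of the bucket, e.g. "https://tutorial-bucket.ams3.digitaloceanspaces.com"
  def host, do: "https://#{bucket()}.#{region()}.digitaloceanspaces.com"

  # Full public URL of an object, given its key in the bucket.
  def entry_url(key), do: "#{host()}/#{key}"
end
```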
We add the file upload input for the user using `<.live_file_input upload={@uploads.image} />`.
So, our LiveView HEEx will minimally look like this:
def render(assigns) do
  ~H"""
  <div>
    <form
      phx-submit="save"
      phx-change="validate"
    >
      <label for="name">name</label>
      <input
        type="text"
        name="menu_item[name]"
        phx-update="ignore"
        id="name"
      />
      <label for={@uploads.image.ref}>image</label>
      <.live_file_input upload={@uploads.image} />
      <button
        type="submit"
      >
        Submit
      </button>
    </form>
  </div>
  """
end
Important: You must bind phx-submit and phx-change on the form.
Otherwise, the file upload will not work. Refer to the docs.
When the user hits the “Submit” button, the client must get the metadata (with our secrets) and initiate the upload directly to the Spaces (S3) bucket. According to the official external uploads guide from the docs, we can implement that by adding this code to `app.js`:
let Uploaders = {}

Uploaders.S3 = function(entries, onViewError){
  entries.forEach(entry => {
    let formData = new FormData()
    let {url, fields} = entry.meta
    Object.entries(fields).forEach(([key, val]) => formData.append(key, val))
    formData.append("file", entry.file)
    let xhr = new XMLHttpRequest()
    onViewError(() => xhr.abort())
    xhr.onload = () => xhr.status === 204 ? entry.progress(100) : entry.error()
    xhr.onerror = () => entry.error()
    xhr.upload.addEventListener("progress", (event) => {
      if(event.lengthComputable){
        let percent = Math.round((event.loaded / event.total) * 100)
        if(percent < 100){ entry.progress(percent) }
      }
    })
    xhr.open("POST", url, true)
    xhr.send(formData)
  })
}

let liveSocket = new LiveSocket("/live", Socket, {
  uploaders: Uploaders,
  params: {_csrf_token: csrfToken}
})
Now we have a LiveView with a form where the user can pick a file, and on submit the client will automatically upload the file to Spaces (S3). Next, let’s handle how to actually store the file URL.
Let’s handle validations and submission of the form. In the LiveView we need to add:
def handle_event("validate", _, socket) do
  {:noreply, socket}
end

def handle_event("save", %{"menu_item" => menu_item_params}, socket) do
  uploaded_files =
    consume_uploaded_entries(socket, :image, fn _, entry ->
      {:ok, S3Uploader.entry_url(entry)}
    end)

  menu_item_params =
    case Enum.empty?(uploaded_files) do
      true -> menu_item_params
      false -> Map.put(menu_item_params, "image", List.first(uploaded_files))
    end

  socket =
    case Context.create_menu_item(menu_item_params) do
      {:ok, _} ->
        socket
        |> put_flash(:info, "menu_item created")
        |> push_patch(to: "/success")

      _ ->
        socket
        |> put_flash(:error, "Could not create a new menu_item")
    end

  {:noreply, socket}
end
When the user hits the submit button, the image is uploaded by the client-side JS code using the metadata handed over by `S3Uploader` via `presign_upload()`. That metadata also includes the name (a UUID) under which the uploaded file is saved in the bucket. On the LiveView side of handling the submission, we use `consume_uploaded_entries()` to generate the URL the file will have once uploaded to the Spaces (S3) bucket. Because the file is the same and its UUID is the same, `S3Uploader.entry_url(entry)` results in a URL (including the file name) identical to the one passed to the client-side JS.
Next, we check whether any file was uploaded and update the `menu_item_params` map accordingly. `uploaded_files` would be a list of URLs if we allowed multiple file uploads, but since we permit only one file in `allow_upload`, our list will either be empty or contain a single item.
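That empty-or-merge step can be extracted into a small helper. `UploadParams.maybe_put_image/2` is a hypothetical name, shown here only to illustrate the logic from the `handle_event("save", ...)` clause:

```elixir
# Hypothetical helper: merge the first uploaded URL into the params map
# under "image", or leave the params untouched when nothing was uploaded.
defmodule UploadParams do
  def maybe_put_image(params, []), do: params
  def maybe_put_image(params, [url | _rest]), do: Map.put(params, "image", url)
end
```

Usage inside the event handler would then be a single line: `menu_item_params = UploadParams.maybe_put_image(menu_item_params, uploaded_files)`.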
Then, we pass the parameters to the context to save them in our database and handle the result of the operation.
If the user has uploaded a file, `menu_item_params` will look something like this:

%{
  "name" => "delicious uzbek plov",
  "image" => "https://tutorial-bucket.ams3.digitaloceanspaces.com/2d1622d5-328f-4b8b-b41d-44370bafe222.jpg"
}
There are also a couple of small improvements we can add.
First, we can enable auto-upload: when the user chooses a file, our client-side JS will start uploading it to the Spaces (S3) bucket right away. We can implement that with one line of code (thanks, Chris) by passing `auto_upload: true` to the `allow_upload` function in mount:
def mount(_, _, socket) do
  socket =
    socket
    |> allow_upload(:image,
      accept: ~w(.jpg .jpeg .png),
      max_entries: 1,
      auto_upload: true,
      external: &presign_upload/2
    )

  {:ok, socket}
end
Next, we can show a preview as well as the upload progress to the user by adding these lines to the view:
<%= for entry <- @uploads.image.entries do %>
  <.live_img_preview entry={entry} width="75" />
  <div class="py-5">
    <%= entry.progress %>%
  </div>
<% end %>
What to do with the progress is up to your imagination.
If you have any suggestions or critique, please let me know.