How to Upload Images to an AWS S3 Bucket from a TinyMCE Editor in a Phoenix LiveView Project

Obute Moses · Elemental Elixir · Jan 28, 2024

Image by evening_tao on Freepik

Introduction

This is a follow-up to previous tutorials that focus on the integration of the TinyMCE editor into a Todo App, and the integration of a formula plugin into the TinyMCE editor.

In this tutorial, we will implement image upload from the TinyMCE editor to an AWS S3 bucket. Before we start, you need to set up the project if you did not follow the previous tutorials.

Project Setup

Follow the steps in either of the previous tutorials (the integration of the TinyMCE editor into a Todo App, or the integration of a formula plugin into the TinyMCE editor) to clone the project from GitHub, install the dependencies, and launch the project to confirm that everything works as expected.

NOTE: Check out the formula-plugin-integration branch (git checkout formula-plugin-integration) to start from a clean slate.

After the initial setup, we need to add the aws-elixir dependency to the mix.exs file. aws-elixir depends on hackney, as described in its installation section.

def deps do
  [
    {:phoenix, "~> 1.7.7"},
    ...
    {:aws, "~> 0.13.0"},
    {:hackney, "~> 1.16"}
  ]
end

Then install the dependencies by running:

$ mix deps.get

Adding AWS Credentials

Create two files in the config/ directory: custom_config.sample.exs and custom_config.secret.exs. The sample file is committed as a template, while the .secret.exs file will be ignored by Git so that credentials never reach GitHub. Copy the configs below into both files.

# Contains custom configs and settings
# Follow instructions in the README.md

import Config

config :phoenix_todo_app,
  aws_bucket_name: "", # !!!REQUIRED!!!
  aws_access_key_id: "", # !!!REQUIRED!!!
  aws_secret_access_key: "", # !!!REQUIRED!!!
  aws_region: "" # !!!REQUIRED!!!

Now provide values for the configs in the custom_config.secret.exs file.

Please ensure that your AWS bucket policy allows the uploaded files to be publicly readable while restricting write access, and that CORS is configured correctly for your app's origin.
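As a starting point, a CORS configuration for the bucket might look like the sketch below (this is an illustrative example, not a production policy; the origin shown assumes the app runs locally on port 4000, and you should tighten AllowedOrigins to your real domain):

```json
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET"],
    "AllowedOrigins": ["http://localhost:4000"],
    "ExposeHeaders": []
  }
]
```

Note that in this tutorial the uploads themselves go through the Phoenix server via aws-elixir, so CORS mainly matters when the browser fetches the uploaded images back from S3.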

To ensure our .secret.exs file is not pushed to GitHub, let’s add it to the .gitignore file. We will be targeting any files that end with that extension.

...

# any secret.exs files
*.secret.exs

In the config/config.exs file, import the (secret) custom config file just before the last line.

...
import_config "custom_config.secret.exs"
...

Helpers Module

Before we create the controller that will handle file upload, we will create a Helpers module to hold generic functions, for reusability. Inside the lib/phoenix_todo_app_web directory, create a helpers.ex file and copy the code below into it.

defmodule PhoenixTodoAppWeb.Helpers do
  def generate_unique_file_name(file_name) do
    current_timestamp = System.os_time(:second)
    file_extension = file_name |> Path.extname()

    file_base_name =
      file_name
      |> String.split(file_extension)
      |> List.first()
      |> String.replace(" ", "-")

    "#{file_base_name}-#{current_timestamp}#{file_extension}"
  end
end

Currently, the Helpers module contains only the generate_unique_file_name/1 function. As the name suggests, this function creates unique file names for the files we plan to upload.

We use System.os_time/1 to get the current timestamp in Unix seconds. The Path.extname/1 function returns “the extension of the last component of the given path”, in this case our file_name. Then String.split/2 separates the base file name from its extension, using the extension as the split pattern. List.first/1 picks the base file name, and String.replace/3 replaces every space in it with a hyphen.

Finally, using interpolation, we are constructing the unique file name that should be returned.
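To make the pipeline concrete, here is the transformation traced on a sample file name (the name "my holiday photo.png" is just an illustration):

```elixir
file_name = "my holiday photo.png"

# Step 1: extract the extension
file_extension = Path.extname(file_name)
# ".png"

# Step 2: split on the extension; the extension itself is consumed by the split
String.split(file_name, file_extension)
# ["my holiday photo", ""]

# Step 3: take the base name and replace spaces with hyphens
file_name
|> String.split(file_extension)
|> List.first()
|> String.replace(" ", "-")
# "my-holiday-photo"

# Final result (the timestamp varies with the current time):
# e.g. "my-holiday-photo-1706400000.png"
```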

S3 Uploads Controller

We are going to create a controller that will handle the uploading of our images to the S3 bucket. In the lib/phoenix_todo_app_web/controllers/ directory, create a file named s3_uploads_controller.ex. Copy and paste the code below into the file. We will discuss it afterwards.

defmodule PhoenixTodoAppWeb.S3UploadsController do
  use PhoenixTodoAppWeb, :controller
  alias PhoenixTodoAppWeb.Helpers
  alias AWS

  @aws_access_key_id Application.compile_env!(:phoenix_todo_app, :aws_access_key_id)
  @aws_secret_access_key Application.compile_env!(:phoenix_todo_app, :aws_secret_access_key)
  @aws_region Application.compile_env!(:phoenix_todo_app, :aws_region)
  @aws_bucket_name Application.compile_env!(:phoenix_todo_app, :aws_bucket_name)

  def create(
        conn,
        %{
          "Content-Type" => content_type,
          "file" => %Plug.Upload{
            path: tmp_path,
            content_type: _,
            filename: file_name
          }
        } = _params
      ) do
    file_path = "public/" <> Helpers.generate_unique_file_name(file_name)

    # Read the uploaded file's binary contents from its temporary location
    file = File.read!(tmp_path)
    md5 = :crypto.hash(:md5, file) |> Base.encode64()

    aws_response =
      AWS.S3.put_object(get_client(), @aws_bucket_name, file_path, %{
        "Body" => file,
        "ContentMD5" => md5,
        "Content-Type" => content_type
      })

    case aws_response do
      {:ok, _, %{status_code: 200}} ->
        file_url = "https://#{@aws_bucket_name}.s3.#{@aws_region}.amazonaws.com/#{file_path}"
        send_resp(conn, 201, file_url)

      _ ->
        send_resp(conn, 400, "Unable to upload file, please try again later.")
    end
  end

  def create(conn, _) do
    conn
    |> put_resp_content_type("text/plain")
    |> send_resp(400, "Bad request")
  end

  defp get_client() do
    AWS.Client.create(@aws_access_key_id, @aws_secret_access_key, @aws_region)
  end
end

The code above is a Phoenix controller which will handle requests for our file uploads to the AWS S3 bucket.

  • We have aliased the AWS library we previously added to the mix.exs file, and the Helpers module we created earlier.
  • We have also created module attributes to fetch and hold the values of our AWS credentials as defined in the custom_config.secret.exs file, using the Application.compile_env!/2 function.
  • Next, we have the create/2 action, which handles the actual upload of the file. “An action is a regular function that receives the connection and the request parameters as arguments. The connection is a Plug.Conn struct, as specified by the Plug library.”
  • The request parameters of the create/2 action are its second argument. A proper request for this action is a map with the keys “Content-Type” and “file”, where the value of the “file” key must be a valid Plug.Upload struct.
  • If the request parameters don’t match the defined pattern, the second create/2 clause will match instead and return a 400 Bad Request response.
  • With the proper request parameters, the create/2 action uses the generate_unique_file_name/1 function in the Helpers module to construct a unique filename for the file to be uploaded. The generated filename is concatenated with the folder path we wish to upload into, “public/” in this case.
  • We use the File.read!/1 function to read the binary contents of the file stored at tmp_path. tmp_path is a temporary location where Plug stores the contents of the uploaded file until it is no longer needed.
  • Next, we create a digest of the file using :crypto.hash/2 and Base.encode64/1. This helps Amazon S3 “ensure that data is not corrupted traversing the network. Amazon S3 checks the object against the provided MD5 value and, if they do not match, returns an error.”
  • The AWS.S3.put_object function is used to upload the file. It requires the client, the name of the S3 bucket, the key (file_path), and the input. The client is created by calling the AWS.Client.create/3 function with the Amazon S3 credentials stored in the module attributes. The input is a map containing the file (“Body”), the MD5 digest (“ContentMD5”), and the content type (“Content-Type”).
  • If the upload succeeds, the response is an :ok tuple with a status_code of 200. Anything else indicates a failure. You can explore the different responses to customize your error handling.
  • On success, we construct the public URL of the uploaded file by combining the bucket name, region, and file path as shown above, then return a 201 Created response with that URL as the response body.
  • For a failed upload, we return a 400 Bad Request response.
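The ContentMD5 value computed above is simply the Base64 encoding of the raw 16-byte MD5 digest of the file's contents. A quick standalone illustration, using the string "hello" in place of real file bytes:

```elixir
# The same digest computation as in the controller, on a sample payload.
file = "hello"
md5 = :crypto.hash(:md5, file) |> Base.encode64()
# md5 == "XUFAKrxLKna5cZ2REBfFkg=="
```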

Before creating the JS function that will send the upload request, we need to make the S3UploadsController reachable from the router. In the lib/phoenix_todo_app_web/router.ex file, just before live "/", TodoLive, add a post route to the controller as shown below.

...

scope "/", PhoenixTodoAppWeb do
  pipe_through :browser

  post "/s3-file-uploads", S3UploadsController, :create
  live "/", TodoLive
end

...

TinyMCE File Uploader

The TinyMCE images_upload_handler option allows us to specify a function to handle image uploads. The upload handler function takes two arguments, the blobInfo and a progress callback, and returns a Promise that will resolve with the uploaded image URL or reject with an error. The error can be either a string or an object.

In assets/js, let’s create a helpers.js file to hold reusable client-side code. We will create an uploadFileToS3Bucket function, which will be responsible for sending upload requests to our controller. Copy the code below into the helpers.js file.

export const uploadFileToS3Bucket = async (
  file,
  resolveCallback,
  rejectCallback
) => {
  let form = new FormData();

  form.append("Content-Type", file.type);
  form.append("file", file);

  // Get the CSRF token from the meta tag
  let csrfToken = document
    .querySelector("meta[name='csrf-token']")
    .getAttribute("content");

  let xhr = new XMLHttpRequest();
  // Send a POST request to the `/s3-file-uploads` route
  // provided in the router and handled by the controller
  xhr.open("POST", "/s3-file-uploads", true);
  xhr.setRequestHeader("X-CSRF-Token", csrfToken);

  xhr.onerror = function (event) {
    rejectCallback("Unable to upload the file.");
  };

  xhr.onload = function () {
    if (xhr.status === 201) {
      // Get the full URL of the uploaded file
      const fileUrl = xhr.responseText;
      resolveCallback(fileUrl);
    } else {
      rejectCallback("Unable to upload the file.");
    }
  };

  return xhr.send(form);
};

The uploadFileToS3Bucket function expects the file to be uploaded, a resolve callback, and a reject callback as arguments. The images_upload_handler callback must return a Promise, which we construct in the TinyMCE hook; its resolve callback is called when the file is successfully uploaded, and its reject callback when the upload fails.

In the uploadFileToS3Bucket function, we initialised a FormData object and appended the content type and file as expected by our controller.

To successfully make our request, we need to send the X-CSRF-Token header along with it. The CSRF token is read from the meta tag of the webpage.

Then we initialized an XMLHttpRequest object, set the CSRF token header, and sent a POST request to the route (“/s3-file-uploads”) we just added to the router.ex file by calling xhr.send(). The xhr.onload callback checks the status of the response: 201 means success, and anything else is treated as a failure. On success, we read the file URL sent by our controller from the response body and call the resolve callback with it.

TinyMCE Image Upload Option

Finally, we need to add the options that enable users to upload images in the TinyMCE hook. In the assets/js/tinyMCEHook.js file, let’s import the assets/js/helpers.js file by adding the code below at the top of the file.

import { uploadFileToS3Bucket } from "./helpers";
...

Then let’s add the image upload option into the TinyMCE options object.

{
  selector: `#${elID}`,
  ...
  image_uploadtab: true,
  image_advtab: true,
  images_upload_handler: (blobInfo, progress) => {
    return new Promise((resolve, reject) => {
      uploadFileToS3Bucket(blobInfo.blob(), resolve, reject);
    });
  },
}

After doing that, you can run the app on the terminal to see the result.

$ mix phx.server

When you click on the image icon on the editor, and select the Upload option, you should have the screen below, and be able to select and upload your images to the defined bucket.

TinyMCE Image Upload dialog

After selecting and successfully uploading an image, its URL will be displayed, and you have the option of resizing the image and adding borders.

TinyMCE Successful Image Upload

The screen below shows the image displayed inside the editor after you click the Save button.

TinyMCE successfully uploaded image showing inside the editor.

The screen below shows an error message if the file was not successfully uploaded.

TinyMCE failed image upload showing an error message.

Conclusion

This tutorial focuses on the implementation of image upload to an AWS S3 bucket from a TinyMCE editor in a Phoenix LiveView project. It is a sequel to the previous tutorials that focus on the integration of the TinyMCE editor into a Todo App, and the integration of a formula plugin into the TinyMCE editor, in that order.

This has also been a learning process for me. Following this will be tutorials on how to delete uploaded files (both singly and in bulk) from the AWS S3 bucket, as we do not want to retain unnecessary files in our storage.

You can access the code on GitHub: https://github.com/mosiac05/phoenix_todo_app. Cheers!
