
S3 Signed URLs #1: Upload and download files using signed URLs

Often enough, applications want users to upload some files: a report, a confirmation of some kind, or whatever else. One thing's for sure, we do not want to handle those files ourselves. So in this article we will go over uploading and downloading files using S3 without ever touching the file on the server. We're gonna send the files from the user's computer straight into our private S3 bucket without compromising security.

You can download the code from GitHub, and I've also recorded a video.

What is a signed URL?

A signed URL contains authentication information which, while valid, grants temporary access to anyone who has that URL, for a limited amount of time. This way we can provide access to a bucket's objects without giving explicit permissions to particular users.
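For example, a presigned GET URL generated by the AWS SDK carries the Signature V4 authentication data as query parameters, roughly along these lines (bucket, region and values are placeholders):

https://<bucket>.s3.<region>.amazonaws.com/<object-key>
	?X-Amz-Algorithm=AWS4-HMAC-SHA256
	&X-Amz-Credential=...
	&X-Amz-Date=...
	&X-Amz-Expires=60
	&X-Amz-SignedHeaders=host
	&X-Amz-Signature=...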

S3 Bucket

import { BlockPublicAccess, Bucket, HttpMethods } from 'aws-cdk-lib/aws-s3';

new Bucket(scope, `exanubes-bucket`, {
	bucketName: 'unique-bucket-name-320f',
	// keep the bucket private, access goes through signed urls only
	blockPublicAccess: BlockPublicAccess.BLOCK_ALL,
	cors: [
		{
			// presigned POST uploads are sent directly from the browser
			allowedMethods: [HttpMethods.POST],
			allowedOrigins: ['http://localhost:5173'],
			allowedHeaders: ['*']
		}
	]
});

Even though we want to be able to upload and download files from the bucket, we still want to keep it private so our data is more secure. We're gonna upload new files straight from the client, not our server, so we need to configure CORS, as browsers cannot make cross-origin requests without it.

Uploading

Uploading is the more complicated of the two. It involves at least two steps, sometimes three. First, we need to generate the signed URL; second, we have to use it on the client to upload the selected file; and third, we'd probably want to save a reference to the uploaded file somewhere so we don't have to query S3 all the time and so that we can create relationships with our other data types. For that, we're gonna have to send a confirmation request.

Generate POST Signed URL

import { S3Client } from '@aws-sdk/client-s3';
import { createPresignedPost } from '@aws-sdk/s3-presigned-post';

const client = new S3Client();

export async function createPostUrl(props) {
	// Content-Disposition decides the file name the object will be saved under
	const contentDisposition = `attachment; filename=${props.fileName}`;
	return createPresignedPost(client, {
		Bucket: props.bucket,
		Key: props.key,
		Expires: 60, // seconds
		Fields: {
			'Content-Disposition': contentDisposition
		}
	});
}

Here we generate the POST URL, which has its own dedicated module, @aws-sdk/s3-presigned-post. We provide a bucket name and a UUID key. The link will expire after a minute, and optionally we can add a Content-Disposition field when saving the file, which is responsible for setting the file name of the object; otherwise the key will be used as the file name. We can also set this when creating a download URL, which is much easier to do.
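To put that function to use, here's a rough sketch of what a backend handler could look like. The handleCreateUploadUrl name, the randomUUID key and the bucket value are just placeholders for illustration, not the actual implementation:

// A sketch only: handler name, bucket value and UUID key are illustrative
import { randomUUID } from 'node:crypto';

export async function handleCreateUploadUrl(fileName: string) {
	// A UUID keeps object keys unique; fileName only affects Content-Disposition
	const key = randomUUID();
	const presignedPost = await createPostUrl({
		bucket: 'unique-bucket-name-320f', // assumed to match the CDK bucketName
		key,
		fileName
	});

	// url + fields go back to the client, key will be needed for the confirmation step
	return { ...presignedPost, key };
}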

Send file to AWS

export function uploadToS3(file: File, url: string, fields: Record<string, string>) {
	const formData = new FormData();

	// All fields returned with the presigned POST must be included in the form data
	Object.entries(fields).forEach(([key, value]) => {
		formData.append(key, value);
	});

	// The file has to be the last field appended, otherwise the upload fails
	formData.append('file', file);

	return fetch(url, {
		method: 'POST',
		body: formData
	});
}

The presigned POST comes with a url and a fields object. AWS expects to receive those fields as form data in the request when uploading the file. We append the file last and use the form data as the request body. This is important because if we append the file before the fields, the upload will fail.

The file should be appended to the form data last, otherwise the request will fail
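Tying it together on the client, the flow could look roughly like this; the /upload-url endpoint is a hypothetical stand-in for whatever backend route calls createPostUrl:

// A rough client-side sketch; the /upload-url endpoint is hypothetical
async function handleFileSelected(file: File) {
	// Ask our backend for a presigned POST for this file
	const response = await fetch('/upload-url', {
		method: 'POST',
		headers: { 'Content-Type': 'application/json' },
		body: JSON.stringify({ fileName: file.name })
	});
	const { url, fields, key } = await response.json();

	// Send the file straight to S3, it never touches our server
	const upload = await uploadToS3(file, url, fields);
	if (!upload.ok) {
		throw new Error(`Upload failed with status ${upload.status}`);
	}

	// key can now be used in the confirmation request mentioned below
	return key;
}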

Once this request is successful, we can send a backend request to save the object reference in our database. I’ll skip this implementation detail to keep this short.

You can check out my video which covers this topic in more detail if you're interested.

Download

Downloading is much simpler, as all we have to do is generate a URL and use it.

Generate Download URL

import { GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

export async function createDownloadUrl(props) {
	const input = {
		Bucket: props.bucket,
		Key: props.key,
		// attachment forces a download, inline would open the file in the browser
		ResponseContentDisposition: `attachment; filename=${props.name}`
	};

	const command = new GetObjectCommand(input);

	// client is the same S3Client instance created earlier
	return getSignedUrl(client, command, { expiresIn: 60 });
}

Similar to the POST signed URL, we have to provide a bucket and the key of the object we want to download. This is why it's a good idea to save an object reference in the database, so we have easy access to object keys.

We can set the ResponseContentDisposition prop to control the behaviour and file name of the object when the user downloads it: attachment will download the file, whereas inline will open the file in a browser tab.

Save object keys in your database for easy access
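For example, a backend handler could look up the saved reference and presign a URL for it. getFileRecord and its shape below are made up for the sake of the sketch:

// A sketch only; getFileRecord stands in for whatever database access you use
declare function getFileRecord(id: string): Promise<{ key: string; name: string }>;

export async function handleCreateDownloadUrl(fileId: string) {
	// The reference we saved after the upload confirmation
	const record = await getFileRecord(fileId);

	const url = await createDownloadUrl({
		bucket: 'unique-bucket-name-320f', // assumed bucket name
		key: record.key,
		name: record.name
	});

	return { url };
}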

Download File

async function downloadS3File() {
	// Ask the backend for a signed download url
	const response = await fetch(`/mock-endpoint`);
	const data = await response.json();

	// Create a temporary anchor pointing at the signed url and click it
	const link = document.createElement('a');
	link.href = data.url;
	document.body.appendChild(link);
	link.click();
	document.body.removeChild(link);
}

The simplest way to download a file that I know of is to create an anchor element, set its href to the signed URL, append the link to the document, click it, and remove the element.

Fetching the URL instead would just leave you with the response body as a Blob or Buffer, and you'd still have to figure out a way to save it to the user's file system, which as far as I know cannot be done, because JavaScript in the browser does not have direct access to the filesystem.

Summary

That’s it! We’ve covered how to handle uploading and downloading files from our S3 bucket without making it public. We were able to upload the files from the client’s computer straight into AWS without any intermediary servers.