How to Manually Invalidate or Enable One-Time Uploads with Amazon S3 Presigned URLs

Amazon S3 (Simple Storage Service) is a scalable and secure object storage service from AWS. One of its powerful features is the ability to generate presigned URLs. These URLs provide temporary access to S3 objects, ideal for sharing private content or enabling direct-to-S3 uploads in a secure way. But what if you want to limit the use of these URLs even further, like making them valid for a single upload only or invalidating them manually? This post will guide you step-by-step.
What Are Amazon S3 Presigned URLs?
Presigned URLs are generated by an AWS user who has access to an S3 object. They contain all the necessary information for accessing an object, including the bucket name, object key, AWS access key ID, expiration time, and a signature.
They’re typically used to provide temporary access to an object without requiring AWS security credentials or complex access management. When a presigned URL is generated, the user can specify an expiration time. After this time, the URL will no longer work.
One-Time Uploads with Presigned URLs
Unfortunately, Amazon S3 does not inherently support single-use presigned URLs. Each URL is valid until its specified expiration time, regardless of how many times it’s used. However, you can emulate one-time uploads with a bit of extra code.
To create a one-time upload URL, generate a presigned URL as usual, but with an additional step: track its usage.
Here’s how to do it in Python using Boto3:
import boto3
from botocore.client import Config
import uuid
s3 = boto3.client('s3', config=Config(signature_version='s3v4'))
# Generate a unique filename for each upload
unique_filename = str(uuid.uuid4())
presigned_url = s3.generate_presigned_url(
    ClientMethod='put_object',
    Params={'Bucket': 'your_bucket', 'Key': unique_filename},
    ExpiresIn=3600,  # URL is valid for 1 hour
    HttpMethod='PUT'
)
# Store the URL and its usage status in your database
# (`db` is a placeholder for your application's data store)
db.save_url(presigned_url, "unused")
Before handing the URL to a client, check its status in your database; if it’s “used”, refuse the request. Once an upload completes, update the status to “used”. Note that the PUT itself goes directly to S3, so your application can only enforce this check at the point where it issues the URL, or by reacting to an S3 event notification for the uploaded key.
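That check-then-mark bookkeeping can be sketched as follows, using SQLite as a stand-in for whatever database your application uses. The table name and helper functions are illustrative, not part of boto3; the conditional UPDATE makes the claim atomic, so two concurrent attempts cannot both succeed.

```python
import sqlite3

def make_db(path=":memory:"):
    # One row per presigned URL, tracking whether it has been used.
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS urls (url TEXT PRIMARY KEY, status TEXT)"
    )
    return conn

def save_url(conn, url):
    # Record a freshly generated URL as unused.
    conn.execute("INSERT INTO urls VALUES (?, 'unused')", (url,))
    conn.commit()

def claim_url(conn, url):
    # Atomically flip 'unused' -> 'used'. Only the first caller gets True,
    # which is what gives the URL its one-time semantics.
    cur = conn.execute(
        "UPDATE urls SET status = 'used' WHERE url = ? AND status = 'unused'",
        (url,),
    )
    conn.commit()
    return cur.rowcount == 1
```

A handler would call `claim_url` before releasing the URL: the first call returns `True`, every later call returns `False` and should be rejected.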
Manual Invalidation of Presigned URLs
Again, AWS does not support invalidating a presigned URL directly (short of revoking the credentials that signed it, which affects every URL they produced). However, you can achieve the same effect with the tracking approach above.
Generate the presigned URL and store its status in your database. Whenever you want to invalidate the URL, update its status. Before each access, check the URL’s status. If it’s “invalidated”, do not allow access.
Here’s the Python code:
# Generate and save the URL as before
presigned_url = s3.generate_presigned_url(...)
db.save_url(presigned_url, "valid")
# To invalidate the URL
db.update_url_status(presigned_url, "invalidated")
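The `db` object in these snippets is hypothetical. A minimal SQLite-backed version of it, matching the `save_url` and `update_url_status` calls above, might look like this (the class and table names are assumptions, not an AWS or boto3 API):

```python
import sqlite3

class UrlStatusDB:
    """Minimal stand-in for the `db` object used in the snippets above."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS urls (url TEXT PRIMARY KEY, status TEXT)"
        )

    def save_url(self, url, status):
        # Insert the URL with its initial status ("valid", "unused", etc.).
        self.conn.execute("INSERT OR REPLACE INTO urls VALUES (?, ?)", (url, status))
        self.conn.commit()

    def update_url_status(self, url, status):
        # Flip the status, e.g. to "invalidated" or "used".
        self.conn.execute("UPDATE urls SET status = ? WHERE url = ?", (status, url))
        self.conn.commit()

    def get_status(self, url):
        # Returns the current status, or None for an unknown URL.
        row = self.conn.execute(
            "SELECT status FROM urls WHERE url = ?", (url,)
        ).fetchone()
        return row[0] if row else None
```

Your access-checking code then becomes a `get_status` lookup: anything other than a valid status means the URL should not be handed out.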
Conclusion
Although Amazon S3 does not directly support one-time uploads or manual invalidation for presigned URLs, we can emulate these behaviors with a little extra work. By storing URL usage status in a database, we can control access as needed.
Remember, this method requires your application to check the URL’s status before each access, so it does add some overhead to your operations. However, it provides a useful way to enhance the security and control of your S3 objects.
In the future, if AWS adds direct support for these features, we’ll be able to simplify the process. But for now, this is a solid workaround for any data scientist or software engineer looking to maximize the utility and security of their S3 objects.
About Saturn Cloud
Saturn Cloud is your all-in-one solution for data science & ML development, deployment, and data pipelines in the cloud. Spin up a notebook with 4TB of RAM, add a GPU, connect to a distributed cluster of workers, and more. Join today and get 150 hours of free compute per month.