Direct upload to S3 with CORS
Everything detailed in this article has been wrapped up in this gem; you should give it a look!
Still, I advise you to read this article, as it will help you understand how everything works!
Since the beginning of September, Amazon has added CORS support to S3. As this is quite recent, there is not yet much documentation, nor many tutorials, about how to get everything up and running for your app.
If you're working with Heroku, you might have already faced the 30-second limit on each request. There are some alternatives, such as CarrierWaveDirect, an extension of the great CarrierWave gem. I gave it a quick look, but I found it quite crappy, as it forces you to change your CarrierWave settings (removing the store_dir method, really?) and it only works for a single file. So I thought it would be better to handle uploads manually for big files, and rely on vanilla CarrierWave for my other, smaller uploads.
Set up your bucket
First, you'll need to configure your bucket to enable CORS under certain conditions.
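As a starting point, a permissive development-only configuration might look like the following (this follows the CORSConfiguration XML schema S3 expects; the wildcard origin is deliberately wide open):

```xml
<CORSConfiguration>
  <CORSRule>
    <!-- Allow any origin while developing; restrict this in production -->
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
```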
Of course, those settings are only for development purposes; you'll probably want to restrict the AllowedOrigin rule to your own domain. The documentation about those settings is quite good.
Set up your server
One solution would be to write the content of all those variables directly in the form, so it's ready to be submitted, but I believe most of those values should not be exposed in the DOM. So we'll create a new route used to fetch that data.
This example is written with Rails, but doing the same in another framework should be really simple.
Now that we have our new route, let's create the controller which will send the data back to the S3 form.
The policy and signature methods are borrowed from the blog posts linked above, with one exception: I had to include the "starts-with" constraint, otherwise S3 was yelling 403 at me.
Everything else is quite straightforward; there's just a small detail to consider if you set the acl to 'private', but more on that later.
One last detail: the key value is actually the path of your file in your bucket, so set it to whatever you want, but make sure it matches the constraint you set in the policy.
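As a sketch, the policy and signature generation can be boiled down to the two helpers below. This is the standard signing scheme for browser-based S3 POST uploads (base64-encoded JSON policy, HMAC-SHA1 signature); the method names, the one-hour expiry and the condition list are my assumptions:

```ruby
require 'base64'
require 'json'
require 'openssl'

# Build the base64-encoded policy document S3 expects.
# Conditions must cover every field the form will send.
def upload_policy(bucket:, key_prefix:, acl:, expires_at: Time.now.utc + 3600)
  document = {
    expiration: expires_at.strftime('%Y-%m-%dT%H:%M:%SZ'),
    conditions: [
      { bucket: bucket },
      { acl: acl },
      { success_action_status: '201' },
      ['starts-with', '$key', key_prefix],   # without this, S3 answers 403
      ['starts-with', '$Content-Type', '']
    ]
  }.to_json
  Base64.encode64(document).gsub("\n", '')
end

# Sign the encoded policy with the AWS secret key (HMAC-SHA1, base64).
def upload_signature(policy, secret_key)
  Base64.encode64(
    OpenSSL::HMAC.digest(OpenSSL::Digest.new('sha1'), secret_key, policy)
  ).gsub("\n", '')
end
```

The controller action then only has to render these values as JSON, along with the key, acl and access key id, so the client can drop them into the form.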
That's basically everything we have to do on the server side.
Add the jQueryFileUpload files
Next you'll have to add the jQueryFileUpload files. The plugin ships with a lot of files, but I found most of them useless; you only need a few of them.
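For what it's worth, a basic setup usually only needs the core widget files. With the Rails asset pipeline, a minimal manifest could look like this (file names from the jQuery-File-Upload distribution; adjust to whatever you vendored):

```javascript
//= require jquery.ui.widget
//= require jquery.iframe-transport
//= require jquery.fileupload
```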
Now let's set up jQueryFileUpload to send the correct data to S3.
Based on what we did on the server, the workflow is composed of two requests: first we fetch the needed data from our server, then we send everything to S3.
The form I'm using is a plain HTML form posting to the bucket; the order of the parameters is important.
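For reference, here is the general shape such a form takes. The field names are the ones expected by the S3 POST API, and the file input has to come last, since S3 ignores any field that follows it; the bucket URL and the form id are placeholders, and the hidden fields start empty because they get filled from our new route:

```html
<form id="s3-upload" action="https://your-bucket.s3.amazonaws.com"
      method="post" enctype="multipart/form-data">
  <input type="hidden" name="key" />
  <input type="hidden" name="AWSAccessKeyId" />
  <input type="hidden" name="acl" />
  <input type="hidden" name="policy" />
  <input type="hidden" name="signature" />
  <input type="hidden" name="success_action_status" value="201" />
  <!-- the file field must be the last one -->
  <input type="file" name="file" />
</form>
```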
A quick explanation of what's going on here:
The add callback allows us to fetch the missing data before the upload. Once we have the data, we simply insert it into the form.
The done callback is only used for UX purposes: it shows and hides the progress bar when needed.
The real magic happens in the progress callback, as it gives you the current progress of the upload in its event argument.
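Putting those callbacks together, the client-side wiring could look like the sketch below. The selectors, the /s3_upload_data route and the helper names are my assumptions; the callback signatures are the ones jQueryFileUpload provides:

```javascript
// Turn the plugin's progress data (loaded/total bytes) into a percentage.
function uploadPercent(data) {
  return Math.round((data.loaded / data.total) * 100);
}

// Call once the DOM is ready; `$` is a jQuery instance with the
// fileupload plugin already loaded.
function wireS3Upload($) {
  $('#s3-upload').fileupload({
    add: function (e, data) {
      // 1) fetch the missing fields (key, policy, signature, ...) from our server
      $.getJSON('/s3_upload_data', function (fields) {
        // 2) insert them into the form's hidden inputs
        $.each(fields, function (name, value) {
          $('#s3-upload input[name="' + name + '"]').val(value);
        });
        // 3) then actually start the upload to S3
        data.submit();
      });
    },
    progress: function (e, data) {
      $('#progress .bar').css('width', uploadPercent(data) + '%');
    },
    done: function () {
      // pure UX: hide the bar once the upload is over
      $('#progress').hide();
    }
  });
}
```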
In my example, this form sits next to a 'real' Rails form which is used to save an object that has, among its attributes, a file_url linked to the "big file" we just uploaded. So once the upload is done, I fill in the 'real' field, and my object is created with the right URL without any extra handling. After submitting the real form, my object is saved with the URL of the file uploaded to S3.
If you're uploading public files, you're good to go, everything's perfect. But if you're uploading private files (this is set with the acl parameter), there's still one last thing to handle.
Indeed, the URL itself is not enough: if you try accessing it, you'll face an ugly XML error. The solution I used was the aws gem, which provides a great method: AWS::S3Object#url_for. With that method, you can get an authorized URL for the desired duration, given your bucket name and the key (the path of your file in the bucket).
So my custom URL accessor was a thin wrapper around that method.
This involves some weird handling with CGI::unescape, and there's probably a better way to achieve this, but it's one way to do it, and it works fine.
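For illustration, here is a hand-rolled sketch of what such an accessor computes, using S3's query-string authentication (signature version 2) directly instead of the gem; the method name and arguments are my assumptions:

```ruby
require 'base64'
require 'cgi'
require 'openssl'

# Build a time-limited, signed GET URL for a private S3 object.
def authorized_url(bucket, key, access_key, secret_key, expires_in = 300)
  key     = CGI.unescape(key)  # the stored key may be URL-encoded
  expires = Time.now.to_i + expires_in
  # Canonical string S3 expects for query-string authentication
  string_to_sign = "GET\n\n\n#{expires}\n/#{bucket}/#{key}"
  signature = CGI.escape(
    Base64.encode64(
      OpenSSL::HMAC.digest(OpenSSL::Digest.new('sha1'), secret_key, string_to_sign)
    ).strip
  )
  "https://#{bucket}.s3.amazonaws.com/#{key}" \
    "?AWSAccessKeyId=#{access_key}&Expires=#{expires}&Signature=#{signature}"
end
```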
A live example running on Heroku, where you'll be able to upload files for more than 30 seconds, is coming soon.
I changed every access to the AWS variables (BUCKET, SECRET_KEY and ACCESS_KEY) to go through environment variables. By doing so, you don't have to put the values directly in your files; you just have to set the variables correctly:
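A minimal sketch of that, using the variable names above (the constant name and the local-development fallback values are my own):

```ruby
# Read the AWS settings from the environment so no secret ends up in the
# repository; the fallback values are for local development only.
S3_CONFIG = {
  bucket:     ENV.fetch('BUCKET',     'my-dev-bucket'),
  access_key: ENV.fetch('ACCESS_KEY', 'dev-access-key'),
  secret_key: ENV.fetch('SECRET_KEY', 'dev-secret-key')
}.freeze
```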
When deploying on Heroku, you just have to set the same variables with the heroku config:set command.