2 Amazon Simple Storage Service (AWS S3) - Reference Documentation
Authors: Lucas Teixeira, Jay Prall
Version: 1.2.12.4
Table of Contents

2 Amazon Simple Storage Service (AWS S3)
2.1 Specific configuration for S3
2.2 Uploading files
2.2.1 Setting file virtual path
2.2.2 Overriding AWS credentials
2.2.3 Overriding bucket to file upload
2.2.4 ACL (file permission)
2.2.5 RRS - Reduced Redundancy Storage
2.2.6 Setting File Metadata
2.2.7 Using Server Side Encryption
2.3 Deleting files
2.4 Accessing files
2.4.1 Accessing public files
2.4.2 Accessing private files
2.4.3 Creating public URLs for private files
2.4.4 Creating torrent for S3 hosted files
2.4.5 Accessing only file metadata
2.1 Specific configuration for S3
In your Config.groovy, inside the grails.plugin.aws closure, you can set some extra configuration for S3 usage. These properties will be used whenever the respective config is not defined when uploading a file. For example, you can define a default bucket for file uploads: every time you attempt to upload a file without explicitly setting the bucket name, this default will be used.

The default config options you can set are listed below.

Bucket name
To set a default bucket that will be used for all file uploads, use the config below:

grails {
plugin {
aws {
s3 {
bucket = "grails-plugin-test"
}
}
}
}
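
With this default in place, an upload that doesn't name a bucket goes to grails-plugin-test. A minimal sketch (the s3upload call and the path method are covered in the "Uploading files" section below):

//no bucket set in the closure, so the default from Config.groovy is used
def s3file = new File("/tmp/test.txt").s3upload {
    path "examples/"
}
//the file ends up at grails-plugin-test.s3.amazonaws.com/examples/test.txt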
Bucket location
When creating buckets, you can define the default bucket location using the bucketLocation config, as shown below:

grails {
plugin {
aws {
s3 {
bucketLocation = "EU"
}
}
}
}
ACL (file permission)
These are the permissions that will be granted on the file. You can use:

- public: Allows public read access to everyone that attempts to read this file
- private: Sets private access to this file; only your account will be able to read/write it
- public_read_write: Makes this file wide open to any AWS account, for both read and write
- authenticated_read: Only authenticated AWS accounts will have permission to read the file
For example, to make uploads public by default:

grails {
plugin {
aws {
s3 {
acl = "public"
}
}
}
}
Or, to keep uploads private by default:

grails {
plugin {
aws {
s3 {
acl = "private"
}
}
}
}
RRS - Reduced Redundancy Storage
Files stored with RRS get cheaper storage, with 99.99% durability instead of the 99.999999999% provided by standard AWS S3 storage. More information here: http://aws.amazon.com/about-aws/whats-new/2010/05/19/announcing-amazon-s3-reduced-redundancy-storage/

RRS is disabled by default; if you would like to enable it for all uploads, use this config key:

grails {
plugin {
aws {
s3 {
rrs = true
}
}
}
}
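
For reference, all the defaults above live in the same closure and can be combined; a sketch using the values shown in this section:

grails {
    plugin {
        aws {
            s3 {
                bucket = "grails-plugin-test"
                bucketLocation = "EU"
                acl = "private"
                rrs = true
            }
        }
    }
}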
2.2 Uploading files
The plugin adds support for uploading files to Amazon S3 by adding an s3upload(...) method to the File and InputStream classes. You just need to call this method, passing a closure with the config options you want to override; if you don't want to override anything, pass nothing and the plugin will pick up the default options from Config.groovy.

Simple File upload from a File object
def s3file = new File("/tmp/test.txt").s3upload { path "folder/to/my/file/" }
The uploaded file will then be available at:

<default-bucket>.s3.amazonaws.com/folder/to/my/file/test.txt
Uploading files directly from an InputStream
This is useful when you don't have the file stored in your filesystem. When a user uploads files to your application using a multipart/form-data form, you can send them directly to S3. Imagine you have an upload form like this:

<g:uploadForm action="uploadFromInputStream">
    <input type="file" name="photo">
    <input type="submit" value="upload">
</g:uploadForm>

In the corresponding action, you can then upload straight from the request:
def file = request.getFile('photo')
def uploadedFile = file.inputStream.s3upload(file.originalFilename) {
bucket "file-upload-from-inputstream"
}
Uploading from an InputStream requires one extra parameter: the name this object will have in S3, usually the original file name. Note that when you use File.s3upload you just pass the closure that configures the upload; when uploading from an InputStream, you must also specify the name the file will have. The example above shows exactly how to do it with the correct info from the uploaded file.
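
Putting it together, a minimal controller sketch (the controller name and the render line are illustrative; bucketName and key come from the delegated jets3t S3Object, covered in the "Accessing files" section):

class PhotoController {

    def uploadFromInputStream = {
        //'photo' matches the input name in the upload form above
        def file = request.getFile('photo')
        def uploadedFile = file.inputStream.s3upload(file.originalFilename) {
            bucket "file-upload-from-inputstream"
        }
        render "uploaded to ${uploadedFile.bucketName}/${uploadedFile.key}"
    }
}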
2.2.1 Setting file virtual path
S3 does not support paths, or buckets inside other buckets. To work around this and keep your files organized, you can use the path method inside the config closure. The plugin will then set metadata on the file telling AWS that it virtually lives in a folder that does not exist. The effect is exactly like a regular folder. For example, doing the upload below:

def uploadedFile = new File("/tmp/profile-picture.jpg").s3upload {
    bucket "my-aws-app"
    path "pictures/user/profile/"
}
the file will be accessible at:

http://my-aws-app.s3.amazonaws.com/pictures/user/profile/profile-picture.jpg
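
The same path can later be used to fetch the file back with the get method on the aws bean (covered in the "Accessing files" section later in this guide); a small sketch:

def fetched = aws.s3().on("my-aws-app").get("profile-picture.jpg", "pictures/user/profile/")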
2.2.2 Overriding AWS credentials
Just call the credentials method inside the upload closure, and these credentials will be used (for this upload only). Example:

def uploadedFile = new File("/tmp/test.txt").s3upload {
    credentials "my-other-access-key", "my-other-secret-key"
}
2.2.3 Overriding bucket to file upload
You can call the bucket method to define which bucket (different from the default) will be used. The bucket will be created if it does not exist.

def uploadedFile = new File("/tmp/test.txt").s3upload {
    bucket "other-bucket"
}

The file will then be available at:

other-bucket.s3.amazonaws.com/test.txt

You can also pass the bucket location as a second parameter, used when the bucket needs to be created:

def uploadedFile = new File("/tmp/test.txt").s3upload {
    bucket "bucket-not-yet-created-in-europe", "EU"
}
2.2.4 ACL (file permission)
These are the permissions that will be granted on this file; you can use the same values shown in the "ACL (file permission)" topic of section 2.1.

def uploadedFile = new File("/tmp/test.txt").s3upload {
    acl "private"
}
2.2.5 RRS - Reduced Redundancy Storage
If for some specific file you would like to use a different RRS setting, call the rrs method in the closure, passing true or false as you wish.

def uploadedFile = new File("/tmp/test.txt").s3upload {
    rrs false
}
2.2.6 Setting File Metadata
AWS S3 files can store user metadata; doing this is as simple as setting a metadata map on the file upload (note that map keys containing hyphens must be quoted in Groovy):

def uploadedFile = new File("/tmp/test.txt").s3upload {
    metadata ['user-id': 123, username: 'johndoe', 'registered-date': new Date().format('dd/MM/yyyy')]
}
2.2.7 Using Server Side Encryption
AWS S3 files can be stored encrypted in S3, and AWS allows you to request this via a request header. To use it, simply set the useEncryption property to true in the file upload closure. The plugin will set the encryption header, and AWS will encrypt the file with the AES-256 algorithm before storing it in S3.

def uploadedFile = new File("/tmp/test.txt").s3upload {
    useEncryption true
}
2.3 Deleting files
You can delete files from Amazon S3 knowing just the bucket name and the full path to the file (the file name alone, or path + file name). It is simple, as the examples below show. You'll first need to declare the aws bean that is provided by the plugin:

class MyController {

    def aws

    def myAction = {
        (...)
    }
}
Deleting files stored on the root of some bucket (without path):
To delete the "photo.jpg" file stored in the "my-app-bucket-photos" bucket (http://my-app-bucket-photos.s3.amazonaws.com/photo.jpg):

aws.s3().on("my-app-bucket-photos").delete("photo.jpg")
Deleting files stored in some path of one bucket (one at a time):
To delete the "avatar.jpg" file stored in the "my-app-bucket-avatars" bucket under the path "/users/lucastex/" (http://my-app-bucket-avatars.s3.amazonaws.com/users/lucastex/avatar.jpg):

aws.s3().on("my-app-bucket-avatars").delete("avatar.jpg", "/users/lucastex/")
Deleting all files inside one bucket
To delete all files stored in the "my-app-bucket-avatars" bucket (http://my-app-bucket-avatars.s3.amazonaws.com/*), use the deleteAll() method:

aws.s3().on("my-app-bucket-avatars").deleteAll()
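
To remove several (but not all) files from the same bucket, the delete calls can simply go in a loop; a small sketch, assuming the helper returned by on(...) can be reused across calls:

def avatars = aws.s3().on("my-app-bucket-avatars")
["a.jpg", "b.jpg"].each { name ->
    avatars.delete(name, "/users/lucastex/")
}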
2.4 Accessing files
If you are uploading some document to S3, you'll probably need to store information on how to get that file back later. The s3upload operation returns an instance of grails.plugin.aws.s3.S3File. As this plugin uses jets3t (http://jets3t.s3.amazonaws.com/index.html) to handle file uploads, S3File is just a wrapper for a delegated jets3t S3Object instance, as you can see below:

package grails.plugin.aws.s3

import org.jets3t.service.model.S3Object

class S3File {

    @Delegate
    S3Object source

    S3File(S3Object _source) {
        source = _source
    }
}
This means you can call any S3Object method directly on the returned S3File, for example:

def s3file = … //upload the file
def etag = s3file.getETag()
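
Since every S3Object method is delegated, other accessors are available the same way; a small sketch (accessor names taken from the jets3t S3Object API):

def s3file = new File("/tmp/test.txt").s3upload { bucket "my-aws-app" }

def key = s3file.key                       //object key in the bucket
def bucketName = s3file.bucketName         //bucket that holds the object
def size = s3file.contentLength            //size in bytes
def lastModified = s3file.lastModifiedDate //last modification date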
2.4.1 Accessing public files
You can always generate the URL to access your files manually, especially if you're using your own CNAME for the bucket or CloudFront in front of S3. But if you just need an easy way to generate it, you can do the following:

def url = aws.s3().on("my-first-bucket").url("photo.jpg")

If the file was stored under a path, pass it as the second parameter:

def url = aws.s3().on("my-first-bucket").url("photo.jpg", "userphotos")
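
A typical use is handing the URL to a view; a minimal sketch (the action name and the GSP usage are illustrative):

def showPhoto = {
    def photoUrl = aws.s3().on("my-first-bucket").url("photo.jpg")
    [photoUrl: photoUrl] //render with <img src="${photoUrl}"/> in the GSP
}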
2.4.2 Accessing private files
In some cases, you may need to access a private file directly (and not through a temporary public URL). For example, if you need to stream the file straight to the action's response, you can do the following:

def downloadFile = {
    def bucket = params.bucket
    def path = params.path
    def name = params.name

    def fileToDownload = aws.s3().on(bucket).get(name, path)

    response.setContentType("image/jpeg") //the content type of the file you're serving
    response.setHeader("Content-disposition", "inline; filename='${name}'")
    response.outputStream << fileToDownload.dataInputStream
}
2.4.3 Creating public URLs for private files
When you upload a file to S3 with a "private" acl, the file can't be accessed directly through its URL, only with your Amazon credentials. For example:

def s3file = new File("test.txt").s3upload {
    bucket "secret-files"
    acl "private"
}

You can still make this file temporarily public by generating a signed URL for it with the publicUrlFor method:

def publicUrl = aws.s3().on("secret-files").publicUrlFor(1.hour, "test.txt")
Defining when public URL will expire
You can set the expiration date for the public URL by passing an argument to the publicUrlFor method.

(...).publicUrlFor(3.hours, "test.txt") //will be available for 3 hours
(...).publicUrlFor(10.years, "test.txt") //available for 10 years
(...).publicUrlFor(1.second, "test.txt") //you won't get this one in time

The following time units are available:

1.second or 2.seconds
1.minute or 2.minutes
1.hour or 2.hours
1.day or 2.days
1.month or 2.months
1.year or 2.years
2.4.4 Creating torrent for S3 hosted files
It is possible to generate torrent URLs for S3 hosted files with the plugin. After uploading a file, just call the torrent(...) method on the s3 helper, this way:

def s3file = new File("test.txt").s3upload {
    bucket "secret-files"
    acl "private"
}

def torrentUrl = aws.s3().on("secret-files").torrent("test.txt")
2.4.5 Accessing only file metadata
If you just want to know some details about an object and don't need its contents, it's faster to use the getDetails method. This returns only the object's details, also known as its 'HEAD'. HEAD information includes the object's size, date, and other metadata associated with it, such as the Content-Type.

def downloadFileMetadata = {
    def bucket = params.bucket
    def path = params.path
    def name = params.name

    def fileDetails = aws.s3().on(bucket).getDetails(name, path)

    //requires: import grails.converters.JSON
    render([md5: fileDetails.getMd5HashAsBase64(),
            length: fileDetails.getContentLength(),
            type: fileDetails.getContentType()] as JSON)
}
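
If you stored user metadata at upload time (see "Setting File Metadata" above), it can be read back from the details object; a sketch assuming jets3t's getMetadataMap() exposes it alongside the S3 system headers:

def fileDetails = aws.s3().on("my-aws-app").getDetails("profile-picture.jpg", "pictures/user/profile/")

//user metadata set at upload time, plus headers such as Content-Type
fileDetails.metadataMap.each { key, value ->
    println "${key} = ${value}"
}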