Creating custom notification text with Amazon S3 and SNS (via Lambda)

If you’ve set up S3 to email “Event Notifications” via SNS, you probably found the content of the email was a bunch of gobbledy-gook that looked something like this:

Amazon S3 Default Notification Text

Not very easy to read if you just wanted to see what happened “at a glance”, and there’s currently no way to customize the message directly from S3.

Fortunately, it can be done via Amazon’s Lambda. Basically the process goes like this:

  1. S3 spits out the goop to Lambda.
  2. You have Lambda clean it up with some Node.js: pick what you want, do some formatting, get a nice subject line, etc.
  3. Lambda sends it out to SNS, and you get the email.


The Code (Node.js for Lambda):

I’m going to make a wild assumption here and guess that you’ve already set up S3 and SNS. So you just need some code for Lambda to get you started. Here it is – a template you can use for node.js when you start setting things up (warning: a bit sloppy):

'use strict';

let aws = require('aws-sdk');
let s3 = new aws.S3({ apiVersion: '2006-03-01' });

exports.handler = function(event, context) {

    // Pull the pieces we care about out of the S3 event record
    const eventn = event.Records[0].eventName;
    const filesize = event.Records[0].s3.object.size;
    const bucket = event.Records[0].s3.bucket.name;
    const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
    var eventText = JSON.stringify(event, null, 2);

    // Format the file size into something human-readable
    var filesizemod = "-";
    if (typeof filesize === "number") {
        if      (filesize >= 1000000000) { filesizemod = (filesize / 1000000000).toFixed(2) + ' GB'; }
        else if (filesize >= 1000000)    { filesizemod = (filesize / 1000000).toFixed(2) + ' MB'; }
        else if (filesize >= 1000)       { filesizemod = (filesize / 1000).toFixed(2) + ' KB'; }
        else                             { filesizemod = filesize + ' bytes'; }
    } else if (typeof filesize !== 'undefined' && filesize) {
        filesizemod = filesize;
    }

    // Build the subject (path stripped from the key) and the message body
    var subjecttext = "AWS: " + eventn + " -> " + key.replace(/^.*[\\\/]/, '') + " (" + filesizemod + ")";
    var eventText2 = "" + eventn + "\n\nFile: " + key + "\nSize: " + filesizemod + "\n\n\n----\n----\n\n" + eventText;

    // Publish the cleaned-up notification to the SNS topic
    var sns = new aws.SNS();
    var params = {
        Message: eventText2,
        Subject: subjecttext,
        TopicArn: "The ARN for your SNS Topic (arn:aws:sns:server:numbers:name)"
    };
    sns.publish(params, context.done);
};

The line you’ll need to change is the TopicArn line (swap in the ARN for your own SNS topic). You’ll probably want to change a lot of other stuff too, but start with that to begin with. Then feel free to tweak things to suit your needs.

In the above form, it’ll give a basic message similar to the screenshot below:

Amazon S3 notification - cleaned up a bit


  • A subject that tells you what happened -> file name (file size).
  • The same stuff in the message body.
  • At the end of the email (not in the screenshot) it also dumps the details/goop from before. They do get line breaks though (for some reason).
  • Update: In the Subject line, it will now strip out any file path (leaving only the file name), because exceeding the Subject line character limit becomes much more likely if it includes the full path. This is performed by the key.replace bit.
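If you want to see what that path-stripping regex actually does, here’s a quick standalone sketch (the baseName wrapper is just for illustration – in the handler the replace is done inline):

```javascript
// Strip any leading path from an S3 object key, keeping only the file name.
// Same regex as in the Lambda handler's subject line.
function baseName(key) {
    return key.replace(/^.*[\\\/]/, '');
}

console.log(baseName('backups/2024/site-backup.tar.gz')); // "site-backup.tar.gz"
console.log(baseName('plain-file.txt'));                  // "plain-file.txt"
```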


Other Variables (optional):

If you want other variables, they’re generally accessed with event.Records[0].something (which you see a few times in the code above). For example, if you looked through the code you might have noticed there’s an unused:

const bucket = event.Records[0].s3.bucket.name;

…which assigns the bucket name to “bucket”. Basically, everything that’s in the “gobbledy-gook” you saw previously can be accessed – you can peruse the AWS Event Message Structure page and figure them out, but common ones you might want are:

  • event.Records[0].awsRegion
  • event.Records[0].eventTime (time in ISO format)
  • event.Records[0].eventName (ObjectCreated:Put, ObjectRemoved:Delete, etc)
  • event.Records[0].userIdentity (shows the user ID which may be useful if you have multiple users)
  • event.Records[0].requestParameters.sourceIPAddress
  • event.Records[0].s3.bucket.name (whatever you named your bucket)
  • event.Records[0].s3.object.key (object key which generally contains the file name)
  • event.Records[0].s3.object.size (file size)

The linked page has more, but you get the idea. Assign them to variables, manipulate them, and do what you will with them.

The Code #2 (policy for Lambda set-up)

At some point, it’s also going to ask you for a policy during the Lambda set-up. You’ll have to edit one, and may want to use something like this:

  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ],
        "Resource": "arn:aws:logs:*:*:*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "sns:Publish"
        ],
        "Resource": [
          "The ARN for your SNS Topic (arn:aws:sns:server:numbers:name)"
        ]
      }
    ]
  }

Cost and Resource Usage:

If you haven’t used Lambda before and are jumping in fresh (like I was), you might be wondering how much of a cost hit it might incur. After all, it would suck to pay through the nose just to get your email formatted nicely!

The script ran in the 1100-1400ms range during my trial attempts and used under 25MB (128MB instance).

The “free” tier of Lambda gives 3,200,000 seconds under the 128MB instance, and 1 million requests. So about 2,285,000 requests before I hit costs for compute time. That’s a lot of emails in a month, and I’m hitting all the 1M thresholds across services by that point anyway. I really just wanted an email confirmation when I do some periodic automated backups to S3, so I won’t be hitting anywhere near that. If you think you will, it’s probably worth looking at to figure it out. You could also try optimizing the script a little bit, although a short benchmark I ran showed that processing all the guts took less than 100ms – the bulk of the compute time seems to be in firing the thing up (or possibly in interacting with S3/SNS).
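If you want to sanity-check that math yourself, here’s the back-of-the-envelope version. The 3,200,000-second figure comes from the free tier’s 400,000 GB-seconds divided by the 128MB (0.125GB) instance size; the 1.4s per-invocation figure is the worst case from the trial runs above:

```javascript
// Free-tier compute budget at the 128MB instance size
const freeGbSeconds = 400000;
const instanceGb = 128 / 1024;                  // 0.125 GB
const freeSeconds = freeGbSeconds / instanceGb; // 3,200,000 seconds

// How many ~1.4s invocations fit in that budget
const perInvocationSeconds = 1.4;
const invocations = Math.floor(freeSeconds / perInvocationSeconds);
console.log(freeSeconds, invocations);          // 3200000 2285714
```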

Keep in mind that you’re still incurring the S3, SNS, and CloudWatch costs for requests/storage/etc. Lambda logs to CloudWatch by default, so make sure CloudWatch is expiring logs if you don’t use them frequently.

I’ll leave things there. If you’ve got some tweaks, tuning, or some better ideas for handling S3->SNS, feel free to leave a comment below!