
Serverless on AWS #1: A serverless video converter

Wailua Falls, Kauai

A year or so ago, with great enthusiasm, I ordered a copy of the second edition of Serverless Architectures on AWS — and when it finally arrived, I promptly set it aside, and almost forgot about it entirely.

Life, as it happens, has a way of getting in the way. Last week, though, I finally managed to pick it back up, and I'm excited to start working through it chapter by chapter. Serverless, once you manage to get your head around it, really does change the way you think about cloud computing in fundamental ways — and I happen to be one of those people who's about as bought into it philosophically as one could be. Whenever possible, if I can solve a problem serverlessly, I'm going to give that a try first.

As of now, I'm only a couple of chapters into the book, but I'm enjoying it so far — and I love that it covers such a broad range of material. Flipping through the upcoming sections, though (each of which covers a particular serverless use case and architecture), it occurred to me that it might be fun to work through all of the book's examples using Pulumi. (Most of the examples seem to use some combination of Serverless Framework, SAM, and the AWS CLI and/or console, but serverless architectures in general lend themselves surprisingly well to Pulumi — especially when your language of choice, like mine, happens to be JavaScript.)

So in this post, I'll kick off this little series-to-be by jumping right into the first architecture covered in the book: a simple serverless video-conversion pipeline.

Setting the stage

The hypothetical scenario is simple: You've got a video file somewhere, maybe something you shot with your phone, but it's big — several gigs, say, so much too large to just throw up onto the web or drop into a blog post. To make use of this giant raw video file, you need a way to compress and resize it into a web-friendly format — which, these days, probably means converting it into an H.264 encoded MP4.

Of course, you could easily do this with something like iMovie. But let's assume that for whatever reason, you can't; maybe you don't have an iPhone or a Mac, or you do, but you need to transcode a thousand videos for work, or you've otherwise found yourself in a situation that truly calls for a bit of cloud automation. How do you do it? And more importantly, how do you do it serverlessly on AWS?

Serverless Architectures to the rescue

Serverless Architectures on AWS offers a clear and ostensibly simple solution: Wire together a few high-level Amazon services and call it a day. The fundamental idea is that you should be able to start with a large video file, throw it at AWS, and wait patiently for the transcoded version (or versions) to emerge through the magic of AWS (and perhaps a little coding on your part).

Turns out that's totally doable, and there really isn't all that much to it — architecturally at least. The book proposes the following simple setup:

Here, the user — e.g., you — begins by uploading a video to an S3 bucket. A Lambda function configured to respond to file uploads handles the event by parsing its metadata (for the name of the file, its location, etc.) and then contacting an AWS managed service called Elemental MediaConvert to request a new conversion job, passing along a few details like the desired file format, bit rate, and name of the bucket where the transcoded files ultimately should be stored. MediaConvert then does its thing, converting the file according to your specs and writing the new video to S3.

And that's pretty much it. And of course, because it's all serverless, it scales automatically (meaning you can throw as many videos at the pipeline as you like), and you only pay for the time it takes AWS to run your Lambdas and convert each video file.

But the approach the book takes to implement this design — as an example of infrastructure as code — is surprisingly awkward and laborious. Here's how it's done:

  1. First, in the AWS console, navigate to the S3 dashboard and create a new bucket for your transcoded video files — the ones MediaConvert will eventually create for you. Be sure to come up with a unique name for this bucket because no two S3 buckets can ever share the same name anywhere in the known universe. And remember the name you choose. You'll need it later.
  2. Then, navigate to the AWS IAM dashboard and create a new role to grant AWS Lambda permission to make calls on the MediaConvert service. Make sure you copy the new role's Amazon Resource Name (ARN) as well — you'll need that later, too.
  3. Next, make another new IAM role, this one to grant MediaConvert permission to write video files to S3. Copy this role's ARN somewhere also.
  4. Navigate to the MediaConvert dashboard and locate your AWS account's assigned MediaConvert API endpoint. (AWS assigns these automatically.) As you do this, double-check that the currently selected region (as shown in the upper-right corner of the console) is the same one you're planning to deploy into later, as MediaConvert API URLs are region-specific. Copy the endpoint's unique URL. The Lambda function will need it to submit conversion requests.
  5. Install the Serverless Framework CLI and generate a new aws-nodejs project.
  6. Open the generated YAML file and paste these four strings — destination bucket, Lambda role ARN, MediaConvert role ARN, and MediaConvert API endpoint — into their proper positions in the file. Add a few lines to expose these values as environment variables (so the Lambda can use them at runtime), then fill out the generated JavaScript function to complete the Lambda itself.
  7. Finally, run serverless deploy to provision the upload bucket and Lambda resources (because you've already created everything else by hand).

Easy, right?

So, here's the thing. This totally works, and the architecture is fine; it's definitely the right set of AWS resources to use for an app like this one. All of the ingredients — buckets, IAM roles, function code — are necessary; Lambda needs permission to talk to MediaConvert, MediaConvert needs permission to write to S3, and so on. As far as the architecture itself is concerned, given the various product offerings and technical constraints of AWS, what's presented here makes total sense, design-wise.

But man — all this clicking around in the AWS console, hard-coding of thought-up bucket names, copy-pasting of URLs and ARNs — all of it makes this simple job way more complicated and cumbersome than it needs to be. To me, this is very much not what infrastructure as code is supposed to look like.

Infrastructure as code, to me, is all about hands-off automation. The moment you, as a practitioner of IaC, find yourself filling out web forms, clicking buttons to create resources, or hard-coding and copy-pasting values is the moment your brain should tell you something is wrong. All of this stuff should be expressible in code, not just a sliver of it — and all of it within the scope of a single program. On top of that, you — and perhaps more importantly, anyone else — should be able to look at your program code and easily grasp how it all hangs together; you shouldn't have to toggle between your IDE and the AWS console to assemble a mental picture of how an application does its job — particularly an application as simple as this one. And finally, you should be able to stand up, and easily switch between, multiple deployment environments (think dev, staging, production) without having to change anything (like hard-coded S3 bucket names or resource IDs) in the code itself.
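
To make that last point concrete, here's a minimal sketch of how per-environment settings work in Pulumi: each stack (dev, staging, production) carries its own configuration, which the program reads instead of hard-coding. The videos:hdOnly setting below is purely hypothetical — it's only there to illustrate the pattern.

import * as pulumi from "@pulumi/pulumi";

// Hypothetical per-stack setting, defined with
// `pulumi config set videos:hdOnly true` on whichever stack needs it.
const config = new pulumi.Config("videos");
const hdOnly = config.getBoolean("hdOnly") ?? false;

// The stack name itself is also available -- handy for tags and labels.
const stack = pulumi.getStack();

export const summary = `Stack "${stack}" produces ${hdOnly ? "HD output only" : "all renditions"}.`;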

How it's done with Pulumi

Now let's see how you might go about implementing this same architecture with Pulumi.

For starters, you'll need a new project. So if you haven't already (assuming you're interested in actually doing this yourself), install Pulumi and configure it for AWS. You can read all about how to do both in the getting-started guide.

Create the project using a starter template. There are many such templates to choose from, but for applications like this one (and for most serverless apps in general), I usually start with the aws-typescript template, as it gives you access (by way of the Node.js SDK) to Pulumi's built-in support for function serialization, which is incredibly handy for wiring up JavaScript Lambda functions (as you'll see in a moment). Start with this:

pulumi new aws-typescript

The template will prompt you for a handful of properties like the project name, stack name, and the AWS region to deploy into. Step through them, then open the project in your editor of choice.

A quick look at what we'll be building

The biggest differences between the program you're about to build and the implementation described in Serverless Architectures are these:

  • In the Pulumi version, there's no need to create anything manually
  • You can do it all within the scope of a single 100-line Node.js program

First, every component required by the program can be created or obtained by the program itself: buckets, globally unique bucket names, IAM roles, service endpoints, all of it. No visits to the AWS console (or hard-coding of strings) required.

Second, the whole application can be written in one language — TypeScript (or, if you like, plain ol' JavaScript) — and expressed in a single TypeScript file. Thanks to the magic of Pulumi's Node.js SDK, you can code both application and infrastructure in the same program — a huge advantage that makes writing, debugging, testing, refactoring, reviewing, and most importantly understanding the code way easier than managing it all in multiple places by hand.

The infrastructure requirements, however, are the same. You'll still need:

  • A bucket for uploads
  • Another bucket for transcodes
  • A Lambda function to invoke MediaConvert when a video is uploaded
  • A couple of IAM roles to grant Lambda and MediaConvert the permissions they need to do the work they need to do

At some point, it might be nice to have AWS tell you when a transcoding job is complete — but we can figure that out later. For now, this is fine; we should have all we need in terms of requirements to get going.
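
(If you're curious what that might eventually look like, here's a rough sketch — separate from the program we're about to build — of using an EventBridge rule to react when MediaConvert finishes a job. The event pattern matches the job-state-change events MediaConvert emits; what you do in the handler — log, email, publish to SNS — is up to you.)

import * as aws from "@pulumi/aws";

// Fire whenever a MediaConvert job completes or fails.
const jobStateRule = new aws.cloudwatch.EventRule("job-state", {
    eventPattern: JSON.stringify({
        source: ["aws.mediaconvert"],
        "detail-type": ["MediaConvert Job State Change"],
        detail: { status: ["COMPLETE", "ERROR"] },
    }),
});

// Handle the event with a Lambda. Here we just log it, but you could
// notify yourself however you like.
jobStateRule.onEvent("on-job-state", async (event) => {
    console.log(JSON.stringify(event));
});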

Now let's do some actual building.

Begin by declaring the source and destination buckets

Open index.ts and clear out the boilerplate code that was generated by the new-project wizard, replacing it with the following lines:

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const inputBucket = new aws.s3.Bucket("input", { forceDestroy: true });
const outputBucket = new aws.s3.Bucket("output", { forceDestroy: true });

I'll explain what each line does as we go.

Lines one and two just import the @pulumi/pulumi and @pulumi/aws libraries — nothing too exciting there. (The former contains general-purpose Pulumi APIs for things like configuration, outputs, and stacks; the latter is for working with AWS resources specifically.) The two lines that follow declare the program's input and output buckets. Notice that for both buckets, we include the optional forceDestroy argument, which tells Pulumi to delete the bucket even when it's not empty. In real-world scenarios, you might not want to use this setting (it's false by default), as it's there to keep you from accidentally deleting your data — but for us, it'll make things easier to clean up when we're done. If we didn't set it to true, pulumi destroy would fail loudly with a complaint about the buckets being non-empty. Since we want to be able to clean everything up (videos and all) in one step, forceDestroy is the way to go.

Notice as well that we didn't have to think up any globally unique names for the S3 buckets. Pulumi's auto-naming feature takes care of that for us. (As an aside, I must say that I'm a huge and unabashed fan of auto-naming; if you're using Pulumi and not using auto-naming, you almost surely should be. Yes, I get that the random names can look a bit strange at first — but the quality-of-life improvements you get in exchange are so much more than worth it, I promise.)
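
If it helps to see the difference, here's a minimal sketch contrasting the two approaches. (The explicit bucket name below is hypothetical — it's only there to show what you'd be signing up for without auto-naming.)

import * as aws from "@pulumi/aws";

// Auto-named: Pulumi appends a random suffix (e.g., "videos-8a99139"),
// so the resulting name is globally unique with no bookkeeping on your part.
const autoNamed = new aws.s3.Bucket("videos");

// Explicitly named: you own the burden of choosing a globally unique name
// (and of renaming things if you ever want a second copy of this stack).
const explicitlyNamed = new aws.s3.Bucket("videos-explicit", {
    bucket: "my-hand-picked-globally-unique-bucket-name",
});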

Grant MediaConvert permission to write to S3

With the buckets defined, create an IAM role to grant MediaConvert permission to write new videos to the output bucket. Add the following lines to the program:

// ...

const convertRole = new aws.iam.Role("convert-role", {
    assumeRolePolicy: {
        Version: "2012-10-17",
        Statement: [
            {
                Effect: "Allow",
                Action: "sts:AssumeRole",
                Principal: {
                   Service: "mediaconvert.amazonaws.com",
                },
            },
        ],
    },
    managedPolicyArns: [
        aws.iam.ManagedPolicy.AmazonS3FullAccess,
    ],
});

This aws.iam.Role grants full S3 access (which includes write access) to any MediaConvert job that assumes it. Next, you'll see how to use this role to post a new transcoding job to the MediaConvert service from AWS Lambda.

Write the Lambda upload handler

Now it's time to write the function that'll run when a file shows up in the input bucket. This function is essentially where your program logic lives: it's where you codify the rule that when you upload a video, it gets transcoded with these settings and the result lands in this bucket.

There are a few things this function needs to do:

  1. First, it needs to extract the name of the uploaded file from the Lambda's event argument. (It needs this to tell MediaConvert which video file to convert.)
  2. Once it has the name, it can use the AWS SDK for JavaScript (which Pulumi bundles with @pulumi/aws) to instantiate a MediaConvert object and call createJob() — but it'll need a couple of values first. One is the region of the MediaConvert service to use. (Recall that MediaConvert APIs are all region-specific.) The other is the URL of the region's MediaConvert endpoint. Both can be read by the function at runtime — the region by reading the value currently configured on the Pulumi stack (which you set a moment ago when you ran pulumi new) and the URL by using that region to look up the endpoint with a call to describeEndpoints().  
  3. In addition to those two values, the function also needs the ARN of the IAM role you just defined (the one MediaConvert will assume when it runs) and the name of the output bucket to write to. Both can be read similarly using the .get() method available on all Pulumi outputs.
  4. It needs to specify the settings to use for the transcoding job itself. These, as you might imagine, can get pretty complex, but thankfully AWS provides a few shortcuts for the most common ones. (See Supported Output Codecs and Containers and Working with Output Presets in the AWS docs for details.)
  5. Finally, it needs to be granted permission to make calls on the MediaConvert service. You can do this with the preconfigured IAM policy AWSElementalMediaConvertFullAccess.

Here's the whole block, with comments. Go ahead and add this to index.ts now:

// ...

const region = new pulumi.Config("aws").require("region");

inputBucket.onObjectCreated("handler", new aws.lambda.CallbackFunction("handler", {
    policies: [
        aws.iam.ManagedPolicy.AWSLambdaExecute,
        "arn:aws:iam::aws:policy/AWSElementalMediaConvertFullAccess",
    ],
    callback: async (event: aws.s3.BucketEvent) => {

        // Get the name of the file that was uploaded.
        const key = event.Records![0].s3.object.key;

        // Look up the region-specific MediaConvert endpoint.
        const client = new aws.sdk.MediaConvert({ region });
        const endpoints = await client.describeEndpoints().promise();
        const endpoint = endpoints.Endpoints![0].Url;

        // Submit a new MediaConvert job request.
        const jobRequest = await new aws.sdk.MediaConvert({ endpoint }).createJob({
            Role: convertRole.arn.get(),
            Settings: {
                Inputs: [
                    {
                        FileInput: `s3://${inputBucket.id.get()}/${key}`,
                        AudioSelectors: {
                            "Audio Selector 1": {
                                SelectorType: "TRACK",
                                Tracks: [ 1 ],
                            },
                        },
                    },
                ],
                OutputGroups: [
                    {
                        Name: "File Group",
                        Outputs: [
                            {
                                "Extension": "mp4",
                                "Preset": "System-Generic_Hd_Mp4_Avc_Aac_16x9_1280x720p_24Hz_4.5Mbps"
                            },
                        ],
                        OutputGroupSettings: {
                            Type: "FILE_GROUP_SETTINGS",
                            FileGroupSettings: {
                                Destination: `s3://${outputBucket.id.get()}/${key}`,
                            },
                        },
                    },
                ],
            },
        }).promise();

        // Log the request result.
        console.log({ jobRequest });
    },
}));

A few things to note about what's going on here:

  • The first line just reads the AWS region from the stack configuration, as mentioned above.
  • The next line uses a couple of Pulumi resources to create a Lambda event handler known as a magic function. The Pulumi docs explain how magic functions work in more detail, but the gist is that when they're expressed in this way, Pulumi is able to package up Lambda functions for you (along with their dependencies and any closed-over variables) and create the S3 triggers to invoke the function in response to interesting events — e.g., uploads.
  • The IAM policies to apply to the Lambda are passed as an array of strings — one of which, sadly, lacks a TypeScript constant of its own. At deploy-time, Pulumi will use these policies to create an IAM role for the Lambda to assume when it runs in response to an upload event.
  • The event passed into the function is exposed as a typed aws.s3.BucketEvent — a useful thing for figuring out (with the help of your IDE's built-in type hinting) how to unwrap that event to get at its most useful parts.
  • Because we're working with Node.js, we can also use the syntactic sweetness of async/await to make working with the promise-based AWS SDK a little nicer.
  • The console.log() statement at the end writes an entry to Amazon CloudWatch logs for the function — a convenient way to debug runtime behavior when you need to. (In a moment you'll see how convenient that can be.)

With the buckets, IAM role, and Lambda now written, you're just about ready to deploy to AWS. All you need now are the names of your buckets-to-be.

Export the source and destination bucket names

Add the following two lines to finish the program:

// ...

export const inputBucketID = inputBucket.id;
export const outputBucketID = outputBucket.id;

These last two lines expose the input and output bucket names as Pulumi stack outputs. Strictly speaking, they aren't necessary, but if we left them out, we'd have to do a little digging to find the generated (i.e., auto-named) bucket names in order to work with them. Exporting them as outputs renders them onscreen when the deployment completes — and also gives us a handy way of referencing them (as you'll see) without having to copy and paste anything on the command line.

Deploy!

Now it's time to push this program out into the world and start using it. Here, for reference, is what you should now have for index.ts:

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

// Read the AWS region from the currently selected stack.
const region = new pulumi.Config("aws").require("region");

// Provision two buckets: one for uploads, one for transcodes.
const inputBucket = new aws.s3.Bucket("input", { forceDestroy: true });
const outputBucket = new aws.s3.Bucket("output", { forceDestroy: true });

// Define a role that grants MediaConvert permission to write to S3.
const convertRole = new aws.iam.Role("convert-role", {
    assumeRolePolicy: {
        Version: "2012-10-17",
        Statement: [
            {
                Effect: "Allow",
                Action: "sts:AssumeRole",
                Principal: {
                   Service: "mediaconvert.amazonaws.com",
                },
            },
        ],
    },
    managedPolicyArns: [
        aws.iam.ManagedPolicy.AmazonS3FullAccess,
    ],
});

// Handle uploads by extracting the video filename and creating a new MediaConvert job.
inputBucket.onObjectCreated("handler", new aws.lambda.CallbackFunction("handler", {
    policies: [
        aws.iam.ManagedPolicy.AWSLambdaExecute,
        "arn:aws:iam::aws:policy/AWSElementalMediaConvertFullAccess",
    ],
    callback: async (event: aws.s3.BucketEvent) => {

        // Get the name of the file that was uploaded.
        const key = event.Records![0].s3.object.key;

        // Look up the region-specific MediaConvert endpoint.
        const client = new aws.sdk.MediaConvert({ region });
        const endpoints = await client.describeEndpoints().promise();
        const endpoint = endpoints.Endpoints![0].Url;

        // Submit a new MediaConvert job request.
        const jobRequest = await new aws.sdk.MediaConvert({ endpoint }).createJob({
            Role: convertRole.arn.get(),
            Settings: {
                Inputs: [
                    {
                        FileInput: `s3://${inputBucket.id.get()}/${key}`,
                        AudioSelectors: {
                            "Audio Selector 1": {
                                SelectorType: "TRACK",
                                Tracks: [ 1 ],
                            },
                        },
                    },
                ],
                OutputGroups: [
                    {
                        Name: "File Group",
                        Outputs: [
                            {
                                "Extension": "mp4",
                                "Preset": "System-Generic_Hd_Mp4_Avc_Aac_16x9_1280x720p_24Hz_4.5Mbps"
                            },
                        ],
                        OutputGroupSettings: {
                            Type: "FILE_GROUP_SETTINGS",
                            FileGroupSettings: {
                                Destination: `s3://${outputBucket.id.get()}/${key}`,
                            },
                        },
                    },
                ],
            },
        }).promise();

        // Log the request result.
        console.log({ jobRequest });
    },
}));

// Export the input and output bucket IDs.
export const inputBucketID = inputBucket.id;
export const outputBucketID = outputBucket.id;

Deploy the program with a single pulumi up, checking the preview to make sure everything looks right:

$ pulumi up
...

Do you want to perform this update? yes
Updating (dev)

     Type                                  Name              Status              Info
 +   pulumi:pulumi:Stack                   mediaconvert-dev  created (3s)        4 messages
 +   ├─ aws:iam:Role                       handler           created (0.93s)     
 +   ├─ aws:iam:Role                       convert-role      created (1s)        
 +   ├─ aws:s3:Bucket                      output            created (1s)        
 +   ├─ aws:s3:Bucket                      input             created (1s)        
 +   │  ├─ aws:s3:BucketEventSubscription  handler           created (0.40s)     
 +   │  │  └─ aws:lambda:Permission        handler           created (0.35s)     
 +   │  └─ aws:s3:BucketNotification       handler           created (0.73s)     
 +   ├─ aws:iam:RolePolicyAttachment       handler-aadec3c3  created (0.34s)     
 +   ├─ aws:iam:RolePolicyAttachment       handler-2cc11edf  created (0.48s)     
 +   └─ aws:lambda:Function                handler           created (13s)       

Outputs:
    inputBucketID : "input-8a99139"
    outputBucketID: "output-01cb7a4"

Resources:
    + 11 created

Duration: 20s

When the deployment completes (after twenty seconds or so), you'll have your two uniquely named buckets, along with the Lambda and IAM roles that connect them.

That's it! Now let's transcode some video.

Upload a video to convert

Open a new terminal tab and navigate to the directory containing the project, then run the following command:

$ pulumi stack output inputBucketID

In response, you should get the name of the auto-named input bucket. Here's what I see when I run that command, for example:

input-8a99139

Now, back in the other terminal tab (the one where you just ran pulumi up), run pulumi logs to tail the CloudWatch logs for the Lambda you just created:

$ pulumi logs --follow

Collecting logs for stack dev since 2023-07-07T15:25:22.000-07:00.

Let that run, then switch over to the tab with the input bucket name and upload a video file from your computer with the AWS CLI. Use pulumi stack output to pass the name of the input bucket to aws s3 cp:

$ aws s3 cp ~/Desktop/wailua-falls.mov s3://$(pulumi stack output inputBucketID)/

upload: ../../../Desktop/wailua-falls.mov to s3://input-8a99139/wailua-falls.mov

In a few seconds, your still-running pulumi logs command should show you that the Lambda was invoked and the convert job created. Here's what I see:

Collecting logs for stack dev since 2023-07-08T04:43:55.000-07:00.

2023-07-08T05:53:52.739-07:00[               handler-6a8088f] INIT_START Runtime Version: nodejs:16.v15        Runtime Version ARN: arn:aws:lambda:us-west-2::runtime:ce158dcc19c42286fef86a8dfb67e1efd92d0de18828736a00f3698410aabcb3
 2023-07-08T05:53:52.857-07:00[               handler-6a8088f] START RequestId: ee791b7f-c245-41bd-8a5e-d180847cf65a Version: $LATEST
 2023-07-08T05:54:01.170-07:00[               handler-6a8088f] 2023-07-08T12:54:01.170Z ee791b7f-c245-41bd-8a5e-d180847cf65a    INFO    {
  jobRequest: {
    Job: {
      AccelerationSettings: [Object],
      AccelerationStatus: 'NOT_APPLICABLE',
      Arn: 'arn:aws:mediaconvert:us-west-2:616138583583:jobs/1688820840907-0s24me',
      ClientRequestToken: 'f477b33d-7913-4285-9ae4-23d36282601e',
      CreatedAt: 2023-07-08T12:54:00.000Z,
      Id: '1688820840907-0s24me',
      Messages: [Object],
      Priority: 0,
      Queue: 'arn:aws:mediaconvert:us-west-2:616138583583:queues/Default',
      Role: 'arn:aws:iam::616138583583:role/convert-role-b765342',
      Settings: [Object],
      Status: 'SUBMITTED',
      StatusUpdateInterval: 'SECONDS_60',
      Timing: [Object]
    }
  }
}
 2023-07-08T05:54:01.209-07:00[               handler-6a8088f] END RequestId: ee791b7f-c245-41bd-8a5e-d180847cf65a
 2023-07-08T05:54:01.209-07:00[               handler-6a8088f] REPORT RequestId: ee791b7f-c245-41bd-8a5e-d180847cf65a   Duration: 8352.45 ms  Billed Duration: 8353 ms  Memory Size: 128 MB  Max Memory Used: 86 MB  Init Duration: 117.12 ms

Soon (depending on the length of the video you submitted), you should see a new transcoded video appear in the output bucket:

$ aws s3 ls s3://$(pulumi stack output outputBucketID)/                                                                     
2023-07-08 05:54:10    7954412 wailua-falls.mov.mp4

Copy that file from the output bucket to your computer:

$ aws s3 cp s3://$(pulumi stack output outputBucketID)/wailua-falls.mov.mp4 .       

download: s3://output-01cb7a4/wailua-falls.mov.mp4 to ./wailua-falls.mov.mp4

And behold — the result is a delightfully web-friendly MP4, ready to drop into a blog post or share with the world.


Feel free to submit a few others, fiddle with the conversion settings, and so on, to get a feel for how everything works. There's a ton you can do with this service — way more than I could ever hope to cover in a post like this one. See the MediaConvert docs for some creative inspiration, and when you've had enough fun for today, read on to learn how to clean everything up. Before you do, though, here's one small example of the kind of fiddling I mean.
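
This is a rough sketch of how you might extend the job's Outputs array to produce a second, smaller rendition alongside the 720p one. (The SD preset name below is an assumption — double-check it against the system presets listed in the MediaConvert console before relying on it.)

// A hypothetical pair of outputs to use in place of the single 720p output above.
const outputs = [
    {
        Extension: "mp4",
        Preset: "System-Generic_Hd_Mp4_Avc_Aac_16x9_1280x720p_24Hz_4.5Mbps",
    },
    {
        Extension: "mp4",
        NameModifier: "-small", // keeps the two files from colliding in S3
        Preset: "System-Generic_Sd_Mp4_Avc_Aac_16x9_640x360p_24Hz_1.5Mbps", // assumed preset name
    },
];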

Tidying up

Just as you were able to stand up this whole stack with a single pulumi up, you can tear it all down with a single pulumi destroy. Doing so removes everything you created with the program in one go — the Lambda, IAM roles, and both buckets, along with their contents. As before, you'll get a preview of the changes before Pulumi actually applies any of them:

$ pulumi destroy

Previewing destroy (dev)
...

     Type                                  Name              Plan       
 -   pulumi:pulumi:Stack                   mediaconvert-dev  delete     
 -   ├─ aws:iam:RolePolicyAttachment       handler-aadec3c3  delete     
 -   ├─ aws:iam:RolePolicyAttachment       handler-2cc11edf  delete     
 -   ├─ aws:lambda:Function                handler           delete     
 -   ├─ aws:iam:Role                       convert-role      delete     
 -   ├─ aws:s3:Bucket                      input             delete     
 -   │  ├─ aws:s3:BucketNotification       handler           delete     
 -   │  └─ aws:s3:BucketEventSubscription  handler           delete     
 -   │     └─ aws:lambda:Permission        handler           delete     
 -   ├─ aws:s3:Bucket                      output            delete     
 -   └─ aws:iam:Role                       handler           delete     

Outputs:
  - inputBucketID : "input-8a99139"
  - outputBucketID: "output-01cb7a4"

Resources:
    - 11 to delete

Do you want to perform this destroy? 
> yes
  no
  details

Choose yes if it all looks right (which it should):

Do you want to perform this destroy? yes
Destroying (dev)

View in Browser (Ctrl+O): https://app.pulumi.com/christian-pulumi-corp/mediaconvert/dev/updates/29

     Type                                  Name              Status              
 -   pulumi:pulumi:Stack                   mediaconvert-dev  deleted             
 -   ├─ aws:iam:RolePolicyAttachment       handler-aadec3c3  deleted (0.90s)     
 -   ├─ aws:iam:RolePolicyAttachment       handler-2cc11edf  deleted (0.79s)     
 -   ├─ aws:lambda:Function                handler           deleted (0.84s)     
 -   ├─ aws:s3:Bucket                      input             deleted (0.81s)     
 -   │  ├─ aws:s3:BucketNotification       handler           deleted (0.75s)     
 -   │  └─ aws:s3:BucketEventSubscription  handler           deleted             
 -   │     └─ aws:lambda:Permission        handler           deleted (0.45s)     
 -   ├─ aws:s3:Bucket                      output            deleted (1s)        
 -   ├─ aws:iam:Role                       handler           deleted (0.81s)     
 -   └─ aws:iam:Role                       convert-role      deleted (1s)        

Outputs:
  - inputBucketID : "input-8a99139"
  - outputBucketID: "output-01cb7a4"

Resources:
    - 11 deleted

Duration: 6s

And there you have it.

Wrapping up

I must admit, as an amateur photographer and filmmaker, I tend to get pretty excited about this sort of thing; I love how easy these tools make solving these kinds of real-world media-management problems. Dealing with stuff like this can be a huge pain sometimes — especially when you're drowning in terabytes of HD video and all you're trying to do is share a few clips of your kids playing soccer with their grandparents.

And as a developer, it kind of blows my mind how easily I can do this, and with so little operational overhead. It's one thing to provision a virtual machine and throw files at it once a week while it sits there running (and costing you money) 24 hours a day, begging to be compromised. It's quite another to deploy configuration like this that doesn't really do anything at all until it's asked, and then quietly shuts back down when it's finished doing whatever it is you've asked it to do. Even if you aren't quite as sold on serverless as I am (yet!), you've got to admit that it's nice not having to think about — let alone pay for or manage — running infrastructure that you only need every once in a while.

So I guess that's it for this episode. Keep an eye out for more in the weeks ahead, subscribe if you like (right here if RSS is your jam), and check out the repo below for the code. Comments or questions? Reach out anytime.

Thanks for reading!

Code for this post: examples/website/serverless-mediaconvert in the pulumibook/examples repo — code snippets and examples from The Pulumi Book and companion website.