How I deploy AWS Golang Lambdas with Terraform

These days I’m a bit of a fanboy when it comes to Golang and Terraform, so it’s completely natural that I wanted to make the two play nice together when deploying to AWS Lambda. Which is what I did.

I’ve played around with tools such as Serverless Framework, but I decided I would rather do it all in Terraform, as CloudFormation (which Serverless Framework utilises when deploying to AWS) is slower and has more limitations. Plus, I think it’s good to use fewer tools where possible.

Most recently, I decided to write a Terraform module that creates an AWS EventBridge catchall rule for a given service, which triggers a Golang Lambda function that logs the triggering event as JSON.

The motivation behind this was to be able to quickly deploy infrastructure that logs the events created by a specified service for later reference. This is a developer experience improvement for me, as the documentation is spotty to non-existent depending on the service (e.g. AWS Backup), and it’s impossible to build logic on top of EventBridge events that aren’t documented.

The code for this module is used as an example in this post, and is located in this GitHub repo:

In order to deploy a Golang Lambda, one must write some Go code and produce a zip archive containing a binary built by the go build command. Since the lambda code itself just logs the event as JSON and exits, I will skip over what that looks like and instead focus on the Terraform code. All said and done, the repository structure looks like this:

├── go.mod
├── go.sum
└── lambdas
    ├── archive
    │   └──
    ├── bin
    │   └── events_debug_logger
    └── cmd
        └── events_debug_logger
            └── main.go

There’s a lot to unpack in the Terraform code, so I’ll go through it bit by bit.

The locals block

I’m using the locals block to orchestrate the creation of all of my lambda resources, each of which iterates over local.lambdas. While this is overkill for the module (I’m only deploying one lambda), I decided to keep the pattern because I’ve written it before and because it makes the code more flexible. If I want to add another lambda to the stack, I just add a new key-value pair to local.lambdas, and all of the related resources are created with minimal code changes.
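As a sketch, local.lambdas can be as simple as a map keyed by lambda name (the exact shape of each entry here is an assumption; the real module may carry more per-lambda settings):

```hcl
locals {
  # One key per lambda; every other resource iterates over this map
  # with for_each, so adding a lambda is a one-line change here.
  lambdas = {
    events_debug_logger = {
      description = "Logs the triggering EventBridge event as JSON"
    }
  }
}
```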

local.null.lambda_binary_exists is a bit more interesting, and is something I’ll reintroduce when it makes more sense.

The null_resource block

Null resources come from the null provider, and while they are quite cool, they are also abstract. Basically, they implement the same lifecycle as other resources (e.g. aws_lambda_function), and can be triggered to re-create themselves based on changes to their triggers argument.

Here, I’m using null_resource to make Terraform build my lambda so that I have a binary to deploy. This build is triggered every time the lambda source code changes, or when the result of a boolean check for whether the binary exists at its destination changes.

The main trigger uses a for expression to create a concatenated base64-encoded string of all of the Go files in the source directory for that particular lambda (recall that this null_resource block uses for_each in order to produce multiple lambdas). Because the triggers block forces the null_resource to be re-created every time the value of one of its triggers changes, any change to the Go files will force the lambda to be re-built and re-deployed.

The binary_exists trigger was added in order to make collaboration easier and continuous integration (CI) pipeline deployments possible. This is related to local.null.lambda_binary_exists, but it’s not quite time to revisit that yet.

From there, the provisioner "local-exec" block runs code locally on whatever machine is running Terraform to build a binary for Linux. This local execution compiles the main.go file for a particular lambda, and then dumps the binary in lambdas/bin/${each.key}, where each.key is events_debug_logger (see the repository directory structure above).
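Putting those pieces together, a sketch of the null_resource might look like this (the exact paths, trigger names, and build flags are assumptions based on the directory layout above, not the module’s verbatim code):

```hcl
resource "null_resource" "this" {
  for_each = local.lambdas

  triggers = {
    # Re-create (and thus re-build) whenever any Go source file changes:
    # concatenate the base64 encoding of every .go file for this lambda.
    source_code = join(",", [
      for f in fileset("${path.module}/lambdas/cmd/${each.key}", "*.go") :
      base64encode(file("${path.module}/lambdas/cmd/${each.key}/${f}"))
    ])
    # Re-create when the binary disappears, e.g. on a fresh checkout or in CI.
    binary_exists = local.null.lambda_binary_exists[each.key]
  }

  provisioner "local-exec" {
    command = "GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o ${path.module}/lambdas/bin/${each.key} ${path.module}/lambdas/cmd/${each.key}/main.go"
  }
}
```

Trigger values are coerced to strings, so the boolean binary_exists value becomes "true" or "false", and any flip between the two forces re-creation.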

The archive_file data block

This data object utilises another Terraform provider, archive, in order to produce the zip archive(s). This is where local.null.lambda_binary_exists comes into play.

In Terraform, data objects run before resource objects.

With that in mind, consider a situation where I deploy the stack. null_resource produces a binary on my laptop, and archive_file produces the zip archive that is deployed. Easy.

I can make changes to the Go code that prompt Terraform to produce a new binary, and things just work because the archive_file object always has a binary.

However, months later my colleague needs to make a change to the lambda code, and isn’t able to because the archive_file data object errors out due to the binary being absent.

Error: error archiving file: could not archive missing file: ../lambdas/bin/events_debug_logger

In order for my colleague to deploy a change, they would need to manually produce a binary and place it in lambdas/bin/events_debug_logger. Best case scenario, my colleague nicks the build code from the null_resource and is able to build the binary themselves. Worst case, they waste time and send me a message. Furthermore, a CI pipeline wouldn’t be able to perform a second apply, because the binary would be lost after the first apply. One has options here, but they are messy, static, and a hassle to implement.

Fortunately, it’s possible to automate the minor nitty-gritty with Terraform, which is exactly what local.null.lambda_binary_exists does. This key-value pair uses a map comprehension to produce an object with the below schema:

null = {
  lambda_binary_exists = {
    events_debug_logger = true
  }
}

The map comprehension allows me to keep all of my resources for-looped with for_each (I want adding a new lambda to be a single step: writing the new entry to local.lambdas), and it also gives me a value that is evaluated at the same time as data objects.
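A sketch of that comprehension, assuming the same binary path used by the build step (the key name null is taken from the schema above):

```hcl
locals {
  null = {
    # One true/false entry per lambda, evaluated at plan time
    # alongside data objects rather than during apply.
    lambda_binary_exists = {
      for name, _ in local.lambdas :
      name => fileexists("${path.module}/lambdas/bin/${name}")
    }
  }
}
```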

This allows me to reference local.null.lambda_binary_exists[each.key] in the triggers of null_resource.this, conditionally re-creating null_resource.this["events_debug_logger"], and it also makes my data.archive_file depends_on the null_resource.


In lay-speak: my colleague can take over the stack and add to the lambda even if it’s already deployed. On their machine, local.null.lambda_binary_exists["events_debug_logger"] will be false where it was previously true, which triggers null_resource.this["events_debug_logger"] to re-create, after which point data.archive_file.this["events_debug_logger"] will check for the binary and find it.

With all of that out of the way, my aws_lambda_function is explicitly dependent on data.archive_file, and implicitly dependent on null_resource.
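To sketch that dependency chain (the handler, runtime, and IAM role wiring here are assumptions; the go1.x runtime expects the binary name to match the handler):

```hcl
data "archive_file" "this" {
  for_each = local.lambdas

  type        = "zip"
  source_file = "${path.module}/lambdas/bin/${each.key}"
  output_path = "${path.module}/lambdas/archive/${each.key}.zip"

  # Explicit dependency: the binary must be built before it is archived.
  depends_on = [null_resource.this]
}

resource "aws_lambda_function" "this" {
  for_each = local.lambdas

  function_name = each.key
  handler       = each.key
  runtime       = "go1.x"
  role          = aws_iam_role.this[each.key].arn # assumed IAM role resource

  filename = data.archive_file.this[each.key].output_path
  # Re-deploy only when the archive's contents actually change.
  source_code_hash = data.archive_file.this[each.key].output_base64sha256
}
```

Because the function reads its filename and source_code_hash from data.archive_file, the implicit dependency on null_resource comes along for free.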

And that’s it! With this, I can deploy new lambdas while writing minimal boilerplate. I prefer this to something like Serverless Framework, as Serverless stacks become quite slow when a stack maintains multiple lambdas: CloudFormation deploys all lambdas every time, regardless of whether the code has changed. Everything has its tradeoffs, that said. This “framework” I’ve put together isn’t as well documented as Serverless Framework, and the Terraform plan can become quite verbose if a lambda has multiple Go files, because the concatenated base64 string becomes quite long. The tradeoff there is faster deployment speed, as only lambdas with code changes are deployed.

Devops Engineer and Golang Enthusiast
