Rust macros taking care of even more Lambda boilerplate

Sam Van Overmeire
5 min read · Jan 30, 2024
Bing Image Creator, with prompt: “The Rust crab sitting behind his desk thinking about writing a blog about Rust macros for AWS Lambda”

In our last blog post, we showed how a procedural macro can generate basic AWS Lambda boilerplate. But the macro had a serious drawback: we were unable to add any initialization code to the — now fully generated — main function! And initializing things like AWS SDK clients to communicate with other services is a common requirement. The alternative, creating clients in the handler, is far from ideal, as the handler code runs every time your Lambda is invoked. Moving setup to the handler means wasting time and money on every invocation, just to avoid a little boilerplate. Hardly a great trade-off.

So, in this post, we will go one step further with our macro. If our handler has additional (AWS SDK) client arguments, we automatically initialize them in the generated main. For example, the following bit of code should compile.

// imports

#[derive(Deserialize, Debug)]
struct Request {
    name: String,
}

#[derive(Serialize)]
struct Response {
    pet: String,
}

// no main, no initialization of the AWS SDK

#[lambda_setup] // <- our macro
// when a handler has a DynamoDB and SQS client as arguments...
async fn func_with_dynamodb(
    dynamo_client: &Client,
    sqs_client: &aws_sdk_sqs::Client,
    event: LambdaEvent<Request>,
) -> Result<Response, Error> {
    let (event, _) = event.into_parts();

    // we should be able to use the DynamoDB client...
    let response = dynamo_client.get_item()
        .table_name("...")
        .key("name".to_string(), AttributeValue::S(event.name))
        .send().await.unwrap();

    let pet = response.item.unwrap().get("pet").unwrap().as_s().unwrap().to_owned();

    // and the SQS client
    sqs_client.send_message()
        .queue_url("your-sqs-queue")
        .message_body(pet.to_string())
        .send().await.unwrap();

    Ok(Response {
        pet,
    })
}

As a reminder: in the previous blog post, I mentioned that in many real use cases, you probably don’t need a macro like this one. A bit of boilerplate is acceptable when it helps you retain the flexibility to customize your main function, adding any (initialization) code you require. But if you have a lot of functions without anything interesting in their main, or if you typically only need AWS clients with their sensible defaults, this macro could come in handy. As before, we won’t go into the basics of procedural macros. For that — shameless plug — see my book.

The project setup is the same as before, except that we now have proc-macro2 as a dependency, as we will be using its version of the TokenStream struct. All our code is still located in lib.rs, now split up into multiple functions. The total length of the macro is around 129 lines compared to 29 in our previous, much more naive, implementation.

We start with the entry point. As before, we have an attribute macro that receives the annotated function as a token stream. It parses the input into ItemFn and retrieves the function name.

// syn and quote imports

#[proc_macro_attribute]
pub fn lambda_setup(_: TokenStream, input: TokenStream) -> TokenStream {
    // parse the input and get the function name
    let item: ItemFn = parse(input.clone()).expect("a function as input");
    let function_name = &item.sig.ident;

    // and now for some new stuff, like getting the parameters...
    let event_param: Vec<(&Pat, &TypePath)> = item.sig.inputs.iter().filter_map(is_event_param).collect();
    let client_param: Vec<(&Pat, TypePath)> = item.sig.inputs.iter().filter_map(is_client_param).collect();

    // and making sure we initialize the clients and pass them to the handler
    let run = create_run(function_name, event_param, client_param);

    let tracing = setup_tracing();

    quote!(
        #[tokio::main]
        async fn main() -> Result<(), lambda_runtime::Error> {
            #tracing
            #run

            Ok(())
        }

        #item
    ).into()
}

Below the familiar parsing code, the new stuff begins. We take the function’s signature, loop over its parameters, and look for the Lambda ‘event’ parameter and any ‘client’ parameters, i.e. SDK clients that the handler wants to use. Some additional custom functions help us do this; others generate our output. setup_tracing is simple, generating the CloudWatch tracing code that we had in the naive/previous version. create_run generates a stream of tokens representing the client initializations as well as the lambda_runtime::run call. Using quote, we bring these outputs together: we create the main function, add the tokens generated by the two previous functions, and re-add the original function so it does not get lost!
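Neither setup_tracing nor the parameter-detection helpers are shown in full in this post. To give a rough idea, here is a minimal sketch of how they could look; it assumes the tracing setup from the standard Lambda Rust template and treats any argument passed by reference whose type path ends in Client as an SDK client (the real helpers may differ):

fn setup_tracing() -> proc_macro2::TokenStream {
    quote!(
        tracing_subscriber::fmt()
            .with_max_level(tracing::Level::INFO)
            // disable printing the name of the module in every log line
            .with_target(false)
            // CloudWatch adds the ingestion time already
            .without_time()
            .init();
    )
}

fn is_client_param(input: &FnArg) -> Option<(&Pat, TypePath)> {
    if let FnArg::Typed(pat_type) = input {
        // clients are passed in by reference, e.g. `&Client` or `&aws_sdk_sqs::Client`
        if let Type::Reference(reference) = pat_type.ty.as_ref() {
            if let Type::Path(type_path) = reference.elem.as_ref() {
                if type_path.path.segments.last().map(|s| s.ident == "Client").unwrap_or(false) {
                    return Some((pat_type.pat.as_ref(), type_path.clone()));
                }
            }
        }
    }
    None
}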

Let’s go over create_run in some more detail:

fn create_run(function_name: &Ident, event_param: Vec<(&Pat, &TypePath)>, clients: Vec<(&Pat, TypePath)>) -> proc_macro2::TokenStream {
    if !clients.is_empty() {
        // we have clients! we need to initialize them, and pass them to the handler as params
        let client_initialisations = create_clients(&clients);

        let function_client_params = create_client_params(&clients);
        let event = event_param.first().expect("an event to be present");
        let event_name = event.0;
        let event_type_path = event.1;

        quote!(
            #client_initialisations

            lambda_runtime::run(lambda_runtime::service_fn(|#event_name: #event_type_path| #function_name(#function_client_params, #event_name)))
                .await?;
        )
    } else {
        // no clients? just do what we did before, only passing in the function name
        quote!(
            lambda_runtime::run(lambda_runtime::service_fn(#function_name)).await?;
        )
    }
}

As you can see, there are two possible scenarios: either we have clients and an event, or we only have an event. The latter case is simple, producing the same code as our naive implementation. But if we do have clients, we call additional functions to generate code for their initialization. We also need to generate the correct parameters for our handler, passing in not only the LambdaEvent but also references to the clients.

create_clients is relatively easy to explain. It loops over the clients, generating for each one a call to new with the AWS config passed in as a parameter, which is generally how AWS SDK clients are constructed. Once we have that client setup code, we generate the configuration that our clients require and return all the generated code.

fn create_clients(clients: &Vec<(&Pat, TypePath)>) -> proc_macro2::TokenStream {
    let client_initialisations = clients.iter()
        .map(|client| {
            let client_name = client.0;
            let client_type = &client.1;
            quote! {
                // now initialize the client with the right name, type, and config
                // e.g. this becomes `let dynamo_client = Client::new(&config);`
                let #client_name = #client_type::new(&config);
            }
        });
    // generate the configuration, else we can't actually pass it to the clients
    // and generate the client initializations
    quote!(
        let config = aws_config::load_defaults(aws_config::BehaviorVersion::latest()).await;
        #(#client_initialisations)*
    )
}

In create_client_params there is one complexity worth mentioning: we need to pass the client names to the handler as comma-separated parameters, e.g. our_handler(&first_client, &second_client, event_name). A reduce will do:

fn create_client_params(clients: &Vec<(&Pat, TypePath)>) -> proc_macro2::TokenStream {
    clients.into_iter()
        .map(|c| {
            let client_name = c.0;
            quote!(&#client_name)
        })
        .reduce(|acc, curr| {
            quote!(#acc, #curr)
        })
        .expect("this function to only run when we have at least one client")
}
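As an aside, quote’s repetition syntax with a separator could achieve the same result without the reduce. A sketch, using the same inputs:

fn create_client_params(clients: &Vec<(&Pat, TypePath)>) -> proc_macro2::TokenStream {
    let client_refs = clients.iter().map(|(client_name, _)| quote!(&#client_name));
    // #(...),* expands the iterator, inserting a comma between the elements
    quote!(#(#client_refs),*)
}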

With that, we’ve seen about 60% of the code required to make this macro work. (If anyone is interested in the rest, let me know and I will put it on GitHub.) Applied to the example at the start of this post, we get a working Lambda that retrieves an item from DynamoDB and posts that item to SQS while also returning it to the caller. We did not have to worry about the boilerplate main or client setup, focussing instead on more interesting things.
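To make that concrete: for the handler at the top of this post, the generated main should expand to roughly the following (a sketch, assuming the setup_tracing output shown earlier; the real expansion is unformatted token soup):

#[tokio::main]
async fn main() -> Result<(), lambda_runtime::Error> {
    // generated by setup_tracing
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        .with_target(false)
        .without_time()
        .init();

    // generated by create_clients
    let config = aws_config::load_defaults(aws_config::BehaviorVersion::latest()).await;
    let dynamo_client = Client::new(&config);
    let sqs_client = aws_sdk_sqs::Client::new(&config);

    // generated by create_run: wire the clients and the event into the handler
    lambda_runtime::run(lambda_runtime::service_fn(|event: LambdaEvent<Request>| {
        func_with_dynamodb(&dynamo_client, &sqs_client, event)
    }))
    .await?;

    Ok(())
}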

There are still many possible ways in which we could improve this macro. Error handling is limited, and we don’t have any tests. A flexible order of handler parameters would also be preferable (currently, we expect the event to always be in the last position). We could detect the type of Err returned by the handler and make sure it matches the one returned by main. And perhaps we could allow users to add some custom initialization logic when they need to create something besides AWS clients. But our current implementation is enough for now.

Thanks for reading!
