Aren’t you already calling SQS:SendMessage in a for loop for each item anyway?
Preston Tamkin

I’m calling SQS:SendMessageBatch, so I’m making one request for every ten items.
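
For illustration, the batching looks roughly like this with boto3 (the queue URL and message shape here are placeholders, not the actual ones from the post):

```python
import json

import boto3

sqs = boto3.client("sqs")

# Placeholder queue URL for illustration.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"


def enqueue(items):
    """Send items to SQS in batches of 10, the SendMessageBatch maximum."""
    for start in range(0, len(items), 10):
        chunk = items[start:start + 10]
        entries = [
            {"Id": str(i), "MessageBody": json.dumps(item)}
            for i, item in enumerate(chunk)
        ]
        response = sqs.send_message_batch(QueueUrl=QUEUE_URL, Entries=entries)
        # SendMessageBatch can partially fail, so check the Failed list too.
        for failure in response.get("Failed", []):
            print("failed to enqueue:", failure)
```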

I did just review the Lambda docs again and see that throttling is handled separately from errors. I had thought the handling was the same, which may explain my misunderstanding. I think I see what you're saying: invoking the function asynchronously already queues the events in case of throttling, so to some extent I'm recreating that behavior with this method.

It says it will retry for up to six hours, but my visibility during that time would be limited, right? If I needed to know when the process was complete and I knew I was bumping up against the concurrency limit while the operation was running, I'd essentially have to wait at least six hours after my last function was invoked and then check the DLQ, right?
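
To make "check the DLQ" concrete, it could be as simple as polling the queue's approximate depth after the retry window; a minimal sketch, with a placeholder DLQ URL:

```python
import boto3

sqs = boto3.client("sqs")

# Placeholder dead-letter queue URL for illustration.
DLQ_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-dlq"


def dlq_depth():
    """Return the approximate number of messages sitting in the DLQ."""
    attrs = sqs.get_queue_attributes(
        QueueUrl=DLQ_URL,
        AttributeNames=["ApproximateNumberOfMessages"],
    )
    return int(attrs["Attributes"]["ApproximateNumberOfMessages"])
```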

Well, but now I’m seeing this in the FAQ regarding what happens after you reach the limit:

Lambda functions being invoked asynchronously can absorb reasonable bursts of traffic for approximately 15–30 minutes, after which incoming events will be rejected as throttled.

I guess I'm not totally sure how I'd deal with that, besides essentially throttling my own S3 list / invoke loop, probably with some kind of exponential-backoff logic. It'd be doable, but a little tedious to test and verify.
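
Roughly what that backoff might look like, as a sketch only (the function name is a placeholder and the retry numbers are arbitrary):

```python
import json
import time

import boto3
from botocore.exceptions import ClientError

lambda_client = boto3.client("lambda")

# Placeholder function name for illustration.
FUNCTION_NAME = "process-item"


def invoke_with_backoff(payload, max_attempts=6):
    """Invoke the function asynchronously, backing off when Lambda throttles us."""
    delay = 1.0
    for attempt in range(max_attempts):
        try:
            lambda_client.invoke(
                FunctionName=FUNCTION_NAME,
                InvocationType="Event",  # asynchronous invoke
                Payload=json.dumps(payload),
            )
            return
        except ClientError as err:
            # Only retry on throttling; re-raise anything else.
            if err.response["Error"]["Code"] != "TooManyRequestsException":
                raise
            time.sleep(delay)
            delay *= 2  # exponential backoff
    raise RuntimeError("still throttled after %d attempts" % max_attempts)
```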

Won’t the DLQ story be the same in both scenarios? Not sure that has any bearing on which method is used.

I don’t think so. I don’t need a DLQ on the Lambda function in the method proposed in the post, because a failed or throttled invocation doesn’t lose anything. I’d put the DLQ on my SQS queue instead, which means it would only contain messages that were legitimately pulled but not processed after X attempts.
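
That "not processed after X attempts" behavior is just the queue's redrive policy; a minimal sketch, with placeholder queue URL, DLQ ARN, and receive count:

```python
import json

import boto3

sqs = boto3.client("sqs")

# Placeholder queue URL and DLQ ARN for illustration.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"
DLQ_ARN = "arn:aws:sqs:us-east-1:123456789012:example-dlq"

# After a message has been received 5 times without being deleted,
# SQS moves it to the dead-letter queue.
sqs.set_queue_attributes(
    QueueUrl=QUEUE_URL,
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": DLQ_ARN,
            "maxReceiveCount": "5",
        })
    },
)
```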

It’d still ultimately be a queue of messages that need to be handled, so it’s the same in that sense, but in my post’s scenario I already have all the logic and code in place for invoking a function many times to process and drain a queue. Without it, I’d still have to deal with the possibility of a queue holding a lot of items that need processing, and I’d need a way to manage that.
