Each S3 list operation probably takes less than a second and returns 1,000 results. If I’m understanding what you’re asking, I’d need to make 1 request for the list, then 1,000 separate HTTPS requests to invoke the function, even with the “Event” (asynchronous) invocation type. Then another request for the next 1,000 S3 items, and so on until it’s finished. That’d be doable, although ultimately I’d still need a DLQ on the Lambda function in case of errors (and I’d almost certainly be throttled at some point), so I’d find myself processing a queue anyway. Let me know if I’m misunderstanding your suggestion, though.
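For concreteness, that list-then-invoke loop might look something like this sketch (assuming boto3; `invoke_for_each_key` is my own name, and the clients are passed in rather than created inline so it’s easy to stub):

```python
import json

def invoke_for_each_key(s3, lam, bucket, function_name):
    """List every object in `bucket` and fire one async Lambda
    invocation per key. Returns the number of invocations made."""
    count = 0
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):  # up to 1,000 keys per page
        for obj in page.get("Contents", []):
            # InvocationType="Event" queues the invocation asynchronously;
            # failed events can be routed to the function's DLQ.
            lam.invoke(
                FunctionName=function_name,
                InvocationType="Event",
                Payload=json.dumps({"key": obj["Key"]}).encode(),
            )
            count += 1
    return count
```

In real use you’d call it as `invoke_for_each_key(boto3.client("s3"), boto3.client("lambda"), bucket, fn)`, which makes the request pattern above explicit: one list call, then one HTTPS `invoke` per key.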
The downside you’re describing can be mitigated using a method implemented in the proof of concept linked above. Effectively, I create SNS topics with subscriptions just like above, one for every power of 2 up to the maximum count I want to support. Then there’s a wrapper Lambda function in front that you call with the number of invocations and the payload you want.
To do 999 invocations, my application would make a synchronous call to that wrapper Lambda function, which itself makes a handful of SNS publishes, one to each topic whose power-of-2 subscription count is needed (512, 256, 128, 64, 32, 4, 2, 1 == 999). That does get slightly away from “single API call == 1,000 invocations,” but it still serves the same purpose, because the number of publishes grows only logarithmically with the invocation count. Making 100,000 invocations actually takes two fewer SNS publishes (65536, 32768, 1024, 512, 128, 32 == 100000).
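The decomposition is just the set bits of the count. A sketch of what the wrapper does (function names are mine, and the one-topic-per-power layout is my reading of the PoC description):

```python
def publishes_for(n):
    """The set bits of n, largest first: one SNS publish per bit,
    each to the topic that fans out to that many subscriptions."""
    return [1 << b for b in range(n.bit_length() - 1, -1, -1) if (n >> b) & 1]

def fan_out(publish, count):
    """Call `publish(size)` once per power-of-2 topic needed to reach
    `count` total invocations. In the real wrapper, `publish` would be
    something like: lambda size: sns.publish(TopicArn=topics[size], ...)"""
    for size in publishes_for(count):
        publish(size)
```

So 999 costs 8 publishes and 100,000 costs 6, which is why the publish count barely moves as the invocation count grows by two orders of magnitude.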
That makes it super easy to coordinate with a Step Functions state machine and a single “controller” Lambda function that checks the number of items in the queue and triggers the hundreds or thousands of invocations as necessary.
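The controller state might look something like this sketch (assuming the queue is SQS and reading its `ApproximateNumberOfMessages` attribute; the cap, names, and the wrapper’s payload shape are placeholders, and clients are parameters so the function is stubbable):

```python
import json

MAX_WORKERS = 1000  # placeholder cap per controller pass

def controller(sqs, lam, queue_url, wrapper_fn):
    """Check the queue depth, then ask the fan-out wrapper for that
    many worker invocations (capped). Returns what it did, so the
    state machine can decide whether to loop again."""
    depth = int(sqs.get_queue_attributes(
        QueueUrl=queue_url,
        AttributeNames=["ApproximateNumberOfMessages"],
    )["Attributes"]["ApproximateNumberOfMessages"])
    n = min(depth, MAX_WORKERS)
    if n:
        # Synchronous call to the wrapper, which does the SNS publishes.
        lam.invoke(
            FunctionName=wrapper_fn,
            InvocationType="RequestResponse",
            Payload=json.dumps({"count": n}).encode(),
        )
    return {"queued": depth, "invoked": n}
```

A Choice state can then loop back to this controller while `queued` is still above zero, so the state machine drains the queue in capped waves.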