Hi @Wentzler, Charlotte,
Thanks for reaching out to Microsoft Q&A.
By design, each partition hands you one batch at a time and won’t fetch the next batch until your function completes and checkpoints the current one. There isn’t a setting you can flip in the trigger to overlap batches on the same partition.
Here are some approaches you can consider:
- In your host.json (for v1 you can still set these under the `eventHub` section), adjust `maxBatchSize` and `prefetchCount` so you pull smaller (or larger) batches into memory, avoiding long-running single invocations:
```json
{
  "version": "2.0",
  "extensions": {
    "eventHubs": {
      "maxBatchSize": 100,
      "prefetchCount": 300,
      "batchCheckpointFrequency": 1
    }
  }
}
```
- The only way to truly parallelize at the trigger level is more partitions. With more partitions, Azure Functions can spin up more hosts and distribute partitions across them. If you can't change the Event Hub's partition count, you could add consumer groups plus separate function instances (though you'll have to handle de-duplication or idempotency).
- Python v1 doesn't support dynamic concurrency or the more advanced `host.json` tuning that later versions do. Migrating to v2 on a Premium plan gives you dynamic concurrency, which can automatically pull multiple batches per partition as your app scales.
- If you need full control over parallelism, drop the built-in trigger and write your own async receiver with the `azure-eventhub` library. Then you can spin up coroutines over messages or batches however you like.
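For reference, on the Functions v1 runtime the same settings sit at the top level of host.json under `eventHub` rather than under `extensions.eventHubs` (the values below just carry over the v2 example):

```json
{
  "eventHub": {
    "maxBatchSize": 100,
    "prefetchCount": 300,
    "batchCheckpointFrequency": 1
  }
}
```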
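If you go the extra-consumer-groups route, here's a minimal sketch of the de-duplication idea: track the IDs of events you've already handled and skip repeats. The class and field names are illustrative; in production you'd back `seen` with a durable store (e.g. Redis or a database) rather than process memory, since duplicates can arrive on a different host.

```python
class IdempotentProcessor:
    """Illustrative in-memory de-duplicator (not the Functions trigger API)."""

    def __init__(self) -> None:
        self.seen: set[str] = set()       # IDs already handled
        self.processed: list[str] = []    # payloads actually processed

    def handle(self, event_id: str, payload: str) -> bool:
        # Skip events we've already seen -- e.g. the same event delivered
        # through two consumer groups, or redelivered after a restart.
        if event_id in self.seen:
            return False
        self.seen.add(event_id)
        self.processed.append(payload)
        return True


p = IdempotentProcessor()
p.handle("evt-1", "a")       # processed
p.handle("evt-1", "a")       # duplicate, skipped
print(len(p.processed))      # 1
```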
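For the custom-receiver option, the sketch below separates the fan-out logic (runnable as-is) from the `azure-eventhub` wiring, which is left commented out because it needs a live namespace; the connection string and hub name are placeholders, and `process_event` stands in for your real handler.

```python
import asyncio


# Hypothetical per-event handler -- substitute your real processing logic.
async def process_event(body: str) -> str:
    await asyncio.sleep(0)  # stand-in for real async I/O (DB call, HTTP, ...)
    return body.upper()


async def process_batch(bodies: list[str]) -> list[str]:
    # Fan out: one coroutine per event, all awaited concurrently -- the
    # intra-partition parallelism the built-in trigger doesn't give you.
    return await asyncio.gather(*(process_event(b) for b in bodies))


# Wiring into azure-eventhub (sketch only; requires a real connection string):
#
# from azure.eventhub.aio import EventHubConsumerClient
#
# async def on_event_batch(partition_context, events):
#     await process_batch([e.body_as_str() for e in events])
#     await partition_context.update_checkpoint()  # checkpoint after the batch
#
# client = EventHubConsumerClient.from_connection_string(
#     conn_str="<CONNECTION-STRING>",
#     consumer_group="$Default",
#     eventhub_name="<EVENT-HUB-NAME>",
# )
#
# async def main():
#     async with client:
#         await client.receive_batch(on_event_batch=on_event_batch,
#                                    max_batch_size=100)

if __name__ == "__main__":
    print(asyncio.run(process_batch(["hello", "world"])))  # ['HELLO', 'WORLD']
```

Because you own the receive loop here, you also own checkpointing, so decide explicitly whether to checkpoint per batch (at-least-once, simpler) or per event (finer-grained recovery, more storage calls).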
Hope this helps!
If the resolution was helpful, kindly take a moment to click Yes under "Was this answer helpful?". And if you have any further queries, do let us know.