In the .NET SQL SDK team we are introducing a feature called Bulk Support into the V3 SDK, and I’d love to hear your feedback about it.
Bulk refers to scenarios that require a high degree of throughput: you need to push a large volume of data into the service, and you need to do it as fast as possible.
Are you doing a nightly dump of 2 million files into your Cosmos DB container? That is bulk.
Are you processing a stream of data that comes in batches of 100 thousand items you need to update? …
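To give a flavor of what Bulk mode looks like, here is a minimal sketch using the V3 .NET SDK. The endpoint, key, database and container names, and the `MyItem` type are placeholders:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

CosmosClient client = new CosmosClient("<endpoint>", "<key>",
    new CosmosClientOptions { AllowBulkExecution = true }); // opt in to Bulk mode

Container container = client.GetContainer("mydb", "mycontainer");

// Dispatch all operations concurrently; with Bulk enabled, the SDK groups
// them into batches per partition behind the scenes
List<Task> concurrentTasks = new List<Task>();
foreach (MyItem item in itemsToInsert)
{
    concurrentTasks.Add(container.CreateItemAsync(item, new PartitionKey(item.PartitionKey)));
}
await Task.WhenAll(concurrentTasks);
```

The key point is that you don't batch anything yourself: you just fire the individual operations concurrently and let the SDK do the grouping.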
This time we are revisiting the Azure Cosmos DB Trigger and exposing a very interesting feature, the Remaining Work Estimator.
You are currently using the Azure Cosmos DB Trigger and your Functions are firing on changes, but you would like to understand whether your Functions are “lagging” behind the incoming changes.
If the rate at which data is ingested into the database is consistently higher than the rate at which you process it (because your Function does too many things or is slow at what it does), then it will start to lag behind. But how can you tell? Read on!
This new recipe aims at both reducing your operational costs and letting you monitor and analyze the health of Azure Functions that use the Azure Cosmos DB Trigger.
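As a taste of what measuring the lag can look like, the V3 .NET SDK exposes a Change Feed Estimator that reads the leases and reports the estimated pending work. A sketch, where the processor name and container names are placeholders that must match your trigger's configuration:

```csharp
using System;
using Microsoft.Azure.Cosmos;

Container monitored = client.GetContainer("mydb", "mycontainer");
Container leases = client.GetContainer("mydb", "leases");

// The processor name must match the one used by the consumer of the change feed
ChangeFeedEstimator estimator = monitored.GetChangeFeedEstimator("myProcessor", leases);

using FeedIterator<ChangeFeedProcessorState> iterator = estimator.GetCurrentStateIterator();
while (iterator.HasMoreResults)
{
    FeedResponse<ChangeFeedProcessorState> states = await iterator.ReadNextAsync();
    foreach (ChangeFeedProcessorState state in states)
    {
        // EstimatedLag is the approximate number of changes not yet processed
        Console.WriteLine($"Lease {state.LeaseToken}: estimated lag {state.EstimatedLag}");
    }
}
```

If the estimated lag keeps growing over time, your Functions are falling behind the ingestion rate.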
If you are currently using (or plan to use) the Cosmos DB Trigger, through the official documentation or any of my previous posts, you will notice that it requires a second collection to store the Leases (or state). Maintaining this second collection (even if it’s on the minimum 400 RU) translates to a cost.
Cosmos DB has a feature that allows you to provision throughput at the Database layer and share…
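In the V3 .NET SDK, provisioning shared throughput at the database level is a single call; containers created inside that database without their own dedicated throughput draw from the shared pool. A sketch with placeholder names:

```csharp
using Microsoft.Azure.Cosmos;

// Provision 400 RU/s at the database level
DatabaseResponse db = await client.CreateDatabaseIfNotExistsAsync("shared-db", throughput: 400);

// This container has no dedicated throughput, so it shares the database's 400 RU/s
Container leases = await db.Database.CreateContainerIfNotExistsAsync("leases", "/id");
```

This is what makes it possible to host the leases collection without paying for a dedicated 400 RU allocation.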
This next Azure Cosmos DB + Azure Functions recipe will show you how to define a preferred connection region, and take advantage of the new Multi-master capabilities.
You have a globally distributed Functions architecture and you want to take advantage of Azure Cosmos DB’s globally distributed endpoints by connecting Functions to the closest database replica to optimize latency.
If you found this post you are probably building a globally distributed Azure Functions…
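One way this surfaces in the trigger's attribute, assuming the Functions v2 Cosmos DB extension (database, container, and region names are placeholders; list your closest regions first):

```csharp
[FunctionName("MultiRegionTrigger")]
public static void Run(
    [CosmosDBTrigger(
        databaseName: "mydb",
        collectionName: "mycontainer",
        ConnectionStringSetting = "CosmosDBConnection",
        LeaseCollectionName = "leases",
        PreferredLocations = "West Europe,East US", // ordered by proximity to this Function
        UseMultipleWriteLocations = true)]          // required for multi-master accounts
    IReadOnlyList<Document> changes,
    ILogger log)
{
    log.LogInformation($"Processed {changes.Count} changes");
}
```

Each regional Functions deployment would carry its own `PreferredLocations` value, typically sourced from an app setting rather than hardcoded.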
You want to use the abstraction provided by the Azure Functions’ Cosmos DB bindings but you need to customize the ConnectionMode and Protocol due to a particular circumstance or to improve performance.
The bindings hide from you the complexity of creating…
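One common way to take back that control, assuming the V2 .NET SDK's DocumentClient, is to build your own client with an explicit ConnectionPolicy instead of relying on the binding's defaults (endpoint and key are placeholders):

```csharp
using System;
using Microsoft.Azure.Documents.Client;

// A shared static client so all Function executions reuse the same connections
private static readonly DocumentClient client = new DocumentClient(
    new Uri("<endpoint>"), "<key>",
    new ConnectionPolicy
    {
        ConnectionMode = ConnectionMode.Direct, // talk directly to the partitions
        ConnectionProtocol = Protocol.Tcp       // TCP instead of HTTPS gateway
    });
```

Direct/TCP generally yields lower latency than the default Gateway mode at the cost of a wider port range.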
In this next Azure Cosmos DB + Azure Functions recipe, we’ll see how to create an event-driven Functions architecture with multiple Functions that independently track changes on an Azure Cosmos DB collection.
You need to perform multiple, independent processes when a change occurs in your Azure Cosmos DB collection and you want to implement each process as a separate Azure Function.
As we did in the live migration recipe, we’ll use the Azure Cosmos DB Trigger as the starting point for each Azure Function. Following the trigger’s documentation, we know that we need one Lease collection when using the…
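The trick that lets several Functions share one leases collection while keeping independent change-tracking state is the `LeaseCollectionPrefix` property. A sketch with placeholder names:

```csharp
// Both Functions watch the same collection and share the same "leases"
// collection, but each keeps its own state thanks to LeaseCollectionPrefix
[FunctionName("FunctionA")]
public static void RunA(
    [CosmosDBTrigger("mydb", "mycontainer",
        ConnectionStringSetting = "CosmosDBConnection",
        LeaseCollectionName = "leases",
        LeaseCollectionPrefix = "functionA")] IReadOnlyList<Document> changes,
    ILogger log)
{
    // process changes independently of FunctionB
}

[FunctionName("FunctionB")]
public static void RunB(
    [CosmosDBTrigger("mydb", "mycontainer",
        ConnectionStringSetting = "CosmosDBConnection",
        LeaseCollectionName = "leases",
        LeaseCollectionPrefix = "functionB")] IReadOnlyList<Document> changes,
    ILogger log)
{
    // process the same changes, on its own schedule
}
```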
Security is an extremely important topic and we’ll address a way to secure access to the Azure Cosmos DB keys in this next Azure Cosmos DB + Azure Functions Cookbook recipe.
You want to be able to interact with Cosmos DB using the DocumentClient but you don’t want to provide direct access to the required Key for security purposes.
While you can store the Cosmos DB Key in the Azure Function’s Application Settings (like we saw in the first recipe), you’d be exposing that information to whoever is creating and managing the Azure Function.
For this next Azure Cosmos DB + Azure Functions Cookbook recipe, we’ll be adding a new ingredient to the mix, Azure Search, Azure’s Search-as-a-Service offering.
You are using (or want to provide your application with) Azure Search’s great search capabilities and want to push any change in your database to a Search Index as soon as it happens.
While Azure Search does have an Indexer for Azure Cosmos DB, this recipe will allow you to customize and pre-process the data before it reaches Azure Search and also change the model from pull to push.
Like in our previous post, we’ll…
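A sketch of the push model, assuming the Microsoft.Azure.Search SDK; the service name, index name, admin key, and the `SearchItem` index model are all placeholders:

```csharp
[FunctionName("PushToSearchIndex")]
public static async Task Run(
    [CosmosDBTrigger("mydb", "mycontainer",
        ConnectionStringSetting = "CosmosDBConnection",
        LeaseCollectionName = "leases")] IReadOnlyList<Document> changes,
    ILogger log)
{
    var indexClient = new SearchIndexClient("<search-service>", "<index-name>",
        new SearchCredentials("<admin-key>"));

    // Pre-process each change before it reaches the index (this is where the
    // recipe's customization happens); SearchItem is a hypothetical index model
    var documents = changes.Select(d => new SearchItem
    {
        Id = d.Id,
        Content = d.GetPropertyValue<string>("content")
    });

    await indexClient.Documents.IndexAsync(IndexBatch.MergeOrUpload(documents));
}
```

`MergeOrUpload` updates existing index documents or creates them if they don't exist, which matches the "push every change" semantics.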
Continuing with this series of short but sweet recipes you can use with Azure Cosmos DB and Azure Functions, I’ll focus this time on real-time data transfer scenarios based on the use of the Azure Cosmos DB Change Feed.
You need to do a live data migration between two Azure Cosmos DB accounts in real time, or you want to keep two Azure Cosmos DB collections in sync to offload data analysis and post-processing.
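The shape of the live migration Function can be sketched as a Cosmos DB Trigger on the source account feeding an output binding pointed at the destination account (all names and connection settings are placeholders):

```csharp
[FunctionName("LiveMigration")]
public static async Task Run(
    [CosmosDBTrigger("sourcedb", "source",
        ConnectionStringSetting = "SourceCosmosDB",
        LeaseCollectionName = "leases",
        CreateLeaseCollectionIfNotExists = true)] IReadOnlyList<Document> changes,
    [CosmosDB("destdb", "destination",
        ConnectionStringSetting = "DestinationCosmosDB")] IAsyncCollector<Document> output)
{
    foreach (Document doc in changes)
    {
        await output.AddAsync(doc); // upsert each change into the destination
    }
}
```

Because the trigger is driven by the Change Feed, inserts and updates flow across as they happen, with no polling code on your side.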
This new post in this series of quick and easy recipes you can use with Azure Cosmos DB and Azure Functions is dedicated to collecting and persisting data.
You want to persist data that comes from an event source (messages in a queue, notifications in an Event Grid, an HTTP payload, a Service Bus message, and so on) into your Azure Cosmos DB account in a simple, scalable, and efficient way.
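Taking the HTTP payload case as an example, the pattern boils down to an input trigger plus the Cosmos DB output binding; a sketch with placeholder names:

```csharp
[FunctionName("PersistPayload")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
    [CosmosDB("mydb", "mycontainer",
        ConnectionStringSetting = "CosmosDBConnection")] IAsyncCollector<object> items)
{
    string body = await new StreamReader(req.Body).ReadToEndAsync();

    // Hand the document to the output binding; the Functions runtime
    // takes care of saving it into the Cosmos DB container
    await items.AddAsync(JsonConvert.DeserializeObject(body));
    return new OkResult();
}
```

Swapping the `HttpTrigger` for a Queue, Event Grid, or Service Bus trigger keeps the same output-binding shape.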
Software Engineer @ Microsoft Azure Cosmos DB. Your knowledge is as valuable as your ability to share it.