Azure Batch supports high-performance batch processing, and I have been using it for a few months to test heavy data workloads. The architectural strength of the Batch service shows when pools need to be scaled up on demand. You can set a pool to autoscale, but I prefer manual scaling when the workloads are low priority and infrequent.
In my experience, a scale-up operation within Azure Batch takes 2-10 minutes. The scale-up performs a set of tasks to allocate an already warmed-up machine from a pool of Azure resources within the region.
I usually prefer an event-based solution to trigger these scale-up and scale-down operations, as they are easy to configure. This can be done simply with an Azure Function, triggered either by an event or on a polling schedule.
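As a minimal sketch of that idea, here is a timer-triggered Azure Function (in-process model) that fires on a schedule. The function name and CRON expression are assumptions for illustration; the resize itself uses the three steps shown below.

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class BatchPoolScaler
{
    // Fires at 07:00 UTC on weekdays; adjust the CRON expression to your workload.
    [FunctionName("ScaleOutBatchPool")]
    public static Task Run([TimerTrigger("0 0 7 * * 1-5")] TimerInfo timer, ILogger log)
    {
        log.LogInformation("Scale-out triggered.");
        // Resize the pool here, following Steps 1-3 below.
        return Task.CompletedTask;
    }
}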
The following C#.NET code shows how easy it is to resize a pool.
Step 1: Create a BatchClient
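A minimal sketch, assuming shared-key authentication; the account URL, name, and key are placeholders you would normally read from configuration or Key Vault.

using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Auth;

// Shared-key credentials for the Batch account (placeholder values).
var credentials = new BatchSharedKeyCredentials(
    "https://mybatchaccount.westeurope.batch.azure.com", // account URL
    "mybatchaccount",                                    // account name
    "<account-key>");                                    // account key

// BatchClient is IDisposable; dispose it once the work is done.
BatchClient batchClient = BatchClient.Open(credentials);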
Step 2: Read Pool by ID
A pool can be accessed via its id through the batch client.
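For example, continuing from the batchClient in Step 1 and using a placeholder pool id:

// Fetch the pool's current state from the Batch service by its id.
CloudPool pool = await batchClient.PoolOperations.GetPoolAsync("heavy-data-pool");
Console.WriteLine($"Current dedicated nodes: {pool.CurrentDedicatedComputeNodes}");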
Step 3: Resize Pool
In a pool you can pick between dedicated nodes and low-priority nodes. Dedicated nodes guarantee allocation and are often required by regulatory and compliance processes. Low-priority nodes are less expensive because they come from surplus Azure capacity, which also means they can be preempted when that capacity is needed elsewhere.
If you are already optimising cost with manual scale-in and scale-out, I would recommend going with dedicated nodes.
The code sample is as follows:
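A minimal sketch, assuming the batchClient and pool from the previous steps; the node counts and timeout are illustrative.

// Request four dedicated nodes and no low-priority nodes. The service
// performs the resize asynchronously; poll pool.AllocationState if you
// need to know when it has finished.
await pool.ResizeAsync(
    targetDedicatedComputeNodes: 4,
    targetLowPriorityComputeNodes: 0,
    resizeTimeout: TimeSpan.FromMinutes(15));

Scaling in works the same way with lower targets; you can also pass a ComputeNodeDeallocationOption (for example, TaskCompletion) so that running tasks finish before their nodes are removed.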
So, are you ready to try out Batch scaling?