Commit Data in Batches Using Start and End Transaction Java Actions or a Task Queue

Hi, I have a requirement to change data for millions of records, once or twice on a weekly basis. I tried using batch processing of 500 objects at a time, as shown in the screenshot below. But after a few minutes the memory usage rises sharply in a short span of time and the application restarts. I came to know that all commit actions take effect at the end of the microflow, which may be why all 1 million records are committed at once, causing the application to restart. I also came to know that using start/end transactions or a task queue may solve the issue. Could someone please help me with the questions below?

1) If we want to use start and end transactions, where should we place them in the microflow below?

2) I am not sure how the task queue works. Could someone please explain how it functions and how to implement it? Any documentation would be helpful.

3) What is the difference between the above two approaches, and which one is best for my use case?

Thanks in advance.
2 answers

Hello Thmanampudi Lokesh Parameswara Reddy,


Which action suits your use case best depends on several things, mainly timeliness and whether you need synchronous or asynchronous results.

But what you can try and test for yourself is the following:

Scenario 1: Start/End Transaction

Add a Start Transaction between your merge and your retrieve, so each batch starts in a fresh transaction

Add an End Transaction after the commit of the list


Scenario 2: Start/End Transaction with Task Queue

Keep the start and end transactions above, but move the change inside the loop into a sub-microflow and execute it in a task queue

Remove the commit from your main flow and add it to the sub-microflow


Scenario 3: Task Queue without Start/End Transaction

Remove the start and end transactions from your flow


Scenario 4: Create an entity with attributes Offset and Amount

Retrieve the number of records that you need to change and count them.

Then divide the count by the amount; this is the number of batches you need.

Then create one object per batch, and make sure you increase the Offset on each object.

Then, in the sub-microflow that you run via the task queue, put the retrieve and a loop, similar to the microflow you have now, but with the change that it reads the Amount and Offset from the object when retrieving its batch.
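The arithmetic behind Scenario 4 can be sketched in plain Java (the class and method names are illustrative, not part of the Mendix API): given the record count and the batch size ("amount"), compute how many batch objects to create and which offset each one carries.

```java
// Plain-Java sketch of the Scenario 4 arithmetic: how many batch objects
// to create, and which offset each object should hold. Names are made up
// for illustration only.
public class BatchPlanner {

    // Ceiling division: a final partial batch still needs its own object.
    public static int numberOfBatches(long count, int amount) {
        return (int) ((count + amount - 1) / amount);
    }

    // Batch 0 starts at offset 0, batch 1 at offset = amount, and so on.
    public static long offsetForBatch(int batchIndex, int amount) {
        return (long) batchIndex * amount;
    }

    public static void main(String[] args) {
        long count = 1_000_000L;
        int amount = 500;
        System.out.println(numberOfBatches(count, amount));  // 2000
        System.out.println(offsetForBatch(3, amount));       // 1500
    }
}
```

Each created object would then carry its own Offset and the shared Amount, so the task-queue sub-microflow can retrieve exactly its own slice of the records.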


These are some ways to see what helps performance and prevents your app from crashing.


Alternatively, you could also use a scheduled event so that you do not process everything at once, but with pauses in between.


Hope this helps,


Good luck!



To expand on Jelle's answer, your microflow is not performing true batch processing. The reason is that in Mendix, a (database) commit only takes effect when the microflow completes. This means that all pending commits are retained in memory until the microflow finishes. Consequently, if you structure your flow in this manner, you risk running out of memory as these uncommitted objects accumulate, causing your application to crash and restart.
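To make this concrete, here is a rough, untested sketch of how a Java action could commit in explicit transactions using the Mendix Runtime API, so each batch is flushed to the database instead of piling up until the microflow ends. The entity name `MyModule.MyEntity` and attribute `Processed` are placeholders, and this is a sketch of the idea, not a drop-in implementation:

```java
import java.util.List;
import com.mendix.core.Core;
import com.mendix.systemwideinterfaces.core.IContext;
import com.mendix.systemwideinterfaces.core.IMendixObject;

// Untested sketch: commit records batch by batch inside explicit
// transactions. "MyModule.MyEntity" and "Processed" are placeholders.
public void commitInBatches(IContext context) {
    final int batchSize = 500;
    int offset = 0;
    List<IMendixObject> batch;
    do {
        context.startTransaction();              // fresh transaction per batch
        batch = Core.createXPathQuery("//MyModule.MyEntity")
                .setAmount(batchSize)
                .setOffset(offset)
                .execute(context);
        for (IMendixObject record : batch) {
            record.setValue(context, "Processed", true);
        }
        Core.commit(context, batch);
        context.endTransaction();                // flush; this batch's memory is released
        offset += batchSize;
    } while (!batch.isEmpty());
}
```

Without the explicit start/end transaction pair, every committed object would stay pending until the surrounding microflow finishes, which is exactly the memory build-up described above.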

For more details on how commits work in Mendix and to understand the implications for memory usage, refer to Chapter 5 of this guide:

Committing Objects in Mendix