Batch microflow running out of memory

Our application runs a batch MF to update more than 100,000 objects. The MF follows the guidelines on how to write a batch, which means: using a batch limit and offset, and committing per batch size in the MF. But the server still runs out of memory. What am I doing wrong here? Is it because all the objects that will be committed are kept in memory and the commit is executed at the end of the MF?
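For reference, the limit/offset batching pattern the question describes looks roughly like this as a plain-Java sketch. This is not the Mendix API: `fetchPage` is a hypothetical stand-in for a paged database retrieve, and the point is only that each page is fetched, changed, and committed separately.

```java
import java.util.List;
import java.util.function.BiFunction;

public class BatchRunner {
    // Generic sketch of limit/offset batching (not the Mendix API):
    // fetchPage(offset, limit) stands in for a paged database retrieve.
    public static int run(int batchSize,
                          BiFunction<Integer, Integer, List<Integer>> fetchPage) {
        int offset = 0;
        int processed = 0;
        while (true) {
            List<Integer> page = fetchPage.apply(offset, batchSize);
            if (page.isEmpty()) {
                break; // no more objects to process
            }
            // ... change each object in `page`, then commit this page only ...
            processed += page.size();
            offset += batchSize;
        }
        return processed;
    }
}
```

As the answers below point out, this loop alone is not enough in Mendix: without explicit transaction boundaries, all the committed pages still belong to one database transaction.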
4 answers

Even if you do it as suggested, you will run out of memory, because it is still a single database transaction. You can do it as suggested by Johan, or you can use the Process Queue to perform the batches. The Process Queue provides a little more control and error handling than the StartTransaction and EndTransaction Java actions.


Hi Olivier,

You can use the EndTransaction and StartTransaction actions from CommunityCommons (at the end of each batch) to save all data to the database, release the memory, and start a new transaction.
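To illustrate why this helps, here is a toy model (plain Java, not CommunityCommons itself) of the effect: committed objects stay retained until their transaction ends, so ending the transaction after each batch caps how much is held in memory at once. The class names and the `retained` list are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;

public class TransactionPerBatch {
    // Toy model of the effect of CommunityCommons EndTransaction/StartTransaction:
    // committed objects stay retained until the transaction ends, so ending the
    // transaction after each batch caps how much is held at once.
    private final List<String> retained = new ArrayList<>();
    private int peakRetained = 0;

    void commit(String obj) {
        retained.add(obj);
        peakRetained = Math.max(peakRetained, retained.size());
    }

    // stand-in for EndTransaction followed by StartTransaction
    void endAndStartTransaction() {
        retained.clear(); // the finished transaction's objects can be released
    }

    public int processAll(int total, int batchSize) {
        for (int i = 0; i < total; i++) {
            commit("object-" + i);
            if ((i + 1) % batchSize == 0) {
                endAndStartTransaction();
            }
        }
        endAndStartTransaction(); // close the final (possibly partial) batch
        return peakRetained;
    }
}
```

With the per-batch transaction end, the peak number of retained objects equals the batch size; without it, the peak would equal the total number of objects, which is exactly the out-of-memory scenario from the question.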

See for more information.

Hope this helps!


This is a known problem. You could use the Process Queue module to handle this scenario by creating smaller packages of the objects you want to handle and then adding those packages to the queue. Each of these packages, or queued processes, is handled as its own transaction. This should help you handle the issue.


To add to all the answers given above.

Even with StartTransaction and EndTransaction you will eventually run into memory issues when you have lots of data to process, because Mendix somehow still ‘remembers’ everything in the context of the microflow you are running.

You can easily check whether this is the case by doing the following: create a DateTime variable called $Start with [%CurrentDateTime%] at the start of your batch, and create a log message at the end of your batch with toString(secondsBetween([%CurrentDateTime%],$Start)). Then check your logs while the batches are running. If the processing time keeps going up exponentially, you're eventually going to run into a memory error (aside from the really long processing times at the end, which are already a pain).
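For anyone more comfortable reading Java than Mendix expressions, the check above amounts to the following (a sketch using `java.time`; the log message text is just an example, not what Mendix produces):

```java
import java.time.Duration;
import java.time.Instant;

public class BatchTimer {
    // Plain-Java analogue of the Mendix expression
    // toString(secondsBetween([%CurrentDateTime%], $Start)):
    // elapsed whole seconds between the batch start and now.
    public static long secondsBetween(Instant start, Instant now) {
        return Duration.between(start, now).getSeconds();
    }

    // Example of the log line you might emit at the end of each batch.
    public static String logMessage(Instant start, Instant now) {
        return "Batch finished after " + secondsBetween(start, now) + " s";
    }
}
```

If these per-batch durations keep climbing across the run, the microflow context is accumulating state, which is the symptom described above.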

The easiest way to solve it would be to just use the Process Queue module, because the process queue runs every job in its own context. So just create batch jobs which will get picked up and processed, and you're done.
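The reason the per-job context matters can be sketched as follows. This is a toy model, not the Process Queue module's actual code: `Job` and `drain` are hypothetical names, and the `List<Object>` stands in for a microflow context.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class ProcessQueueSketch {
    // Toy model of the Process Queue idea: every job gets a fresh context,
    // so nothing retained by one job carries over to the next.
    interface Job {
        int run(List<Object> freshContext); // returns number of objects handled
    }

    public static int drain(Queue<Job> queue) {
        int handled = 0;
        while (!queue.isEmpty()) {
            Job job = queue.poll();
            List<Object> context = new ArrayList<>(); // new context per job
            handled += job.run(context);
            // `context` goes out of scope here, so the previous job's
            // objects are reclaimable before the next job starts
        }
        return handled;
    }
}
```

Contrast this with one long-running microflow, where a single context lives for the whole run and keeps ‘remembering’ everything, as described in the previous paragraphs.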

I also see you have 5 ‘batches’ running in this flow. If you can separate these 5 into their own flows, you might not run into the memory issue.

There's also the option to create your own Java action, taking the Process Queue's Java action as a base, and use it to run a microflow in its own ‘context’. (I don't think there is one in CommunityCommons doing this for you, but I'm not entirely sure about the newer versions of the CC function library.)