How to handle big DataSets in Microflows

Hi everyone, I'm having a problem when trying to update a large list in a loop. Everything goes well, but when the microflow reaches its end point it takes ages to close and then throws an error (always a Java heap space / out-of-memory error). I don't have any commits or variable creations inside the loop; I add each created object to a list and commit that list outside the loop. I tried processing the loop in batches (of 5000, 1000 and 100) and even clearing the list after each batch to help the memory, but the error stays the same.

I also need to do a retrieve that uses the contains() function, and that step takes too much time. I know an index is not used when filtering with a function, so how could I do this differently without increasing the time? Thank you!
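In case it helps, this is roughly what the flow does, written out as a Java action sketch. The entity and attribute names are made up, and the Core calls are just how I would translate the microflow activities (including the retrieve overload with amount and offset), so treat it as an illustration rather than my exact code:

```java
// Rough Java-action equivalent of the microflow; all module, entity and
// attribute names below are made up for illustration.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import com.mendix.core.Core;
import com.mendix.systemwideinterfaces.core.IContext;
import com.mendix.systemwideinterfaces.core.IMendixObject;

public class ProcessLargeList {

    public static void process(IContext context) throws Exception {
        final int batchSize = 1000;
        int offset = 0;

        while (true) {
            // Retrieve one batch; the contains() filter is the part that cannot use an index.
            List<IMendixObject> batch = Core.retrieveXPathQuery(context,
                    "//MyModule.SourceRecord[contains(Description, 'keyword')]",
                    batchSize, offset, Map.of("Description", "ASC"));
            if (batch.isEmpty()) {
                break;
            }

            List<IMendixObject> toCommit = new ArrayList<>();
            for (IMendixObject source : batch) {
                // No commits or variable creations inside the loop; just build up the list.
                IMendixObject target = Core.instantiate(context, "MyModule.TargetRecord");
                target.setValue(context, "Description", source.getValue(context, "Description"));
                toCommit.add(target);
            }

            // Commit outside the inner loop, then clear the list before the next batch.
            Core.commit(context, toCommit);
            toCommit.clear();

            offset += batchSize;
        }
    }
}
```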
asked
3 answers

Hi Luis, 

When you say you tried batching, I assume you took 100 objects, processed them, made your commit, cleared your list and then started processing the next 100.

Does it still happen if you process only 100 objects and commit them?

Also, clearing your list does not necessarily mean you are freeing up the memory.

Can you also check the size of the objects you are processing? For example, if you are processing 100 objects that have a lot of attributes, and those attributes are mostly strings, the size of each object increases, which also has a negative impact on memory.
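To make the batching idea concrete, here is the pattern I mean as a plain Java sketch (all names are illustrative, not Mendix API): commit each slice as you go and only then move on to the next one:

```java
// Batching pattern: commit each slice and let go of it before the next one.
// Plain Java sketch; Record, processRecord(...) and commit(...) are placeholders.
import java.util.ArrayList;
import java.util.List;

public class BatchCommitExample {

    static final int BATCH_SIZE = 100;

    static void processInBatches(List<Record> records) {
        List<Record> toCommit = new ArrayList<>();

        for (Record record : records) {
            toCommit.add(processRecord(record));

            if (toCommit.size() == BATCH_SIZE) {
                commit(toCommit);  // commit this slice now, not everything at the very end
                toCommit.clear();  // clearing only helps if nothing else still references these objects
            }
        }

        if (!toCommit.isEmpty()) {
            commit(toCommit);      // commit the final partial batch
        }
    }

    // Placeholders standing in for the real processing and persistence steps.
    static class Record {
        final String payload;
        Record(String payload) { this.payload = payload; }
    }

    static Record processRecord(Record r) { return r; }

    static void commit(List<Record> batch) { /* persist the batch */ }
}
```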

 

answered

The problem you might face is that both the runtime and the database require memory.

The database keeps the changed information in case a rollback is needed, so the already processed objects are kept in memory by the database.

A possible solution is to do the processing in a sub-microflow and then, in the calling microflow, set the error handling on the sub-microflow call to 'Custom without rollback'. That way the objects are released from memory sooner once they are committed.
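As a rough Java-action sketch of the same idea (the transaction calls reflect my understanding of the Core API and may differ per Mendix version; in a pure microflow setup the 'Custom without rollback' error handler on the sub-microflow call is the equivalent):

```java
// Commit each batch in its own transaction so objects that are already
// committed no longer sit in the rollback bookkeeping of one huge transaction.
import java.util.List;

import com.mendix.core.Core;
import com.mendix.systemwideinterfaces.core.IContext;
import com.mendix.systemwideinterfaces.core.IMendixObject;

public class CommitPerBatch {

    public static void commitInOwnTransaction(IContext context, List<IMendixObject> batch) throws Exception {
        context.startTransaction();
        try {
            Core.commit(context, batch);
            context.endTransaction();        // ends the transaction for this batch only
        } catch (Exception e) {
            context.rollbackTransAction();   // undoes only this batch, earlier batches stay committed
            throw e;
        }
    }
}
```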

answered

 

Can you also check whether you have any refresh events inside the loop?

A possible solution is to use a sub-microflow with batching.

answered