Out of memory when fetching files in batches

0
When fetching 10,000 files from a database using the Database connector, I commit the files in batches of 10 (about 1 MB per 10 files). Even if I end and start the transaction every 100 files, the heap space keeps increasing (see attached graph). It seems objects remain in memory after each cycle. Does anyone have a solution for this other than increasing the node memory?
asked
4 answers
2

I implemented the same microflow with both the Database connector and the 'Fetch file from URL' action. When fetching from a URL it works as expected, but with the Database connector there is a memory leak.

answered
1

I wonder what would happen if you use this workaround:

Create an entity where you store the limit and offset values. Create a scheduled event that processes only the file documents for the current limit and offset, and update the limit and offset after each run (see the sketch below). If you run this scheduled event every x minutes (depending on how long one batch takes), it should hopefully finish. It is of course not as efficient, but I do wonder whether the memory is released again this way. If so, a bug report would be the way to go.
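To make the idea concrete, here is a minimal sketch of one scheduled-event run in plain JDBC terms. The table name, column names, and method are assumptions for illustration only; in Mendix the limit and offset would live on a persistent entity, the query would go through the Database connector, and this logic would sit in the microflow called by the scheduled event.

```java
import java.sql.*;

public class FileBatchJob {

    /**
     * Processes one window of rows and returns the offset the next run should use.
     * Because every scheduled-event run is its own microflow (and transaction),
     * the objects it creates can be garbage collected as soon as the run ends.
     */
    public static long runBatch(Connection conn, long offset, int limit) throws SQLException {
        // Hypothetical source table; in the real app this is the query configured
        // in the Database connector.
        String sql = "SELECT id, name, content FROM legacy_files ORDER BY id LIMIT ? OFFSET ?";
        int processed = 0;
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, limit);
            ps.setLong(2, offset);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    byte[] content = rs.getBytes("content");
                    // ... create and commit the FileDocument for this row here ...
                    processed++;
                }
            }
        }
        // The caller stores this value back on the batch entity so the next run
        // continues where this one stopped; when processed < limit, we are done.
        return offset + processed;
    }
}
```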

Regards,

Ronald

answered
1

Behaviour reproduced in a separate project with a local PostgreSQL DB. Filed a ticket with Mendix to have a look at the issue.

answered
0

Thanks, Ronald, for your reply. If I run it in separate microflows with different offsets, the memory is indeed released by the garbage collector, so it looks like a memory leak in the file document commit or the Database connector.

I'm trying to reproduce the behavior in an isolated project, so I can send it to Mendix.

Regards,

Menno 

answered