Hi,
I have had a ticket open with Mendix on a similar case: an API returns a maximum of 1,000 objects per call, but we needed to download 2.5 million records. The microflow uses the batch pattern, with an EndTransaction after each commit of a batch list. Even so, processing slowed down and memory usage kept climbing.
I changed it to process a maximum of 5 batches (configurable) and then start a fresh instance via the task queue, so one run does at most 5 batches. The result is good performance. The right number of batches per run depends on how many NPEs you retrieve per entry: a single entry can contain many child objects.
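As a rough illustration of that idea (the real implementation is a microflow that re-enqueues itself via a task queue, not Java), here is a minimal sketch. `fetchPage`, `processBatch`, and `enqueueNextRun` are hypothetical placeholders for the REST call plus import mapping, the per-batch commit, and the task-queue call:

```java
import java.util.List;

/**
 * Illustrative sketch only: process at most N batches per run,
 * then hand off to a fresh run so this instance can release its memory.
 */
public class PagedImportRun {

    private static final int MAX_BATCHES_PER_RUN = 5; // made configurable in the real setup
    private static final int PAGE_SIZE = 1000;        // the API returns at most 1,000 records per call

    private final ImportGateway gateway;

    public PagedImportRun(ImportGateway gateway) {
        this.gateway = gateway;
    }

    /** Processes at most MAX_BATCHES_PER_RUN pages, then re-enqueues a fresh run. */
    public void run(long offset) {
        for (int batch = 0; batch < MAX_BATCHES_PER_RUN; batch++) {
            List<Object> page = gateway.fetchPage(offset, PAGE_SIZE);
            if (page.isEmpty()) {
                return;                    // all records imported: stop, do not re-enqueue
            }
            gateway.processBatch(page);    // map NPEs, commit persistent entities, discard the batch
            offset += page.size();
        }
        // Batch budget for this run is used up: schedule a fresh instance
        // so this run ends and its memory/cache is released.
        gateway.enqueueNextRun(offset);
    }

    /** Hypothetical seam for the activities that live in the microflow / REST module. */
    public interface ImportGateway {
        List<Object> fetchPage(long offset, int limit);
        void processBatch(List<Object> page);
        void enqueueNextRun(long nextOffset);
    }
}
```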
Also, for the NPEs in your REST import mapping, do not use a regular association from the child to the parent; use a reference set with owner "Both", or a reference set from the parent (owner) to the child. Note that the new REST client cannot handle that. As Mendix R&D told me, retrieving the list of NPEs under a parent is not very efficient over a regular association, because the parent does not hold the child GUIDs the way it does with a reference set.
What also helps is deleting the NPEs at the end of each batch iteration, after processing them into your persistent entities.
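As a small sketch of that cleanup step, assuming a Java action using the Mendix Core API (in a microflow this is simply a Delete activity on the batch list):

```java
import java.util.List;

import com.mendix.core.Core;
import com.mendix.systemwideinterfaces.core.IContext;
import com.mendix.systemwideinterfaces.core.IMendixObject;

public class BatchCleanup {

    /**
     * After the persistent entities for one batch have been committed,
     * delete the non-persistable import objects so they do not keep
     * accumulating in memory across iterations.
     */
    public static void deleteProcessedNpes(IContext context, List<IMendixObject> npeBatch) {
        Core.delete(context, npeBatch);
    }
}
```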
You should consider using the batch pattern described in a Mendix learning path. Committing a large number of objects at once can cause heap size and cache issues, which is likely what you are experiencing. Instead, commit the records in smaller chunks. Based on my experience, around 3,000 objects per commit tends to work very efficiently while keeping memory usage under control.
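To make the chunked-commit idea concrete, here is a rough Java sketch assuming the Mendix Core API (Core.commit plus endTransaction/startTransaction on the context). In a microflow you get the same effect with a Commit and an EndTransaction every few thousand objects:

```java
import java.util.List;

import com.mendix.core.Core;
import com.mendix.systemwideinterfaces.core.IContext;
import com.mendix.systemwideinterfaces.core.IMendixObject;

public class ChunkedCommit {

    private static final int CHUNK_SIZE = 3000; // roughly 3,000 per commit worked well in my experience

    /** Commits the objects in chunks instead of all at once, ending the transaction per chunk. */
    public static void commitInChunks(IContext context, List<IMendixObject> objects) {
        for (int start = 0; start < objects.size(); start += CHUNK_SIZE) {
            int end = Math.min(start + CHUNK_SIZE, objects.size());
            Core.commit(context, objects.subList(start, end));

            // Close the current transaction so the work for this chunk is flushed and
            // its cache can be released, then open a fresh one so the rest of the flow
            // still runs inside a transaction.
            context.endTransaction();
            context.startTransaction();
        }
    }
}
```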