We've researched the project: this is not a bug, but is caused by the use of the in-memory database HSQLDB. Thanks for the extensive report!
Update 1
Some feedback from our investigation:
* We see memory usage growing when testing with HSQLDB, but not with PostgreSQL. The reason is that HSQLDB runs as a library inside the Mendix runtime and is an in-memory database, meaning all data created in HSQLDB is stored in memory.
* Garbage collection is always done at the end of a request, not at the end of a batch commit or at the end of a submicroflow. This has not changed compared to Mendix 6, so it should not cause the problem. We are, however, optimizing garbage collection, so starting with Mendix 7.15 (planned) memory usage should be better than in Mendix 6.
* The planned fix for 7.14, as reported by Chris, addresses a different issue where too much client state was returned to the client at the end of a request; it is not related to the behavior reported above. We also checked the Community Commons executeAsyncInQueue implementation. This action runs your microflow in a new system context. When a context is closed, garbage collection is executed, so all memory used by the microflow is cleaned up. We can confirm that this also happens for the project described here (see the Java sketch at the end of this update).
So to summarize: at this moment we see no memory leak, and all memory is cleaned up at the end of the request. We do see a need to increase how often garbage collection is done, and this is being implemented.
@Christiaan, can you validate this by running your test project on PostgreSQL to see whether memory is freed?
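For reference, here is a minimal Java sketch of the pattern described above: running a microflow in its own system context, so the objects it creates are tracked by that context and can be reclaimed once its work is finished. This assumes the Mendix 7 Core Java API (Core.createSystemContext, Core.execute, and the IContext transaction methods); the microflow name is hypothetical and this is not the actual executeAsyncInQueue source.

```java
// Sketch: run a heavy microflow in a fresh system context, so the Mendix
// objects it creates are tracked by that context and can be garbage
// collected once the context's work is finished.
// Assumes the Mendix 7 Core Java API; "MyModule.ProcessLargeDataSet" is a
// hypothetical microflow name.
import com.mendix.core.Core;
import com.mendix.core.CoreException;
import com.mendix.systemwideinterfaces.core.IContext;

public class RunInOwnContext {

    public static void processInIsolation() throws CoreException {
        // A fresh system context, separate from the calling request's context.
        IContext systemContext = Core.createSystemContext();

        systemContext.startTransaction();
        try {
            // Execute the heavy microflow inside the isolated context.
            Core.execute(systemContext, "MyModule.ProcessLargeDataSet");
            systemContext.endTransaction();
        } catch (CoreException e) {
            // rollbackTransAction is the API's own spelling.
            systemContext.rollbackTransAction();
            throw e;
        }
        // When nothing references this context anymore, the runtime can clean
        // up the Mendix objects that were created in it.
    }
}
```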
Update 2
The issue described by Chris below is not related. It is also not related to garbage collection of Mendix objects. This has not changed between Mendix 6 and 7: garbage collection of Mendix objects is always done at the end of a request. Using submicroflows or batches does not change this behavior (again, Mendix 6 and 7 are the same).
The issue reported by Chris is caused by the fact that all newly created objects were sent to the client, which causes a peak in memory usage at the end of the request. The algorithm that determines which new objects are required by the client will be optimized in Mendix 7.14, which solves the issue experienced by Chris.
Great research!
Seeing as this is a combination of the Mendix runtime and actions from the Community Commons module, I suggest you create a support ticket for this issue and include the test project plus the steps to reproduce it.
That way we can assess which part the issue originates from and try to fix that.
Edit: And about the Start/End transaction: I believe StartTransaction just creates a nested transaction in that call, which you remove at the end, but you still end up with the original transaction (which is not ended). I did some experimenting with this yesterday as well.
From the Community Commons Documentation:
StartTransaction - Start a transaction, if a transaction is already started for this context, a savepoint will be added.
EndTransaction - Commit the transaction, this will end this transaction or remove a save point from the queue if the transaction is nested.
I suggest you try adding only an 'End transaction' call, without the Start call. This should finish the existing 'top-level' transaction for each separate microflow, and the runtime still seems to create a new one automagically for the next microflow call.
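To illustrate that suggestion, here is a rough Java sketch of the "EndTransaction only" pattern: no explicit StartTransaction, just commit the context's current top-level transaction every N objects so the runtime starts a fresh one for the next chunk. It assumes the Mendix 7 Java API; the entity name and the sizes are hypothetical, and this is only a sketch of the pattern, not a claim that it solves the memory issue discussed further down the thread.

```java
// Sketch of the "only EndTransaction" pattern: no explicit StartTransaction,
// just finish the runtime's current (top-level) transaction every N objects;
// the runtime opens a new implicit transaction on the next commit.
// Assumes the Mendix 7 Java API; "MyModule.Order" and the sizes are hypothetical.
import com.mendix.core.Core;
import com.mendix.core.CoreException;
import com.mendix.systemwideinterfaces.core.IContext;
import com.mendix.systemwideinterfaces.core.IMendixObject;

public class EndTransactionPerChunk {

    public static void createInChunks(IContext context) throws CoreException {
        for (int i = 1; i <= 100_000; i++) {
            IMendixObject order = Core.instantiate(context, "MyModule.Order");
            Core.commit(context, order);

            // Every 1,000 objects, commit (end) the current transaction so the
            // state held by that transaction does not keep growing.
            if (i % 1_000 == 0 && context.isInTransaction()) {
                context.endTransaction();
            }
        }
    }
}
```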
This issue is more general and will cause 'GC overhead limit exceeded' errors in batch and list processing. I reported this bug to Mendix a couple of weeks ago and they admitted that garbage collection in Mx 7 has changed compared to Mx 6.
In short: garbage collection is done at the end of the microflow and not during it. So all data collected in lists will stay in memory until the microflow has ended, and is not released when the list is cleared or recreated.
Adding EndTransaction calls won't help.
My intermediate solution is to use good old 'executeMicroflowInBatches'
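For completeness, a rough Java sketch of what 'executeMicroflowInBatches'-style processing boils down to: page through the data set and process each page in its own system context and transaction, so that only one batch is held in memory at a time. It assumes the Mendix 7 Core Java API; the entity, XPath, microflow name, and batch size are hypothetical, and this is not the module's actual implementation.

```java
// Sketch: page through a large data set and process each page in its own
// system context and transaction, so memory used by one batch can be
// reclaimed before the next batch starts.
// Assumes the Mendix 7 Core Java API; the entity "MyModule.Order", the
// microflow "MyModule.ProcessOrderBatch", and the batch size are hypothetical.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import com.mendix.core.Core;
import com.mendix.core.CoreException;
import com.mendix.systemwideinterfaces.core.IContext;
import com.mendix.systemwideinterfaces.core.IMendixObject;

public class BatchProcessor {

    public static void run() throws CoreException {
        final int batchSize = 1_000;
        int offset = 0;

        // Stable ordering so offset-based paging is deterministic (assumes the
        // entity stores its createdDate system attribute).
        Map<String, String> sort = new HashMap<>();
        sort.put("createdDate", "ASC");

        while (true) {
            // A fresh system context per batch: whatever the batch creates is
            // released when this context's work is done.
            IContext batchContext = Core.createSystemContext();

            List<IMendixObject> batch = Core.retrieveXPathQuery(
                    batchContext, "//MyModule.Order", batchSize, offset, sort);
            if (batch.isEmpty()) {
                break;
            }

            batchContext.startTransaction();
            try {
                // Hypothetical microflow taking a list of MyModule.Order.
                Core.execute(batchContext, "MyModule.ProcessOrderBatch", batch);
                batchContext.endTransaction();
            } catch (CoreException e) {
                batchContext.rollbackTransAction();
                throw e;
            }

            offset += batchSize;
        }
    }
}
```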
Question: how many of you process more than 50,000 objects in a microflow? Here at FlowFabric we have at least 20 projects that do.
Hi,
This post is really important, even for someone reading it 4 years later.
Thanks for sharing all the steps in such detail.