When/how to commit when retrieving large datasets from REST webservices

I've seen multiple solutions on the forum, and I hope you can validate my reasoning and preferred solution. I'm not familiar with performance best practices in Mendix, so I'd like to check my thinking. I'm retrieving a large dataset of around 100,000 records from a REST web service in JSON format. The web service supports skip-and-take, so I can loop through the dataset and retrieve batches of 1,000 (or 100) records at a time. I need to know what's best when committing the data to the Mendix database. I believe I have multiple options:

1. Commit the retrieved list each time I loop through another skip-and-take batch.
2. Commit the retrieved list each time I loop through another skip-and-take batch, and perform the EndTransaction Java action from CommunityCommons.
3. Keep adding the retrieved list to a list containing all records, and only commit once I'm done retrieving the entire dataset.

I imagine I need to find a balance between disk I/O and memory, and my impression is that the second option is the most effective. Can you please help me find the right approach for this retrieve?
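To make the skip-and-take loop concrete, here is a minimal, self-contained Java sketch of the retrieval pattern I have in mind. `fetchPage` and the list of committed batches are stand-ins for the actual REST call and the Mendix commit, not real Mendix APIs:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

public class SkipTakeImport {

    /**
     * Retrieves all records in pages of `take` and commits each page as it
     * arrives. Returns the total number of records committed.
     * `fetchPage` stands in for the REST call (?skip=&take=), and
     * `committedBatches` stands in for committing to the Mendix database.
     */
    static int importAll(BiFunction<Integer, Integer, List<String>> fetchPage,
                         int take,
                         List<List<String>> committedBatches) {
        int skip = 0;
        int total = 0;
        while (true) {
            List<String> page = fetchPage.apply(skip, take);
            if (page.isEmpty()) break;
            committedBatches.add(new ArrayList<>(page)); // commit this batch
            total += page.size();
            if (page.size() < take) break;               // last, partial page
            skip += take;
        }
        return total;
    }

    public static void main(String[] args) {
        // Simulate a source of 2,500 records retrieved in pages of 1,000.
        int sourceSize = 2500;
        BiFunction<Integer, Integer, List<String>> fake = (skip, take) -> {
            List<String> page = new ArrayList<>();
            for (int i = skip; i < Math.min(skip + take, sourceSize); i++) {
                page.add("rec" + i);
            }
            return page;
        };
        List<List<String>> batches = new ArrayList<>();
        int total = importAll(fake, 1000, batches);
        System.out.println(total + " records in " + batches.size() + " batches");
    }
}
```

The open question is only what happens at the "commit this batch" step: commit per batch, commit per batch plus EndTransaction, or accumulate everything and commit once at the end.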
1 answer

I would use option 2, because otherwise Mendix still keeps track of all changes made by that microflow. By using a StartTransaction and an EndTransaction you tell Mendix to create an extra savepoint and thus release some of its memory.
Another option, which you didn't mention, would be to commit in a separate database transaction. If you use that option, those records are already available to other processes while the import is still running.
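The memory effect described above can be illustrated with a toy model. `ChangeTracker` below is a stand-in for the runtime's change tracking, not the real Mendix runtime: without an explicit EndTransaction, every object touched in the microflow stays tracked until the end, while ending the transaction after each batch lets that bookkeeping be released:

```java
import java.util.ArrayList;
import java.util.List;

public class TransactionSketch {

    // Toy stand-in for the runtime's per-transaction change tracking.
    static class ChangeTracker {
        final List<String> tracked = new ArrayList<>();
        int peak = 0;

        void commit(List<String> batch) {
            tracked.addAll(batch);                 // changes remain tracked after commit
            peak = Math.max(peak, tracked.size());
        }

        void endTransaction() {
            tracked.clear();                       // savepoint reached: memory released
        }
    }

    /** Returns the peak number of tracked objects for a batched import. */
    static int peakTracked(int totalRecords, int batchSize, boolean endTransactionPerBatch) {
        ChangeTracker tracker = new ChangeTracker();
        for (int skip = 0; skip < totalRecords; skip += batchSize) {
            List<String> batch = new ArrayList<>();
            for (int i = skip; i < Math.min(skip + batchSize, totalRecords); i++) {
                batch.add("rec" + i);
            }
            tracker.commit(batch);                                    // option 1: commit only
            if (endTransactionPerBatch) tracker.endTransaction();     // option 2: also end the transaction
        }
        return tracker.peak;
    }

    public static void main(String[] args) {
        System.out.println("commit only, peak tracked: " + peakTracked(10000, 1000, false)); // 10000
        System.out.println("commit + EndTransaction:   " + peakTracked(10000, 1000, true));  // 1000
    }
}
```

In this model, committing alone still leaves all 10,000 objects tracked at the peak, while ending the transaction per batch caps the peak at the batch size of 1,000, which is the trade-off the answer describes.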