Memory management for SDK scripts

Do I need to manually manage memory when I write TypeScript scripts for the SDK? For example, when handling many records in Mendix I would use batching; do I need to do the same when I am using TypeScript? Looking at the example script provided by Mendix here, we see the following way to load all microflows:

client.platform().createOnlineWorkingCopy(project, new Revision(revNo, new Branch(project, branchName)))
    .then(workingCopy => workingCopy.model().allMicroflows().filter(mf => mf.qualifiedName === 'MyFirstModule.CreateTestCase'))
    .then(microflows => loadAllMicroflows(microflows))
    .then(microflows => microflows.forEach(microflowToText))
    .done(
        () => { console.log("Done."); },
        error => { console.log("Something went wrong:"); console.dir(error); }
    );

Let's assume I remove the filter operation, so that I get all microflows. To me, this code snippet suggests that all microflows are loaded before the forEach iterator starts. What happens if I have thousands and thousands of microflows?
asked
2 answers

In the example, loadAllMicroflows is used to load them all first. I would suggest wrapping the load-and-process step in something like processMicroflows. Using the 'when' library, you can use 'guard', which only executes a limited number of functions in parallel at a time. Basically, you take, for example, 10 microflows at a time, load them, process them and resolve the promise (and then the next 'batch' runs). I'll see if I can come up with an example of what I described above...
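
Here is a minimal sketch of the idea, assuming the 'when' library is installed (npm install when) and reusing the microflowToText helper from the script in the question. loadOne and processMicroflows are hypothetical names, and loadOne assumes the microflow interface's callback-based load() method:

const when = require("when");
const guard = require("when/guard");

// Hypothetical wrapper: turn the SDK's callback-based load() into a promise.
function loadOne(microflowInterface: any) {
    return when.promise((resolve: any) => microflowInterface.load(resolve));
}

// guard.n(10) allows at most 10 guarded calls in flight at once,
// so only about 10 microflows are being loaded at any one time.
const guardedLoad = guard(guard.n(10), loadOne);

function processMicroflows(microflowInterfaces: any[]) {
    // when.map resolves once every guarded load-and-process step has finished.
    return when.map(microflowInterfaces, (mf: any) =>
        guardedLoad(mf).then((loaded: any) => microflowToText(loaded)));
}

In the original chain, the two .then steps that load and process everything would then collapse into a single .then(microflows => processMicroflows(microflows)).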

answered

We currently have no API to unload documents from memory after they have been fetched from the server. Increasing the Node memory limit beyond the default of 1.5 GB can be done simply with:

node --max_old_space_size=4096 yourscript.js

Feel free to file a feature request for the ability to release documents from memory again.

answered