Better way of Handling Long Running Microflow

I have a table with 70,000 records.
I use a microflow to delete the records:
- step one: retrieve all records
- step two: delete all records

On my local PC it works fine.
When the app is deployed on our private cloud:
- the flow runs for a few minutes
- a pop-up is displayed: "There is an error - contact your system admin"

However, when I consult the table I notice that the delete is still executing.
After a few minutes all records are deleted.

We can all agree that a programmer should avoid long-running microflows that interact with the end user.

However, when a microflow runs too long, the behaviour of Mendix should be clear and consistent.
Today this is not the case:
- nginx decides that there is a timeout condition.
- the user gets a pop-up that does not help in any way.
- nothing is logged in the Mendix log, so the administrator cannot help the user.
- the microflow continues to run. This can have disastrous results: the user believes the process was interrupted, but in reality it ran to completion. An interrupted process would result in a rollback, so the user assumes he can simply click the button again. Since nothing was actually rolled back, clicking again executes the transaction a second time.

Suggestions
1. A microflow that runs too long should be aborted by the Mendix runtime. The abort should roll back the transaction and be logged, and the user should get a proper error message.

2. Asynchronously running microflows should be allowed to run for a very long time (e.g. a bulk update on millions of records). The programmer should be able to indicate the maximum allowed runtime. Preferably this should be a calculated value: as a programmer I know it takes x seconds to update one record and how many records I need to process, so I can calculate the expected maximum execution time, as in the example below.
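
For example (illustrative numbers of my own, not from the post): if one update takes roughly 5 ms and the job covers 1,000,000 records, the expected runtime is 1,000,000 × 0.005 s = 5,000 s; with a safety factor of 2 the programmer would declare a maximum allowed runtime of 10,000 s, just under three hours.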

1 answer

Hello Dan,

For deleting this many records you should use amount and offset on the retrieve and work in batches. Keep the offset at 0 and keep retrieving until the number of records returned is less than the amount; since each batch is deleted, the next batch is always at the front of the result set. Otherwise you will run out of memory most of the time. The same goes for updating millions of records: always do that with amount and offset as well. A sketch of this loop is shown below.
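
As an illustration (my own sketch, not part of Ronald's answer, which could equally be done with a custom range on a microflow retrieve): one way to implement this loop is a Java action. It assumes the Mendix Core API's Core.retrieveXPathQuery overload with amount/offset and Core.delete; exact signatures can differ between Mendix versions.

    import java.util.Collections;
    import java.util.List;

    import com.mendix.core.Core;
    import com.mendix.systemwideinterfaces.core.IContext;
    import com.mendix.systemwideinterfaces.core.IMendixObject;

    public class BatchDelete {

        private static final int BATCH_SIZE = 1000;

        // Delete all objects of the given entity in batches, keeping at most
        // BATCH_SIZE objects in memory at any time.
        public static void deleteAll(IContext context, String entityName) throws Exception {
            while (true) {
                // Offset stays 0: each pass deletes the retrieved batch, so the
                // next batch is again at the front of the remaining result set.
                List<IMendixObject> batch = Core.retrieveXPathQuery(
                        context, "//" + entityName,
                        BATCH_SIZE, 0, Collections.<String, String>emptyMap());
                if (batch.isEmpty())
                    break;
                Core.delete(context, batch);
                // Fewer results than requested means this was the last batch.
                if (batch.size() < BATCH_SIZE)
                    break;
            }
        }
    }

Called as, for example, deleteAll(getContext(), "MyModule.TableRecord") (a hypothetical entity name) from a background microflow, this avoids loading all 70,000 objects at once.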

Regards,

Ronald
