Hi Nils,
We’ll tackle the problem on three levels:
1. Reduce the number of iterations (algorithmic optimization)
2. Move heavy computation out of synchronous microflows (background job)
3. Use the right tool for the heavy lifting (database / Java)
1. Reduce the Number of Iterations
Right now, you’re brute-forcing 120,000 combinations via 7 nested loops. Most of these pairs probably fail early anyway.
Optimizations:
Pre-filter: Before entering the nested logic, apply simple conditions via XPath queries. Example:
Only compare objects with the same category/region.
Only consider rows where at least one column matches a threshold. → This cuts down the candidate pairs drastically.
Early exit in scoring: If the first 2–3 checks already fail, skip the rest instead of checking all 7.
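To make both ideas concrete, here is a minimal Java-action sketch; the entity MyModule.RowData and its attributes Category and Value are placeholder assumptions, not your actual model:

```java
// Minimal sketch, assuming a flattened entity MyModule.RowData with
// attributes Category (string) and Value (decimal); names are placeholders.
import java.math.BigDecimal;
import java.util.List;

import com.mendix.core.Core;
import com.mendix.core.CoreException;
import com.mendix.systemwideinterfaces.core.IContext;
import com.mendix.systemwideinterfaces.core.IMendixObject;

public final class MatchPrefilter {

    // Pre-filter: let the database return only rows in the same category,
    // instead of retrieving everything and filtering inside a loop.
    // (Escape/parameterize user input properly in real code.)
    public static List<IMendixObject> fetchCandidates(IContext ctx, String category)
            throws CoreException {
        String xpath = "//MyModule.RowData[Category = '" + category + "']";
        return Core.retrieveXPathQuery(ctx, xpath);
    }

    // Early exit: run the cheapest checks first and bail out on the first
    // failure, so most pairs never reach the expensive comparisons.
    public static boolean isMatch(IContext ctx, IMendixObject a, IMendixObject b) {
        String catA = a.getValue(ctx, "Category");
        String catB = b.getValue(ctx, "Category");
        if (catA == null || !catA.equals(catB)) return false; // check 1 (cheap)

        BigDecimal valA = a.getValue(ctx, "Value");
        BigDecimal valB = b.getValue(ctx, "Value");
        if (valA == null || valB == null) return false;       // check 2 (cheap)

        // Only the few surviving pairs pay for the expensive comparison.
        return valA.subtract(valB).abs().compareTo(BigDecimal.ONE) <= 0;
    }
}
```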
2. Run It as a Background Job
The main bottleneck is the 90s microflow timeout. Even with optimizations, 100k+ operations is too much for a synchronous user action.
Approach:
When the user triggers "Generate Matches":
Instead of running the full calculation, create MatchResult objects with status = Pending.
Hand these off to a Process Queue or run with CommunityCommons → executeMicroflowInBackground.
A background worker microflow picks them up and calculates results.
The user doesn’t wait — they can come back later and see results once ready.
This way, the computation runs outside the request-response cycle, avoiding the timeout.
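As one possible implementation of the hand-off: on Mendix 9+ the Task Queue Java API can enqueue the worker microflow. A sketch, where the microflow name, parameter name, and queue name are all placeholders (the queue itself must be defined in your app model):

```java
// Sketch only: "MatchModule.ProcessPendingMatch" and "MatchQueue" are
// assumed names; the task queue must exist in the app model.
import com.mendix.core.Core;
import com.mendix.systemwideinterfaces.core.IContext;
import com.mendix.systemwideinterfaces.core.IMendixObject;

public final class MatchEnqueuer {
    public static void enqueue(IContext ctx, IMendixObject pendingMatch) {
        Core.microflowCall("MatchModule.ProcessPendingMatch")
            .withParam("MatchResult", pendingMatch)   // the Pending object to process
            .executeInBackground(ctx, "MatchQueue");  // runs outside request-response
    }
}
```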
3. Use the Right Tool (Database + Java)
Mendix microflows are slow for deep loops. For your case, you want one of the following:
a) Database-Driven (XPath / OQL)
Store the matrix in a RowData entity (flattened).
Write XPath or OQL queries that filter rows directly in SQL.
Let the database engine do filtering and matching (much faster than looping in Mendix).
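A small illustration of the pattern, again assuming a flattened MyModule.RowData entity; the OQL query and threshold are placeholders, but the point is that one set-based statement replaces a whole microflow loop:

```java
// Sketch: push filtering/aggregation into the database via OQL instead of
// iterating objects in a microflow. Entity and attribute names are assumed.
import com.mendix.core.Core;
import com.mendix.core.CoreException;
import com.mendix.systemwideinterfaces.connectionbus.data.IDataRow;
import com.mendix.systemwideinterfaces.connectionbus.data.IDataTable;
import com.mendix.systemwideinterfaces.core.IContext;

public final class RowStats {
    // The database scans and aggregates in one pass; the microflow only
    // receives the tiny result set.
    public static long countAboveThreshold(IContext ctx) throws CoreException {
        String oql = "SELECT COUNT(*) AS Cnt FROM MyModule.RowData WHERE Value > 10";
        IDataTable table = Core.retrieveOQLDataTable(ctx, oql);
        IDataRow row = table.getRows().get(0);
        Number cnt = row.getValue(ctx, 0); // single aggregate column
        return cnt.longValue();
    }
}
```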
b) Java Action for Scoring
Offload the scoring algorithm to a custom Java Action.
Java runs these loops in milliseconds, while the equivalent microflow loops take seconds.
You can fetch the rows for ObjectA and ObjectB, compare them, and return a score.
Then update the MatchResult entity.
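A minimal sketch of such a scoring action; the Key attribute and the single equality check stand in for your seven real checks and weights:

```java
// Sketch of a scoring Java Action: tight in-memory loops instead of
// microflow iterations. "Key" is a placeholder attribute.
import java.util.List;

import com.mendix.systemwideinterfaces.core.IContext;
import com.mendix.systemwideinterfaces.core.IMendixObject;

public final class Scorer {
    // Compares the flattened rows of ObjectA against the rows of ObjectB
    // and returns an aggregate score for the microflow to store.
    public static int score(IContext ctx, List<IMendixObject> rowsA,
                            List<IMendixObject> rowsB) {
        int score = 0;
        for (IMendixObject a : rowsA) {
            for (IMendixObject b : rowsB) {
                String keyA = a.getValue(ctx, "Key");
                String keyB = b.getValue(ctx, "Key");
                if (keyA != null && keyA.equals(keyB)) {
                    score++; // replace with the real weighted checks
                }
            }
        }
        return score;
    }
}
```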
Alternatively, write a stored procedure on the database side, call it from a Java action, and return the results to the front end.
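A hedged sketch of that variant, assuming the Core.dataStorage().executeWithConnection API (available in recent Mendix versions) and a hypothetical calculate_matches() procedure that you would have to create in the database yourself:

```java
// Sketch only: calculate_matches() is a hypothetical procedure; creating it
// on the database side is up to you.
import java.sql.CallableStatement;
import java.sql.SQLException;

import com.mendix.core.Core;

public final class ProcRunner {
    public static void runMatchProcedure() {
        Core.dataStorage().executeWithConnection(connection -> {
            try (CallableStatement stmt =
                     connection.prepareCall("{call calculate_matches()}")) {
                stmt.execute(); // all looping and scoring happens in the database
                return null;
            } catch (SQLException e) {
                throw new RuntimeException("Stored procedure failed", e);
            }
        });
    }
}
```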
Hello ;)
Your microflow is hitting a classic computational complexity issue, and you're right to look for a more efficient approach. Seven nested loops will almost always cause performance problems.
Now, since I don't know exactly what you're doing, it's difficult to suggest concrete optimizations. What I can say is that you could very well run the microflow in a task queue (async) to avoid the timeout problem.
More info about task queues can be found in the Mendix documentation.
However, why do you need 7 nested loops? Even for a matrix data structure, 2 loops should be enough to visit every cell: one loop to go through the rows and another to go through each column of those rows. Then, for each cell, you can apply a series of checks and calculate the results.
Often, for finding specific records, you can use Mendix XPath, but you have to constrain it properly.
It's difficult to pinpoint performance fixes without the concrete context, but you almost certainly don't need 7 nested loops. You could use 2 loops to go through all the records and calculate some parameter results, after which you query the data separately based on those parameters/flags; see the rough sketch below.
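Roughly like this (plain Java for illustration; types, names, and the threshold check are placeholders):

```java
// Two loops visit every cell once; per cell you run your checks and record a
// flag, then fetch the matching records afterwards with a constrained XPath.
public final class MatrixScan {
    public static boolean[][] computeFlags(double[][] matrix, double threshold) {
        int rows = matrix.length;
        int cols = rows == 0 ? 0 : matrix[0].length;
        boolean[][] flags = new boolean[rows][cols];
        for (int r = 0; r < rows; r++) {           // loop 1: rows
            for (int c = 0; c < cols; c++) {       // loop 2: columns
                flags[r][c] = matrix[r][c] >= threshold; // your series of checks
            }
        }
        return flags;
    }
}
```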
What validations are you using? It's likely that a well-constrained XPath would remove the need for the 7 loops altogether.