Complex Matching Calculation

Hi everyone, I'm facing a performance issue in my Mendix app with a complex calculation running in a microflow, and I'm looking for advice on a better or more efficient approach.

Problem Overview: I have a 1-n associated entity setup representing a table-like structure with 10 columns, each containing 7 rows (essentially a matrix of data points). The goal is to perform a matching algorithm that evaluates compatibility between objects, resulting in a final list of which objects pair well together. To achieve this, I'm currently using 7 nested loops to iterate through all possible combinations, which generates around 120,000 potential results. This calculation is necessary to filter and identify the best matches based on certain criteria (e.g., scores or conditions across the columns/rows).

Issue: The microflow times out after 90 seconds, likely due to the high computational load from the nested loops and the large number of iterations. This happens consistently, preventing the process from completing.

Current Implementation Details:

  • Data is retrieved from a 1-n association
  • 7 loops are used to handle the multi-dimensional checks
  • At the end, the results are aggregated into a list of matched objects

Questions: Is there a more efficient way to handle this kind of matching? Could I offload this to a scheduled event or a background job?
asked
3 answers

Hi Nils,

 

We’ll tackle the problem on three levels:

  1. Reduce the number of iterations (algorithmic optimization)

  2. Move heavy computation out of synchronous microflows (background job)

  3. Use the right tool for the heavy lifting (database / Java)

1. Reduce the Number of Iterations

Right now, you're brute-forcing 120,000 combinations via 7 nested loops. Most of these pairs probably fail early anyway.

Optimizations:

  • Pre-filter: Before entering the nested logic, apply simple conditions via XPath queries. Example:

    • Only compare objects with the same category/region.

    • Only consider rows where at least one column meets a threshold → this cuts down the candidate pairs drastically.

  • Early exit in scoring: If the first 2–3 checks already fail, skip the rest instead of evaluating all 7 (see the sketch below).
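
To make the early-exit idea concrete, here is a minimal Java sketch of the kind of check ordering you might later move into a Java action. The method name, the inputs, and the rule itself are assumptions for illustration, not your actual criteria:

    public final class MatchChecks {
        // Order checks from cheapest/most selective to most expensive and
        // bail out as soon as one fails.
        public static boolean isCompatible(double[] rowsA, double[] rowsB, double threshold) {
            if (rowsA.length != rowsB.length) {
                return false; // cheap pre-check: mismatched shapes can never match
            }
            for (int i = 0; i < rowsA.length; i++) {
                // Early exit: most pairs are rejected after 1-2 comparisons
                // instead of always evaluating all 7 rows.
                if (Math.abs(rowsA[i] - rowsB[i]) > threshold) {
                    return false;
                }
            }
            return true;
        }
    }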

2. Run It as a Background Job

The main bottleneck is the 90s microflow timeout. Even with optimizations, 100k+ operations is too much work for a synchronous user action.

Approach:

  • When the user triggers "Generate Matches":

    • Instead of running the full calculation, create MatchResult objects with status = Pending (a sketch of this step follows below).

    • Hand these off to a Process Queue or run with CommunityCommons → executeMicroflowInBackground.

  • A background worker microflow picks them up and calculates results.

  • The user doesn’t wait — they can come back later and see results once ready.

This way, the computation runs outside the request-response cycle, avoiding timeout.
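
As a rough illustration of the hand-off step, a Java action could create the Pending records along these lines. This is a minimal sketch assuming a MyModule.MatchResult entity with a Status attribute and two ID attributes; none of these names come from your model, so adjust accordingly:

    import com.mendix.core.Core;
    import com.mendix.systemwideinterfaces.core.IContext;
    import com.mendix.systemwideinterfaces.core.IMendixObject;

    public final class PendingMatchHelper {
        // Sketch: create one Pending MatchResult per candidate pair; the queued
        // background microflow picks these up and fills in the score later.
        public static void createPendingResult(IContext context, IMendixObject objectA, IMendixObject objectB) throws Exception {
            IMendixObject result = Core.instantiate(context, "MyModule.MatchResult"); // assumed entity
            result.setValue(context, "Status", "Pending");                     // assumed enum key
            result.setValue(context, "ObjectA_Id", objectA.getId().toLong()); // assumed Long attributes
            result.setValue(context, "ObjectB_Id", objectB.getId().toLong());
            Core.commit(context, result);
        }
    }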

 

3. Use the Right Tool (Database + Java)

Mendix microflows are slow for deep loops. For your case, you want either:

a) Database-Driven (XPath / OQL)

  • Store the matrix in a RowData entity (flattened).

  • Write XPath or OQL queries that filter rows directly in SQL.

Let the database engine do filtering and matching (much faster than looping in Mendix).
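
For example, an XPath retrieve from a Java action (or the same constraint in a microflow retrieve activity) pushes the filtering down into SQL. A sketch, assuming a flattened MyModule.RowData entity with Category and Score attributes:

    import java.util.List;
    import com.mendix.core.Core;
    import com.mendix.systemwideinterfaces.core.IContext;
    import com.mendix.systemwideinterfaces.core.IMendixObject;

    public final class CandidateRetrieval {
        // Sketch: let the database reject non-candidates instead of looping
        // over them in a microflow. Entity and attribute names are assumptions.
        public static List<IMendixObject> retrieveCandidates(IContext context, String category, double minScore) throws Exception {
            String xpath = "//MyModule.RowData[Category = '" + category
                + "' and Score >= " + minScore + "]";
            return Core.retrieveXPathQuery(context, xpath);
        }
    }

In real code, validate or escape any user-supplied value before concatenating it into the constraint.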

b) Java Action for Scoring

  • Offload the scoring algorithm to a custom Java Action.

  • Java runs loops in milliseconds, while Mendix microflows need seconds.

  • You can fetch the rows for ObjectA and ObjectB, compare them, and return a score (sketched below).

  • Then update the MatchResult entity.
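
A bare-bones version of that scoring step in plain Java. Again a sketch: the similarity formula is invented for illustration, and your real criteria will differ:

    public final class PairScoring {
        // Sketch: compare the row values of two objects and return a 0..1 score.
        public static double scorePair(double[] rowsA, double[] rowsB) {
            double score = 0.0;
            for (int i = 0; i < rowsA.length; i++) {
                // Reward close values, penalize large differences.
                score += 1.0 / (1.0 + Math.abs(rowsA[i] - rowsB[i]));
            }
            return score / rowsA.length; // normalize by the number of rows
        }
    }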

Alternatively, write a stored procedure to be executed on the database side and use a Java action to run it and return the results to the front end.

answered

Hello ;)

 

Your microflow is hitting a classic computational complexity issue, and you're right to look for a more efficient approach. Seven nested loops will almost always cause performance problems.

 

Now, since I don't know exactly what you're doing, it's difficult to optimize concretely. What I can say is that you could very well run the microflow in a task queue (async) to avoid the timeout problem.

More info about task queues can be found in the Mendix documentation.

 

However, why do you need 7 nested loops? Even for a matrix data structure, you should only need 2 loops to iterate over every cell: one loop to go through the rows and another to go through each column of those rows. Then for each cell you can apply a series of checks and calculate the results (see the sketch below).
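
A minimal sketch of that two-loop traversal in plain Java (in a microflow this would be two nested loop activities over the associated rows and columns; the threshold check is just an invented example):

    public final class MatrixTraversal {
        // Sketch: visit every cell of a rows-by-columns matrix with two loops.
        public static void traverse(double[][] matrix) {
            for (int row = 0; row < matrix.length; row++) {
                for (int col = 0; col < matrix[row].length; col++) {
                    double cell = matrix[row][col];
                    // Per-cell checks go here.
                    if (cell > 0.8) {
                        System.out.printf("Candidate at (%d, %d): %.2f%n", row, col, cell);
                    }
                }
            }
        }
    }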

 

Often, for finding specific records, you can use Mendix XPath, but you have to constrain it properly.

 

It's difficult to pinpoint performance fixes without the concrete context, but you almost certainly don't need 7 nested loops. I think you could use 2 loops to go through all the records and calculate some parameter results, after which you do a separate query of the data based on those parameters/flags.

answered

What validations are you using? It's likely that a well-structured XPath query would remove the need for the 7 loops altogether.

answered