Hi Patrick,
Mendix does not recommend using the 'and' operator in XPath constraints, and your XPath contains one. Instead, you can filter the list after you retrieve it (the filtering takes place in the database before you retrieve the list). Also, Mendix cannot retrieve very large numbers of records at once, which is why I recommend retrieving in batches. I have added links to the relevant documentation below.
Batch-related links:
https://medium.com/mendix/performance-modeling-in-mendix-batch-processing-1ed245a1e23b
https://docs.mendix.com/howto7/monitoring-troubleshooting/detect-and-resolve-performance-issues/#batches
https://forum.mendix.com/link/questions/16292
Retrieve-related links:
https://docs.mendix.com/howto/logic-business-rules/working-with-lists-in-a-microflow/
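The batching idea from the links above can be sketched as follows. This is a generic illustration in Python, not Mendix API: `fetch_batch` is a hypothetical stand-in for a Mendix retrieve action with its Amount and Offset options set.

```python
# Generic sketch of batched retrieval, as described in the links above.
BATCH_SIZE = 1000

def fetch_batch(offset, limit):
    # Placeholder for a Mendix retrieve with "Amount" = limit and
    # "Offset" = offset. Here we fake a table with 2500 records.
    data = list(range(2500))
    return data[offset:offset + limit]

def retrieve_all_in_batches():
    results = []
    offset = 0
    while True:
        batch = fetch_batch(offset, BATCH_SIZE)
        results.extend(batch)          # process each batch here instead
        if len(batch) < BATCH_SIZE:    # last (partial) batch -> done
            break
        offset += BATCH_SIZE
    return results

print(len(retrieve_all_in_batches()))  # 2500
```

In a real microflow you would process and then clear each batch inside the loop, so the full list is never held in memory at once.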
Regards
Ridvan
Can you check whether OpeningDate is the first attribute, ClosingDate the second, and BusinessUnit the third in your StockTake entity? Have you created an index for each of them?
I assume that [TimeOfSale > $StockTake/OpeningDate and TimeOfSale < $StockTake/ClosingDate] will return fewer records than [NominalAccounts.EPOSTransactionLine_BusinessUnit = $BusinessUnit].
Try the XPath constraints in the other order too, to make maximum use of the indexes:
[TimeOfSale > $StockTake/OpeningDate]
[TimeOfSale < $StockTake/ClosingDate]
[NominalAccounts.EPOSTransactionLine_BusinessUnit = $BusinessUnit]
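Mendix translates these XPath constraints into SQL, so whether an index helps is ultimately decided by the database. A rough illustration using SQLite (table and index names are made up for the sketch, and this is not the SQL Mendix actually generates): an index whose leading column matches the equality constraint, followed by the range attribute, lets the database search the index instead of scanning the table.

```python
import sqlite3

# Illustration only: an index matching the constraints (equality column
# first, range column second) allows an index search instead of a scan.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE transaction_line (time_of_sale INTEGER, business_unit INTEGER)"
)
con.execute(
    "CREATE INDEX idx_bu_time ON transaction_line (business_unit, time_of_sale)"
)

plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM transaction_line "
    "WHERE business_unit = 7 AND time_of_sale > 100 AND time_of_sale < 200"
).fetchall()
print(plan)  # the plan reports a SEARCH ... USING INDEX idx_bu_time
```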
Go Make IT
(Accept if it worked for you, and please post the performance improvements :-) )
Make sure to use the aggregate right after the retrieve (as the very next action). If, as you said, you don’t use the list otherwise, Mendix will optimize the request.
https://docs.mendix.com/howto/logic-business-rules/optimizing-microflow-aggregates/
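The optimization described in that link can be illustrated with plain SQL (SQLite here, purely as an illustration, not what Mendix literally sends): when only the count is needed, the database counts and the rows are never materialized.

```python
import sqlite3

# Illustration of the aggregate optimization.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE line (id INTEGER)")
con.executemany("INSERT INTO line VALUES (?)", [(i,) for i in range(5000)])

# Naive: retrieve all rows, then count in memory (what an unoptimized
# retrieve-then-aggregate amounts to).
naive_count = len(con.execute("SELECT * FROM line").fetchall())

# Optimized: a single COUNT(*) query, roughly what happens when the
# aggregate directly follows the retrieve and the list is otherwise unused.
fast_count = con.execute("SELECT COUNT(*) FROM line").fetchone()[0]

print(naive_count, fast_count)  # 5000 5000
```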
Hi Patrick,
Maybe this can help you: https://forum.mendix.com/link/ideas/3394, only use intersect instead of union.
Hi Guys,
Thanks for all the help. All the input helped. This is what we did:
-- Prod Server
max_connections = 1000
shared_buffers = 8GB
effective_cache_size = 24GB
maintenance_work_mem = 2GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
work_mem = 1048kB
min_wal_size = 1GB
max_wal_size = 4GB
max_worker_processes = 16
max_parallel_workers_per_gather = 4
max_parallel_workers = 16
The important change here was the work_mem change… sometimes smaller is better. Your size was too big. This is apparently the memory that can be pushed to each CPU. We were trying to push too much, and it was causing a bottleneck.
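A rough sanity check for why a large work_mem can backfire: each backend may allocate up to work_mem per sort or hash operation, so the worst case scales with max_connections. The numbers below use the config above; the old work_mem value is not given in the post, so 64MB is purely a hypothetical example, and this formula is a common back-of-the-envelope estimate, not exact PostgreSQL accounting.

```python
# Worst-case memory estimate: work_mem is allocated per sort/hash node,
# per backend, so many connections multiply it up quickly.
MAX_CONNECTIONS = 1000  # from the config above

def worst_case_gb(work_mem_kb, connections=MAX_CONNECTIONS, nodes_per_query=1):
    return work_mem_kb * connections * nodes_per_query / (1024 * 1024)

print(worst_case_gb(1048))   # new value, 1048kB   -> ~1 GB total
print(worst_case_gb(65536))  # hypothetical 64MB   -> 62.5 GB total
```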
While doing all these tests we realized that our production server’s disk throughput maxed out at 100 MB per second, whereas our test server does 1 GB per second. So for small queries the above solution solved our problem; however, we still have a problem with our production server and its disk speed!!!
On our test server we can now retrieve ±200 000 records from the 50 000 000-record table in seconds…
Thanks for the help guys!!!!!!
I’m sure I can improve it even more by not having an 'and' in the retrieve XPath, using a filter instead, and doing batched retrieves if the lists are too big, but we need to fix our server first.