Slow database retrieval

Hi all,

I'm retrieving a date range from a transaction table and then using a list aggregation to sum the net value. The entity has about 40 million rows, and the retrieve action normally returns about 3,000 records. I don't use the retrieved list for anything else, so the query only returns the aggregated sum. This is the XPath I use to retrieve:

[NominalAccounts.EPOSTransactionLine_BusinessUnit = $BusinessUnit]
[TimeOfSale > $StockTake/OpeningDate and TimeOfSale < $StockTake/ClosingDate]

This action can take up to a minute to run. I've read up in the Mendix documentation, and these retrieves should be a lot faster. Our server has 32 GB of RAM, and the database is on the same server as the Mendix console.

What am I doing wrong? Why does this query take so long, and what can I do to speed it up?

Regards,
Patrick
asked
5 answers

Hi Patrick,

Mendix does not recommend using the 'and' operator in XPath constraints, and your XPath contains one. You can split it into separate bracketed constraints instead; the filtering still takes place in the database before the list is retrieved. Also, Mendix cannot efficiently retrieve large numbers of records at once, which is why I recommend a batched retrieve. The relevant documentation is linked below, followed by a small sketch of the batching pattern.

 

Batch-related links:
https://medium.com/mendix/performance-modeling-in-mendix-batch-processing-1ed245a1e23b
https://docs.mendix.com/howto7/monitoring-troubleshooting/detect-and-resolve-performance-issues/#batches
https://forum.mendix.com/link/questions/16292
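
As a concrete illustration of the batching pattern in the links above, here is a minimal sketch of a Mendix Java action that retrieves the transaction lines in fixed-size batches and keeps a running sum. This is a sketch under assumptions, not Mendix's official recipe: it assumes the Mendix 7/8 Core API overload Core.retrieveXPathQuery(context, xpath, amount, offset, sortOrders), and the names used (the XPath argument, the NetValue attribute) are hypothetical stand-ins for your own domain model.

// Batched retrieve-and-sum: at most BATCH_SIZE objects are in memory at once.
import java.math.BigDecimal;
import java.util.Collections;
import java.util.List;

import com.mendix.core.Core;
import com.mendix.systemwideinterfaces.core.IContext;
import com.mendix.systemwideinterfaces.core.IMendixObject;

public class SumNetValueInBatches {

    private static final int BATCH_SIZE = 1000;

    // xpath would be something like "//MyModule.EPOSTransactionLine[...]" (hypothetical).
    public static BigDecimal sumNetValue(IContext context, String xpath) throws Exception {
        BigDecimal total = BigDecimal.ZERO;
        int offset = 0;
        while (true) {
            // The XPath constraints are applied by the database for every batch.
            List<IMendixObject> batch = Core.retrieveXPathQuery(
                    context, xpath, BATCH_SIZE, offset, Collections.<String, String>emptyMap());
            for (IMendixObject line : batch) {
                BigDecimal value = line.getValue(context, "NetValue"); // hypothetical attribute
                if (value != null) {
                    total = total.add(value);
                }
            }
            if (batch.size() < BATCH_SIZE) {
                break; // last (partial) batch
            }
            offset += BATCH_SIZE;
        }
        return total;
    }
}

For stable paging you would normally pass a sort order on an indexed attribute instead of the empty map; offset paging without a sort can skip or repeat rows if the data changes between batches.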

 

Retrieve-related links:
https://docs.mendix.com/howto/logic-business-rules/working-with-lists-in-a-microflow/

 


Regards
Ridvan

answered

Can you check whether OpeningDate is the first attribute, ClosingDate the second, and BusinessUnit the third in your StockTake entity? Have you created indexes for all of them?


I assume that [TimeOfSale > $StockTake/OpeningDate and TimeOfSale < $StockTake/ClosingDate] will return fewer records than [NominalAccounts.EPOSTransactionLine_BusinessUnit = $BusinessUnit].

 

Try the XPath constraints in the other order too, to make maximum use of the indexes:

 

[TimeOfSale > $StockTake/OpeningDate]

[TimeOfSale < $StockTake/ClosingDate]

[NominalAccounts.EPOSTransactionLine_BusinessUnit = $BusinessUnit]

 

Go Make IT

(Accept if it worked for you and please post the performance improvements :-) )

 

answered

Make sure to use the aggregate right after the retrieve (as the very next action). If you don't use the list for anything else, as you said, Mendix will optimize the request so the aggregation happens in the database.

https://docs.mendix.com/howto/logic-business-rules/optimizing-microflow-aggregates/
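
The same idea in custom code, for comparison: a rough sketch that lets the database compute the sum via OQL from a Java action. It assumes the Core.retrieveOQLDataTable API of the Mendix 7/8 Core Java library, and the module/entity/attribute names are hypothetical; the point is simply that the SUM runs entirely in the database, which is what the optimized retrieve-plus-aggregate microflow does for you.

import java.math.BigDecimal;

import com.mendix.core.Core;
import com.mendix.systemwideinterfaces.connectionbus.data.IDataTable;
import com.mendix.systemwideinterfaces.core.IContext;

public class SumNetValueViaOql {

    public static BigDecimal sumNetValue(IContext context) throws Exception {
        // Hypothetical entity/attribute; add the business-unit and date-range
        // conditions in a WHERE clause for the real query.
        String oql = "SELECT SUM(t.NetValue) AS TotalNet "
                + "FROM MyModule.EPOSTransactionLine AS t";
        IDataTable table = Core.retrieveOQLDataTable(context, oql);
        // A single row with a single column: the aggregate computed by the database.
        return (BigDecimal) table.getRows().get(0).getValue(context, 0);
    }
}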

answered

Hi Patrick,

Maybe this can help you: https://forum.mendix.com/link/ideas/3394 (only use intersect instead of union).

answered

Hi Guys,

 

Thanks for all the help; all the input helped. This is what we did:

 

  1. We added three indexes: one index with both attributes and an index on each individual attribute. Postgres seems to choose whichever index will be faster.
  2. We moved both search attributes to the top of the transaction table.
  3. We changed the following in the PostgreSQL config file:

-- Prod Server
max_connections = 1000
shared_buffers = 8GB
effective_cache_size = 24GB
maintenance_work_mem = 2GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
work_mem = 1048kB
min_wal_size = 1GB
max_wal_size = 4GB
max_worker_processes = 16
max_parallel_workers_per_gather = 4
max_parallel_workers = 16

 

The important change here was work_mem, and sometimes smaller is better: our value was too big. This is apparently the memory each sort or hash operation can claim, per connection, so we were trying to allocate far too much and it was causing a bottleneck.
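
Rough arithmetic on why the lower value helps (the old setting in this illustration is hypothetical, since the original value wasn't posted): PostgreSQL can allocate work_mem for every sort or hash operation on every connection, so the worst case scales roughly as

worst case ≈ max_connections × work_mem per sort/hash node
hypothetical old value: 1000 × 32MB ≈ 32GB, the whole machine's RAM
new value: 1000 × 1048kB ≈ 1GB, leaving room for shared_buffers and the OS cache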

 

While doing all these tests we realized that our production server's disk throughput maxes out at 100 MB per second, whereas our test server does 1 GB per second. So for small queries the changes above solved our problem, but we still have a problem with our production server and its disk speed!

 

On our test server we can now retrieve ±200,000 records from the 50,000,000-row table in seconds.

 

Thanks for the help, guys!

 

I'm sure I can improve it even more by not having an 'and' in the retrieve XPath and using a filter instead, and by doing batched retrieves if the lists get too big, but we need to fix our server first.

answered