ID and link tables (PostgreSQL partitioning needs)

Hi there,

Please refer to http://explain.depesz.com/s/68BA for the query and its analysis; my "issue" relates to line 5 of that plan. That is a scan on a link/join table (~40 million rows) between a profile table (~3,000 rows) and a data table (~40 million rows), a 1:N mapping (profile:datapoints). Ideally we would split/partition that link table to optimize those queries, which currently have to run every 30 minutes for a total execution time of ~7-8 hours per day, and which in turn cause slow query responses for online users.

Looking at the IDs for the profiles, I notice they aren't nicely spread:

select min(id), max(id) - min(id) as spread, count(id) from field$profile;

        min        | spread | count
-------------------+--------+-------
 14918173765664769 |  11500 |  3349

So the IDs do not come from a sequence on the actual profile table, but from some other (random?) number allocated somewhere else.

Now the questions start:

1) Where is that ID number allocation documented? (I'm a sysadmin and DBA, not a developer.)
2) What do Mendix developers advise regarding partitioning of such large link tables, given that the IDs are not generated strictly sequentially but with gaps in between? (See the sketch below for the kind of scheme I have in mind.)
3) Does anybody else have experience splitting/partitioning these link tables on PostgreSQL?
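To make question 2 concrete, here is the kind of scheme I have in mind. This is a sketch only: the table and column names are made up (not the real Mendix link-table names), and it assumes a PostgreSQL version with declarative hash partitioning (11+). Hash partitioning seems the natural fit, since the profile IDs are non-sequential and gapped, so range partitioning on id would produce badly skewed partitions.

-- Sketch only: table/column names are hypothetical, not the actual
-- Mendix-generated names. Assumes PostgreSQL 11+ for hash partitioning.
CREATE TABLE profile_datapoint_link (
    profileid   bigint NOT NULL,
    datapointid bigint NOT NULL
) PARTITION BY HASH (profileid);

-- Four partitions; rows are routed by hash(profileid) mod 4,
-- so gaps in the ID space do not matter for balance.
CREATE TABLE profile_datapoint_link_p0 PARTITION OF profile_datapoint_link
    FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE profile_datapoint_link_p1 PARTITION OF profile_datapoint_link
    FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE profile_datapoint_link_p2 PARTITION OF profile_datapoint_link
    FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE profile_datapoint_link_p3 PARTITION OF profile_datapoint_link
    FOR VALUES WITH (MODULUS 4, REMAINDER 3);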
asked
1 answer

Try to solve the problem with lists/batches/limit/offset in microflows instead of manipulating the database directly. The Mendix runtime manages the database schema itself, and changes made at the database level can conflict with it (and with your DBA work).
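For illustration, this is roughly what such batched processing does at the SQL level. A sketch with a made-up table name; in practice you would build it as a microflow loop that retrieves a limited list, processes it, then advances the offset:

SELECT l.profileid, l.datapointid
FROM   profile_datapoint_link l        -- hypothetical name for the link table
ORDER  BY l.profileid, l.datapointid   -- stable order so batches do not overlap
LIMIT  1000 OFFSET 0;                  -- next batches: OFFSET 1000, 2000, ...

Each batch touches only a small slice of the ~40 million rows, so the work is spread out instead of running as one long scan that blocks online users.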

answered