How to manage Concurrent Updates in a Large Table with High Activity

Hello Mendix Community,

We are facing a significant challenge with concurrent updates leading to data integrity issues in one of our Mendix applications. An overview of our situation:

Table details: a large table (a huge number of records and around 100 fields).
Data synchronization: occurs every minute with data from another system.
User interaction: about 500 concurrent users at peak hours.
Microflows: 100 microflows with a total of 162 commit actions on this table.
Automated processes: jobs sending emails when certain states are reached.
User actions: users can update records or take actions through the UI.

Due to concurrent updates, we are experiencing data loss and integrity issues. To mitigate this, we are considering splitting the table into groups of fields based on specific update actions, thereby reducing the concurrency risk. Additionally, we are exploring the implementation of optimistic locking using a version stamp. The idea is to:

Check the version stamp before committing changes.
Increase the version stamp with each commit action.

My questions for the community are:

Has anyone implemented optimistic locking with a version stamp in Mendix? If so, how effective was it in reducing data conflicts?
Are there better strategies or practices to ensure updates do not overwrite other changes?

Any insights or experiences would be greatly appreciated. Thank you!

Best regards,
Rob
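To make the version-stamp idea concrete, here is a minimal sketch of the check as it could look in a Mendix Java action. It assumes the entity has a Long attribute called VersionStamp; the class, method, and attribute names are illustrative, not something from our app.

```java
// Sketch of an optimistic-locking check for a Mendix Java action.
// Assumes the entity has a Long attribute "VersionStamp"; entity and
// attribute names are illustrative assumptions.
import com.mendix.core.Core;
import com.mendix.systemwideinterfaces.core.IContext;
import com.mendix.systemwideinterfaces.core.IMendixObject;

public class OptimisticCommit {

    /**
     * Re-reads the object from the database, compares version stamps and
     * only commits when the in-memory version still matches the stored one.
     *
     * @return true when the commit succeeded, false when another process
     *         changed the record in the meantime (caller should retry or merge).
     */
    public static boolean commitIfUnchanged(IContext context, IMendixObject edited) throws Exception {
        // Fetch the current database state of the same object.
        IMendixObject stored = Core.retrieveId(context, edited.getId());

        Long editedVersion = edited.getValue(context, "VersionStamp");
        Long storedVersion = stored.getValue(context, "VersionStamp");

        if (editedVersion == null || !editedVersion.equals(storedVersion)) {
            // Someone else committed first: signal a conflict instead of overwriting.
            return false;
        }

        // Bump the version stamp so the next writer sees the change.
        edited.setValue(context, "VersionStamp", editedVersion + 1);
        Core.commit(context, edited);
        return true;
    }
}
```

Note that the check-then-commit is not atomic on its own; it would still need to run inside a single transaction (or be backed by a database-level constraint) to fully close the race window.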
asked
2 answers

Without knowing your exact situation, maybe you could use the task queue mechanism to ensure that activities are performed in a first-in, first-out fashion? A rough illustration of the idea is below.
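This is not the Mendix Task Queue API itself, just a plain-Java sketch of the ordering guarantee it gives you: if all updates to a record are funneled through a single FIFO worker, two updates can never run at the same time. Class and method names are made up for the example.

```java
// Plain-Java illustration of the FIFO idea behind a task queue:
// all updates are handed to a single worker thread, so two updates to
// the same record never run concurrently. This is only a sketch of the
// ordering guarantee, not the Mendix Task Queue API.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FifoUpdateQueue {

    // A single-threaded executor processes submitted tasks in submission order.
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    /** Enqueue an update; it runs only after every previously submitted update. */
    public void submitUpdate(Runnable update) {
        worker.submit(update);
    }

    public void shutdown() {
        worker.shutdown();
    }
}
```

In Mendix terms the equivalent would be running the update microflows through a task queue so the platform, rather than your own code, provides the ordering.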

answered

Honestly speaking, 500 CCU is not a very high load.

If that is causing integrity issues, then you should first check your domain model to see whether you can isolate the incoming data from what should actually end up in the database.

Meaning: don't let the actual MFs process the incoming data directly against the persistent entity; better to stage it in transient (non-persistable) entities first.

Also, you mentioned the data is coming from an external source, so why not rate limit it? If the infrastructure doesn't offer a built-in way to rate limit, add an entity attribute that forms a composite key together with the primary key of that entity. If this new attribute is, say, the current time rounded down to the minute, then the unique constraint will make the database insert fail whenever more than one call arrives with the same data within that minute.

This is just an example that might give you hints on possible directions; a rough sketch of the dedup-key idea follows below.
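A minimal sketch of that deduplication key, assuming the incoming payload carries an external ID and the resulting key is stored in an attribute with a unique constraint (the names externalId and the key format are made-up assumptions):

```java
// Builds a deduplication key from the record's external ID plus the
// current time rounded down to the minute. With a unique constraint on
// this attribute, a second insert of the same record within the same
// minute violates the constraint instead of creating a duplicate.
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class DedupKey {

    public static String build(String externalId, Instant receivedAt) {
        Instant minute = receivedAt.truncatedTo(ChronoUnit.MINUTES);
        return externalId + "|" + minute;
    }

    public static void main(String[] args) {
        // Two calls within the same minute produce the same key and would
        // hit the unique constraint on insert.
        System.out.println(build("ORDER-42", Instant.parse("2024-05-01T10:15:03Z")));
        System.out.println(build("ORDER-42", Instant.parse("2024-05-01T10:15:48Z")));
    }
}
```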

answered