Audit trail - commit takes a long time

Hi All, has anyone faced a performance issue with the Audit Trail module? I am uploading more than 10,000 records, and committing them to the database takes almost 1.5 hours, as my entity is a specialization of AuditTrailSuperClass. Is there any alternative solution to reduce this commit time? I tried using an association (entity with the Log entity) instead, but it still takes the same time. If anyone knows of any other approach, that would be helpful. Many thanks, Nirmal Kumar
asked
2 answers

Unfortunately, the Mendix audit trail will give you this much overhead. You are creating an audit trail record for each attribute that changes in every entity you are auditing. So if your entity has 20 attributes, you are committing 21 objects per record: the commit of the entity itself, plus the commits of 20 audit records (maybe even more, since you get the main log item plus a log line for every attribute that changed). Even if you switch from the AuditTrailSuperClass generalization to an association to the log object, you will still have this issue.
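To put rough numbers on that: with the 20-attribute example, a 10,000-record import works out to roughly 10,000 × (1 + 20) = 210,000 object commits instead of 10,000, which is exactly the kind of blow-up that turns a bulk import into a 1.5-hour job.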

You can limit the audit trail to only the attributes that have changed, but that doesn’t help with a mass import, where all attributes are new and so every one of them gets an audit record.

If possible, I’d recommend disabling the audit trail for the initial import. This will greatly reduce the import time.

answered

I completely agree with Imran Kasam here, but here are two insights you could use to reduce the client-side duration. Unless you opt for the first option, the whole process will still take 1.5 hours, but no user will be affected by that duration.

  1. Commit without events
    First of all, when you use ‘Commit without events’, the commit handled by the Audit Trail is not executed, eliminating all time taken by the Audit Trail. Using ‘Commit without events’ is, however, not recommended in general and should only be used when necessary: not because of performance, but simply because you configured the before/after delete/commit/create events on purpose and would now be ignoring them. Aside from that, the Audit Trail will miss the initial create of the object, so your history will be incomplete. Using this option also doesn’t require you to disable the Audit Trail entirely, as Imran Kasam suggested (a minimal Java sketch of this option follows after this list).
     
  2. Use a background queueing process
    Another option, and probably the more favorable one: use a background queueing process, such as the ‘Process Queue’ add-on or the ‘Follow-up Widget’ (both freely available in the App Store). The initial upload of the file, or however you get the objects into your application, stays short, while committing them, the 1.5 hours, happens in the background.

    For example: if those 10,000 objects are created from an Excel file, commit them to a separate staging entity first. At the end of the import microflow, create a background process through the Process Queue that retrieves those objects from the database, loops over the list, and create-commits the real objects. Since this happens in the background, no user is blocked while it runs. For optimal performance and reliability, do a limited retrieve (e.g. 500 items) and another retrieve (all) with a count. If the count is greater than the number of items retrieved, i.e. there are more objects to process, create a new process at the end of the microflow that retrieves the next batch of 500. If object 8,983 were to throw an error, at least 8,500 objects would already have been committed (see the batching sketch below).
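To make option 1 concrete, here is a minimal sketch of a batch commit in a custom Java action, assuming the Mendix Core API’s Core.commitWithoutEvents call; the class name is illustrative. In a plain microflow you would simply pick the ‘Without events’ variant on the Commit activity instead; the Java form is only relevant if your import already runs through custom code.

```java
// Minimal sketch, assuming the Mendix Core API (com.mendix.core.Core).
import java.util.List;

import com.mendix.core.Core;
import com.mendix.core.CoreException;
import com.mendix.systemwideinterfaces.core.IContext;
import com.mendix.systemwideinterfaces.core.IMendixObject;

public class ImportHelpers {

    // Commits a batch while skipping before/after commit events, so the
    // Audit Trail event handlers never fire for these objects.
    public static void commitBatchWithoutEvents(IContext context, List<IMendixObject> batch)
            throws CoreException {
        Core.commitWithoutEvents(context, batch);
    }
}
```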
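And a rough sketch of the batching step from option 2 as a Java action (in a microflow this is just a retrieve with a limit, a count aggregate, and a conditional re-enqueue). It assumes the Core API calls Core.retrieveXPathQuery, Core.instantiate, Core.commit, Core.delete and Core.retrieveXPathQueryAggregate; the entity MyModule.StagingRecord, the target MyModule.Order and the batch size of 500 are illustrative assumptions, not from the original answer.

```java
// Rough sketch, assuming the Mendix Core API; entity names and the
// batch size of 500 are illustrative assumptions.
import java.util.Collections;
import java.util.List;

import com.mendix.core.Core;
import com.mendix.core.CoreException;
import com.mendix.systemwideinterfaces.core.IContext;
import com.mendix.systemwideinterfaces.core.IMendixObject;

public class BatchProcessor {

    private static final int BATCH_SIZE = 500;

    // Processes one batch of staged records and reports whether more remain,
    // so the calling Process Queue job can enqueue a follow-up run.
    public static boolean processNextBatch(IContext context) throws CoreException {
        List<IMendixObject> batch = Core.retrieveXPathQuery(
                context, "//MyModule.StagingRecord",
                BATCH_SIZE, 0, Collections.emptyMap());

        for (IMendixObject staged : batch) {
            // ... create the real (audited) object from the staged data, e.g.:
            // IMendixObject order = Core.instantiate(context, "MyModule.Order");
            // order.setValue(context, "Name", staged.getValue(context, "Name"));
            // Core.commit(context, order);

            Core.delete(context, staged); // drop processed staging records
        }

        // If staged records remain, the caller should schedule another batch;
        // a failure in a later batch then leaves all earlier batches committed.
        long remaining = Core.retrieveXPathQueryAggregate(
                context, "count(//MyModule.StagingRecord)");
        return remaining > 0;
    }
}
```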
answered