Cascade deletion of data takes minutes to process

Hello,

We've currently set up two applications: an application with master data, and an application that uses this data and retrieves it from the main app whenever the user requests it. This way we want to keep the 'sub' app clean of most data. We're trying to achieve this by linking most of our central entities to the current session when we retrieve the data and deleting them again when the user ends their session or wants to start fresh with a new batch of data. When retrieving a new batch, the old (and usually modified) data should be deleted, as we can't know whether it's still valid. We currently do this by cascade deleting all associated entities linked to some of the main entities just before retrieving the new batch.

The problem is that when the user wants to retrieve a new batch and the microflow deletes the old data, the deletion can sometimes take more than one or two minutes, while retrieving the new data takes at most 10 seconds. Even when we only retrieve around 5k records of the 'main' entity and delete those, the delete still triggers the cascade on all of their associated objects and the objects associated with those. And even when there are no associations set on the main objects whatsoever, deleting just those 5k records still takes about a minute, despite there being no other relations/entities to delete.

We're curious how this cascading delete works, why it takes this long, and, most importantly, whether there are ways to improve the deletion of our data. Would it be more efficient to manually retrieve all associated entities in loops and delete them in separate activities in the microflow, or are there other best practices? We've also been getting some 'GC overhead limit exceeded' errors, which we assume are related to this issue.

Thanks in advance for any advice.
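To make the setup concrete: the cleanup we run today is roughly equivalent to the sketch below, written as a Java action with placeholder module/entity names and the current session passed in as a parameter (the real logic is a plain microflow with a retrieve and a delete activity).

```java
// Rough equivalent of our current cleanup, with placeholder module/entity
// names: delete the 'main' objects linked to the session and let the delete
// behaviour configured on the associations cascade to everything else.
import java.util.List;

import com.mendix.core.Core;
import com.mendix.core.CoreException;
import com.mendix.systemwideinterfaces.core.IContext;
import com.mendix.systemwideinterfaces.core.IMendixObject;

public class DeleteOldBatch {

    public static void deleteOldBatch(IContext context, IMendixObject currentSession)
            throws CoreException {
        long sessionId = currentSession.getId().toLong();
        // Retrieve only the 'main' entity; the configured delete behaviour is
        // supposed to take care of all associated objects.
        List<IMendixObject> mainObjects = Core.retrieveXPathQuery(
                context,
                "//MyModule.MainEntity[MyModule.MainEntity_Session = " + sessionId + "]");
        if (!mainObjects.isEmpty()) {
            // This single delete is the slow part: the runtime still has to
            // resolve the cascade for every associated object behind the scenes.
            Core.delete(context, mainObjects.toArray(new IMendixObject[0]));
        }
    }
}
```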
asked
2 answers

Hey Melvin,

It's advised to manually delete your objects from the bottom up instead of letting the cascade delete the objects for you. The cascade delete is indeed not very fast, so you can gain some time there. I don't know exactly how much you'll gain, but in your case it's definitely worth trying out different delete structures based on the associations you've set.
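To give an idea of what bottom-up could look like, here is a minimal sketch as a Java action with made-up module and entity names; in a microflow you'd use the equivalent retrieve and delete activities, deepest entities first.

```java
// Minimal sketch of a bottom-up delete (made-up module/entity names): delete
// the deepest level first, so that when the roots are deleted there is
// nothing left for the cascade to resolve.
import java.util.List;

import com.mendix.core.Core;
import com.mendix.core.CoreException;
import com.mendix.systemwideinterfaces.core.IContext;
import com.mendix.systemwideinterfaces.core.IMendixObject;

public class BottomUpDelete {

    public static void deleteBottomUp(IContext context) throws CoreException {
        // 1. Lowest level first (grandchildren).
        deleteAll(context, "//MyModule.GrandChild");
        // 2. Then the middle level (children).
        deleteAll(context, "//MyModule.Child");
        // 3. Finally the root objects; no cascade work remains.
        deleteAll(context, "//MyModule.Root");
    }

    private static void deleteAll(IContext context, String xpath) throws CoreException {
        List<IMendixObject> objects = Core.retrieveXPathQuery(context, xpath);
        if (!objects.isEmpty()) {
            Core.delete(context, objects.toArray(new IMendixObject[0]));
        }
    }
}
```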

 

answered

As a follow-up note and possible solution for people also wondering what works best:

I tried to create a microflow that manually retrieves and deletes all entities that would normally be deleted by the cascade. Because I had about 3.3k records in my root entity, about a dozen entities linked to the root entity, and another 8 entities linked to one of those dozen, the microflow became rather lengthy, with lots of retrieves and add-to-list activities inside the main loop over the root entity and the secondary loop over one of the others. This ended up with exactly the same performance as cascade deleting all my records, so it did not solve my problem.
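For anyone curious, that abandoned approach was roughly the sketch below (placeholder names again); the retrieve inside the loop, one per root object, is what kept it as slow as the cascade.

```java
// Sketch of the abandoned approach: one retrieve (and delete) per root object.
// A few thousand roots means a few thousand database round trips, which is
// why it performed just like the cascade delete.
import java.util.List;

import com.mendix.core.Core;
import com.mendix.core.CoreException;
import com.mendix.systemwideinterfaces.core.IContext;
import com.mendix.systemwideinterfaces.core.IMendixObject;

public class ManualCascadeDelete {

    public static void deleteManually(IContext context) throws CoreException {
        List<IMendixObject> roots = Core.retrieveXPathQuery(context, "//MyModule.Root");
        for (IMendixObject root : roots) {
            long rootId = root.getId().toLong();
            // One retrieve per root over the (placeholder) association.
            List<IMendixObject> children = Core.retrieveXPathQuery(
                    context,
                    "//MyModule.Child[MyModule.Child_Root = " + rootId + "]");
            if (!children.isEmpty()) {
                Core.delete(context, children.toArray(new IMendixObject[0]));
            }
            Core.delete(context, root);
        }
    }
}
```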

Instead, I've now created After Create microflows for all persistent entities, and in those microflows I link each object to the current session. When I need to start the user off with a "clean slate" (i.e. removing all old data and retrieving the new batch), I simply retrieve every entity from the database that is linked to the current session and delete it. Although the system now has to run a very small After Create microflow for every object imported and created in this application, the deletion of all old data completes in seconds rather than minutes, and the time the After Create microflows take is negligible.
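In code terms, the cleanup now boils down to something like this sketch (again with placeholder entity names; the current session is passed in by the microflow, and the actual implementation is just retrieve and delete activities): one retrieve over the session association and one bulk delete per entity type, with no per-object loops.

```java
// Sketch of the current cleanup (placeholder entity names, current session
// passed in by the microflow): one retrieve over the session association and
// one bulk delete per entity type, children before roots.
import java.util.List;

import com.mendix.core.Core;
import com.mendix.core.CoreException;
import com.mendix.systemwideinterfaces.core.IContext;
import com.mendix.systemwideinterfaces.core.IMendixObject;

public class CleanUpSessionData {

    // Assumed naming convention: every persistent entity has an association
    // <Entity>_Session to System.Session, set in its After Create microflow.
    private static final String[] SESSION_XPATH_TEMPLATES = {
        "//MyModule.GrandChild[MyModule.GrandChild_Session = %d]",
        "//MyModule.Child[MyModule.Child_Session = %d]",
        "//MyModule.Root[MyModule.Root_Session = %d]"
    };

    public static void deleteSessionData(IContext context, IMendixObject currentSession)
            throws CoreException {
        long sessionId = currentSession.getId().toLong();
        for (String template : SESSION_XPATH_TEMPLATES) {
            List<IMendixObject> objects =
                    Core.retrieveXPathQuery(context, String.format(template, sessionId));
            if (!objects.isEmpty()) {
                Core.delete(context, objects.toArray(new IMendixObject[0]));
            }
        }
    }
}
```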

Hope this will help someone running into a similar performance issue.

answered