Hi Roberto,
Note that delete behavior has a serious impact on the speed of deletion, so consider removing the delete behavior of that entity (if any).
Did you try the deleteAll method of the communitycommons package? I'm pretty curious whether that one works better on this set of objects. It uses a DB retrieval schema instead of retrieveXPath, so I expect it is a bit more memory-efficient.
Its batch size is 10.000 as well; that worked for me before.
500.000 is really too many records for a logging module. At what level did you set the logging? From the documentation:
Writing all log messages to the database can have a negative impact on the performance of your application, depending on the number of messages. In a production environment, it is recommended to log only messages of level WARNING and higher.
So that's a good place to start. If your app is writing too many log records, you should consider a scheduled cleanup event which removes old log records (for instance, all records older than a week).
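Such a cleanup microflow or Java action could retrieve the old records with an XPath constraint along these lines. Note that the entity name Logging.LogRecord and the attribute CreatedDate are just placeholders for your own log entity, and you should double-check the exact date-token syntax against the Mendix XPath documentation:

```
// Retrieve-by-XPath: all log records older than a week (hypothetical entity/attribute names)
//Logging.LogRecord[CreatedDate < '[%BeginOfCurrentDay%] - 7 * [%DayLength%]']
```

Then delete the retrieved records in the same batched way as discussed above, so the cleanup itself doesn't run out of memory.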
To solve this issue, you really should delete in smaller batches. Deleting 500.000 objects with a microflow could cause trouble, but with a Java action it should work, although it could take some time. The best way is to delete in batches; 50.000 is a safe number of objects. After doing this, you should really log less or clean up regularly.
@Samet Sure, I could try that. What I wonder is whether anyone can predict if a smaller batch size alone would help, or whether I also have to limit the total number of records deleted per run (to, let's say, 50.000).
Regarding the Java code, I would not expect a big difference in (out-of-)memory behavior between deleting 10.000 records and 500.000 records, because a list of 10.000 records is retrieved (in objectList), and after committing the batch it is cleared and filled again... Should Core.removeBatch perhaps also be called every time inside the loop?
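To make the pattern I mean concrete, here is a minimal, self-contained sketch of batched deletion. This is a plain-Java illustration with an in-memory list standing in for the database; it is not the actual Mendix Core API, but it shows why only one batch should be reachable in memory at a time:

```java
import java.util.ArrayList;
import java.util.List;

// Illustration of batched deletion: the "store" list stands in for the
// database. Each loop iteration retrieves at most BATCH_SIZE records,
// deletes them from the store, and then drops its reference to the batch,
// so only one batch is ever held in memory at a time.
public class BatchDelete {
    static final int BATCH_SIZE = 10_000;

    public static int deleteAll(List<Integer> store) {
        int deleted = 0;
        while (!store.isEmpty()) {
            // retrieve at most BATCH_SIZE records (like a limited retrieval)
            int end = Math.min(BATCH_SIZE, store.size());
            List<Integer> batch = new ArrayList<>(store.subList(0, end));
            // "delete" the batch from the store
            store.subList(0, end).clear();
            deleted += batch.size();
            // batch goes out of scope here and can be garbage collected
            // before the next retrieval
        }
        return deleted;
    }

    public static void main(String[] args) {
        List<Integer> store = new ArrayList<>();
        for (int i = 0; i < 25_000; i++) {
            store.add(i);
        }
        System.out.println(deleteAll(store)); // prints 25000
        System.out.println(store.size());     // prints 0
    }
}
```

If the real Java action keeps all retrieved objects reachable (for example by accumulating them in one big list, or by never clearing the batch list), the batch size makes little difference and you would still run out of memory.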
I'd like to hear some advice on this.
Update (in a new answer, because I think that is clearer than editing the (large) opening post):
Today I tested extensively with the deleteAll Java action from the Community Commons package.
1: I tried 10.000 as BATCHSIZEREMOVE, then 4.000 and 1.000, but all gave me an out-of-memory error (on the hosted environment).
2: Locally I only tested with 10.000, and that gave me an out-of-memory error as well.
3: BATCHSIZEREMOVE set to 500 worked more or less, both locally and remotely, but with a strange side effect: it didn't delete all my records, only about 50% each time.
I started with 285.725 records, and it went from
I tried to check and debug the Java code, but no luck: I don't understand why it only deletes (about) 50%.
What I do see in the log every time the Java action finishes is a (warning) message like this:
[deleteAll] After delete all there are 35501 objects remaining. This might be a result of the configured security or deletebehavior.
But: the object I try to deleteAll doesn't have any associations (and thus no delete behavior), and the user role is allowed to delete it.
Any ideas why this strange "I'll only delete 50% of your records" behavior occurs?
Of course I could switch back to the "old delete-all option" with a low batch size; I didn't have time for that today (maybe later), but this one seems more efficient (as it only retrieves the IDs of the records to delete).