Hey yeah this is a common pain point. Batch deleting is the right approach but your batch size and method matter a lot.
A Java action is definitely the way to go here. You can use Core.createXPathQuery to retrieve batches of objects and then Core.delete on each batch, all within the same Java action in a loop. The key is to keep batch sizes reasonable (something like 1,000-5,000) and commit/clear the context between batches so the runtime doesn't hold all those objects in memory. Something like a while loop that retrieves a batch, deletes it, and repeats until the batch comes back empty. This avoids the overhead of passing objects back to the microflow layer each iteration.
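A minimal sketch of that loop, assuming the `com.mendix.core.Core` API (the entity name is a placeholder, and the exact `createXPathQuery`/`Core.delete` signatures and `IContext` transaction calls should be checked against your Mendix version):

```java
import java.util.List;
import com.mendix.core.Core;
import com.mendix.systemwideinterfaces.core.IContext;
import com.mendix.systemwideinterfaces.core.IMendixObject;

// Sketch of a batched delete inside a Java action.
// Entity MyModule.MyEntity is a placeholder; adjust XPath and batch size.
public void deleteAllInBatches(IContext context) throws Exception {
    final int batchSize = 2000; // keep this in the 1,000-5,000 range
    List<IMendixObject> batch;
    do {
        // No offset needed: each pass deletes what it retrieved.
        batch = Core.createXPathQuery("//MyModule.MyEntity")
                .setAmount(batchSize)
                .execute(context);
        if (!batch.isEmpty()) {
            Core.delete(context, batch);
            // Commit this batch and open a fresh transaction so the
            // runtime releases memory instead of holding everything.
            context.endTransaction();
            context.startTransaction();
        }
    } while (!batch.isEmpty());
}
```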
If you really want to go nuclear and there are no before/after delete handlers and no delete rules you care about, you can execute raw SQL from a Java action using JDBC. You can get a database connection from the runtime and just run a TRUNCATE or DELETE statement directly. Something like `Core.dataStorage().executeWithConnection(conn -> { conn.prepareStatement("TRUNCATE TABLE mymodule$myentity CASCADE").executeUpdate(); return null; })`. This bypasses all Mendix logic, obviously, so you need to be careful, but for wiping entire tables overnight with no event handlers it's ridiculously fast compared to doing it through the ORM. Ran into this exact scenario on a project where we had nightly data refreshes and going through the normal Mendix delete path for hundreds of thousands of records was just not practical.
One thing to watch out for with the JDBC approach: make sure you handle the association tables too if you have many-to-many relationships, and be aware the table names in the database are lowercased versions of the module and entity names with a dollar sign separator.
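To make that concrete, a hedged sketch of the JDBC route including an association table (both table names are placeholders; `executeWithConnection` is from the Mendix DataStorage API, but verify its signature for your version):

```java
import java.sql.Connection;
import java.sql.SQLException;
import com.mendix.core.Core;

// Sketch: wipe an entity table plus one of its association tables via
// raw SQL. This bypasses ALL Mendix delete logic (event handlers,
// delete rules), so only use it when none of that applies.
public void truncateEntityTables() {
    Core.dataStorage().executeWithConnection((Connection conn) -> {
        try {
            // Association tables first (or rely on CASCADE on PostgreSQL).
            conn.prepareStatement(
                "TRUNCATE TABLE mymodule$myentity_otherentity").executeUpdate();
            conn.prepareStatement(
                "TRUNCATE TABLE mymodule$myentity CASCADE").executeUpdate();
            return null;
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    });
}
```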
Does the Mendix Cloud allow raw SQL? I had that running fine locally, but the Cloud threw an error when I tried it.
This did work, thank you. My problem is that the tables are named differently on my local machine and production.
Local: "PUBLIC"."module$table" (with the quotes)
Cloud: module$table
So, what works on one, doesn't work on the other (which is why I got an error previously). I'll need to add some logic to vary the table names based on a constant, set per environment.
But, yes -- the crux of the code you provided works, and has really helped. Thanks again.
I found this...
String tableName = Core.getDatabaseTableName(Core.getMetaObject(this.EntityType));
But it still returns the wrong table name.
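One way to drive that per-environment logic is a tiny helper switched by a constant, as you suggested. A sketch (the class and method names here are my own, not a Mendix API; the schema handling is based on the difference observed above):

```java
import java.util.Locale;

// Hypothetical helper: builds the physical table name for the current
// environment, driven by a per-environment constant. Locally (HSQLDB)
// the table must be addressed as "PUBLIC"."module$entity"; on Mendix
// Cloud (PostgreSQL) the plain lower-case name works.
final class TableNames {

    private TableNames() {}

    // Lower-cases "Module.Entity" into the module$entity physical name.
    static String physicalName(String entityName) {
        return entityName.toLowerCase(Locale.ROOT).replace('.', '$');
    }

    // Qualifies the name for the target database.
    static String qualified(String entityName, boolean isLocalHsqldb) {
        String table = physicalName(entityName);
        return isLocalHsqldb ? "\"PUBLIC\".\"" + table + "\"" : table;
    }
}
```

So `TableNames.qualified("MyModule.MyEntity", isLocal)` would produce `"PUBLIC"."mymodule$myentity"` locally and `mymodule$myentity` on the cloud.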
You do not need a Workflow for this. A Scheduled Event is the correct approach for a nightly cleanup job.
The key issue is not batching itself — you are already doing that. The real problem is that even though you retrieve in batches, the delete operations are still executed within the same long-running transaction. Because of that, Mendix keeps the transaction open and memory usage grows, which results in database memory warnings.
Here is the recommended solution:
1. Create a batch structure inside a Scheduled Event.
2. Retrieve the objects in chunks (for example 3,000 at a time).
3. After each retrieve, immediately perform the delete.
4. Most importantly: after each delete operation, call CommunityCommons → EndTransaction.
Without EndTransaction, Mendix still accumulates the deletions in the same transaction scope. By explicitly ending the transaction after every batch, the platform commits the changes to the database in smaller chunks (e.g. 3,000 records at a time) and releases memory before continuing.
This way, each batch is committed and its memory released before the next one starts. It is the cleanest and most stable approach for large-scale deletions in Mendix Cloud.
If this resolves the issue, please close the topic.
Firstly, you cannot use raw SQL queries inside a Java Action or via the Execute Query option in Mendix Cloud. This is because you would need direct access to the database URL, username, and password. Without these credentials, it is not possible to perform direct deletion activities on the Mendix Cloud database. Mendix Cloud does not allow direct database access for security and platform integrity reasons.
Secondly, OQL (Object Query Language) is designed only for retrieving data from the database. You cannot use OQL to delete, update, or insert records. Based on the query, OQL creates objects in memory at runtime and returns the requested data, but it does not support data manipulation operations like DELETE or UPDATE.
The best option for large-scale deletion is to use a Scheduled Event. You can configure the scheduled event to run at a time when application usage is minimal (for example, overnight).
Inside the scheduled event, implement a batch deletion process according to Mendix best practices. Normally, a batch size of 1,000 objects is recommended. Since you are deleting objects (and not processing heavy logic), you may increase the batch size to 2,000 objects, if performance allows.
For each manual iteration of the batch, use the Start Transaction and End Transaction Java actions available in the Community Commons module. By controlling transactions manually, you can reduce memory pressure on the server. In Mendix, memory usage increases during large transactions. By committing and closing transactions per batch, you ensure that memory is released incrementally instead of being held until the entire process completes.
Using Scheduled Events, combined with controlled batch processing and transaction management, is the safest and most scalable approach for mass deletion in Mendix Cloud.
Hi,
Large-scale delete operations must be handled carefully in Mendix because the platform is not designed to retrieve and delete hundreds of thousands of objects in a single transaction. Doing so can lead to memory issues, timeouts, or crashes.
When you retrieve an entire table and delete it in one go, Mendix tries to load all objects into memory before deleting them. This can exhaust the JVM heap and slow the application down dramatically.
Even batching inside a loop can still cause memory pressure if you hold lists across iterations.
Do not retrieve the entire table at once. Instead, use a batched microflow pattern:

1. Retrieve the next 1,000 records (limit 1000).
2. Delete the list.
3. Commit / end the transaction.
4. Repeat until no more records are found.

This limits the memory footprint and completes the work in smaller transactions instead of one huge one.
This pattern avoids loading huge lists into memory.
Instead of a single long-running microflow, you can split the work across multiple scheduled jobs. This reduces the risk of memory problems and allows concurrency control.
If you truly need to wipe out entire tables and you are not dependent on Mendix delete behavior (events, associations, business logic), the fastest approach is:
- `executeWithConnection` from the Mendix DataStorage API
- a raw `DELETE` or `TRUNCATE TABLE` statement

Examples from the community show raw SQL running much faster than a batched delete in a microflow.
Important: this bypasses delete behavior (no Mendix event handlers), so only use when safe.
Mendix recommends not retrieving more than a few thousand objects at a time, because doing so can exhaust memory.
If you must retrieve data, set a range (limit/offset) on the retrieve. This ensures a smaller memory footprint.
A soft-delete pattern (for example a boolean `IsDeleted` flag) combined with filtering out flagged objects can be a good temporary alternative until the actual delete runs.