There is a Mendix Java API available that will give you a JDBC connection to the app database, which enables you to run SQL queries against it. More info here: https://docs.mendix.com/howto/extensibility/howto-datastorage-api/#5-retrieving-objects-using-sql
Be careful though: you can easily break existing data, for example by leaving associations pointing at deleted records.
For future reference, this is the Java code I ended up with, since I had to do a statement rather than a query:
// Needed imports: java.sql.PreparedStatement, java.sql.SQLException,
// com.mendix.core.Core, com.mendix.logging.ILogNode,
// com.mendix.systemwideinterfaces.MendixRuntimeException

private final ILogNode logger = Core.getLogger(this.getClass().getName());

logger.info("executeAction: " + this.SQL);
long rowsAffected = Core.dataStorage().executeWithConnection(connection -> {
    long rowCount = -1;
    // try-with-resources closes the statement even when executeUpdate fails
    try (PreparedStatement stmt = connection.prepareStatement(this.SQL)) {
        rowCount = stmt.executeUpdate();
    } catch (SQLException e) {
        logger.error("Failed to execute SQL statement: " + e.getMessage());
        throw new MendixRuntimeException(e);
    }
    return rowCount;
});
return rowsAffected;
What about a scheduled event with a queue job in it, and then working your way through all the objects in batches? That way it won't run as one long microflow but as separate queue jobs of, say, 1000 objects each.
Maybe also check that there is no delete behaviour configured on the objects to be removed, to speed things up.
Not ideal, but then removing 10M objects every day is not ideal either, if you ask me lol.
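The batching idea above can be sketched in plain Java. The queue-job and Mendix retrieval calls are left out, and the class and method names (BatchPlanner, planBatches) are illustrative, not Mendix API; the batch size of 1000 is just the example number from the suggestion:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchPlanner {
    // Split a total object count into {offset, limit} batches of at most batchSize,
    // so each queue job handles one small chunk instead of one long microflow.
    static List<long[]> planBatches(long totalObjects, int batchSize) {
        List<long[]> batches = new ArrayList<>();
        for (long offset = 0; offset < totalObjects; offset += batchSize) {
            long limit = Math.min(batchSize, totalObjects - offset);
            batches.add(new long[] { offset, limit });
        }
        return batches;
    }

    public static void main(String[] args) {
        // 10M objects in chunks of 1000 -> 10,000 queue jobs
        List<long[]> batches = planBatches(10_000_000L, 1000);
        System.out.println(batches.size());
        long[] last = batches.get(batches.size() - 1);
        System.out.println(last[0] + " " + last[1]);
    }
}
```

Each queue job would then retrieve and delete only its own offset/limit window, so a failed or timed-out job affects at most one chunk.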
Can anyone please provide the .java file?
Optimizing the process of clearing such a large database table is a common challenge! If TRUNCATE TABLE works well locally, it's definitely one of the fastest options, since it bypasses per-row logging. However, for a more native Mendix approach, you might consider using scheduled jobs to divide the deletions into smaller, manageable chunks, which can improve performance and reduce the risk of hitting resource limits.
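If TRUNCATE TABLE is not an option, the chunked-deletion idea can be sketched independently of any database: keep issuing a bounded delete until a batch removes fewer rows than the batch size. The actual DELETE statement is abstracted behind a function here so the loop logic stands alone; the names (ChunkedDelete, deleteAll, deleteBatch) are illustrative, not Mendix or JDBC API:

```java
import java.util.function.LongUnaryOperator;

public class ChunkedDelete {
    // Repeatedly run a bounded delete (e.g. "DELETE ... WHERE id IN (SELECT id ... LIMIT ?)")
    // until a batch removes fewer rows than the batch size, i.e. the table is drained.
    // deleteBatch takes the batch size and returns the number of rows actually removed.
    static long deleteAll(LongUnaryOperator deleteBatch, long batchSize) {
        long total = 0;
        long removed;
        do {
            removed = deleteBatch.applyAsLong(batchSize);
            total += removed;
        } while (removed == batchSize);
        return total;
    }

    public static void main(String[] args) {
        // Simulate a table with 2,500 rows; each "delete" removes up to batchSize rows.
        long[] remaining = { 2500 };
        long total = deleteAll(batch -> {
            long n = Math.min(batch, remaining[0]);
            remaining[0] -= n;
            return n;
        }, 1000);
        System.out.println(total);  // 2500
    }
}
```

In a Mendix Java action, the lambda body would instead run a bounded DELETE through the Data Storage API connection, committing after each batch so locks and transaction logs stay small.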