You can use OQL to do this. Here is an example:
// assumes: import messagevalidation.proxies.SourceData;
String sql = "SELECT DISTINCT tbl_s." + SourceData.MemberNames.GridArea.toString()
    + ", tbl_s." + SourceData.MemberNames.GridAreaName.toString()
    + " FROM " + SourceData.getType() + " AS tbl_s"
    + " WHERE iets='iets'";
IDataTable table = Core.retrieveOQLDataTable(this.getContext(), sql);
List<? extends IDataRow> rowList = table.getRows();
for (IDataRow row : rowList) {
    // cast the column values to String before putting them in the parameter map
    String gridArea = (String) row.getValue(this.getContext(), SourceData.MemberNames.GridArea.toString());
    String gridAreaName = (String) row.getValue(this.getContext(), SourceData.MemberNames.GridAreaName.toString());
    HashMap<String, String> params = new HashMap<String, String>();
    params.put("GridArea", gridArea);
    params.put("GridAreaName", gridAreaName);
    Core.execute(this.getContext(), "MessageValidation.ProcessGridArea", params);
}
The query is created using SourceData.MemberNames, which comes from the generated proxy file for the metaobject SourceData. I personally always reference attributes this way: when you rename any of these attributes you'll get a compile error. If you don't use the proxy, renaming the attribute will instead produce a runtime exception.
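To make the trade-off concrete, here is a plain-Java illustration (no Mendix classes); the enum below is our own stand-in for the proxy's MemberNames, not the generated code itself:

```java
public class ProxyDemo {
    // stand-in for the generated proxy enum
    enum MemberNames { GridArea, GridAreaName }

    // Compile-checked: renaming GridArea in the enum breaks this line at build time.
    static String safeColumn() {
        return MemberNames.GridArea.toString();
    }

    // Not checked: after a rename this string stays stale until it fails at runtime.
    static String fragileColumn() {
        return "GridArea";
    }
}
```

Both methods return the same column name today; only the first one is protected against refactoring.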
The ? extends IDataRow is part of a complex interface from the Mendix core (xas); don't bother trying to understand how and why it works. You don't have to for this action ;-)
In the WHERE constraint you can use your own attributes (preferably via the proxies); just replace the iets='iets' placeholder ("iets" is Dutch for "something").
Finally, the last line of code, Core.execute(...), calls the microflow MessageValidation.ProcessGridArea. Just create a microflow with two input parameters (or more if you want) and add all the parameters to the params map.
Keep in mind that both the microflow name and the parameter names are case-sensitive.
I don't know if what we've done helps at all, but here it is.
We read in a number (50,000, say) of job records from a file and store these in the database using Mendix API batch processing. At the outset, we create a "JobFile" record with the filename, date and time, number of records etc., so that we have a record of the headline facts about the batch that was processed. As we process each job we associate (as part of the batch processing) the job with the JobFile record relating to the file it was read from.
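The Mendix-independent core of that batching idea is just splitting the 50,000 records into fixed-size chunks and committing each chunk in turn. A minimal sketch (the partition helper is our own, not part of the Mendix API):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchDemo {
    // Split a large list into batches of at most batchSize elements,
    // so each batch can be committed in one go.
    static <T> List<List<T>> partition(List<T> items, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            batches.add(items.subList(i, Math.min(i + batchSize, items.size())));
        }
        return batches;
    }
}
```

In the real implementation each batch element would be an instantiated Job object associated with its JobFile before the commit.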
Then, if we want say to back out an entire batch, we go to a list view which shows the JobFiles, select the one we want to delete, and delete it. This calls a microflow which deletes all the jobs (and other entities) associated with the selected JobFile, then deletes the JobFile itself. Again, that is done in a batch process. (If the cardinality did not demand a batch, we could of course just delete a JobFile and have the delete cascade to all the Jobs by setting the delete behaviour).
If we wanted to delete a JobFile and its Jobs automatically according to some rule, we could order the JobFiles by date, then pick (say) the second oldest and any that are older than that, and run the delete microflow on them.
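The "second oldest and anything older" selection rule can be sketched on plain dates; in Mendix this would be a retrieve sorted on the JobFile date attribute, but the names below are our own:

```java
import java.time.LocalDate;
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;

public class RetentionDemo {
    // Pick the second-oldest file date and every date at or before it;
    // those are the JobFiles the delete microflow would run on.
    static List<LocalDate> filesToDelete(List<LocalDate> fileDates) {
        List<LocalDate> sorted = fileDates.stream().sorted().collect(Collectors.toList());
        if (sorted.size() < 2) return Collections.emptyList();
        LocalDate cutoff = sorted.get(1); // second oldest
        return sorted.stream().filter(d -> !d.isAfter(cutoff)).collect(Collectors.toList());
    }
}
```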
Incidentally, deleting large numbers by batch processing takes a long time, because you have to instantiate each object before you put it on the delete batch. Instead, we wrote a SQL script to delete the objects we wanted to delete and their associations. This may need to be modified if the Mendix database schema changes, but it performs far faster than the batch implementation.
XPath has no notion of distinct queries. To get a distinct result you need to execute an OQL query, which can be run from a Java action. But can't you just loop over the objects and pick the ones with the highest import numbers (the case is not completely clear to me)?
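That loop-and-pick alternative can be sketched in plain Java; the Job record below is a stand-in for the retrieved Mendix objects, keeping only the highest import number per grid area:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DistinctDemo {
    // stand-in for the retrieved object: one distinct key plus the number to maximise
    record Job(String gridArea, int importNumber) {}

    // Keep, per grid area, only the job with the highest import number;
    // the key set of the result is the "distinct" list of grid areas.
    static Map<String, Job> latestPerArea(List<Job> jobs) {
        Map<String, Job> latest = new HashMap<>();
        for (Job j : jobs) {
            latest.merge(j.gridArea(), j,
                (existing, incoming) ->
                    existing.importNumber() >= incoming.importNumber() ? existing : incoming);
        }
        return latest;
    }
}
```

This trades the OQL DISTINCT for an in-memory pass, which is fine for modest result sets but not for very large ones.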
Where should I put the code in Mendix?