When importing 100,000+ records from Excel, performance becomes critical. Committing every record one by one can significantly slow down the process, because each commit triggers database operations and possibly event handlers.
In my case, I experimented with a custom Java action using Apache POI to read the Excel file and implemented batch processing to improve performance.
This is something I tried in my task — it might be useful for your scenario as well.
For importing 100,000 rows from Excel, the main thing is to avoid processing everything in one big transaction. That can easily cause memory issues or slow down the system.
A better approach is to process the data in batches. For example, you could handle 500–2000 records at a time, commit them, and then continue with the next batch. This keeps the process more stable and easier to recover if something fails.
It also helps to keep the import logic as simple as possible. Try to avoid heavy microflows, complex validations, or extra database retrieves for every row, because those can slow things down a lot.
Another good practice is to create objects in memory and commit them in batches, instead of committing every single row individually. Committing one by one is usually much slower.
If you are using the Excel Importer module, it can handle large files, but with 100k rows you still need to be careful about transaction size and memory usage.
In general, the best approach is to read the file in chunks, process the data in batches, and keep the logic lightweight so the import runs smoothly.
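The batch-commit idea above can be sketched in plain Java. This is a minimal sketch, not Mendix-specific code: the `commitAction` callback is a stand-in for the real persistence call (in a Mendix Java action that would typically be a `Core.commit` on the batch).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class BatchCommitter {

    // Commits records in fixed-size batches instead of one by one.
    // commitAction is a placeholder for the real persistence call.
    // Returns the number of batches committed.
    public static <T> int commitInBatches(List<T> records, int batchSize,
                                          Consumer<List<T>> commitAction) {
        int batches = 0;
        for (int start = 0; start < records.size(); start += batchSize) {
            int end = Math.min(start + batchSize, records.size());
            // subList is a view, so no extra copy of the data is made here.
            commitAction.accept(records.subList(start, end));
            batches++;
        }
        return batches;
    }
}
```

With 100k records and a batch size of 1000, this results in 100 commit round-trips instead of 100,000, which is where most of the speed-up comes from.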
If this helps resolve the issue, please close the topic.
As far as I know, there is no standard way.
Mendix recommends chunking the data into smaller files and uploading them individually, or increasing the container size.
So we need to write custom logic if neither of the above options is possible.
The logic would be a microflow (backed by a Java action using the Apache POI library jars) that reads the Excel file and processes the first 5,000 or 10,000 records (as much as your server supports). Run the same microflow iteratively until the end of your Excel sheet.
Converting the Excel file to CSV would give a further performance advantage.
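The iterative chunked reading described above can be sketched generically. This is an illustrative sketch, not the actual module code: the `rows` iterator is an assumption standing in for whatever row source you have (with Apache POI it could be wired to a sheet's row iterator), and an empty result signals the end of the sheet.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class ChunkedReader {

    // Pulls at most chunkSize rows from the iterator per call.
    // An empty result means the end of the data has been reached.
    public static <T> List<T> nextChunk(Iterator<T> rows, int chunkSize) {
        List<T> chunk = new ArrayList<>(chunkSize);
        while (rows.hasNext() && chunk.size() < chunkSize) {
            chunk.add(rows.next());
        }
        return chunk;
    }
}
```

The driver loop then simply calls `nextChunk(it, 5000)` until it returns an empty list, processing and committing each chunk before fetching the next one, so only one chunk is held in memory at a time.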
Hi,
1. Importing 100,000 records from Excel into Mendix is definitely possible, but it requires some optimization to avoid memory and performance issues.
2. The most common and efficient approach is to use the Excel Importer module, but with a few important optimizations.
3. First, make sure you disable commits per row. Committing each record individually causes a large performance overhead. Instead, collect objects in a list and commit them in batches (for example, every 500–1000 records).
4. Second, avoid heavy logic during the import. If the import mapping calls microflows or validations for every row, performance drops significantly. Try to keep the import mapping simple and run additional processing after the import completes.
5. Third, ensure that indexes exist on attributes used in lookups. If the import needs to retrieve related objects (for example, by code or ID), missing indexes can make the import extremely slow.
6. Another useful technique is to temporarily disable event handlers if they are not required for the import. Before/after commit logic executed for every object can drastically increase processing time.
Also keep an eye on memory usage. When importing large files, avoid keeping the entire dataset in memory. Processing records in smaller batches (500–1000) helps prevent JVM memory pressure.
For very large imports or recurring integrations, many teams convert the Excel file to CSV and use a streaming or batch processing approach instead of loading everything at once.
7. In practice, with batch commits and minimal logic inside the import, importing around 100k records is usually handled within a few minutes without issues in Mendix.
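Point 5 above (avoiding a database retrieve per row) is often solved by building an in-memory index once, before the loop. Here is a minimal generic sketch of that idea; the `indexBy` helper and its names are hypothetical, not part of any Mendix module, and the list of objects would come from a single up-front retrieve.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class LookupCache {

    // Builds an in-memory index once (e.g. related objects keyed by code),
    // so each imported row resolves its related object with a map lookup
    // instead of a separate database retrieve.
    public static <K, V> Map<K, V> indexBy(List<V> objects, Function<V, K> keyFn) {
        Map<K, V> index = new HashMap<>(objects.size() * 2);
        for (V obj : objects) {
            index.put(keyFn.apply(obj), obj);
        }
        return index;
    }
}
```

For 100k rows this turns 100,000 retrieves into one retrieve plus 100,000 constant-time map lookups.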
Hi Naveen,
Upload Excel File
  ↓
Parse with Excel Importer → store raw data in ImportRow entities
  ↓
Background microflow (async)
  ↓
Batch loop (500 records) → create domain objects → commit batch
  ↓
Progress logging / status update