Error when fetching large amounts of data from the DB

For example, this happens when you import more than 300,000 columns of data (importing via SQL). Say we have 10 rows and we know we only need 2 of them (A and B), so we're considering how to import those selectively. I'm thinking of using XPath to filter them, but if you have any other ideas, I'd love to hear them. Or is there a way to import large amounts of data safely and without errors?
asked
1 answer

Hello Woo Ju Kim,

From your description, it seems you're dealing with a large-volume data import, specifically "more than 300,000 columns." Given the context, I assume you meant "rows" rather than "columns," since most relational databases limit the number of columns per table, often to 1024 or fewer.

In scenarios like this, efficiency and precision in data handling are paramount. Based on my experience with Mendix, particularly with the Database Connector module, there are effective ways to manage large-scale data imports while ensuring system performance and stability.

For selectively importing specific rows, XPath can indeed be used if you're working within an XML context or need to filter data by specific criteria directly at the source. When dealing with SQL databases, however, the Database Connector's ability to execute custom queries allows more direct and efficient retrieval, because you specify exactly which rows (and columns) you want to import. For example, if you're interested in only the 2 rows A and B, a well-crafted SQL query can retrieve just those rows based on your criteria, as sketched below.
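A minimal sketch of such a targeted query, run through the Database Connector's Execute query action. The table and column names (source_table, record_key, col_a, col_b) are hypothetical; substitute your actual schema:

```sql
-- Fetch only the two records needed instead of scanning the whole table.
-- "source_table" and "record_key" are hypothetical names for illustration.
SELECT record_key, col_a, col_b
FROM source_table
WHERE record_key IN ('A', 'B');
```

With an index on record_key, this lookup stays fast even when the table holds hundreds of thousands of rows.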

To further optimize the process and ensure safe and error-free imports of large datasets, consider implementing pagination or batch processing techniques. These methods can significantly reduce memory consumption and mitigate potential timeouts or performance bottlenecks by dividing the data import process into more manageable chunks.
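As an illustration, keyset pagination keeps each batch bounded. The sketch below reuses the hypothetical source_table/record_key names and assumes a batch size of 10,000; a microflow loop would bind the parameter to the last key seen and repeat until a batch comes back short:

```sql
-- Keyset pagination sketch: fetch the next batch of rows after the last key
-- processed. Repeat until fewer than 10,000 rows are returned.
SELECT record_key, col_a, col_b
FROM source_table
WHERE record_key > ?          -- bind the last record_key from the previous batch
ORDER BY record_key           -- a stable, indexed ordering keeps batches consistent
FETCH FIRST 10000 ROWS ONLY;  -- SQL:2008 syntax; use LIMIT 10000 on MySQL/PostgreSQL
```

Keyset pagination also avoids the cost of large OFFSET values, which degrades as the offset grows.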

In summary, while XPath provides a viable option for filtering data, utilizing the Database Connector to execute targeted SQL queries offers a more direct and potentially more efficient pathway for managing large-scale data imports in Mendix. Careful planning and strategic data handling techniques are key to successfully importing large amounts of data without compromising system integrity or performance.

Best regards,

Rob de Leeuw

answered