inconsistent response: committed or rolled back object is missing from response and cache

Hi there, I’m working on a project where we have a system developed in JavaScript that runs a series of calculations. When these calculations are done, there are about 3000 objects that need to be committed: 1 report object and 3 other groups of objects that get associated with that report object. To avoid auto-commit errors, I implemented it like so:

1. Create the Report
2. Create the 3 groups of data points (these make up the remainder of the objects that need to be committed)
3. Associate them with the Report
4. Commit the Report
5. Commit the 3 groups of data points
6. Call an after-calculation microflow

I’m using the Mendix 9 client API to do this, most importantly ‘mx.data.create’ and ‘mx.data.commit’. ‘mx.data.commit’ allows you to pass in an array of object GUIDs to commit them all at the same time, so that’s what I do for each group of data points. Maybe that is the problem and I need to commit them in smaller batches over time; I’m not sure.

Unfortunately, I haven’t been able to find the culprit. Every time I try to diagnose this issue, it disappears. It’s infrequent and inconsistent. None of the stack traces I look at trace back to my JavaScript action, so I don’t know specifically where it’s coming from, but it causes the JS action to halt before it completes. Because it’s inconsistent, I also don’t know what triggers it.

Ideally, yes, we wouldn’t be managing this in JavaScript. We may be able to move this to Java actions or microflows with requests in the future, but right now we are pressed for time and the current implementation works well aside from this.

Does anyone have advice or experience with this issue that they could share with me? Thanks
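For reference, here is a simplified sketch of the create/associate/commit pattern described above. The entity, association, and variable names are placeholders rather than our actual model, and the bulk commit here uses the `mxobjs` option of the client API:

```javascript
// Simplified sketch (placeholder entity/association names).
function createDataPoint(reportGuid) {
    return new Promise((resolve, reject) => {
        mx.data.create({
            entity: "MyModule.DataPoint", // placeholder entity
            callback: (obj) => {
                // Associate the new data point with the report (placeholder association name).
                obj.addReference("MyModule.DataPoint_Report", reportGuid);
                resolve(obj);
            },
            error: reject
        });
    });
}

// Create one group of data points, then commit the whole group in a single call.
async function createAndCommitGroup(report, count) {
    const objs = await Promise.all(
        Array.from({ length: count }, () => createDataPoint(report.getGuid()))
    );
    await new Promise((resolve, reject) =>
        mx.data.commit({ mxobjs: objs, callback: resolve, error: reject })
    );
    return objs;
}
```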
asked
1 answer

As it would turn out, that was the problem. There was an important little detail I missed in the error message: “ERR:INSUFFICIENT_RESOURCES”. The issue appears to have been memory consumption. Creating and committing so many objects at once seemed to overload the browser’s memory, so it would sometimes fail to store the returned MxObjects after they were created in the database. This would also explain why it was so inconsistent: the error tended to appear when memory usage reached around 92%, but many times the action would succeed after getting up to 89% or so. Chrome’s DevTools also appears to trigger some garbage collection and reduce memory usage, which would explain why I wasn’t able to reproduce the error while trying to debug it.

I also ran into another error: ‘Failed to fetch’. I believe this one comes from ‘mx.data.create’, while ‘inconsistent response’ comes from ‘mx.data.commit’. My guess is that mx.data.create fails to make its fetch request when there is not enough memory to safely store the returned object. I wouldn’t expect mx.data.commit to fail due to memory, but it’s possible that the browser discards some of the stored data when memory usage is too high, which would prevent those objects from being sent in the request and lead to the ‘missing from response and cache’ error.

I implemented a batching system that creates and commits the data in much smaller batches, only 10 or so objects at a time, and that appears to have resolved the issue.
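As a rough illustration of the batching approach (the batch size and helper names are just examples, and this assumes the MxObjects have already been created):

```javascript
// Commit the objects sequentially in small batches so the browser never has to
// hold too many in-flight requests and results in memory at once.
const BATCH_SIZE = 10; // example value

function commitBatch(objs) {
    return new Promise((resolve, reject) =>
        mx.data.commit({ mxobjs: objs, callback: resolve, error: reject })
    );
}

async function commitInBatches(allObjs) {
    for (let i = 0; i < allObjs.length; i += BATCH_SIZE) {
        // Wait for each batch to finish before starting the next one,
        // giving the browser a chance to free memory in between.
        await commitBatch(allObjs.slice(i, i + BATCH_SIZE));
    }
}
```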

I did, however, run into one more issue: ‘Unable to find objects for guids’. When searching the database manually, I could confirm that the entire batch was missing from the database. I’m not sure why, but it seems the server can sometimes lose track of objects if you create and commit them in quick succession like this. I say ‘sometimes’ because it’s also inconsistent; most of the time the calculations complete without error. I worked around it by creating new objects and copying the data over from the objects that failed to commit whenever that error occurs.
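Roughly, that fallback looks like the sketch below. It is only illustrative: the attribute copying is simplified, and any references/associations would need to be re-set separately.

```javascript
// Sketch of the recovery path: if a batch commit fails, recreate the objects,
// copy their attribute values across, and commit the fresh copies instead.
function recreateObject(failedObj) {
    return new Promise((resolve, reject) => {
        mx.data.create({
            entity: failedObj.getEntity(),
            callback: (freshObj) => {
                // Copy plain attribute values; associations would need addReference as well.
                failedObj.getAttributes().forEach((attr) => {
                    freshObj.set(attr, failedObj.get(attr));
                });
                resolve(freshObj);
            },
            error: reject
        });
    });
}

async function commitBatchWithFallback(batch) {
    try {
        await commitBatch(batch); // commitBatch from the batching sketch above
    } catch (e) {
        const freshObjs = await Promise.all(batch.map(recreateObject));
        await commitBatch(freshObjs);
    }
}
```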

answered