Is there a way to fine-tune the logging of an application?

0
We have an application that runs out of Java heap space memory every once in a while. There are a lot of large microflows handling big sets of objects that are called through scheduled events. Most, if not all, already use batches to work through everything, but the heap error has happened 3 times over the past 2 weeks.

Is there a way to fine-tune the logging or the microflows so that we can pinpoint more accurately what is causing the issue? The objects in cache before the crash yield no conclusive evidence, since the application crashed with 5k, 10k and 20k objects in cache. Increasing the Java heap memory won't necessarily solve the problem.

(Screenshots attached: 17th of November and 20th of November.)
asked
6 answers
7

You have a memory leak and high object usage at midnight. Here's a suggestion for each:

Memory Leak

It looks like you have a memory leak during the day. Here's a module you can use to dump your object list to the logs in a scheduled event. If you run this every hour during the day, you'll get a picture of which types of objects are getting stuck in memory and which user(s) retrieved them. Also check this article about how and when objects are automatically garbage collected in Mx5 (and, more importantly, when they are not): https://docs.mendix.com/refguide5/transient-objects-garbage-collecting

Usage Spike

The spikes for the large microflows are a bit scary, especially if things are batched as you say. I'd expect the spikes to be smaller unless your batch size is actually 10k records. If it is not, try manually emptying any in-memory lists you create before the next loop iteration.
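
As a rough illustration (plain Java, not a specific Mendix API; the Record type and the fetchBatch/process names are made up), this is the loop shape that keeps memory bounded: only one batch is held in memory at a time, and the list is cleared before the next iteration.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSketch {

    static final int BATCH_SIZE = 500; // keep batches small enough to fit comfortably in heap

    // Hypothetical record type and data-access stubs, standing in for the
    // domain objects and retrieve actions of the scheduled event.
    static class Record { }

    static List<Record> fetchBatch(int offset, int limit) { return new ArrayList<>(); }

    static void process(List<Record> batch) { /* handle one batch */ }

    public static void run(int totalRecords) {
        List<Record> batch = new ArrayList<>();
        for (int offset = 0; offset < totalRecords; offset += BATCH_SIZE) {
            batch.addAll(fetchBatch(offset, BATCH_SIZE)); // retrieve only the current batch
            process(batch);
            batch.clear(); // drop references so the previous batch can be garbage collected
        }
    }
}
```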

 

answered
6

When your application has a gradually filling cache, use the Cache tab in the Mendix Cloud to see which objects are causing the issue. That gives you key data on which object types are taking up the space. I suspect you will want a 'clear list' activity in there somewhere.

You can increase your custom logging manually in the microflows by adding log activities. In those activities you can include details like the start/stop time of the batch loop and the sizes of your lists.
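
If you prefer to do this from a Java action instead of microflow log activities, a minimal sketch could look like the following. It assumes the Mendix Core logging API (Core.getLogger / ILogNode); the "NightlyBatch" log node name and the helper signature are made up for illustration.

```java
import com.mendix.core.Core;
import com.mendix.logging.ILogNode;

public class BatchLogging {

    // Custom log node; the name shows up as the node column in the Mendix log.
    private static final ILogNode LOG = Core.getLogger("NightlyBatch");

    // Call this at the end of every batch iteration with the batch number,
    // the size of the list that was processed, and the start time of the batch.
    public static void logBatch(int batchNumber, int listSize, long batchStartMillis) {
        long durationMillis = System.currentTimeMillis() - batchStartMillis;
        LOG.info("Batch " + batchNumber + ": processed " + listSize
                + " objects in " + durationMillis + " ms");
    }
}
```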

answered
3

This doesn't look like a Java heap space problem to me, since your used Java object heap never really exceeds 400 MB.

My gut feeling is that the duration of a scheduled event exceeds its interval time, resulting in queuing and finally the runtime falling over.

answered
3

I don't know if the log can be made more detailed/specific; you might want to ask around on a Java forum.

You can, however, add a log message to some of the microflows you suspect.

The spikes always occur just before midnight. Does that make any scheduled process suspect?

answered
3

Jim,

I have a bit of a different approach to your question. Is there a reason why you can't update the quantity in stock as items are depleted or replenished, instead of processing all items in a daily event?

I have found this approach can address these kinds of memory problems by eliminating them: if you do real-time stock updating, you won't need to process all items each day. A bonus to this approach is that your in-stock numbers are always accurate, so you don't need code and checks to verify that something really is in stock because previous transactions might not yet be reflected in the stock quantities.

Mike

 

answered
0

Hi Jim,

"There are a lot of large microflows handling big sets of objects that are called through scheduled events. Most, if not all, are already using batches"

Some questions about the batches:
- Are you using begin and end transaction in the batches?
- Do you see the objects being created in the database for every batch (not only at the end of the microflow)?
- Do you clear lists that are no longer needed after every batch?

If you want to investigate Java heap memory issues, I would advise contacting Mendix support to create a heap dump at the moment the app runs out of memory. This heap dump can be analysed with, for example, MAT (the Eclipse Memory Analyzer).
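
If you run the runtime on a JVM you control yourself (rather than in the Mendix Cloud, where support has to make the dump for you), you can also trigger a heap dump programmatically. Below is a minimal sketch using the standard HotSpot diagnostic MXBean; the output path is just an example.

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.IOException;
import java.lang.management.ManagementFactory;

public class HeapDumper {

    // Writes an .hprof file that can be opened in MAT.
    public static void dump(String filePath) throws IOException {
        HotSpotDiagnosticMXBean diagnostics =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // live = true: dump only reachable objects, which keeps the file smaller
        diagnostics.dumpHeap(filePath, true);
    }

    public static void main(String[] args) throws IOException {
        dump("/tmp/app-heap.hprof"); // hypothetical output path
    }
}
```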

If you have APM installed, you could also create a measurement that monitors heap memory usage. When the heap memory goes over a threshold (for example 80%), generate a trap with trace logging. This gives insight into what is running inside your app at the moment the application reached 80% of heap memory, which could help explain the spikes in heap memory usage.
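
If you don't have APM, a simple alternative is to poll the JVM's own heap statistics from a scheduled Java action and log a warning above the threshold. A minimal sketch using the standard MemoryMXBean (the 80% threshold and the logging destination are assumptions):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapWatcher {

    private static final double THRESHOLD = 0.80; // warn above 80% of max heap

    // Returns true (and prints a warning) when heap usage crosses the threshold.
    public static boolean checkHeap() {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        // getMax() reflects the configured maximum heap size (-Xmx)
        double usedFraction = (double) heap.getUsed() / heap.getMax();
        if (usedFraction > THRESHOLD) {
            System.out.println(String.format(
                    "Heap at %.0f%% (%d MB of %d MB used)",
                    usedFraction * 100,
                    heap.getUsed() / (1024 * 1024),
                    heap.getMax() / (1024 * 1024)));
            return true;
        }
        return false;
    }
}
```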

Regards
Michel

answered