Database node operating system memory

Hi, I have some questions about the "Database node operating system memory" metric, which is part of the metrics in Mendix Cloud v4. Can anyone tell me what it contains in general? Persistent entities, non-persistent entities, session data? The documentation did not help my understanding:

"The memory graph shows the distribution of operating system memory that is available for this server. The most important part of this graph is the cache section. This type of memory usage contains parts of the database storage that have been read from disk earlier. It is crucial to the performance of an application that parts of the database data and indexes that are referenced a lot are always immediately available in the working memory of the server, at the cache part. A lack of disk cache on a busy application will result in continuous re-reads of data from disk, which takes several orders of magnitude more time, slowing down the entire application."

The cache is clearly important, but where is it? Is it part of the used memory? As you can see from the picture I included, I think I run into problems right away: all freeable memory is used up and the application starts using swap. I am not sure this is the only reason the application becomes slow/unresponsive, but it seems plausible that it is a contributing cause.
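To illustrate where the cache "lives" in such a graph: on a typical Linux server the disk cache is counted as memory in use, but it is freeable, so the number that matters is free plus cache. A minimal sketch of that accounting, with made-up numbers purely for illustration:

```python
# Toy Linux memory accounting (values in MB, invented for illustration).
total = 4096
used_by_processes = 1024   # resident memory of the app / database processes
page_cache = 2560          # disk cache: shows up as "used" but is freeable

free = total - used_by_processes - page_cache
available = free + page_cache  # what the OS can hand out before swapping

print(free, available)
```

So a graph can show almost no "free" memory while the server is still healthy; trouble starts when the freeable cache itself shrinks and swap gets used, as in the picture.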
1 answer

Your application does indeed need more database memory. You have two types of graphs: Database node operating system memory and Application node operating system memory. The Mendix runtime uses the application node memory, and the Postgres database uses the database node memory. The database node memory is mostly used by indexes and earlier reads (i.e. the cache). In your case you can see that, due to too little memory, the database is swapping its cache out to the hard disk, which makes your application slow. You might also want to check your application node memory, because you can shift those memory settings around: if you have more than enough application node memory you might reduce it and give it to the database, and vice versa.
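If you want to confirm on the database side that the cache is too small, Postgres itself tracks how often blocks are served from its buffer cache versus read from disk. A sketch, assuming a standard Postgres setup (the query uses the standard pg_statio_user_tables view; how you connect and run it is up to you):

```python
# Buffer-cache hit ratio from Postgres block statistics.
# The SQL is standard Postgres; run it with whatever client you use.
SQL = """
SELECT sum(heap_blks_hit)  AS hits,
       sum(heap_blks_read) AS reads
FROM pg_statio_user_tables;
"""

def hit_ratio(hits, reads):
    """Fraction of block requests served from cache rather than disk."""
    total = hits + reads
    return hits / total if total else 1.0

# Example with made-up numbers: 99 cache hits per 1 disk read.
print(hit_ratio(99, 1))
```

A ratio that is consistently well below ~0.99 on a busy application is a sign the working set no longer fits in memory.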

You might also check your application to see why so much data is in the cache: is everything really needed? A mistake often made in long microflows, for instance, is not clearing lists once they are no longer needed.
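A Python stand-in (not real microflow code) for that mistake: a long-running flow that keeps appending to one list holds every object in memory until the flow ends, while clearing the list after each chunk of work lets the memory be reclaimed.

```python
def leaky_flow(items):
    seen = []
    total = 0
    for item in items:
        seen.append(item)      # never cleared: grows for the whole flow
        total += item
    return total

def tidy_flow(items, chunk=100):
    buffer = []
    total = 0
    for item in items:
        buffer.append(item)
        if len(buffer) >= chunk:
            total += sum(buffer)
            buffer.clear()     # done with these objects: drop the references
    total += sum(buffer)       # flush the final partial chunk
    return total
```

Both flows produce the same result, but the second one keeps at most one chunk of objects alive at a time.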

And there are clearly peaks. Do they always occur at the same time? What is happening then? Are batches used, and what does lowering the batch size do? But if everything is as it should be, sometimes you just need more memory...
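On why lowering the batch size helps with peaks: a toy illustration (again Python, not Mendix) measuring peak allocation with the standard tracemalloc module. Materialising all the work at once holds everything in memory simultaneously; small batches keep the peak bounded.

```python
import tracemalloc

def peak_kib(fn):
    """Run fn and return the peak traced allocation in KiB."""
    tracemalloc.start()
    fn()
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak // 1024

N = 200_000

def one_big_batch():
    data = [i * 2 for i in range(N)]   # all N results live at once

def small_batches(batch_size=1_000):
    for start in range(0, N, batch_size):
        batch = [i * 2 for i in range(start, min(start + batch_size, N))]
        # batch goes out of scope each iteration and can be reclaimed

print(peak_kib(one_big_batch), ">", peak_kib(small_batches))
```

The same total work is done in both cases; only the peak memory differs, which is exactly what the memory graph's spikes reflect.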