We have been seeing this on one of our applications recently too. The redeployment and restart of the environment didn’t work in our case.
We downloaded a backup of the environment’s data (including the files) and restored it to a local Postgres database (see Restore a Backup Locally). A quick query like the one below, executed in the query tool of your Postgres environment, should show you which tables take up a large amount of space. Note that the SQL statement does not show your file size and only covers structured data (i.e. tables); you should be able to see the total file size on the backup itself.
SELECT nspname || '.' || relname AS "relation",
pg_size_pretty(pg_total_relation_size(C.oid)) AS "total_size"
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE nspname NOT IN ('pg_catalog', 'information_schema')
AND C.relkind <> 'i'
AND nspname !~ '^pg_toast'
ORDER BY pg_total_relation_size(C.oid) DESC;
With a bit of troubleshooting, you should be able to see whether any of your tables are unexpectedly large, and whether you can safely delete some of their data.
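If a table does turn out to hold removable data (old log records, for example), note that deleting rows alone will not shrink the files on disk; Postgres only marks the space as reusable. A hedged sketch against the locally restored copy, using a hypothetical `systemlog` table and retention period that are not from this thread:

-- Remove rows older than a year from a hypothetical log table
-- (table name and cutoff are illustrative assumptions).
DELETE FROM public.systemlog
WHERE createddate < now() - interval '1 year';

-- Plain VACUUM only marks freed space as reusable inside the table;
-- VACUUM FULL rewrites the table and returns disk space to the OS,
-- but takes an exclusive lock on the table while it runs.
VACUUM FULL public.systemlog;

On the actual Mendix Cloud database you would normally clean up data from within the app itself rather than via direct SQL; the statements above are for a restored local copy.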
In our case, we will likely need a bigger app container.
You don’t need to take action. But if you really want to do something, a redeploy with restart will free up all this memory.
Hi all, I also get the same alert for three different environments across two applications. I have restarted the applications, but nothing helps. As soon as someone starts using the Test or Acceptance environment, I get an alert that the Database Freeable Memory is critical, and shortly afterwards I receive a recovery alert.
I think it is also related to an update to Mendix Cloud on October 17th:
Mendix Cloud Updates
October 17th, 2022
Improvements
So there was always a Database Freeable Memory issue, but now we get the alert because the threshold is set to 10%?
First actions to take: Inspect the trends graph Database Node Operating System Memory for anomalies and correlate those with application behavior. Resolve by identifying and optimizing long-running database queries, or by ordering more memory.
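To act on the “long-running database queries” part of that advice, one option (a sketch, assuming you can reach the database through the query tool) is to list the currently active queries ordered by how long they have been running, using the standard `pg_stat_activity` view:

-- Show active queries, longest-running first, so slow or stuck
-- statements stand out; excludes idle sessions and this query itself.
SELECT pid,
       now() - query_start AS runtime,
       state,
       left(query, 80) AS query
FROM pg_stat_activity
WHERE state <> 'idle'
  AND pid <> pg_backend_pid()
ORDER BY runtime DESC;

Queries that keep appearing near the top of this list are the ones worth optimizing before ordering more memory.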
We have never had problems with the applications for which we receive this alert, so I think we just need to order more memory?