This error happens when two transactions update overlapping data at the same time and each one ends up waiting for a lock the other already holds. PostgreSQL detects that circular wait and aborts one of the transactions with a deadlock detected error.
In your case, the important part is this line: while updating relation “planning$job”. That means the conflict happens while Mendix is committing or updating the Job entity, most likely from two parallel processes: background processes, scheduled events, task queue executions, or concurrent user actions touching the same record.
So the problem is usually not the database itself. The real issue is the application flow: two microflows or transactions are committing the same object, or related objects, in different orders. For example, one process updates object A and then B, while another updates B and then A. That is the classic deadlock pattern.
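To make the A-then-B versus B-then-A pattern concrete, here is a minimal Java sketch (not Mendix API; the two `ReentrantLock`s are hypothetical stand-ins for the row locks PostgreSQL takes on two records). Both threads grab their first lock, then each tries to take the lock the other holds. The `tryLock` timeout plays the role of PostgreSQL's deadlock detector, so the demo terminates instead of hanging:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantLock;

public class DeadlockSketch {
    // Stand-ins for row locks on two records, e.g. a Job and a related object.
    static final ReentrantLock lockA = new ReentrantLock();
    static final ReentrantLock lockB = new ReentrantLock();
    static final AtomicBoolean circularWait = new AtomicBoolean(false);

    static Thread worker(ReentrantLock first, ReentrantLock second, CountDownLatch ready) {
        return new Thread(() -> {
            first.lock();              // take the "row lock" on the first record
            try {
                ready.countDown();
                ready.await();         // wait until both threads hold their first lock
                // Each thread now wants the lock the other holds: a circular wait.
                if (!second.tryLock(200, TimeUnit.MILLISECONDS)) {
                    circularWait.set(true);  // analogous to PostgreSQL aborting one transaction
                } else {
                    second.unlock();
                }
            } catch (InterruptedException ignored) {
            } finally {
                first.unlock();
            }
        });
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch ready = new CountDownLatch(2);
        Thread t1 = worker(lockA, lockB, ready); // updates A, then B
        Thread t2 = worker(lockB, lockA, ready); // updates B, then A
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("circular wait occurred: " + circularWait.get());
    }
}
```

In PostgreSQL there is no timeout-and-retreat like `tryLock`; the server's deadlock detector picks one victim transaction and rolls it back, which is the error you are seeing.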
The first thing I would check is whether the same planning$job object can be committed from multiple places at nearly the same time. This is especially common with Task Queue, parallel processing, before/after commit logic, or loops that commit many related objects one by one.
A good solution is to reduce concurrent updates on the same object. Try to make sure that only one process updates a specific Job record at a time. If you are using Task Queue or background processing, avoid running multiple jobs in parallel for the same business object.
It also helps to reduce the transaction scope. Do not keep objects open and modified longer than needed. Retrieve the object, apply the change, and commit it as quickly as possible. Long transactions increase the chance of deadlocks.
Another good practice is to keep the commit order consistent everywhere. If multiple microflows update the same set of entities, they should always do it in the same order. That reduces the chance of circular locking.
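One simple way to get a consistent order everywhere is to sort the objects by a stable key before committing them. The sketch below is plain Java, not Mendix API; `JobRef` is a hypothetical stand-in for an object with a stable identifier. Two flows that receive the same objects in different orders end up committing them in the same order, so neither can hold a later row while waiting on an earlier one:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical stand-in for a Mendix object with a stable identifier.
record JobRef(long id, String name) {}

public class CommitOrder {
    // Sort by a stable key so every flow locks rows in the same order.
    static List<JobRef> inCommitOrder(List<JobRef> objects) {
        return objects.stream()
                .sorted(Comparator.comparingLong(JobRef::id))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<JobRef> flow1 = List.of(new JobRef(7, "B"), new JobRef(3, "A"));
        List<JobRef> flow2 = List.of(new JobRef(3, "A"), new JobRef(7, "B"));
        // Both flows now commit id 3 before id 7, regardless of input order.
        System.out.println(inCommitOrder(flow1).equals(inCommitOrder(flow2)));
    }
}
```

In a microflow the equivalent is sorting the retrieved list on a stable attribute (for example the ID) before the commit, in every flow that touches those entities.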
If you are committing inside loops, that can also make things worse. In those cases, review whether you can collect changes first and commit in a more controlled way, or split the logic so that different threads are not touching the same records.
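The collect-then-commit idea looks like this in sketch form. Again this is plain Java as an analogy, not Mendix API; `commitAll` stands in for a single "Commit object(s)" action on a list. Committing once after the loop means one short lock window instead of a hundred small transactions interleaving with other flows:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchCommit {
    static int commitCalls = 0;

    // Stand-in for one commit action on a whole list of changed objects.
    static void commitAll(List<String> objects) { commitCalls++; }

    public static void main(String[] args) {
        List<String> changed = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            changed.add("job-" + i);   // apply the change, but do not commit yet
        }
        commitAll(changed);            // one commit, one short lock window
        System.out.println(commitCalls);
    }
}
```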
So the solution is not to “fix PostgreSQL,” but to find where the same planning$job record is being updated concurrently and redesign that flow to avoid parallel commits on the same data. I would start by checking scheduled events, task queues, parallel user actions, and any microflows with commit events around the Job entity.
If this resolves your issue, please mark it as accepted.