First of all: this problem only occurs in 2.3. In 2.4 the performance has improved a lot. Furthermore, 2.4 creates fewer transactions and needs less shared database memory.
The solution you give can fix your problem because you explicitly end the transaction: the database transaction is committed and its shared memory is released.
You will need to test how many explicit commits you need. This depends on the amount of data you change, which the database keeps in a separate transaction. However, keep in mind what kind of rollback behaviour you want: once a transaction has been committed, you can no longer roll back the changes you made.
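To make that concrete, here is a minimal sketch of committing in batches, assuming context is an IContext and objects is the list you are processing; batchSize and processObject are placeholders for your own tuning and logic, and startTransaction/endTransaction are the same IContext calls used in the snippet further down:

int batchSize = 1000; // assumption: tune this against your own data volume
context.startTransaction();
for (int i = 0; i < objects.size(); i++) {
    processObject(context, objects.get(i)); // hypothetical helper performing the actual change
    if ((i + 1) % batchSize == 0) {
        context.endTransaction();   // commit this batch; its changes can no longer be rolled back
        context.startTransaction(); // open a fresh transaction for the next batch
    }
}
context.endTransaction(); // commit the remainder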
I have the same setup with batches that Herbert Vuijk already mentioned in a previous question: How to deal with huge list in microflows. I have one subflow which receives a number of datasets in a row. Together these sets create too many database transactions, which causes the database to fail.
Would it be possible for me to make one microflow which calls a Java action; within that Java action I create the same structure as Herbert explained, then start a new transaction and call a microflow that executes the necessary functionality. After the subflow has executed I end the transaction, start a new one, and call the subflow again with a new set of objects? (like the code below)
// Assumed imports: java.util.Map, java.util.HashMap, plus the Mendix
// Core, CoreException, IContext, Context and Session classes (the exact
// packages depend on your Mendix version).
IContext context = new Context((Session) this.getContext().getSession());
context.startTransaction();
try {
    Map<String, Object> params = new HashMap<String, Object>();
    params.put("ErrorCode", "Exception");
    params.put("Message", msg);
    params.put("ErrorType", ErrorType.Error);
    // e is the exception being reported, caught in the surrounding code
    params.put("ErrorExplanation", "An error has occurred while executing the action: please contact the administrator. " + e.getMessage());
    params.put("SystemNotificationCode", "STRUCT");
    Core.execute(context, "MessageExchange.CreateErrorMessage", params);
}
catch (Throwable t) {
    context.rollback();         // undo this transaction's changes
    throw new CoreException(t); // wrap the caught exception (was "e", but the catch variable is t)
}
context.endTransaction();       // commit and release the transaction's shared memory
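The outer loop I have in mind would look roughly like this; hasNextBatch, getNextBatch and "MyModule.ProcessBatch" are hypothetical placeholders for my own batch logic, the rest is the same API as above:

// One transaction per batch, so the server never holds more than one
// batch's worth of changes in a single transaction.
while (hasNextBatch()) { // hypothetical: true while there are more datasets
    List<IMendixObject> batch = getNextBatch(); // hypothetical helper returning the next set of objects
    context.startTransaction();
    try {
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("Batch", batch);
        Core.execute(context, "MyModule.ProcessBatch", params); // hypothetical microflow name
    }
    catch (Throwable t) {
        context.rollback(); // only the current batch is rolled back
        throw new CoreException(t);
    }
    context.endTransaction(); // commit this batch before fetching the next one
}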
Could this approach resolve the problem of the number of transactions the XAS creates, or is this not a viable solution?
The answer I suggested myself turned out to be the solution to the problem.
You could try upping the max_fsm_relations parameter in your postgresql.conf file.
See here for more info.
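For reference, a sketch of what that could look like in postgresql.conf; the value is purely illustrative, and note that the free-space-map settings only exist up to PostgreSQL 8.3 (they were removed in 8.4):

# postgresql.conf
max_fsm_relations = 1000   # illustrative value; size it to the total number of tables and indexes
# a server restart is required for this setting to take effect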