An existing connection was forcibly closed by the remote host

I have been seeing this error in my logs quite frequently for several users after they successfully log in. The application then fails to load the user's start page. The stack trace contains:

    at org.eclipse.jetty.http.HttpGenerator.flushBuffer(
    at org.eclipse.jetty.http.AbstractGenerator.blockForOutput(
    at org.eclipse.jetty.server.HttpOutput.write(
    at org.eclipse.jetty.server.HttpOutp...
    Caused by: An existing connection was forcibly closed by the remote host
    at Method)
    at Source)
    at

A similar issue was reported a couple of years ago here. Looking at the links there, this might be caused by either TCP errors or by running out of connections to the database. My application is hosted on premises using MS SQL Server. Can someone please check my reasoning about what might be the issue?

The database is configured for 0 (unlimited) connections, so I don't think the DB configuration is the issue. The Mendix runtime has no special configuration options set, so I think it defaults to a pool of 50 connections? So I'm going to add the configuration option ConnectionPoolingMaxActive and set it to a higher number. Is this a reasonable thing to try, what number should I set it to, and are there any performance effects on the runtime?

A second observation, from trying to monitor the number of DB connections through MS SQL Management Studio:

    SELECT DB_NAME(sP.dbid) AS the_database,
           COUNT(sP.spid) AS total_database_connections
    FROM sys.sysprocesses sP
    GROUP BY DB_NAME(sP.dbid)
    ORDER BY 1;

When I run this against an idle application instance (just restarted) I have one or two connections to the database, presumably for scheduled events, etc. When the first user logs in, I see an increase of 20 or more established database connections. Is this normal? When the user logs out, the connections are not closed.
Should I set the configuration options ConnectionPoolingTimeBetweenEvictionRunsMillis and ConnectionPoolingSoftMinEvictableIdleTimeMillis to something less than 5 minutes to release the connections more frequently, or is this the wrong configuration option? Again, what are the impacts of making this change? Thanks for any advice.

Issue resolved: I had rewrite rules (IIS front end) and SSL installed. Some users were accessing through http:// plus the port number, some through http:// using the rewrite rules, and some through https:// with the rewrite rules. The users having the issue seemed to be in the first group. I closed down all unnecessary ports and only allowed access through https:// with the rewrite rules, and that seems to have fixed the issue.
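For reference, the eviction settings asked about above are plain custom runtime settings. A minimal sketch, assuming an on-premises deployment configured via an m2ee.yaml-style file (the section layout and the values are illustrative, not recommendations):

```yaml
mxruntime:
  # Run the idle-connection evictor every 60 seconds (value illustrative)
  ConnectionPoolingTimeBetweenEvictionRunsMillis: 60000
  # Evict connections that have been idle for more than 2 minutes;
  # "soft" eviction still keeps the configured minimum idle connections
  ConnectionPoolingSoftMinEvictableIdleTimeMillis: 120000
```

Lower values release idle connections back to the database sooner, at the cost of re-opening connections more often under bursty load.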
2 answers

I cannot help you with a solution for the error because I haven't seen it before. However, I have had contact with Mendix support concerning the number of DB connections in one of our apps. You can indeed raise the maximum number of connections by adding the ConnectionPoolingMaxActive and ConnectionPoolingMaxIdle settings; the default for these settings should indeed be 50. You will also need to contact Mendix support beforehand because, if I'm correct, the cloud ops team will also need to add some configuration on their side. If your configuration and theirs don't match, you could experience undesired behaviour, with errors like:

    Opening JDBC connection to failed with SQLState: 53300 Error code: 0 Message: FATAL: too many connections for role "..." Retrying...(1/4)
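If you do raise the pool size, both settings go in the same custom runtime settings section. A sketch assuming an m2ee.yaml-style configuration (the value 100 is purely illustrative, and must stay below whatever connection limit the database side enforces, or you will hit exactly the "too many connections" error quoted above):

```yaml
mxruntime:
  ConnectionPoolingMaxActive: 100   # maximum connections the runtime may open
  ConnectionPoolingMaxIdle: 100     # commonly set equal to MaxActive
```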



There might be something else going on. Are you using a reverse proxy between Mendix and your end users? That could also cause this issue. Also check whether your users are logged out immediately or whether they experience a delay before being logged out. If there is a delay, make sure you don't have long-running queries open.

I had a look at the m2ee documentation and found that Mendix can close these client connections as well:

# Abort database SELECT queries that are started from a client XPath request,
# or XLS/CSV Export button and run for a configurable amount of time.
# The reverse http proxy in use might have a proxy gateway timeout set (which
# is by default 60 seconds when using Nginx for example), so continuing while
# nobody can receive the results anymore is a bit pointless...
# Setting this option prevents runaway database queries from eating up all
# of your database cpu cycles, while you're busy tracing down the source of
# the problem (using LogMinDurationQuery, see below)
# This option was introduced in Mendix version 2.5.6
# The value is specified in seconds.
# default: not set, no timeout
ClientQueryTimeout: 70
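Since the comment above points at LogMinDurationQuery for tracing the offending queries, the two settings can sit in the same custom-settings section. A hedged sketch (the logging threshold is illustrative, not a recommendation):

```yaml
# Abort client-initiated SELECT queries after 70 seconds (as above)
ClientQueryTimeout: 70
# Log every database query that runs longer than this many milliseconds,
# so the slow query causing the timeouts can be identified
LogMinDurationQuery: 10000
```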