Discussion:
How to close persistent non-idle connections after some time
Boris Granveaud
2013-09-18 09:39:01 UTC
Hi,

I would like to automatically close persistent connections after some
time, *even if they are in use* (but only at the end of a request, of course).

I need this so that when I want to deploy a new middle-tier server, I
can remove it from the production pool of servers and wait, for example,
60 seconds for the front webapps to close their persistent HttpClient
connections.

I have seen that you can set a timeToLive parameter on
PoolingHttpClientConnectionManager, but this is only used to close idle
connections.

I cannot use ConnectionReuseStrategy or KeepAliveStrategy because I
don't have access to the connection that is being used.

Finally, I tried to extend PoolingHttpClientConnectionManager to remove
the call to updateExpiry in releaseConnection:

public void releaseConnection(
        final HttpClientConnection managedConn,
        final Object state,
        final long keepalive, final TimeUnit tunit) {
    ...
    if (conn.isOpen()) {
        entry.setState(state);
        entry.updateExpiry(keepalive, tunit != null ? tunit : TimeUnit.MILLISECONDS);

but this is not feasible, as CPoolEntry and CPoolProxy, which are used
in this method, are not public classes.

Any idea?

Thanks,
Boris.
Oleg Kalnichevski
2013-09-18 11:23:39 UTC
Post by Boris Granveaud
...
The AbstractConnPool class, which CPool is based upon, provides an
#enumLeased method that can be used to enumerate leased connections and
optionally close some or all of them. Truth be told, I simply forgot to
add a corresponding method to PoolingHttpClientConnectionManager.

Please raise a change request in JIRA for this issue. For the time being
you will have to resort to reflection in order to get hold of the 'pool'
instance variable and cast it to AbstractConnPool.

Oleg
Boris Granveaud
2013-09-18 13:24:17 UTC
Post by Oleg Kalnichevski
...
I'm trying to implement your solution with a timer which periodically
enumerates the leased connections and closes those which are too old,
but from time to time I get an exception like this:

Caused by: java.io.InterruptedIOException: Connection already shutdown
at org.apache.http.impl.conn.DefaultManagedHttpClientConnection.bind(DefaultManagedHttpClientConnection.java:116) ~[httpclient-4.3.jar:4.3]
at org.apache.http.impl.conn.HttpClientConnectionOperator.connect(HttpClientConnectionOperator.java:110) ~[httpclient-4.3.jar:4.3]
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:314) ~[httpclient-4.3.jar:4.3]
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:357) ~[httpclient-4.3.jar:4.3]
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:218) ~[httpclient-4.3.jar:4.3]
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:194) ~[httpclient-4.3.jar:4.3]
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:186) ~[httpclient-4.3.jar:4.3]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82) ~[httpclient-4.3.jar:4.3]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106) ~[httpclient-4.3.jar:4.3]

I suppose this is because the connection is closed from another thread.
I do call CPoolEntry.shutdownConnection() instead of
CPoolEntry.closeConnection(), but there must be a race condition somewhere.

I'm not very comfortable with this solution because of these thread
synchronization issues. I also have to do the same for idle connections,
and each enumeration, even if it is fast, locks the pool. It is a pity
that it is not possible to alter the behavior of
PoolingHttpClientConnectionManager.releaseConnection.

Here is my complete code, in case you can spot what's wrong:

public class AutoClosePoolingHttpClientConnectionManager extends PoolingHttpClientConnectionManager {
    private static final Logger LOGGER = LoggerFactory.getLogger(AutoClosePoolingHttpClientConnectionManager.class);

    private static final Field poolField;
    private static final Method enumLeasedMethod;
    private static final Method shutdownConnectionMethod;

    static {
        try {
            poolField = PoolingHttpClientConnectionManager.class.getDeclaredField("pool");
            poolField.setAccessible(true);

            enumLeasedMethod = AbstractConnPool.class.getDeclaredMethod("enumLeased", PoolEntryCallback.class);
            enumLeasedMethod.setAccessible(true);

            shutdownConnectionMethod = AutoClosePoolingHttpClientConnectionManager.class.getClassLoader()
                    .loadClass("org.apache.http.impl.conn.CPoolEntry")
                    .getDeclaredMethod("shutdownConnection");
            shutdownConnectionMethod.setAccessible(true);

        } catch (Exception e) {
            throw new RuntimeException("Cannot access PoolingHttpClientConnectionManager fields", e);
        }
    }

    private AbstractConnPool<HttpRoute, ManagedHttpClientConnection,
            PoolEntry<HttpRoute, ManagedHttpClientConnection>> pool;
    private Timer timer;

    @SuppressWarnings("unchecked")
    public AutoClosePoolingHttpClientConnectionManager() {
        super();

        try {
            pool = (AbstractConnPool) poolField.get(this);
        } catch (Exception e) {
            throw new IllegalArgumentException("Cannot access pool field", e);
        }

        timer = new Timer("autoClosePoolingHttpClientConnectionManagerTimer");
        timer.schedule(new TimerTask() {
            @Override
            public void run() {
                try {
                    shutdownOldConnections();
                } catch (Exception e) {
                    LOGGER.warn("Error in shutdownOldConnections", e);
                }
            }
        }, 0, 2000);
    }

    @Override
    public void close() {
        if (timer != null) timer.cancel();
        super.close();
    }

    private void shutdownOldConnections() throws IllegalAccessException, InvocationTargetException {
        final long now = System.currentTimeMillis();

        LOGGER.info("POOL DUMP " + pool.getTotalStats());
        enumLeasedMethod.invoke(pool, new PoolEntryCallback<HttpRoute, ManagedHttpClientConnection>() {
            @Override
            public void process(PoolEntry<HttpRoute, ManagedHttpClientConnection> entry) {
                long dtime = now - entry.getCreated();
                LOGGER.info("ENTRY " + entry + " dtime=" + dtime);

                if (dtime > 10000) {
                    try {
                        LOGGER.info("SHUTDOWN ENTRY " + entry + " " + dtime);
                        shutdownConnectionMethod.invoke(entry);
                    } catch (Exception e) {
                        LOGGER.warn("Cannot shutdown connection", e);
                    }
                }
            }
        });
    }
}

Thanks,
Boris.
Oleg Kalnichevski
2013-09-18 13:54:05 UTC
Post by Boris Granveaud
...
I suppose this is due to the fact that the connection is closed in
another thread. I do call CPoolEntry.shutdownConnection() instead of
CPoolEntry.closeConnection() but there must be a race condition somewhere.
You are basically shutting down a connection which is currently being
used to execute a request. This can cause all sorts of exceptions,
depending on what stage the execution process is at.
InterruptedIOException is perfectly legitimate in this context.
Post by Boris Granveaud
I'm not very comfortable with this solution because of these thread
synchronization issues. I also have to do the same on idle connections
and each enumeration, even if it is fast, locks the pool. It is a pity
that it is not possible to alter the behavior of
PoolingHttpClientConnectionManager.releaseConnection.
There is probably nothing wrong with your code. If you intend to shut
down connections 'midair' so to speak, this is what you have to live
with. Changing #releaseConnection will not help given that time to live
and expiry attributes apply to idle connections only.

Oleg
Boris Granveaud
2013-09-18 14:33:48 UTC
Post by Oleg Kalnichevski
...
You are basically shutting down the connection which is currently being
used to execute a request. This can cause all sorts of exceptions
depending on what stage execution process is at. InterruptedIOException
is perfectly legit in this context.
OK, I thought that shutdownConnection() was intended to mark the
connection to be closed when it is released.
Post by Oleg Kalnichevski
...
There is probably nothing wrong with your code. If you intend to shut
down connections 'midair' so to speak, this is what you have to live
with. Changing #releaseConnection will not help given that time to live
and expiry attributes apply to idle connections only.
As far as I understand, #releaseConnection resets the connection expiry
date using the keepalive value because the connection has just been used.
So if I were able to remove this behavior, the connection would "expire",
for example, 60 seconds after its creation even if it is intensively used.

Or I could keep the keepalive and expiry management and add something
like this:

public void releaseConnection(
    (...)
    if (conn.isOpen()) {
        entry.setState(state);
        entry.updateExpiry(keepalive, tunit != null ? tunit : TimeUnit.MILLISECONDS);

        if (System.currentTimeMillis() - entry.getCreated() > maxAge) {
            entry.closeConnection();
        }
    }

Maybe it would be interesting to make this maxAge parameter standard in
PoolingHttpClientConnectionManager. What do you think?
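To illustrate the intended policy with a self-contained, stdlib-only sketch (the entry class and maxAge field below are hypothetical stand-ins, not HttpClient API): on release, the expiry is never extended past creation time plus maxAge:

```java
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for a pool entry; models the proposed maxAge policy,
// where the expiry set on release never exceeds creation time + maxAge.
class MaxAgePoolEntry {
    final long createdMillis;  // when the connection was opened
    final long maxAgeMillis;   // hard cap on total connection lifetime
    long expiryMillis;         // when the pooled connection should be closed
    boolean open = true;

    MaxAgePoolEntry(long createdMillis, long maxAgeMillis) {
        this.createdMillis = createdMillis;
        this.maxAgeMillis = maxAgeMillis;
        this.expiryMillis = createdMillis + maxAgeMillis;
    }

    // Called when the connection is returned to the pool.
    void release(long nowMillis, long keepAlive, TimeUnit unit) {
        long keepAliveDeadline = nowMillis + unit.toMillis(keepAlive);
        long maxAgeDeadline = createdMillis + maxAgeMillis;
        // keepalive may shorten the remaining lifetime, but never extend it past maxAge
        expiryMillis = Math.min(keepAliveDeadline, maxAgeDeadline);
        if (nowMillis >= maxAgeDeadline) {
            open = false;  // connection is too old: close it instead of pooling it
        }
    }
}
```

With a 60-second maxAge, a connection released 10 seconds after creation with a 30-second keepalive would expire at the 40-second mark, while one released at 50 seconds would be capped at the 60-second mark, however intensively it is used.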

Boris.
Oleg Kalnichevski
2013-09-18 14:50:06 UTC
On Wed, 2013-09-18 at 16:33 +0200, Boris Granveaud wrote:

...
Post by Boris Granveaud
...
Maybe would it be interesting to make this maxAge parameter standard in
PoolingHttpClientConnectionManager. What do you think?
Boris.
Connection manager's job is to keep track of available persistent
connections and enforce certain time-to-live constraints for those
connections. It is not meant to police busy connections leased by worker
threads.

Nothing prevents you from creating a custom connection manager, though,
tailored specifically to your application needs.

Oleg
Boris Granveaud
2013-09-18 15:25:13 UTC
Post by Oleg Kalnichevski
...
Connection manager's job is to keep track of available persistent
connections and enforce certain time-to-live constraints for those
connections. It is not meant to police busy connections leased by worker
threads.
Nothing prevents you from creating a custom connection manager, though,
tailored specifically to your application needs.
My intention is not to close busy connections but to enforce a maxAge
policy when the connection is released to the pool. I see this as
complementary to keepalive.

Sure, I could extend PoolingHttpClientConnectionManager, but this
implies some code duplication (at least releaseConnection) and
reflection to access CPoolEntry.

My point is that it is difficult to add this feature externally, but it
could easily be integrated as a standard behavior, as Tomcat JDBC Pool
does with its maxAge parameter (see
http://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html#Tomcat_JDBC_Enhanced_Attributes).

Boris.
Oleg Kalnichevski
2013-09-18 16:07:58 UTC
On Wed, 2013-09-18 at 17:25 +0200, Boris Granveaud wrote:

...
Post by Boris Granveaud
...
My intention is not to close busy connections but to enforce a maxAge
policy when the connection is released to the pool. I see this as
complementary to keepalive.
Sure I could extend PoolingHttpClientConnectionManager but this implies
some code duplication (at least releaseConnection) and reflection to
access CPoolEntry.
My point is that it is difficult to add this feature externally, but it
could be easily integrated as a standard behavior as does Tomcat JDBC
Pool for example (see maxAge parameter
http://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html#Tomcat_JDBC_Enhanced_Attributes).
Boris.
Maybe I am getting too dense with age, but I still fail to see a problem
here. There is already a TTL (total time to live) parameter, settable at
the connection manager level, which prevents connections from being
re-used beyond a particular time limit.

Besides, I would much rather add a protected method to
PoolingHttpClientConnectionManager, allowing the expiry time to be
adjusted by a subclass, than introduce yet another parameter that
basically duplicates an existing one.

Oleg
Boris Granveaud
2013-09-19 08:05:00 UTC
Post by Oleg Kalnichevski
...
Maybe I am getting too dense with age, but still fail to see a problem
here. There is already a TTL (total time to live) parameter settable at
the connection manager level, which prevents connections from being
re-used beyond a particular time limit.
Besides, I would very much rather add a protected method to
PoolingHttpClientConnectionManager allowing the expiry time to be
adjusted by a super class instead of introducing yet another parameter
that basically duplicates an existing one.
Hum, sorry, you are right! I was confused by the keepalive management
and didn't see that the timeToLive parameter is exactly what I need.
Thanks!

Another point: in the Tomcat JDBC pool, there is a validationInterval
parameter which makes it possible to avoid checking for stale
connections every time a connection is borrowed from the pool.
Basically, the check is performed at most at the specified frequency. I
have a benchmark where the number of requests per second doubles if I
disable stale connection detection, but disabling it does not seem like
a good idea for production. Maybe such a validationInterval could
provide a good compromise between performance and reliability. What do
you think?
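The validationInterval idea can be sketched with plain JDK classes (the names below are hypothetical, not Tomcat or HttpClient API): the expensive stale check runs at most once per interval, and is skipped in between:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of a rate-limited stale check: the (expensive)
// validation runs at most once per 'intervalMillis'; in between, the
// connection is assumed to still be good.
class RateLimitedValidator {
    private final long intervalMillis;
    private final AtomicLong lastValidated = new AtomicLong(0);

    RateLimitedValidator(long intervalMillis) {
        this.intervalMillis = intervalMillis;
    }

    // Returns true if the caller should actually perform the stale check now.
    boolean shouldValidate(long nowMillis) {
        long last = lastValidated.get();
        if (nowMillis - last < intervalMillis) {
            return false;  // validated recently enough: skip the check
        }
        // Only one thread wins the CAS and performs the check.
        return lastValidated.compareAndSet(last, nowMillis);
    }
}
```

The AtomicLong compare-and-set keeps the decision lock-free, so borrowing a connection never blocks on validation bookkeeping.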

Boris.
Oleg Kalnichevski
2013-09-19 09:10:21 UTC
Post by Boris Granveaud
...
Another point: in Tomcat JDBC pool, there is a validationInterval
parameter which allows to avoid checking for stale connection each time
a connection is borrowed from the pool. Basically, this check is
performed at most at the specified frequency. I have a benchmark where
the number of requests per second is doubled if I disable the stale
connection detection, but this seems not a good idea for production.
Maybe this validationInterval could provide a good compromise between
performance and reliability, what do you think?
Boris.
Hi Boris

The stale connection check is known to be quite expensive, and I
generally recommend disabling it. Such an optimization, though, would be
a valuable contribution for sure. Please note, however, that presently
the stale check is performed not by the connection manager but by the
protocol handler, so this change might require substantial effort and
entail some design changes.

Oleg
Boris Granveaud
2013-09-20 08:27:34 UTC
Post by Oleg Kalnichevski
...
Hi Boris
Stale connection is known to be quite expensive and I generally
recommend disabling it. Such optimization, though, would be a valuable
contribution for sure. Please note however that presently the stale
check is not performed by the connection manager but by the protocol
handler. So, this change might require substantial efforts and entail
some design changes.
Hi Oleg,

For the moment, I'm reluctant to remove the stale check in production
because, in case of a server restart or a crash, every dead connection
will be tried once, so even with a retry handler I can get errors. I
will have to test it coupled with a short time to live to see whether
the number of errors is acceptable.

Boris.
Oleg Kalnichevski
2013-09-20 13:08:19 UTC
Post by Boris Granveaud
...
Hi Oleg,
For the moment, I'm reluctant to remove the stale check in production
because in case of a server restart or a crash, all dead connections
will be tried one time, so even with a retry handler I can get errors. I
have to test it coupled with a short time to live to see if the number
of errors is acceptable.
Boris.
Boris,

A much better alternative to the stale check would be a policy of
pro-active eviction of expired (and idle) connections. See this section
of the tutorial for details:

http://hc.apache.org/httpcomponents-client-4.3.x/tutorial/html/connmgmt.html#d5e405

Please note that one does not really have to use a separate thread to
enforce such a policy. Calling #closeExpiredConnections every once in a
while from the same thread after a long period of inactivity should be
perfectly sufficient.
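For those who do prefer a dedicated eviction thread, the policy can be sketched with plain JDK classes. The ConnectionEvictor interface below is a hypothetical stand-in for the real manager's #closeExpiredConnections and #closeIdleConnections methods, so the sketch stays self-contained:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for the connection manager's eviction operations.
interface ConnectionEvictor {
    void closeExpiredConnections();
    void closeIdleConnections(long idleTime, TimeUnit unit);
}

// Periodically evicts expired and long-idle connections, as an
// alternative to a per-request stale check.
class IdleConnectionMonitor {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // One eviction pass; also run periodically by start().
    void evictOnce(ConnectionEvictor evictor) {
        evictor.closeExpiredConnections();
        // also drop connections that have been idle for more than 30 seconds
        evictor.closeIdleConnections(30, TimeUnit.SECONDS);
    }

    void start(final ConnectionEvictor evictor, long periodSeconds) {
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                evictOnce(evictor);
            }
        }, periodSeconds, periodSeconds, TimeUnit.SECONDS);
    }

    void stop() {
        scheduler.shutdownNow();
    }
}
```

The 30-second idle threshold is an arbitrary illustration; in real code the adapter around PoolingHttpClientConnectionManager would pick values to match the deployment's TTL and keepalive settings.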

Oleg
