Type: Sub-task
Status: Closed
Priority: Major
Resolution: Fixed
Affects Version/s: 2.6 Larks
Fix Version/s: 2.7 Larks
Labels:
Sprint: 2.7 Larks
A Java/Python/Bash script should be provided that reproduces the issue on Ubuntu or CentOS (if you don't have the required OS, you can use jtalks-vm).
The script should leave a socket in the CLOSE_WAIT state for a long time. After running the script many times (the exact number should be specified in the comments), we should start seeing this error in JCommune:
Mar 23, 2014 6:04:19 AM org.apache.tomcat.util.net.JIoEndpoint$Acceptor run
SEVERE: Socket accept failed
java.net.SocketException: Too many open files
        at java.net.PlainSocketImpl.socketAccept(Native Method)
        at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
        at java.net.ServerSocket.implAccept(ServerSocket.java:530)
        at java.net.ServerSocket.accept(ServerSocket.java:498)
        at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
        at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:352)
        at java.lang.Thread.run(Thread.java:724)
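A minimal Python sketch of the mechanism this task asks for (not the final deliverable): CLOSE_WAIT arises on the side that received a FIN but never closed its own socket. The script below stands up a throwaway local server that accepts connections and deliberately never closes them; each client close then strands the server-side socket in CLOSE_WAIT. The port, connection count, and the `/proc/net/tcp` check are illustrative and Linux-specific; against a real Tomcat the same effect accumulates file descriptors until accept() fails with "Too many open files".

```python
import socket
import threading
import time

def leak_close_wait(n):
    """Open n connections to a local server that never closes its side.

    Each client close() sends a FIN; because the server never calls
    close() on the accepted socket, that socket is stuck in CLOSE_WAIT
    for as long as the process lives.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 0))   # ephemeral port
    srv.listen(n)
    port = srv.getsockname()[1]

    leaked = []  # accepted sockets we deliberately never close

    def server():
        for _ in range(n):
            conn, _addr = srv.accept()
            leaked.append(conn)   # no conn.close() -- this is the leak

    t = threading.Thread(target=server, daemon=True)
    t.start()

    for _ in range(n):
        c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        c.connect(("127.0.0.1", port))
        c.close()                 # client FIN -> server side enters CLOSE_WAIT

    t.join()
    time.sleep(0.5)               # let the kernel process the FINs
    return port, leaked

def count_close_wait(port):
    """Count IPv4 sockets in CLOSE_WAIT (state 0x08) on the given local port.

    Parses /proc/net/tcp, so this only works on Linux.
    """
    hits = 0
    with open("/proc/net/tcp") as f:
        next(f)  # skip header row
        for line in f:
            fields = line.split()
            local_port = int(fields[1].split(":")[1], 16)
            state = fields[3]
            if state == "08" and local_port == port:
                hits += 1
    return hits

if __name__ == "__main__":
    port, leaked = leak_close_wait(5)
    print("sockets stuck in CLOSE_WAIT:", count_close_wait(port))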
Before starting the task you'll probably need to dive into TCP and its connection states on Unix. Note, though, that in real situations where the problem shows up, we run Nginx in front of Tomcat. The issue was reproduced on both Tomcat 6 and Tomcat 7.
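For context on why the symptom is "Too many open files": every leaked CLOSE_WAIT socket holds one file descriptor, and once the process hits its RLIMIT_NOFILE soft limit, accept() fails with EMFILE, which surfaces as the SocketException above. A quick Linux-only sketch for inspecting the limit and current usage of your own process (for Tomcat you'd look at its PID under /proc instead):

```python
import os
import resource

# Per-process file-descriptor limit; Tomcat's accept() starts failing
# once its open-descriptor count reaches the soft limit.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"fd soft limit: {soft}, hard limit: {hard}")

# Descriptors currently held by this process (sockets included).
print("fds in use:", len(os.listdir("/proc/self/fd")))
```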
The QA team was able to reproduce the issue with this scanner: https://www.netsparker.com/netsparker/