HTTP Error 500 with substatus 1011/1013 and win32 error 109 (ERROR_BROKEN_PIPE) #57

@PG-Devs

On some of our servers, users are experiencing occasional internal server errors that we are unable to explain. According to our logs, some requests fail with a broken pipe, yet these requests are no different from other requests that work fine. We have now implemented a retry mechanism, and so far every request that failed has succeeded on the first retry. This seems to confirm that there is nothing 'wrong' with the requests themselves.
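
For reference, the retry mechanism is roughly equivalent to the sketch below (simplified; our real client differs): perform a GET and retry once when it fails with a 5xx response or a connection error.

const http = require('http');

// Perform a single GET and reject on connection errors or 5xx responses.
function requestOnce(url) {
	return new Promise((resolve, reject) => {
		http.get(url, (res) => {
			let body = '';
			res.on('data', (chunk) => { body += chunk; });
			res.on('end', () => {
				if (res.statusCode >= 500) {
					reject(new Error(`HTTP ${res.statusCode}`));
				} else {
					resolve(body);
				}
			});
		}).on('error', reject);
	});
}

// Retry a failed request up to `retries` times; so far one retry has
// always been enough for the 500.1011 / 500.1013 failures.
function requestWithRetry(url, retries) {
	return requestOnce(url).catch((err) => {
		if (retries > 0) {
			return requestWithRetry(url, retries - 1);
		}
		throw err;
	});
}

requestWithRetry('http://192.168.22.254:8080/', 1).then(console.log, console.error);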

We have put together a minimal example (see the files below) and performed some tests:

| Node version | IISNode version | NODE_PENDING_PIPE_INSTANCES | Result |
| --- | --- | --- | --- |
| 7.3.0 | 0.2.18.0 | Undefined | Works (in practice): there are no errors, at least as long as there are not "too many" concurrent requests. When it does fail, it fails with error 500.1003 ("The service is unavailable"), a different error that we have not seen; the number of concurrent requests needed to cause it is large enough that we don't expect to have hit it in practice. |
| 9.3.0 | 0.2.18.0 | Undefined | Error 500.1011 / 500.1013 on some requests. |
| 10.7.0 | 0.2.18.0 | Undefined | Error 500.1011 / 500.1013 on some requests. |
| 7.3.0 | 0.2.21.0 | 5000 | Works perfectly: the client PC hits fork-limit errors before the server starts returning error 500s. |
| 9.3.0 | 0.2.21.0 | 5000 | Error 500.1011 / 500.1013 on some requests. |
| 10.7.0 | 0.2.21.0 | 5000 | Error 500.1011 / 500.1013 on some requests. |

I used the following minimal example on an up-to-date Windows Server 2016 Version 1607 (OS Build 14393.2248) installation with IIS (Version 10.0.014393.0). The files below were placed in C:\Error1013, along with three node executables of the different versions listed in the table above. From a different PC we run the 'poll.sh' script to test concurrent requests.

The example web server is very simple: each request just busy-waits for a short delay (1/8th of a second) to simulate the processing time otherwise taken by our service.

It is our understanding that with NODE_PENDING_PIPE_INSTANCES set to the default value of 5000, we should not run into resource limits on concurrent requests until we reach 5000 of them. However, we are getting error 500.1013 / 500.1011 with as few as 50 concurrent requests. Is this a known issue (e.g. #50)? In our case it does not seem to be related to the processing time of a request, as each simulated request takes only 1/8th of a second.
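
For context on where the pipe sits: under iisnode the value passed in process.env.PORT is a named pipe path rather than a TCP port, and, to our understanding, NODE_PENDING_PIPE_INSTANCES controls how many pending pipe instances node asks for when it listens on such a pipe. A minimal sketch outside of IIS (the pipe name is hypothetical and not part of the repro) that listens on a named pipe and reports the variable:

const net = require('net');

// Hypothetical pipe name for illustration only; under iisnode the pipe
// name is supplied in process.env.PORT and should not be hard-coded.
const pipeName = '\\\\.\\pipe\\error1013-demo';

net.createServer((socket) => {
	socket.end('ok\n');
}).listen(pipeName, () => {
	console.log('listening on', pipeName,
		'NODE_PENDING_PIPE_INSTANCES =', process.env.NODE_PENDING_PIPE_INSTANCES);
});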

web.config
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
	<system.webServer>
		<iisnode nodeProcessCommandLine="C:\Error1013\node_10.7.0.exe"
			loggingEnabled="true"
			devErrorsEnabled="false"
			enableXFF="true"
			promoteServerVars="HTTP_URL"
			logDirectory=".\logs"
		/>
		<handlers>
			<add name="iisnode" path="hello.js" verb="*" modules="iisnode" />
		</handlers>
		<rewrite>
			<rules>
				<rule name="nodejs">
					<match url=".*" />
					<action type="Rewrite" url="hello.js" />
				</rule>
			</rules>
		</rewrite>
	</system.webServer>
</configuration>
package.json
{
	  "name": "my-awesome-package"
	, "version": "1.0.0"
	, "main": "hello.js"
	, "dependencies": {
		"http": "*"
	   }
}
hello.js
var http = require('http');

// Simulated processing time per request, in seconds.
var seconds = 0.125;

var startUp = new Date();

http.createServer((req, res) => {
	res.writeHead(200, {'Content-Type': 'text/plain'});
	// Busy-wait (deliberately blocking the event loop) to simulate CPU-bound work.
	var waitTill = new Date(new Date().getTime() + seconds * 1000);
	while (waitTill > new Date()) {}
	res.end(`${startUp} process.env.NODE_PENDING_PIPE_INSTANCES: ${process.env.NODE_PENDING_PIPE_INSTANCES}\n`);
}).listen(process.env.PORT); // Under iisnode, PORT is a named pipe name.
poll.sh
#!/bin/sh

server=192.168.22.254:8080

# This breaks sometimes, but not very often
concurrent_requests=30
inter_request_delay=0.05

# This breaks almost every time
concurrent_requests=50
inter_request_delay=0.01

# # We've gotta push it harder!
# concurrent_requests=250 # 500 gives fork error on my pc
# inter_request_delay=

while true ; do
	echo "Press <enter> to send ${concurrent_requests} requests (delay: ${inter_request_delay:-none})..."
	read l
	i=0; while [ ${i} -lt ${concurrent_requests} ] ; do
		curl "${server}" &
		i=$((i+1))
		[ -n "${inter_request_delay}" ] && sleep ${inter_request_delay}
	done
	# concurrent_requests=$((concurrent_requests+10))
	echo "Waiting for requests to finish..."
	wait
	echo "Done"
	echo
done
