Patch against server DoS

Hello,

My NGINX server suffered a denial of service. The machine proxied large
files using “proxy_store”.
Someone sent an artificial request for a rarely used file, causing
NGINX to download a big file from upstream, and then immediately closed
the connection. NGINX continued downloading the file.
Then he did the same again with some other rarely used file.
Within a couple of minutes I had thousands of connections downloading
huge files from the backend.

My solution was to add a small feature:
proxy_ignore_client_abort 10%;
If the server has not yet downloaded at least 10% of the file from the
backend machine, it closes the connection to the backend as soon as the
client closes the connection to the server, even if “proxy_store” is
used.
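
For illustration, a minimal sketch of how the patched directive might sit
next to “proxy_store” in a location block. The path and upstream name are
made up, and the percentage argument only exists with the patch applied
(stock nginx accepts just “on” or “off” here):

    location /files/ {
        proxy_pass  http://backend;          # hypothetical upstream
        proxy_store /var/www/store$uri;      # hypothetical store path
        # Patched form: on client disconnect, abort the upstream
        # transfer unless at least 10% of the file has already been
        # fetched.
        proxy_ignore_client_abort 10%;
    }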

The patch:
http://doppelbauer.name/abort-upstream-161.patch

Thanks a lot
Markus

Hello,

I may have missed something, but my understanding was that nginx
continuously sends data to clients, filling up buffers while the client
empties them at the same time (FIFO).
Thus, to me, the transfer from the backend stops when the allocated
buffer(s) are full, waiting for space to become available in them.

That is how/why, to my understanding (again), nginx is supposed to be
able to handle slow clients.

The intuitive solution, had it happened to me, would have been to reduce
the buffers' size and count to ensure they fill up quicker (and thus
stop the download from upstream that much sooner).
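
A minimal sketch of that tuning, with purely illustrative values (the
location and upstream names are made up):

    location /files/ {
        proxy_pass        http://backend;
        proxy_buffering   on;
        proxy_buffer_size 4k;    # buffer for the response header
        proxy_buffers     4 8k;  # count * size bounds the read-ahead
        # Keep nginx from spilling to temporary files, so the buffers
        # above are the only slack per connection.
        proxy_max_temp_file_size 0;
    }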
In the end, the computation of the ‘lost’ resource is done:

  • in space: number of ‘attackers’ * number of buffers * buffer size
  • in time: the space calculated above / upstream downloading speed (an
    average would be enough)
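
Plugging in purely illustrative numbers: 1000 attackers * 4 buffers *
8 KiB is roughly 31 MiB of space, and at an average upstream speed of
10 MiB/s that slack fills (and frees) in about 3 seconds.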

Is your patch not redundant with existing capabilities?
You just added another calculation, competing with the one above, by
capping the wasted transfer at 10% of the file. You could just as well
have reduced the settings above to achieve the same result, could you
not? Not to mention the risk of introducing vulnerabilities/instabilities
with a custom patch.

What if the attacker modifies his client to ensure it downloads 50% of
the file (thanks to his /dev/null)? Your patch becomes useless and the
resource usage grows back to what it used to be… on the other hand, the
standard way, changing how you handle upstream data, would have kept
resisting, whatever amount of data any client grabs.

What have I missed here?

B. R.