Proxy_Ssl_Session_Reuse

Module ngx_http_proxy_module – Nginx.org

The ngx_http_proxy_module module allows passing
requests to another server.
Example Configuration
location / {
    proxy_pass       http://localhost:8000;
    proxy_set_header Host      $host;
    proxy_set_header X-Real-IP $remote_addr;
}
Directives
Syntax:
proxy_bind
address
[transparent] |
off;
Default:
—
Context: http, server, location
This directive appeared in version 0.8.22.
Makes outgoing connections to a proxied server originate
from the specified local IP address with an optional port (1.11.2).
Parameter value can contain variables (1.3.12).
The special value off (1.12.0) cancels the effect
of the proxy_bind directive
inherited from the previous configuration level, which allows the
system to auto-assign the local IP address and port.
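As a hedged illustration (the addresses and backend name below are made up), proxy_bind can be set at the server level and cancelled again for one location:
server {
    proxy_bind 192.168.1.10;

    location / {
        proxy_pass http://backend.example.com;
    }

    location /direct/ {
        proxy_bind off;  # let the system auto-assign the local address again
        proxy_pass http://backend.example.com;
    }
}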
The transparent parameter (1.11.0) allows
outgoing connections to a proxied server to originate
from a non-local IP address,
for example, from a real IP address of a client:
proxy_bind $remote_addr transparent;
In order for this parameter to work,
it is usually necessary to run nginx worker processes with the
superuser privileges.
On Linux it is not required (1.13.8) as if
the transparent parameter is specified, worker processes
inherit the CAP_NET_RAW capability from the master process.
It is also necessary to configure the kernel routing table
to intercept network traffic from the proxied server.
proxy_buffer_size size;
proxy_buffer_size 4k|8k;
Sets the size of the buffer used for reading the first part
of the response received from the proxied server.
This part usually contains a small response header.
By default, the buffer size is equal to one memory page.
This is either 4K or 8K, depending on a platform.
It can be made smaller, however.
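As an illustrative sketch (the value is an assumption, not a recommendation), a backend that sends large response headers, for example long “Set-Cookie” fields, may need a buffer larger than one memory page:
location / {
    proxy_pass        http://localhost:8000;
    proxy_buffer_size 16k;
}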
proxy_buffering on | off;
proxy_buffering on;
Enables or disables buffering of responses from the proxied server.
When buffering is enabled, nginx receives a response from the proxied server
as soon as possible, saving it into the buffers set by the
proxy_buffer_size and proxy_buffers directives.
If the whole response does not fit into memory, a part of it can be saved
to a temporary file on the disk.
Writing to temporary files is controlled by the
proxy_max_temp_file_size and
proxy_temp_file_write_size directives.
When buffering is disabled, the response is passed to a client synchronously,
immediately as it is received.
nginx will not try to read the whole response from the proxied server.
The maximum size of the data that nginx can receive from the server
at a time is set by the proxy_buffer_size directive.
Buffering can also be enabled or disabled by passing
“yes” or “no” in the
“X-Accel-Buffering” response header field.
This capability can be disabled using the
proxy_ignore_headers directive.
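A minimal sketch, assuming a streaming endpoint at /events/ and a local backend, of disabling buffering for that location only:
location /events/ {
    proxy_pass      http://localhost:8000;
    proxy_buffering off;
}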
proxy_buffers number size;
proxy_buffers 8 4k|8k;
Sets the number and size of the
buffers used for reading a response from the proxied server,
for a single connection.
proxy_busy_buffers_size size;
proxy_busy_buffers_size 8k|16k;
When buffering of responses from the proxied
server is enabled, limits the total size of buffers that
can be busy sending a response to the client while the response is not
yet fully read.
In the meantime, the rest of the buffers can be used for reading the response
and, if needed, buffering part of the response to a temporary file.
By default, size is limited by the size of two buffers set by the
proxy_buffer_size and proxy_buffers directives.
proxy_cache zone | off;
proxy_cache off;
Defines a shared memory zone used for caching.
The same zone can be used in several places.
Parameter value can contain variables (1.7.9).
The off parameter disables caching inherited
from the previous configuration level.
proxy_cache_background_update on | off;
proxy_cache_background_update off;
This directive appeared in version 1.11.10.
Allows starting a background subrequest
to update an expired cache item,
while a stale cached response is returned to the client.
Note that it is necessary to
allow
the usage of a stale cached response when it is being updated.
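A hedged sketch of the two directives working together (the zone name and upstream are illustrative; the zone itself would be defined by proxy_cache_path):
location / {
    proxy_pass                    http://backend;
    proxy_cache                   cache_zone;
    proxy_cache_use_stale         updating;
    proxy_cache_background_update on;
}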
proxy_cache_bypass string ...;
Defines conditions under which the response will not be taken from a cache.
If at least one value of the string parameters is not empty and is not
equal to “0” then the response will not be taken from the cache:
proxy_cache_bypass $cookie_nocache $arg_nocache$arg_comment;
proxy_cache_bypass $http_pragma $http_authorization;
Can be used along with the proxy_no_cache directive.
proxy_cache_convert_head on | off;
proxy_cache_convert_head on;
This directive appeared in version 1.9.7.
Enables or disables the conversion of the “HEAD” method
to “GET” for caching.
When the conversion is disabled, the
cache key should be configured
to include the $request_method.
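For example, a sketch of disabling the conversion together with a cache key that includes the request method (the key shown is an assumption, not the directive's default):
proxy_cache_convert_head off;
proxy_cache_key          $scheme$request_method$proxy_host$request_uri;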
proxy_cache_key string;
proxy_cache_key $scheme$proxy_host$request_uri;
Defines a key for caching, for example
proxy_cache_key “$host$request_uri $cookie_user”;
By default, the directive’s value is close to the string
proxy_cache_key $scheme$proxy_host$uri$is_args$args;
proxy_cache_lock on | off;
proxy_cache_lock off;
This directive appeared in version 1.1.12.
When enabled, only one request at a time will be allowed to populate
a new cache element identified according to the proxy_cache_key
directive by passing a request to a proxied server.
Other requests of the same cache element will either wait
for a response to appear in the cache or the cache lock for
this element to be released, up to the time set by the
proxy_cache_lock_timeout directive.
proxy_cache_lock_age time;
proxy_cache_lock_age 5s;
This directive appeared in version 1.7.8.
If the last request passed to the proxied server
for populating a new cache element
has not completed for the specified time,
one more request may be passed to the proxied server.
proxy_cache_lock_timeout time;
proxy_cache_lock_timeout 5s;
Sets a timeout for proxy_cache_lock.
When the time expires,
the request will be passed to the proxied server,
however, the response will not be cached.
Before 1.7.8, the response could be cached.
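A brief sketch showing the three lock-related directives side by side (the values are illustrative only):
proxy_cache_lock         on;
proxy_cache_lock_age     10s;
proxy_cache_lock_timeout 10s;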
proxy_cache_max_range_offset number;
This directive appeared in version 1.11.6.
Sets an offset in bytes for byte-range requests.
If the range is beyond the offset,
the range request will be passed to the proxied server
and the response will not be cached.
proxy_cache_methods
GET |
HEAD |
POST ...;
proxy_cache_methods GET HEAD;
This directive appeared in version 0.7.59.
If the client request method is listed in this directive then
the response will be cached.
“GET” and “HEAD” methods are always
added to the list, though it is recommended to specify them explicitly.
See also the proxy_no_cache directive.
proxy_cache_min_uses number;
proxy_cache_min_uses 1;
Sets the number of requests after which the response
will be cached.
proxy_cache_path
path
[levels=levels]
[use_temp_path=on|off]
keys_zone=name:size
[inactive=time]
[max_size=size]
[min_free=size]
[manager_files=number]
[manager_sleep=time]
[manager_threshold=time]
[loader_files=number]
[loader_sleep=time]
[loader_threshold=time]
[purger=on|off]
[purger_files=number]
[purger_sleep=time]
[purger_threshold=time];
Context: http
Sets the path and other parameters of a cache.
Cache data are stored in files.
The file name in a cache is a result of
applying the MD5 function to the
cache key.
The levels parameter defines hierarchy levels of a cache:
from 1 to 3, each level accepts values 1 or 2.
For example, in the following configuration
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=one:10m;
file names in a cache will look like this:
/data/nginx/cache/c/29/b7f54b2df7773722d382f4809d65029c
A cached response is first written to a temporary file,
and then the file is renamed.
Starting from version 0.8.9, temporary files and the cache can be put on
different file systems.
However, be aware that in this case a file is copied
across two file systems instead of the cheap renaming operation.
It is thus recommended that for any given location both cache and a directory
holding temporary files
are put on the same file system.
The directory for temporary files is set based on
the use_temp_path parameter (1.7.10).
If this parameter is omitted or set to the value on,
the directory set by the proxy_temp_path directive
for the given location will be used.
If the value is set to off,
temporary files will be put directly in the cache directory.
In addition, all active keys and information about data are stored
in a shared memory zone, whose name and size
are configured by the keys_zone parameter.
One megabyte zone can store about 8 thousand keys.
As part of
commercial subscription,
the shared memory zone also stores extended
cache information,
thus, it is required to specify a larger zone size for the same number of keys.
For example,
one megabyte zone can store about 4 thousand keys.
Cached data that are not accessed during the time specified by the
inactive parameter get removed from the cache
regardless of their freshness.
By default, inactive is set to 10 minutes.
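As an illustrative sketch (path, zone name, and sizes are assumptions), a fuller proxy_cache_path declaration combining several of the parameters described above:
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=one:10m
                 max_size=10g min_free=1g inactive=60m use_temp_path=off;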
The special “cache manager” process monitors the maximum cache size set
by the max_size parameter,
and the minimum amount of free space set
by the min_free (1.19.1) parameter
on the file system with cache.
When the size is exceeded or there is not enough free space,
it removes the least recently used data.
The data is removed in iterations configured by
manager_files,
manager_threshold, and
manager_sleep parameters (1.11.5).
During one iteration no more than manager_files items
are deleted (by default, 100).
The duration of one iteration is limited by the
manager_threshold parameter (by default, 200 milliseconds).
Between iterations, a pause configured by the manager_sleep
parameter (by default, 50 milliseconds) is made.
A minute after the start the special “cache loader” process is activated.
It loads information about previously cached data stored on file system
into a cache zone.
The loading is also done in iterations.
During one iteration no more than loader_files items
are loaded (by default, 100).
Besides, the duration of one iteration is limited by the
loader_threshold parameter (by default, 200 milliseconds).
Between iterations, a pause configured by the loader_sleep
parameter (by default, 50 milliseconds) is made.
Additionally,
the following parameters are available as part of our
commercial subscription:
purger=on|off
Instructs whether cache entries that match a
wildcard key
will be removed from the disk by the cache purger (1.7.12).
Setting the parameter to on
(default is off)
will activate the “cache purger” process that
permanently iterates through all cache entries
and deletes the entries that match the wildcard key.
purger_files=number
Sets the number of items that will be scanned during one iteration (1.7.12).
By default, purger_files is set to 10.
purger_threshold=number
Sets the duration of one iteration (1.7.12).
By default, purger_threshold is set to 50 milliseconds.
purger_sleep=number
Sets a pause between iterations (1.7.12).
By default, purger_sleep is set to 50 milliseconds.
In versions 1.7.3, 1.7.7, and 1.11.10, the cache header format has been changed.
Previously cached responses will be considered invalid
after upgrading to a newer nginx version.
proxy_cache_purge string ...;
This directive appeared in version 1.5.7.
Defines conditions under which the request will be considered a cache
purge request.
If at least one value of the string parameters is not empty and is not equal
to “0” then the cache entry with a corresponding
cache key is removed.
The result of successful operation is indicated by returning
the 204 (No Content) response.
If the cache key of a purge request ends
with an asterisk (“*”), all cache entries matching the
wildcard key will be removed from the cache.
However, these entries will remain on the disk until they are removed
due to inactivity,
processed by the cache purger (1.7.12),
or accessed by a client.
Example configuration:
proxy_cache_path /data/nginx/cache keys_zone=cache_zone:10m;
map $request_method $purge_method {
    PURGE   1;
    default 0;
}
server {
    ...
    location / {
        proxy_pass        http://backend;
        proxy_cache       cache_zone;
        proxy_cache_key   $uri;
        proxy_cache_purge $purge_method;
    }
}
This functionality is available as part of our
commercial subscription.
proxy_cache_revalidate on | off;
proxy_cache_revalidate off;
Enables revalidation of expired cache items using conditional requests with
the “If-Modified-Since” and “If-None-Match”
header fields.
proxy_cache_use_stale
error |
timeout |
invalid_header |
updating |
http_500 |
http_502 |
http_503 |
http_504 |
http_403 |
http_404 |
http_429 |
off ...;
proxy_cache_use_stale off;
Determines in which cases a stale cached response can be used
during communication with the proxied server.
The directive’s parameters match the parameters of the
proxy_next_upstream directive.
The error parameter also permits
using a stale cached response if a proxied server to process a request
cannot be selected.
Additionally, the updating parameter permits
using a stale cached response if it is currently being updated.
This allows minimizing the number of accesses to proxied servers
when updating cached data.
Using a stale cached response
can also be enabled directly in the response header
for a specified number of seconds after the response became stale (1.11.10).
This has lower priority than using the directive parameters.
The “stale-while-revalidate” extension of the “Cache-Control” header field permits
using a stale cached response if it is currently being updated.
The “stale-if-error” extension of the “Cache-Control” header field permits
using a stale cached response in case of an error.
To minimize the number of accesses to proxied servers when
populating a new cache element, the proxy_cache_lock
directive can be used.
proxy_cache_valid [code ...] time;
Sets caching time for different response codes.
For example, the following directives
proxy_cache_valid 200 302 10m;
proxy_cache_valid 404 1m;
set 10 minutes of caching for responses with codes 200 and 302
and 1 minute for responses with code 404.
If only caching time is specified
proxy_cache_valid 5m;
then only 200, 301, and 302 responses are cached.
In addition, the any parameter can be specified
to cache any responses:
proxy_cache_valid 301 1h;
proxy_cache_valid any 1m;
Parameters of caching can also be set directly
in the response header.
This has higher priority than setting of caching time using the directive.
The “X-Accel-Expires” header field sets caching time of a
response in seconds.
The zero value disables caching for a response.
If the value starts with the @ prefix, it sets an absolute
time in seconds since Epoch, up to which the response may be cached.
If the header does not include the “X-Accel-Expires” field,
parameters of caching may be set in the header fields
“Expires” or “Cache-Control”.
If the header includes the “Set-Cookie” field, such a
response will not be cached.
If the header includes the “Vary” field
with the special value “*”, such a
response will not be cached (1.7.7).
If the header includes the “Vary” field with another value, such a response
will be cached taking into account the corresponding request header fields (1.7.7).
Processing of one or more of these response header fields can be disabled
using the proxy_ignore_headers directive.
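For instance, a hedged sketch (zone name, times, and the choice of ignored fields are assumptions) that ignores the backend's caching headers and relies only on proxy_cache_valid:
location / {
    proxy_pass           http://backend;
    proxy_cache          cache_zone;
    proxy_ignore_headers Cache-Control Expires Set-Cookie;
    proxy_cache_valid    200 302 10m;
}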
proxy_connect_timeout time;
proxy_connect_timeout 60s;
Defines a timeout for establishing a connection with a proxied server.
It should be noted that this timeout cannot usually exceed 75 seconds.
proxy_cookie_domain off; proxy_cookie_domain domain replacement;
proxy_cookie_domain off;
This directive appeared in version 1.1.15.
Sets a text that should be changed in the domain
attribute of the “Set-Cookie” header fields of a
proxied server response.
Suppose a proxied server returned the “Set-Cookie”
header field with the attribute
“domain=localhost”.
The directive
proxy_cookie_domain localhost example.org;
will rewrite this attribute to
“domain=example.org”.
A dot at the beginning of the domain and
replacement strings and the domain
attribute is ignored.
Matching is case-insensitive.
The domain and replacement strings
can contain variables:
proxy_cookie_domain www.$host $host;
The directive can also be specified using regular expressions.
In this case, domain should start from
the “~” symbol.
A regular expression can contain named and positional captures,
and replacement can reference them:
proxy_cookie_domain ~\.(?P<sl_domain>[-0-9a-z]+\.[a-z]+)$ $sl_domain;
Several proxy_cookie_domain directives
can be specified on the same level:
proxy_cookie_domain ~\.([a-z]+\.[a-z]+)$ $1;
If several directives can be applied to the cookie,
the first matching directive will be chosen.
The off parameter cancels the effect
of the proxy_cookie_domain directives
inherited from the previous configuration level.
proxy_cookie_flags
off |
cookie
[flag ...];
proxy_cookie_flags off;
This directive appeared in version 1.19.3.
Sets one or more flags for the cookie.
The cookie can contain text, variables, and their combinations.
The flag
can contain text, variables, and their combinations (1.19.8).
secure,
httponly,
samesite=strict,
samesite=lax,
samesite=none
parameters add the corresponding flags.
nosecure,
nohttponly,
nosamesite
parameters remove the corresponding flags.
The cookie can also be specified using regular expressions.
In this case, cookie should start from
the “~” symbol.
Several proxy_cookie_flags directives
can be specified on the same configuration level:
proxy_cookie_flags one httponly;
proxy_cookie_flags ~ nosecure samesite=strict;
In the example, the httponly flag
is added to the cookie one,
for all other cookies
the samesite=strict flag is added and
the secure flag is deleted.
The off parameter cancels the effect
of the proxy_cookie_flags directives
inherited from the previous configuration level.
proxy_cookie_path off; proxy_cookie_path path replacement;
proxy_cookie_path off;
This directive appeared in version 1.1.15.
Sets a text that should be changed in the path
attribute of the “Set-Cookie” header fields of a proxied server response.
Suppose a proxied server returned the “Set-Cookie”
header field with the attribute
“path=/two/some/uri/”.
The directive
proxy_cookie_path /two/ /;
will rewrite this attribute to
“path=/some/uri/”.
The path and replacement strings
can contain variables:
proxy_cookie_path $uri /some$uri;
The directive can also be specified using regular expressions.
In this case, path should either start from
the “~” symbol for a case-sensitive matching,
or from the “~*” symbols for case-insensitive
matching.
The regular expression can contain named and positional captures,
and replacement can reference them:
proxy_cookie_path ~*^/user/([^/]+) /u/$1;
Several proxy_cookie_path directives
can be specified on the same configuration level:
proxy_cookie_path /one/ /;
proxy_cookie_path / /two/;
If several directives can be applied to the cookie,
the first matching directive will be chosen.
The off parameter cancels the effect
of the proxy_cookie_path directives
inherited from the previous configuration level.
proxy_force_ranges on | off;
proxy_force_ranges off;
This directive appeared in version 1.7.7.
Enables byte-range support
for both cached and uncached responses from the proxied server
regardless of the “Accept-Ranges” field in these responses.
proxy_headers_hash_bucket_size size;
proxy_headers_hash_bucket_size 64;
Sets the bucket size for hash tables
used by the proxy_hide_header and proxy_set_header
directives.
The details of setting up hash tables are provided in a separate
document.
proxy_headers_hash_max_size size;
proxy_headers_hash_max_size 512;
Sets the maximum size of hash tables
proxy_hide_header field;
By default,
nginx does not pass the header fields “Date”,
“Server”, “X-Pad”, and
“X-Accel-...” from the response of a proxied
server to a client.
The proxy_hide_header directive sets additional fields
that will not be passed.
If, on the contrary, the passing of fields needs to be permitted,
the proxy_pass_header directive can be used.
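A short hedged illustration (the header names are examples only): hiding an extra backend header while re-enabling one that is hidden by default:
proxy_hide_header X-Powered-By;
proxy_pass_header Server;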
proxy_http_version 1.0 | 1.1;
proxy_http_version 1.0;
This directive appeared in version 1.1.4.
Sets the HTTP protocol version for proxying.
By default, version 1.0 is used.
Version 1.1 is recommended for use with
keepalive
connections and
NTLM authentication.
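A minimal sketch of keepalive connections to an upstream group (the group name and address are assumptions); the “Connection” header is cleared so that the upstream connection is not closed after each request:
upstream backend {
    server 127.0.0.1:8000;
    keepalive 16;
}

server {
    location / {
        proxy_pass         http://backend;
        proxy_http_version 1.1;
        proxy_set_header   Connection "";
    }
}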
proxy_ignore_client_abort on | off;
proxy_ignore_client_abort off;
Determines whether the connection with a proxied server should be
closed when a client closes the connection without waiting
for a response.
proxy_ignore_headers field ...;
Disables processing of certain response header fields from the proxied server.
The following fields can be ignored: “X-Accel-Redirect”,
“X-Accel-Expires”, “X-Accel-Limit-Rate” (1.1.6),
“X-Accel-Buffering” (1.1.6),
“X-Accel-Charset” (1.1.6), “Expires”,
“Cache-Control”, “Set-Cookie” (0.8.44),
and “Vary” (1.7.7).
If not disabled, processing of these header fields has the following
effect:
“X-Accel-Expires”, “Expires”,
“Cache-Control”, “Set-Cookie”,
and “Vary”
set the parameters of response caching;
“X-Accel-Redirect” performs an
internal
redirect to the specified URI;
“X-Accel-Limit-Rate” sets the
rate
limit for transmission of a response to a client;
“X-Accel-Buffering” enables or disables
buffering of a response;
“X-Accel-Charset” sets the desired
charset
of a response.
proxy_intercept_errors on | off;
proxy_intercept_errors off;
Determines whether proxied responses with codes greater than or equal
to 300 should be passed to a client
or be intercepted and redirected to nginx for processing
with the error_page directive.
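A hedged sketch (the error page path is an assumption) of intercepting backend errors and serving a local page through error_page:
location / {
    proxy_pass             http://backend;
    proxy_intercept_errors on;
    error_page 502 503 504 /50x.html;
}

location = /50x.html {
    root /usr/share/nginx/html;
}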
proxy_limit_rate rate;
proxy_limit_rate 0;
Limits the speed of reading the response from the proxied server.
The rate is specified in bytes per second.
The zero value disables rate limiting.
The limit is set per request, and so if nginx simultaneously opens
two connections to the proxied server,
the overall rate will be twice as much as the specified limit.
The limitation works only if
buffering of responses from the proxied
server is enabled.
proxy_max_temp_file_size size;
proxy_max_temp_file_size 1024m;
When buffering of responses from the proxied
server is enabled, and the whole response does not fit into the buffers
set by the proxy_buffer_size and proxy_buffers
directives, a part of the response can be saved to a temporary file.
This directive sets the maximum size of the temporary file.
The size of data written to the temporary file at a time is set
by the proxy_temp_file_write_size directive.
The zero value disables buffering of responses to temporary files.
This restriction does not apply to responses
that will be cached
or stored on disk.
proxy_method method;
Specifies the HTTP method to use in requests forwarded
to the proxied server instead of the method from the client request.
Parameter value can contain variables (1.11.6).
proxy_next_upstream
error |
timeout |
invalid_header |
http_500 |
http_502 |
http_503 |
http_504 |
http_403 |
http_404 |
http_429 |
non_idempotent |
off ...;
proxy_next_upstream error timeout;
Specifies in which cases a request should be passed to the next server:
error
an error occurred while establishing a connection with the
server, passing a request to it, or reading the response header;
timeout
a timeout has occurred while establishing a connection with the
server, passing a request to it, or reading the response header;
invalid_header
a server returned an empty or invalid response;
http_500
a server returned a response with the code 500;
http_502
a server returned a response with the code 502;
http_503
a server returned a response with the code 503;
http_504
a server returned a response with the code 504;
http_403
a server returned a response with the code 403;
http_404
a server returned a response with the code 404;
http_429
a server returned a response with the code 429 (1.11.13);
non_idempotent
normally, requests with a
non-idempotent
method
(POST, LOCK, PATCH)
are not passed to the next server
if a request has been sent to an upstream server (1.9.13);
enabling this option explicitly allows retrying such requests;
off
disables passing a request to the next server.
One should bear in mind that passing a request to the next server is
only possible if nothing has been sent to a client yet.
That is, if an error or timeout occurs in the middle of the
transferring of a response, fixing this is impossible.
The directive also defines what is considered an
unsuccessful
attempt of communication with a server.
The cases of error, timeout and
invalid_header are always considered unsuccessful attempts,
even if they are not specified in the directive.
The cases of http_500, http_502,
http_503, http_504,
and http_429 are
considered unsuccessful attempts only if they are specified in the directive.
The cases of http_403 and http_404
are never considered unsuccessful attempts.
Passing a request to the next server can be limited by
the number of tries
and by time.
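For example, a sketch (values illustrative only) limiting retries both by count and by overall time, using the two directives described next:
proxy_next_upstream         error timeout http_503;
proxy_next_upstream_tries   3;
proxy_next_upstream_timeout 10s;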
proxy_next_upstream_timeout time;
proxy_next_upstream_timeout 0;
This directive appeared in version 1.7.5.
Limits the time during which a request can be passed to the
next server.
The 0 value turns off this limitation.
proxy_next_upstream_tries number;
proxy_next_upstream_tries 0;
This directive appeared in version 1.7.5.
Limits the number of possible tries for passing a request to the
next server.
The 0 value turns off this limitation.
proxy_no_cache string ...;
Defines conditions under which the response will not be saved to a cache.
If at least one value of the string parameters is not empty and is not
equal to “0” then the response will not be saved:
proxy_no_cache $cookie_nocache $arg_nocache$arg_comment;
proxy_no_cache $http_pragma $http_authorization;
Can be used along with the proxy_cache_bypass directive.
proxy_pass URL;
Context: location, if in location, limit_except
Sets the protocol and address of a proxied server and an optional URI
to which a location should be mapped.
As a protocol, “http” or “https”
can be specified.
The address can be specified as a domain name or IP address,
and an optional port:
proxy_pass http://localhost:8000/uri/;
or as a UNIX-domain socket path specified after the word
“unix” and enclosed in colons:
proxy_pass http://unix:/tmp/backend.socket:/uri/;
If a domain name resolves to several addresses, all of them will be
used in a round-robin fashion.
In addition, an address can be specified as a
server group.
Parameter value can contain variables.
In this case, if an address is specified as a domain name,
the name is searched among the described server groups,
and, if not found, is determined using a
resolver.
A request URI is passed to the server as follows:
If the proxy_pass directive is specified with a URI,
then when a request is passed to the server, the part of a
normalized
request URI matching the location is replaced by a URI
specified in the directive:
location /name/ {
    proxy_pass http://127.0.0.1/remote/;
}
If proxy_pass is specified without a URI,
the request URI is passed to the server in the same form
as sent by a client when the original request is processed,
or the full normalized request URI is passed
when processing the changed URI:
location /some/path/ {
    proxy_pass http://127.0.0.1;
}
Before version 1.1.12,
if proxy_pass is specified without a URI,
the original request URI might be passed
instead of the changed URI in some cases.
In some cases, the part of a request URI to be replaced cannot be determined:
When location is specified using a regular expression,
and also inside named locations.
In these cases,
proxy_pass should be specified without a URI.
When the URI is changed inside a proxied location using the
rewrite directive,
and this same configuration will be used to process a request
(break):
location /name/ {
    rewrite    /name/([^/]+) /users?name=$1 break;
    proxy_pass http://127.0.0.1;
}
In this case, the URI specified in the directive is ignored and
the full changed request URI is passed to the server.
When variables are used in proxy_pass:
location /name/ {
    proxy_pass http://127.0.0.1$request_uri;
}
In this case, if URI is specified in the directive,
it is passed to the server as is,
replacing the original request URI.
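A hedged sketch of the variable case (the resolver address and host name are assumptions); when proxy_pass contains variables and the host does not match a server group, a resolver is needed:
resolver 127.0.0.53;

location /name/ {
    set $backend_host backend.example.com;
    proxy_pass http://$backend_host$request_uri;
}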
WebSocket proxying requires special
configuration and is supported since version 1.3.13.
proxy_pass_header field;
Permits passing otherwise disabled header
fields from a proxied server to a client.
proxy_pass_request_body on | off;
proxy_pass_request_body on;
Indicates whether the original request body is passed
to the proxied server.
location /x-accel-redirect-here/ {
proxy_method GET;
proxy_pass_request_body off;
proxy_set_header Content-Length “”;
proxy_pass ...;
}
See also the proxy_set_header and
proxy_pass_request_headers directives.
proxy_pass_request_headers on | off;
proxy_pass_request_headers on;
Indicates whether the header fields of the original request are passed
to the proxied server.
location /x-accel-redirect-here/ {
    proxy_method GET;
    proxy_pass_request_headers off;
    proxy_pass_request_body off;
    proxy_pass ...;
}
See also the proxy_set_header and
proxy_pass_request_body directives.
proxy_read_timeout time;
proxy_read_timeout 60s;
Defines a timeout for reading a response from the proxied server.
The timeout is set only between two successive read operations,
not for the transmission of the whole response.
If the proxied server does not transmit anything within this time,
the connection is closed.
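As an illustrative sketch (the values are assumptions, not recommendations), the three proxy timeouts are often tuned together:
proxy_connect_timeout 5s;
proxy_send_timeout    30s;
proxy_read_timeout    120s;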
proxy_redirect default; proxy_redirect off; proxy_redirect redirect replacement;
proxy_redirect default;
Sets the text that should be changed in the “Location”
and “Refresh” header fields of a proxied server response.
Suppose a proxied server returned the header field
“Location: http://localhost:8000/two/some/uri/”.
The directive
proxy_redirect http://localhost:8000/two/ http://frontend/one/;
will rewrite this string to
“Location: http://frontend/one/some/uri/”.
A server name may be omitted in the replacement string:
proxy_redirect http://localhost:8000/two/ /;
then the primary server’s name and port, if different from 80,
will be inserted.
The default replacement specified by the default parameter
uses the parameters of the
location and
proxy_pass directives.
Hence, the two configurations below are equivalent:
location /one/ {
    proxy_pass     http://upstream:port/two/;
    proxy_redirect default;
}

location /one/ {
    proxy_pass     http://upstream:port/two/;
    proxy_redirect http://upstream:port/two/ /one/;
}
The default parameter is not permitted if
proxy_pass is specified using variables.
A replacement string can contain variables:
proxy_redirect http://localhost:8000/ http://$host:$server_port/;
A redirect can also contain (1.1.11) variables:
proxy_redirect http://$proxy_host:8000/ /;
The directive can be specified (1.1.11) using regular expressions.
In this case, redirect should either start with
the “~” symbol for a case-sensitive matching,
or with the “~*” symbols for case-insensitive
matching.
The regular expression can contain named and positional captures,
and replacement can reference them:
proxy_redirect ~^(http://[^:]+):\d+(/.+)$ $1$2;
proxy_redirect ~*/user/([^/]+)/(.+)$      http://$1.example.com/$2;
Several proxy_redirect directives
can be specified on the same configuration level:
proxy_redirect default;
proxy_redirect http://localhost:8000/  /;
proxy_redirect http://www.example.com/ /;
If several directives can be applied to
the header fields of a proxied server response,
the first matching directive will be chosen.
The off parameter cancels the effect
of the proxy_redirect directives
inherited from the previous configuration level.
Using this directive, it is also possible to add host names to relative
redirects issued by a proxied server:
proxy_redirect / /;
proxy_request_buffering on | off;
proxy_request_buffering on;
This directive appeared in version 1.7.11.
Enables or disables buffering of a client request body.
When buffering is enabled, the entire request body is
read
from the client before sending the request to a proxied server.
When buffering is disabled, the request body is sent to the proxied server
immediately as it is received.
In this case, the request cannot be passed to the
next server
if nginx already started sending the request body.
When HTTP/1.1 chunked transfer encoding is used
to send the original request body,
the request body will be buffered regardless of the directive value unless
HTTP/1.1 is enabled for proxying.
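A minimal sketch, assuming an upload location and an upstream named backend, of streaming request bodies to the backend instead of buffering them first:
location /upload/ {
    proxy_pass              http://backend;
    proxy_http_version      1.1;
    proxy_request_buffering off;
}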
proxy_send_lowat size;
proxy_send_lowat 0;
If the directive is set to a non-zero value, nginx will try to
minimize the number
of send operations on outgoing connections to a proxied server by using either
NOTE_LOWAT flag of the
kqueue method,
or the SO_SNDLOWAT socket option,
with the specified size.
This directive is ignored on Linux, Solaris, and Windows.
proxy_send_timeout time;
proxy_send_timeout 60s;
Sets a timeout for transmitting a request to the proxied server.
The timeout is set only between two successive write operations,
not for the transmission of the whole request.
If the proxied server does not receive anything within this time,
the connection is closed.
proxy_set_body value;
Allows redefining the request body passed to the proxied server.
The value can contain text, variables, and their combination.
proxy_set_header field value;
proxy_set_header Host $proxy_host; proxy_set_header Connection close;
Allows redefining or appending fields to the request header
passed to the proxied server.
The value can contain text, variables, and their combinations.
These directives are inherited from the previous configuration level
if and only if there are no proxy_set_header directives
defined on the current level.
By default, only two fields are redefined:
proxy_set_header Host $proxy_host;
proxy_set_header Connection close;
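Because of this inheritance rule, a hedged sketch of a common pitfall (names are illustrative): defining any proxy_set_header in a location discards the fields set at the server level, so they must be repeated there:
server {
    proxy_set_header Host $host;

    location / {
        # the server-level Host header is NOT inherited here, because this
        # level defines its own proxy_set_header directives
        proxy_set_header Host      $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://backend;
    }
}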
If caching is enabled, the header fields
“If-Modified-Since”,
“If-Unmodified-Since”,
“If-None-Match”,
“If-Match”,
“Range”,
and
“If-Range”
from the original request are not passed to the proxied server.
An unchanged “Host” request header field can be passed like this:
proxy_set_header Host $http_host;
However, if this field is not present in a client request header then
nothing will be passed.
In such a case it is better to use the $host variable – its
value equals the server name in the “Host” request header
field or the primary server name if this field is not present:
proxy_set_header Host $host;
In addition, the server name can be passed together with the port of the
proxied server:
proxy_set_header Host $host:$proxy_port;
If the value of a header field is an empty string then this
field will not be passed to a proxied server:
proxy_set_header Accept-Encoding "";
Securing HTTP Traffic to Upstream Servers | NGINX Plus

Secure HTTP traffic between NGINX or NGINX Plus and upstream servers, using SSL/TLS encryption.
This article explains how to encrypt HTTP traffic between NGINX and an upstream group or a proxied server.
Prerequisites
NGINX Open Source or NGINX Plus
A proxied server or an upstream group of servers
SSL certificates and a private key
Obtaining SSL Server Certificates
You can purchase a server certificate from a trusted certificate authority (CA), or you can create your own internal CA with the OpenSSL library and generate your own certificate. The server certificate together with a private key should be placed on each upstream server.
Obtaining an SSL Client Certificate
NGINX will identify itself to the upstream servers by using an SSL client certificate. This client certificate must be signed by a trusted CA and is configured on NGINX together with the corresponding private key.
You will also need to configure the upstream servers to require client certificates for all incoming SSL connections, and to trust the CA that issued NGINX's client certificate. Then, when NGINX connects to the upstream, it will provide its client certificate and the upstream server will accept it.
Configuring NGINX
First, change the URL to an upstream group to support SSL connections. In the NGINX configuration file, specify the “https” protocol for the proxied server or an upstream group in the proxy_pass directive:
location /upstream {
    proxy_pass https://backend.example.com;
}
Add the client certificate and the key that will be used to authenticate NGINX on each upstream server with proxy_ssl_certificate and proxy_ssl_certificate_key directives:
location /upstream {
    proxy_pass                https://backend.example.com;
    proxy_ssl_certificate     /etc/nginx/client.pem;
    proxy_ssl_certificate_key /etc/nginx/client.key;
}
If you use a self-signed certificate for an upstream or your own CA, also include the proxy_ssl_trusted_certificate directive. The file must be in the PEM format. Optionally, include the proxy_ssl_verify and proxy_ssl_verify_depth directives to have NGINX check the validity of the security certificates:
location /upstream {
    #...
    proxy_ssl_trusted_certificate /etc/nginx/trusted_ca_cert.crt;
    proxy_ssl_verify              on;
    proxy_ssl_verify_depth        2;
    #...
}
Each new SSL connection requires a full SSL handshake between the client and server, which is quite CPU-intensive. To have NGINX proxy previously negotiated connection parameters and use a so-called abbreviated handshake, include the proxy_ssl_session_reuse directive:
proxy_ssl_session_reuse on;
Optionally, you can specify which SSL protocols and ciphers are used:
location /upstream {
    #...
    proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    proxy_ssl_ciphers   HIGH:!aNULL:!MD5;
}
Configuring Upstream Servers
Each upstream server should be configured to accept HTTPS connections. For each upstream server, specify a path to the server certificate and the private key with ssl_certificate and ssl_certificate_key directives:
server {
    listen              443 ssl;
    server_name         backend1.example.com;
    ssl_certificate     /etc/ssl/certs/server.crt;
    ssl_certificate_key /etc/ssl/certs/server.key;
    location /yourapp {
        proxy_pass http://url_to_app.com;
        #...
    }
}
Specify the path to a client certificate with the ssl_client_certificate directive:
ssl_client_certificate /etc/ssl/certs/ca.crt;
ssl_verify_client      optional;
Complete Example
http {
    upstream backend.example.com {
        server backend1.example.com:443;
        server backend2.example.com:443;
    }
    server {
        listen      80;
        server_name www.example.com;
        #...
        location /upstream {
            proxy_pass                    https://backend.example.com;
            proxy_ssl_trusted_certificate /etc/nginx/trusted_ca_cert.crt;
            proxy_ssl_certificate         /etc/nginx/client.pem;
            proxy_ssl_certificate_key     /etc/nginx/client.key;
            proxy_ssl_protocols           TLSv1 TLSv1.1 TLSv1.2;
            proxy_ssl_ciphers             HIGH:!aNULL:!MD5;
            proxy_ssl_verify              on;
            proxy_ssl_verify_depth        2;
            proxy_ssl_session_reuse       on;
        }
    }
    # Each backend server listens on 443 ssl with its server certificate,
    # private key, and client CA certificate, and proxies /yourapp to the
    # application:  location /yourapp { proxy_pass http://url_to_app.com; #... }
}
In this example, the “https” protocol in the proxy_pass directive specifies that the traffic forwarded by NGINX to upstream servers be secured.
When a secure connection is passed from NGINX to the upstream server for the first time, the full handshake process is performed. The proxy_ssl_certificate directive defines the location of the PEM-format certificate required by the upstream server, the proxy_ssl_certificate_key directive defines the location of the certificate’s private key, and the proxy_ssl_protocols and proxy_ssl_ciphers directives control which protocols and ciphers are used.
The next time NGINX passes a connection to the upstream server, session parameters will be reused because of the proxy_ssl_session_reuse directive, and the secured connection is established faster.
The trusted CA certificates in the file named by the proxy_ssl_trusted_certificate directive are used to verify the certificate on the upstream. The proxy_ssl_verify_depth directive specifies that two certificates in the certificate chain are checked, and the proxy_ssl_verify directive verifies the validity of certificates.
Configure Nginx as reverse proxy with upstream SSL - Server ...

I try to configure an Nginx server as a reverse proxy so the requests it receives from clients are forwarded to the upstream server via https as well.
Here’s the configuration that I use:
http {
# enable reverse proxy
proxy_redirect off;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwared-For $proxy_add_x_forwarded_for;
upstream streaming_example_com {
    server WEBSERVER_IP:443;
}
server
listen 443 default ssl;
server_name;
access_log /tmp/;
error_log /tmp/;
root /usr/local/nginx/html;
index;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 10m;
ssl_certificate /etc/nginx/ssl/;
ssl_certificate_key /etc/nginx/ssl/;
ssl_verify_client off;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers RC4:HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
location / {
    proxy_pass https://streaming_example_com;
}
}
}
Anyway, when I try to access a file using reverse proxy this is the error I get in reverse proxy logs:
2014/03/20 12:09:07 [error] 4113079#0: *1 SSL_do_handshake() failed
(SSL: error:1408E0F4:SSL routines:SSL3_GET_MESSAGE:unexpected message)
while SSL handshaking to upstream, client: 192.168.1.2, server:, request: “GET /publishers/0/645/
HTTP/1.1”, upstream:
“, host:
“”
Any idea what I am doing wrong?
asked Mar 20 ’14 at 11:18
I found what was the error, I needed to add
proxy_ssl_session_reuse off;
answered Mar 20 ’14 at 12:42
Alex Flo
In my case, I was trying to reverse proxy a website behind Cloudflare. I got the same error in /var/log/nginx/ I tried many solutions and this one worked for me:
proxy_ssl_server_name on;
Yes, even though it's 2019 now, some services still need SNI to distinguish among hosted sites.
answered Jun 2 ’19 at 17:47
iBug
This fully solved the issue for me:
location / {
    proxy_pass ...;
    proxy_set_header Host $host;
    proxy_ssl_name $host;
    proxy_ssl_session_reuse off;
    ...
}
I also had to add proxy_ssl_name in order to make sure that nginx knew what name to pass to the upstream server.
answered Mar 16 at 7:03

Frequently Asked Questions about proxy_ssl_session_reuse

What is Proxy_buffering?

proxy_buffering on; Context: http , server , location. Enables or disables buffering of responses from the proxied server. When buffering is enabled, nginx receives a response from the proxied server as soon as possible, saving it into the buffers set by the proxy_buffer_size and proxy_buffers directives.

What is proxy_pass?

The proxy_pass docs say: This directive sets the address of the proxied server and the URI to which location will be mapped. So when you tell Nginx to proxy_pass, you're saying “Pass this request on to this proxy URL”. Apr 20, 2013

What is Proxy_redirect nginx?

Nginx proxy_redirect: Change response-header Location and Refresh in the response of the server. Jun 27, 2012
