`content/nginx/admin-guide/web-server/serving-static-content.md` (9 additions, 9 deletions)
````diff
@@ -2,7 +2,7 @@
 description: Configure NGINX and F5 NGINX Plus to serve static content, with type-specific
   root directories, checks for file existence, and performance optimizations.
 docs: DOCS-442
-title: Serving Static Content
+title: Serve Static Content
 toc: true
 weight: 200
 type:
````
````diff
@@ -108,11 +108,11 @@ location @backend {
 For more information, watch the [Content Caching](https://www.nginx.com/resources/webinars/content-caching-nginx-plus/) webinar on‑demand to learn how to dramatically improve the performance of a website, and get a deep‑dive into NGINX’s caching capabilities.

 <span id="optimize"></span>
-## Optimizing Performance for Serving Content
+## Optimize Performance for Serving Content

 Loading speed is a crucial factor in serving any content. Making minor optimizations to your NGINX configuration may boost productivity and help reach optimal performance.

-### Enabling `sendfile`
+### Enable `sendfile`

 By default, NGINX handles file transmission itself and copies the file into the buffer before sending it. Enabling the [sendfile](https://nginx.org/en/docs/http/ngx_http_core_module.html#sendfile) directive eliminates the step of copying the data into the buffer and enables direct copying of data from one file descriptor to another. Alternatively, to prevent one fast connection from entirely occupying the worker process, you can use the [sendfile_max_chunk](https://nginx.org/en/docs/http/ngx_http_core_module.html#sendfile_max_chunk) directive to limit the amount of data transferred in a single `sendfile()` call (in this example, to `1` MB):
````
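The configuration example under this paragraph is elided from the diff (only its closing brace appears in the next hunk header). As a hedged sketch of the kind of block the paragraph describes, assuming an `/mp3` location like the one in the surrounding hunks:

```nginx
location /mp3 {
    # Let the kernel's sendfile() transfer the file directly between
    # file descriptors instead of copying through a userspace buffer.
    sendfile           on;
    # Cap each sendfile() call at 1 MB so a single fast connection
    # cannot monopolize the worker process.
    sendfile_max_chunk 1m;
}
```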
````diff
@@ -124,7 +124,7 @@ location /mp3 {
 }
 ```

-### Enabling `tcp_nopush`
+### Enable `tcp_nopush`

 Use the [tcp_nopush](https://nginx.org/en/docs/http/ngx_http_core_module.html#tcp_nopush) directive together with the [sendfile](https://nginx.org/en/docs/http/ngx_http_core_module.html#sendfile) `on;` directive. This enables NGINX to send HTTP response headers in one packet right after the chunk of data has been obtained by `sendfile()`.
````
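The example block for this section is also elided; a minimal sketch of the two directives combined (the actual elided example may differ):

```nginx
location /mp3 {
    sendfile   on;
    # With sendfile enabled, send the response headers and the first
    # chunk of file data together in full packets.
    tcp_nopush on;
}
```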
````diff
@@ -136,7 +136,7 @@ location /mp3 {
 }
 ```

-### Enabling `tcp_nodelay`
+### Enable `tcp_nodelay`

 The [tcp_nodelay](https://nginx.org/en/docs/http/ngx_http_core_module.html#tcp_nodelay) directive allows overriding [Nagle’s algorithm](https://en.wikipedia.org/wiki/Nagle's_algorithm), originally designed to solve problems with small packets in slow networks. The algorithm consolidates a number of small packets into a larger one and sends the packet with a `200` ms delay. Nowadays, when serving large static files, the data can be sent immediately regardless of the packet size. The delay also affects online applications (ssh, online games, online trading, and so on). By default, the [tcp_nodelay](https://nginx.org/en/docs/http/ngx_http_core_module.html#tcp_nodelay) directive is set to `on`, which means that Nagle’s algorithm is disabled. Use this directive only for keepalive connections:
````
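The paragraph's advice can be sketched as follows; the `keepalive_timeout` value is an assumption added for illustration, not part of the elided example:

```nginx
location /mp3 {
    # Disable Nagle's algorithm on keepalive connections so small
    # writes are sent immediately instead of being delayed.
    tcp_nodelay       on;
    keepalive_timeout 65;
}
```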
````diff
@@ -150,11 +150,11 @@ location /mp3 {
 ```


-### Optimizing the Backlog Queue
+### Optimize the Backlog Queue

 One of the important factors is how fast NGINX can handle incoming connections. The general rule is: when a connection is established, it is put into the “listen” queue of a listen socket. Under normal load, either the queue is small or there is no queue at all. But under high load, the queue can grow dramatically, resulting in uneven performance, dropped connections, and increased latency.

-#### Displaying the Listen Queue
+#### Display the Listen Queue

 To display the current listen queue, run this command:
````
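The command itself is elided from this diff. Judging from the output columns shown in the next hunk (`Listen Local Address`, `0/0/128`), it is likely the BSD-style `netstat -Lan`; this is an assumption, not a reconstruction of the elided text (on Linux, `ss -lnt` reports comparable queue counters). A sample session using the output lines visible in the diff:

```
$ netstat -Lan
Listen         Local Address
0/0/128        *.8080
```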
````diff
@@ -182,7 +182,7 @@ Listen Local Address
 0/0/128 *.8080
 ```

-#### Tuning the Operating System
+#### Tune the Operating System

 Increase the value of the `net.core.somaxconn` kernel parameter from its default value (`128`) to a value high enough for a large burst of traffic. In this example, it's increased to `4096`.
````
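The intervening example is elided from the diff (only its final line, `net.core.somaxconn = 4096`, appears in the next hunk). A standard way to apply such a change on Linux; the exact commands are assumptions based on common `sysctl` usage and require root:

```
# Apply immediately (assumed standard sysctl usage):
sudo sysctl -w net.core.somaxconn=4096

# Persist across reboots by adding this line to /etc/sysctl.conf,
# then reload with `sysctl -p`:
net.core.somaxconn = 4096
```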
````diff
@@ -205,7 +205,7 @@ Increase the value of the `net.core.somaxconn` kernel parameter from its default
 net.core.somaxconn = 4096
 ```

-#### Tuning NGINX
+#### Tune NGINX

 If you set the `somaxconn` kernel parameter to a value greater than `512`, change the `backlog` parameter of the NGINX [listen](https://nginx.org/en/docs/http/ngx_http_core_module.html#listen) directive to match:
````
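The configuration example that follows this sentence is cut off at the end of the diff; a minimal sketch (the `server` block and port are assumptions added for illustration) would be:

```nginx
server {
    # Match the kernel's somaxconn setting so NGINX can queue
    # the same number of pending connections on this socket.
    listen 80 backlog=4096;
}
```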