diff --git a/content/nginx/deployment-guides/google-cloud-platform/high-availability-all-active.md b/content/nginx/deployment-guides/google-cloud-platform/high-availability-all-active.md
index fa9afcd78..66b98b608 100644
--- a/content/nginx/deployment-guides/google-cloud-platform/high-availability-all-active.md
+++ b/content/nginx/deployment-guides/google-cloud-platform/high-availability-all-active.md
@@ -9,48 +9,58 @@ toc: true
 weight: 100
 ---
 
-This deployment guide explains how to create and configure the initial components for an all‑active, high‑availability deployment of F5 NGINX Plus on Google Compute Engine (GCE), the Google Cloud Platform (GCP) product for running workloads on virtual machines (VMs). Multiple instances of NGINX Plus in active pairs load balance incoming connections across multiple application environments.
+This guide explains how to deploy F5 NGINX Plus in an all-active, high-availability configuration on Google Compute Engine (GCE). GCE is the Google Cloud Platform (GCP) service for running workloads on virtual machines (VMs). In this setup, multiple NGINX Plus instances work in active pairs to load balance incoming connections across your app environments.
 
 **Notes:**
 
-- GCE is a highly dynamic environment where the names and arrangement of GUI elements (such as menu items, checkboxes, and configuration navigation) are subject to change. We have made every effort to accurately represent the GCE GUI at the time of original publication, but options and locations might change over time. Use this guide as a reference and adapt to the current GCE working environment as necessary.
-- The configuration described in this guide allows anyone from any public IP address to access the NGINX Plus instances directly without restriction. While this most flexibly accommodates common scenarios in a test environment, we do not recommend it for production use. Before deploying the architecture in production, we strongly recommend that you disable HTTP and HTTPS access to the app-1 and app-2 instances over the GCE external IP addresses, or remove the external IP addresses for all application instances so that they're accessible only on the internal GCE network.
+- The GCE environment changes frequently, including the names and arrangement of GUI elements. This guide was accurate when published, but some GCE GUI elements might have changed since then. Use this guide as a reference and adapt to the current GCE working environment as necessary.
+- The configuration described in this guide allows anyone at any public IP address to access the NGINX Plus instances without restriction. While this works for common test scenarios, we do not recommend it in production. Before deploying the architecture in production, block HTTP and HTTPS access to the external IP addresses of the app-1 and app-2 instances, or remove the external IP addresses for all application instances so they're accessible only on the internal GCE network.
+
+
 
 ## Design and Topology
 
 The deployment combines the following technologies:
 
-- [NGINX Plus](https://www.f5.com/products/nginx/nginx-plus) – Load balances HTTP connections across multiple instances of two different applications. Instructions are provided both for manual installation on a standard GCE VM image and for setting up the prebuilt NGINX Plus VM image available in the Google Marketplace.
+- [NGINX Plus](https://www.f5.com/products/nginx/nginx-plus) – Load balances HTTP connections across multiple instances of two applications.
We provide instructions for manual installation on a GCE VM image and setting up the prebuilt NGINX Plus VM image. Both are available in the Google Marketplace. - PHP-FPM – Supports the two sample applications. -- [GCE network load balancer](https://cloud.google.com/compute/docs/load-balancing/network) – Provides TCP connectivity between clients and the NGINX Plus load‑balancing (LB) instances in a GCP region, as well as maintaining session persistence for each NGINX Plus instance. +- [GCE network load balancer](https://cloud.google.com/compute/docs/load-balancing/network) – Enables TCP connectivity between clients and NGINX Plus load-balancing (LB) instances in a GCP region. It also maintains session persistence for each NGINX Plus instance. - [GCE instance groups](https://cloud.google.com/compute/docs/instance-groups) – Provide a mechanism for managing a group of VM instances as a unit. - [GCE health checks](https://cloud.google.com/compute/docs/load-balancing/health-checks) – Maintain high availability of the NGINX Plus LB instances by controlling when GCE creates a new LB instance in the instance group. Topology of the all‑active deployment of NGINX Plus as the Google Cloud Platform load balancer. -[Session persistence](https://www.nginx.com/products/nginx/load-balancing/#session-persistence) is managed at the network layer by GCE network load balancer (based on client IP address) and at the application layer by the NGINX Plus LB instance (via a session cookie). When a new client connection enters the GCE network environment, GCE network load balancer assigns it to a specific frontend NGINX Plus LB instance, and the association persists as long as the LB instance is up and functional. The NGINX Plus LB instance forwards the request to a specific application instance in one of the two groups of them, selected using its default Round Robin algorithm. It also issues a cookie to the client to represent the session with that application instance, so that subsequent requests from the client are forwarded to that application instance as long as it is up and running. +[Session persistence](https://www.nginx.com/products/nginx/load-balancing/#session-persistence) is managed at the network layer by the GCE network load balancer (based on client IP address). The NGINX Plus LB instance also manages it at the application layer (with a session cookie). + +The GCE network LB assigns each new client to a specific NGINX Plus LB. This association persists as long as the LB instance is up and functional. + +NGINX Plus LB uses the round-robin algorithm to forward requests to specific app instances. It also adds a session cookie. It keeps future requests from the same client on the same app instance as long as it's running. + +This deployment guide uses two groups of app instances: – app-1 and app-2. It demonstrates [load balancing](https://www.nginx.com/products/nginx/load-balancing/) between different app types. But both groups have the same app configurations. -This deployment in this guide utilizes two groups of application instances – app-1 and app-2 – to demonstrate [load balancing](https://www.nginx.com/products/nginx/load-balancing/) between different application types, but the application configurations are the same in both groups. The deployment can be very easily adapted to distribute unique connections to different groups of application instances by creating discrete upstream blocks and doing content routing based on URI. 
Please refer to the reference documentation for details on configuring multiple [upstream server groups](https://nginx.org/en/docs/http/ngx_http_upstream_module.html).
+You can easily adapt the deployment to distribute unique connections to different groups of app instances by creating discrete upstream blocks and performing content routing based on the URI.
+
+Please see the reference docs for details on configuring multiple [upstream server groups](https://nginx.org/en/docs/http/ngx_http_upstream_module.html).
 
 ## Prerequisites
 
 This guide assumes that you:
 
-- Have a [Google account](https://accounts.google.com/SignUp) (a separate GCP or GCE account is not needed).
-- Have enrolled in a [free trial](https://cloud.google.com/free-trial) with available credit or have an established payment account with GCP.
+- Have a [Google account](https://accounts.google.com/SignUp) (a separate GCP or GCE account is unnecessary).
+- Have enrolled in a [free trial](https://cloud.google.com/free-trial) with available credit or have a GCP payment account.
 - Have a basic working knowledge of GCE and its GUI control panel:
   - Navigation
   - Creating instances
   - Managing IAM policies
 - Understand basic networking.
 - Have an NGINX Plus subscription. You can start a [free 30‑day trial](https://www.nginx.com/free-trial-request/) if you don't already have a paid subscription.
-- Know how to install NGINX Plus, have a basic understanding of how it performs in load balancing and application delivery modes, and are familiar with its configuration syntax.
+- Know how to install NGINX Plus. Have a basic understanding of how it performs in load balancing and application delivery modes. Be familiar with its configuration syntax.
 - Are familiar with GitHub and know how to [clone a repository](https://help.github.com/en/articles/cloning-a-repository/).
 
-All component names – projects, instances, templates, instance groups, and so on – are examples only. You can change them as suits your needs.
+All component names, like projects and instances, are examples only. You can change them to suit your needs.
 
 ## Task 1: Creating a Project and Firewall Rules
 
@@ -88,7 +98,7 @@ Create firewall rules that allow access to the HTTP and HTTPS ports on your GCE
 
   - Source filter – On the drop-down menu, select either Allow from any source (0.0.0.0/0), or IP range if you want to restrict access to users on your private network. In the second case, fill in the Source IP ranges field that opens. In the screenshot, we are allowing unrestricted access.
   - Allowed protocols and ports – tcp:80; tcp:8080; tcp:443
 
-  **Note:** As mentioned in the introduction to this guide, opening these ports for your application instances is appropriate only in a test environment. We strongly recommend that before deploying the architecture in production you create a new firewall rule for your application instances that blocks all port access to the external IP address, or disable external IP addresses for the instances to make them accessible only on the internal GCE network.
+  **Note:** As noted in the introduction, allowing access from any public IP address is appropriate only in a test environment. Before deploying the architecture in production, create a firewall rule that blocks all access to the external IP addresses of your application instances, or disable external IP addresses for the instances so they're accessible only on the internal GCE network.
 
   - Target tags – nginx-plus-http-fw-rule
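If you prefer to script your GCP setup, the firewall rule described in the list above can also be created with the gcloud CLI. The following is a minimal sketch and not part of the original guide: it assumes an authenticated gcloud session for the NGINX Plus All-Active-LB project, the default VPC network, and an illustrative rule name that mirrors the target tag.

```shell
# Hedged sketch: create the firewall rule described above from the command line.
# The rule name matches the target tag and is illustrative only; restrict
# --source-ranges to your own networks for anything beyond a test environment.
gcloud compute firewall-rules create nginx-plus-http-fw-rule \
    --network default \
    --allow tcp:80,tcp:8080,tcp:443 \
    --source-ranges 0.0.0.0/0 \
    --target-tags nginx-plus-http-fw-rule

# Confirm the rule was created as expected.
gcloud compute firewall-rules describe nginx-plus-http-fw-rule
```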
@@ -99,14 +109,14 @@ Create firewall rules that allow access to the HTTP and HTTPS ports on your GCE
 
 ## Task 2: Creating Source Instances
 
-Create three GCE source instances that will serve as templates for the instance groups you will create later on: one instance for the NGINX Plus load balancer and two instances for NGINX Plus PHP application servers.
+Create three GCE source instances to serve as templates for the instance groups you create later: one for the NGINX Plus load balancer and two for NGINX Plus PHP app servers.
 
 You can create source instances in either of two ways:
 
-- [Based on a standard GCE VM image](#source-vm), in which case you install NGINX Plus manually. This guide uses the Ubuntu LTS image that was most current at the time of publication (Ubuntu 16.04 LTS), but you can use any Unix or Linux OS that [NGINX Plus supports]({{< relref "../../technical-specs.md" >}}).
+- [Based on a standard GCE VM image](#source-vm), in which case you install NGINX Plus manually. This guide uses the Ubuntu LTS image that was most current at the time of publication (Ubuntu 16.04 LTS). You can use any Unix or Linux OS that [NGINX Plus supports]({{< relref "../../technical-specs.md" >}}).
 - [Based on the prebuilt NGINX Plus image](#source-prebuilt) in the Google Marketplace, which at the time of publication runs on Ubuntu 14.04 LTS.
 
-The instructions for the two methods are significantly different, but after you create the source instances all subsequent instructions are the same.
+The instructions for the two methods differ significantly, but once you've created the source instances, all later instructions are the same.
 
 ### Creating Source Instances from VM Images
 
@@ -135,7 +145,7 @@ Create three source VM instances based on a GCE VM image. We're basing our insta
 
    Screenshot of the 'Boot disk' page in Google Cloud Platform for selecting the OS on which a VM runs. In the deployment of NGINX Plus as the Google load balancer, we select Ubuntu 16.04 LTS.
 
-   - Identity and API access – Unless you want more granular control over access, keep the defaults for the Service account field (Compute Engine default service account) and Access scopes radio button (Allow default access).
+   - Identity and API access – Unless you need more granular control over access, keep the defaults for the Service account field and the Access scopes radio button.
   - **Firewall** – Verify that neither check box is checked (the default). The firewall rule invoked in the **Tags** field on the Management subtab (see Step 3 below) controls this type of access.
 
 2. Click Management, disk, networking, SSH keys to open that set of subtabs. (The screenshot shows the values entered in the previous step.)
@@ -160,7 +170,7 @@ Create three source VM instances based on a GCE VM image. We're basing our insta
 
   Screenshot of the Networking subtab used during creation of a new VM instance, part of deploying NGINX Plus as the Google Cloud load balancer.
 
-6. If you are using your own SSH public key instead of the default keys associated with your GCE identity, on the SSH Keys subtab paste the hexadecimal key string into the box that reads Enter entire key data.
+6. If you're using your own SSH public key instead of your default GCE keys, on the SSH Keys subtab paste the hexadecimal key string into the box that reads Enter entire key data.
 
   Screenshot of the SSH Keys subtab used during creation of a new VM instance, part of deploying NGINX Plus as the Google Cloud Platform load balancer.
 
@@ -175,7 +185,7 @@ Create three source VM instances based on a GCE VM image.
We're basing our insta 1. On the VM instances summary page, click CREATE INSTANCE. -2. Repeat the steps in Creating the First Application Instance to create the second application instance, specifying the same values as for the first application instance, except: +2. Repeat the steps in Creating the First Application Instance to create the second application instance. Specify the same values as for the first application instance, except: - In Step 1, **Name** – nginx-plus-app-2 - In Step 3, **Description** – NGINX Plus app-2 Image @@ -185,7 +195,7 @@ Create three source VM instances based on a GCE VM image. We're basing our insta 1. On the VM instances summary page, click CREATE INSTANCE. -2. Repeat the steps in Creating the First Application Instance to create the load‑balancing instance, specifying the same values as for the first application instance, except: +2. Repeat the steps in Creating the First Application Instance to create the load‑balancing instance. Specify the same values as for the first application instance, except: - In Step 1, **Name** – nginx-plus-lb - In Step 3, **Description** – NGINX Plus Load Balancing Image @@ -238,7 +248,7 @@ Install and configure PHP and FastCGI on the instances. Now install NGINX Plus and download files that are specific to the all‑active deployment: -- An NGINX Plus configuration file customized for the function performed by the instance (application server or load balancer) +- An NGINX Plus configuration file customized for the function performed by the instance - A set of content files (HTML, images, and so on) served by the application servers in the deployment Both the configuration and content files are available at the [NGINX GitHub repository](https://github.com/nginxinc/NGINX-Demos/tree/master/gce-nginx-plus-deployment-guide-files). @@ -251,9 +261,9 @@ Both the configuration and content files are available at the [NGINX GitHub repo 2. Clone the GitHub repository for the [all‑active load balancing deployment](https://github.com/nginxinc/NGINX-Demos/tree/master/gce-nginx-plus-deployment-guide-files). (Instructions for downloading the files directly from the GitHub repository are provided below, in case you prefer not to clone it.) -3. Copy the contents of the usr\_share\_nginx subdirectory from the cloned repository to the local /usr/share/nginx directory, creating the local directory if necessary. (If you choose not to clone the repository, you need to download each file from the GitHub repository individually.) +3. Copy the contents of the usr\_share\_nginx subdirectory from the cloned repository to the local /usr/share/nginx directory. Create the local directory if needed. (If you choose not to clone the repository, you need to download each file from the GitHub repository individually.) -4. Copy the appropriate configuration file from the etc\_nginx\_conf.d subdirectory of the cloned repository to /etc/nginx/conf.d: +4. Copy the right configuration file from the etc\_nginx\_conf.d subdirectory of the cloned repository to /etc/nginx/conf.d: - On both nginx-plus-app-1 and nginx-plus-app-2, copy gce-all-active-app.conf. @@ -287,7 +297,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo wget https://github.com/nginxinc/NGINX-Demos/blob/master/gce-nginx-plus-deployment-guide-files/etc_nginx_conf.d/gce-all-active-lb.conf ``` -5. 
On the LB instance (nginx-plus-lb), use your preferred text editor to open gce-all-active-lb.conf and change the `server` directives in the `upstream` block to reference the internal IP addresses of the nginx-plus-app-1 and nginx-plus-app-2 instances (substitute the address for the expression in angle brackets). No action is required on the two application instances themselves. +5. On the LB instance (nginx-plus-lb), use a text editor to open gce-all-active-lb.conf. Change the `server` directives in the `upstream` block to reference the internal IP addresses of the nginx-plus-app-1 and nginx-plus-app-2 instances (substitute the address for the expression in angle brackets). You do not need to modify the two application instances. You can look up internal IP addresses in the Internal IP column of the table on the Compute Engine > VM instances summary page. @@ -308,7 +318,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo mv default.conf default.conf.bak ``` -7. Enable the NGINX Plus [live activity monitoring](https://www.nginx.com/products/nginx/live-activity-monitoring/) dashboard for the instance by copying status.html from the etc\_nginx\_conf.d subdirectory of the cloned repository to /etc/nginx/conf.d. +7. Enable the NGINX Plus [live activity monitoring](https://www.nginx.com/products/nginx/live-activity-monitoring/) dashboard for the instance. Copy status.html from the etc\_nginx\_conf.d subdirectory of the cloned repository to /etc/nginx/conf.d. You can also run the following commands to download the configuration file directly from the GitHub repository: @@ -390,19 +400,19 @@ Create three source instances based on a prebuilt NGINX Plus image running on < 9. Modify or verify the indicated editable fields (non‑editable fields are not listed): - - **Tags** – If a default tag appears in the field (for example, nginx-plus-app-1-tcp-80), click the X after its name to remove it, and type in nginx-plus-http-fw-rule. + - **Tags** – If a default tag appears (for example, nginx-plus-app-1-tcp-80), click the X after its name to remove it. Then, type in nginx-plus-http-fw-rule. - External IP – Ephemeral (the default) - - Boot disk and local disks – Uncheck the checkbox labeled Delete boot disk when when instance is deleted. + - Boot disk and local disks – Uncheck the checkbox labeled Delete boot disk when instance is deleted. - Additional disks – No changes - - **Network** – We recommend keeping the default settings, but if you need to change them (if, for example, you're configuring a production environment), click default and then EDIT on the Network details page that opens. After making your changes click the  Save  button. + - **Network** – If you must change the defaults, for example, when configuring a production environment, select default Then, select EDIT on the opened Network details page. After making your changes select the  Save  button. - **Firewall** – Verify that neither check box is checked (the default). The firewall rule named in the **Tags** field that's above on the current page (see the first bullet in this list) controls this type of access. - Automatic restart – On (recommended) (the default) - On host maintenance – Migrate VM instance (recommended) (the default) - Custom metadata – No changes - - SSH Keys – If you are using your own SSH public key instead of the default keys associated with your GCE identity, paste the hexadecimal key string into the box that reads Enter entire key data. 
+ - SSH Keys – If you're using your own SSH public key instead of your default GCE keys, paste the hexadecimal key string into the box labeled Enter entire key data. - Serial port – Verify that the check box labeled Enable connecting to serial ports is not checked (the default). - The screenshot shows the results of your changes, omitting some fields than cannot be edited or for which we recommend retaining the defaults. + The screenshot shows the results of your changes. It omits some fields that can't be edited or for which we recommend keeping the defaults. Screenshot showing the configuration modifications for a VM instance being deployed as part of setting up NGINX Plus as the Google load balancer. @@ -430,14 +440,14 @@ Create the second application instance by cloning the first one. - Identity and API access – Set the Access scopes radio button to Allow default access and accept the default values in all other fields. If you want more granular control over access than is provided by these settings, modify the fields in this section as appropriate. - **Firewall** – Verify that neither check box is checked (the default). -6. Click Management, disk, networking, SSH keys to open that set of subtabs. +5. Click Management, disk, networking, SSH keys to open that set of subtabs. -7. Verify the following settings on the subtabs, modifying them as necessary: +6. Verify the following settings on the subtabs, modifying them as necessary: - Management – In the **Tags** field: nginx-plus-http-fw-rule - Disks – The Deletion rule checkbox (labeled Delete boot disk when instance is deleted) is not checked -9. Click the  Create  button. +7. Select the  Create  button. #### Creating the Load-Balancing Instance from a Prebuilt Image @@ -458,7 +468,7 @@ Install and configure PHP and FastCGI on the instances. 1. Connect to the instance over SSH using the method of your choice. GCE provides a built‑in mechanism: - Navigate to the Compute Engine > VM instances tab. - - In the row for the instance in the table, click the triangle icon in the Connect column at the far right and select a method (for example, Open in browser window). + - In the table, find the row for the instance. Select the triangle icon in the Connect column at the far right. Then, select a method (for example, Open in browser window). The screenshot shows instances based on the prebuilt NGINX Plus images. @@ -505,15 +515,16 @@ Both the configuration and content files are available at the [NGINX GitHub repo **Note:** Some commands require `root` privilege. If appropriate for your environment, prefix commands with the `sudo` command. -1. Clone the GitHub repository for the [all‑active load balancing deployment](https://github.com/nginxinc/NGINX-Demos/tree/master/gce-nginx-plus-deployment-guide-files). (Instructions for downloading the files directly from the GitHub repository are provided below, in case you prefer not to clone it.) +1. Clone the GitHub repository for the [all‑active load balancing deployment](https://github.com/nginxinc/NGINX-Demos/tree/master/gce-nginx-plus-deployment-guide-files). (See the instructions below for downloading the files from GitHub if you choose not to clone it.) + +2. Copy the contents of the usr\_share\_nginx subdirectory from the cloned repo to the local /usr/share/nginx directory. Create the local directory if necessary. (If you choose not to clone the repository, you need to download each file from the GitHub repository one at a time.) -2. 
Copy the contents of the usr\_share\_nginx subdirectory from the cloned repository to the local /usr/share/nginx directory, creating the local directory if necessary. (If you choose not to clone the repository, you need to download each file from the GitHub repository individually.) -3. Copy the appropriate configuration file from the etc\_nginx\_conf.d subdirectory of the cloned repository to /etc/nginx/conf.d: +3. Copy the right configuration file from the etc\_nginx\_conf.d subdirectory of the cloned repository to /etc/nginx/conf.d: - On both nginx-plus-app-1-vm and nginx-plus-app-2-vm, copy gce-all-active-app.conf. - You can also run the following commands to download the configuration file directly from the GitHub repository: + You can also run these commands to download the configuration file from GitHub: ```shell cd /etc/nginx/conf.d/ @@ -543,7 +554,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo wget https://github.com/nginxinc/NGINX-Demos/blob/master/gce-nginx-plus-deployment-guide-files/etc_nginx_conf.d/gce-all-active-lb.conf ``` -4. On the LB instance (nginx-plus-lb-vm), use your preferred text editor to open gce-all-active-lb.conf and change the `server` directives in the `upstream` block to reference the internal IP addresses of the nginx-plus-app-1-vm and nginx-plus-app-2-vm instances. (No action is required on the two application instances themselves.) +4. On the LB instance (nginx-plus-lb-vm), use a text editor to open gce-all-active-lb.conf. Change the `server` directives in the `upstream` block to reference the internal IP addresses of the nginx-plus-app-1-vm and nginx-plus-app-2-vm instances. (No action is required on the two application instances themselves.) You can look up internal IP addresses in the Internal IP column of the table on the Compute Engine > VM instances summary page. @@ -558,13 +569,13 @@ Both the configuration and content files are available at the [NGINX GitHub repo Directive documentation: [server](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#server), [`sticky cookie`](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#sticky), [upstream](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream), [zone](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#zone) -5. Rename **default.conf** to **default.conf.bak** so that NGINX Plus does not load it. The configuration files provided for the all‑active deployment include equivalent instructions plus additional function‑specific directives. +5. Rename **default.conf** to **default.conf.bak** so that NGINX Plus does not load it. The configuration files for the all-active deployment include equivalent instructions. They also have extra, function-specific directives. ```shell mv default.conf default.conf.bak ``` -6. Enable the NGINX Plus [live activity monitoring](https://www.nginx.com/products/nginx/live-activity-monitoring/) dashboard for the instance by copying status.html from the etc\_nginx\_conf.d subdirectory of the cloned repository to /etc/nginx/conf.d. +6. Enable the NGINX Plus [live activity monitoring](https://www.nginx.com/products/nginx/live-activity-monitoring/) dashboard for the instance. To do this, copy status.html from the etc\_nginx\_conf.d subdirectory of the cloned repository to /etc/nginx/conf.d. 
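If you cloned the repository in Step 1, the copy can be as simple as the following sketch. The clone location is an assumption (the user's home directory); adjust the path to wherever you cloned NGINX-Demos, and prefix with sudo if required.

```shell
# Hedged sketch: copy status.html from the cloned NGINX-Demos repository.
# Assumes the repository was cloned into the home directory; adjust as needed.
cp ~/NGINX-Demos/gce-nginx-plus-deployment-guide-files/etc_nginx_conf.d/status.html \
   /etc/nginx/conf.d/status.html
```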
You can also run the following commands to download the configuration file directly from the GitHub repository: @@ -587,7 +598,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo nginx -s reload ``` -8. Verify the instance is working by accessing it at its external IP address. (As previously noted, we recommend blocking access to the external IP addresses of the application instances in a production environment.) The external IP address for the instance appears on the Compute Engine > VM instances summary page, in the External IP column of the table. +8. Verify the instance is working by accessing it at its external IP address. (As noted, we recommend blocking access, in production, to the external IPs of the app.) The external IP address for the instance appears on the Compute Engine > VM instances summary page, in the External IP column of the table. - Access the index.html page either in a browser or by running this `curl` command. @@ -604,7 +615,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo ## Task 3: Creating "Gold" Images -Create _gold images_, which are base images that GCE clones automatically when it needs to scale up the number of instances. They are derived from the instances you created in [Creating Source Instances](#source). Before creating the images, you must delete the source instances to break the attachment between them and the disk (you can't create an image from a disk that's attached to a VM instance). +Create _gold images_, which are base images that GCE clones automatically when it needs to scale up the number of instances. They are derived from the instances you created in [Creating Source Instances](#source). Before creating the images, delete the source instances. This breaks the attachment between them and the disk. (you can't create an image from a disk attached to a VM instance). 1. Verify that the NGINX Plus All-Active-LB project is still selected in the Google Cloud Platform header bar. @@ -621,14 +632,14 @@ Create _gold images_, which are base images that GCE clones automatically when i 5. Click DELETE in the top toolbar to delete the instances. - **Note:** If the pop‑up confirmation window warns that the boot disk for any instance will also be deleted, cancel the deletion, and perform these steps for each affected instance: + **Note:** If the pop-up warns that it will delete the boot disk for any instance, cancel the deletion. Then, perform the steps below for each affected instance: - Navigate to the Compute Engine > VM instances tab and click the instance in the Name column in the table. (The screenshot shows nginx-plus-app-1-vm.) Screenshot showing how to access the page where configuration details for a VM instance can be modified during deployment of NGINX Plus as the Google Cloud load balancer. - On the VM instances page that opens, click EDIT at the top of the page. In fields that can be edited, the value changes from static text to text boxes, drop‑down menus, and checkboxes. - - In the Boot disk and local disks field, uncheck the checkbox labeled Delete boot disk when when instance is deleted. + - In the Boot disk and local disks field, uncheck the checkbox labeled Delete boot disk when instance is deleted. - Click the  Save  button. - On the VM instances summary page, select the instance in the table and click DELETE in the top toolbar to delete it. 
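As an aside for readers scripting this task: the console flow above (keep the boot disk, delete the instance, then create an image from the orphaned disk) has a gcloud equivalent. This is a hedged sketch, not part of the original guide; the zone, instance name, and image name are illustrative, and the sketch relies on a boot disk sharing its instance's name, which is the GCE default.

```shell
# Hedged sketch: delete a source instance but keep its boot disk, then build
# the gold image from that disk. Values below are illustrative; substitute
# the zone and instance names used in your own deployment.
ZONE=us-west1-a

# Delete the instance while preserving its boot disk.
gcloud compute instances delete nginx-plus-app-1 \
    --zone "$ZONE" \
    --keep-disks boot

# The detached boot disk keeps the instance's name by default.
gcloud compute images create nginx-plus-app-1-gold-image \
    --source-disk nginx-plus-app-1 \
    --source-disk-zone "$ZONE"
```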
@@ -664,7 +675,8 @@ Create _gold images_, which are base images that GCE clones automatically when i ## Task 4: Creating Instance Templates -Create _instance templates_, which are the compute workloads that are created in instance groups, either manually or automatically when GCE detects a failure. +Create _instance templates_. They are the compute workloads in instance groups. These are created manually or automatically when GCE detects a failure. + ### Creating the First Application Instance Template @@ -696,11 +708,11 @@ Create _instance templates_, which are the compute workloads that are created in - Identity and API access – Unless you want more granular control over access, keep the defaults in the Service account field (Compute Engine default service account) and Access scopes field (Allow default access). - **Firewall** – Verify that neither check box is checked (the default). The firewall rule invoked in the **Tags** field on the Management subtab (see Step 6 below) controls this type of access. -6. Click Management, disk, networking, SSH keys (indicated with a red arrow in the following screenshot) to open that set of subtabs. +5. Select Management, disk, networking, SSH keys (indicated with a red arrow in the following screenshot) to open that set of subtabs. Screenshot of the interface for creating a Google Compute Engine (GCE) instance template, used during deployment of NGINX Plus as the Google load balancer. -7. On the Management subtab, modify or verify the fields as indicated: +6. On the Management subtab, modify or verify the fields as indicated: - **Description** – NGINX Plus app-1 Instance Template - **Tags** – nginx-plus-http-fw-rule @@ -710,28 +722,28 @@ Create _instance templates_, which are the compute workloads that are created in Screenshot of the Management subtab used during creation of a new VM instance template, part of deploying NGINX Plus as the Google load balancer. -8. On the Disks subtab, verify that the checkbox labeled Delete boot disk when instance is deleted is checked. +7. On the Disks subtab, verify that the checkbox labeled Delete boot disk when instance is deleted is checked. - Instances created from this template are ephemeral instantiations of the gold image, so we want GCE to reclaim the disk when the instance is terminated. New instances are always based on the gold image, so there is no reason to have the instantiations persist on disk when the instance is deleted. + Instances from this template are ephemeral instantiations of the gold image. So, we want GCE to reclaim the disk when the instance is terminated. New instances are always based on the gold image. So, there is no reason to keep the instantiations on disk when the instance is deleted. Screenshot of the Disks subtab used during creation of a new VM instance template, part of deploying NGINX Plus as the Google Cloud load balancer. -9. On the Networking subtab, verify the default settings of Ephemeral for External IP and Off for IP Forwarding. +8. On the Networking subtab, verify the default settings of Ephemeral for External IP and Off for IP Forwarding. Screenshot of the Networking subtab used during creation of a new VM instance template, part of deploying NGINX Plus as the Google load balancer. -10. If you are using your own SSH public key instead of the default keys associated with your GCE identity, on the SSH Keys subtab paste the hexadecimal key string into the box that reads Enter entire key data. +9. 
If you're using your own SSH public key instead of your default keys, paste the hexadecimal key string on the SSH Keys subtab. Right into the box that reads Enter entire key data. Screenshot of the SSH Keys subtab used during creation of a new VM instance, part of deploying NGINX Plus as the Google Cloud Platform load balancer. -11. Click the  Create  button. +10. Click the  Create  button. ### Creating the Second Application Instance Template 1. On the Instance templates summary page, click CREATE INSTANCE TEMPLATE. -2. Repeat Steps 4 through 10 of Creating the First Application Instance Template to create a second application instance template, specifying the same values as for the first instance template, except as noted: +2. Repeat Steps 4 through 10 of Creating the First Application Instance Template to create a second application instance template. Use the same values as for the first instance template, except as noted: - In Step 4: - **Name** – nginx-plus-app-2-instance-template @@ -743,7 +755,7 @@ Create _instance templates_, which are the compute workloads that are created in 1. On the Instance templates summary page, click CREATE INSTANCE TEMPLATE. -2. Repeat Steps 4 through 10 of Creating the First Application Instance Template to create the load‑balancing instance template, specifying the same values as for the first instance template, except as noted: +2. Repeat Steps 4 through 10 of Creating the First Application Instance Template to create the load‑balancing instance template. Use the same values as for the first instance template, except as noted: - In Step 4: - **Name** – nginx-plus-lb-instance-template. @@ -754,7 +766,7 @@ Create _instance templates_, which are the compute workloads that are created in ## Task 5: Creating Image Health Checks -Define the simple HTTP health check that GCE uses to verify that each NGINX Plus LB image is running correctly (and to re-create an LB instance that is not running correctly). +Define the simple HTTP health check that GCE uses. This verifies that each NGINX Plus LB image is running (and to re-create any LB instance that isn't running). 1. Verify that the NGINX Plus All-Active-LB project is still selected in the Google Cloud Platform header bar. @@ -820,7 +832,7 @@ Create three independent instance groups, one for each type of function-specific 1. On the Instance groups summary page, click CREATE INSTANCE GROUP. -2. Repeat the steps in [Creating the First Application Instance Group](#groups-app-1) to create a second application instance group, specifying the same values as for the first instance template, except for these fields: +2. Repeat the steps in [Creating the First Application Instance Group](#groups-app-1) to create a second application instance group. Specify the same values as for the first instance template, except for these fields: - **Name** – nginx-plus-app-2-instance-group - **Description** – Instance group to host NGINX Plus app-2 instances @@ -831,7 +843,7 @@ Create three independent instance groups, one for each type of function-specific 1. On the Instance groups summary page, click CREATE INSTANCE GROUP. -2. Repeat the steps in [Creating the First Application Instance Group](#groups-app-1) to create the load‑balancing instance group, specifying the same values as for the first instance template, except for these fields: +2. Repeat the steps in [Creating the First Application Instance Group](#groups-app-1) to create the load‑balancing instance group. 
Specify the same values as for the first instance template, except for these fields: - **Name** – nginx-plus-lb-instance-group - **Description** – Instance group to host NGINX Plus load balancing instances @@ -840,7 +852,7 @@ Create three independent instance groups, one for each type of function-specific ### Updating and Testing the NGINX Plus Configuration -Update the NGINX Plus configuration on the two LB instances (nginx-plus-lb-instance-group-[a...z]) to list the internal IP addresses of the four application servers (two instances each of nginx-plus-app-1-instance-group-[a...z] and nginx-plus-app-2-instance-group-[a...z]). +Update the NGINX Plus configuration on the two LB instances (nginx-plus-lb-instance-group-[a...z]). It should list the internal IP addresses of the four application servers (two instances each of nginx-plus-app-1-instance-group-[a...z] and nginx-plus-app-2-instance-group-[a...z]). Repeat these instructions for both LB instances. @@ -849,9 +861,9 @@ Update the NGINX Plus configuration on the two LB instances (Compute Engine > VM instances tab. - - In the row for the instance in the table, click the triangle icon in the Connect column at the far right and select a method (for example, Open in browser window). + - In the table, find the row for the instance. Click the triangle icon in the Connect column at the far right. Then, select a method (for example, Open in browser window). -3. Working in the SSH terminal, use your preferred text editor to modify gce-all-active-lb.conf, changing the `server` directives in the `upstream` block to reference the internal IP addresses of the two nginx-plus-app-1-instance-group-[a...z] instances and the two nginx-plus-app-2-instance-group-[a...z] instances. You can look up the addresses in the Internal IP column of the table on the Compute Engine > VM instances summary page. For example: +2. In the SSH terminal, use your preferred text editor to edit gce-all-active-lb.conf. Change the `server` directives in the `upstream` block to reference the internal IPs of the two nginx-plus-app-1-instance-group-[a...z] instances and the two nginx-plus-app-2-instance-group-[a...z] instances. You can check the addresses in the Internal IP column of the table on the Compute Engine > VM instances summary page. For example: ```nginx upstream upstream_app_pool { @@ -868,18 +880,18 @@ Update the NGINX Plus configuration on the two LB instances (nginx-plus-lb-instance-group-[a...z]). You can look up the instance's external IP address on the Compute Engine > VM instances summary page, in the External IP column of the table. +4. Verify that the four application instances are receiving traffic and responding. To do this, access the NGINX Plus live activity monitoring dashboard on the load-balancing instance (nginx-plus-lb-instance-group-[a...z]). You can see the instance's external IP address on the Compute Engine > VM instances summary page in the External IP column of the table. https://_LB-external-IP-address_:8080/status.html -6. Verify that NGINX Plus is load balancing traffic among the four application instance groups, by running this command on a separate client machine: +5. Verify that NGINX Plus is load balancing traffic among the four application instance groups. 
Do this by running this command on a separate client machine: ```shell while true; do curl -s | grep Server: ;done @@ -890,7 +902,7 @@ Update the NGINX Plus configuration on the two LB instances ( ## Task 7: Configuring GCE Network Load Balancer -Set up GCE network load balancer to distribute incoming client traffic to the NGINX Plus LB instances. The first step is to reserve the static IP address that GCE network load balancer advertises to clients. +Set up a GCE network load balancer. It will distribute incoming client traffic to the NGINX Plus LB instances. First, reserve the static IP address the GCE network load balancer advertises to clients. 1. Verify that the NGINX Plus All-Active-LB project is still selected in the Google Cloud Platform header bar. @@ -933,7 +945,7 @@ Set up GCE network load balancer to distribute incoming client traffic to the NG Screenshot of the interface for backend configuration of GCE network load balancer, used during deployment of NGINX Plus as the Google Cloud Platform load balancer. -13. Click Frontend configuration in the left column to open the Frontend configuration interface in the right column. +13. Select Frontend configuration in the left column. This opens up the Frontend configuration interface on the right column. 14. Create three Protocol-IP-Port tuples, each with: