How do I use HTTP Load Balancing?

Load balancing ensures high system availability by distributing workload across multiple components. Using several load-balanced components instead of a single one can increase reliability through redundancy.

Gigality uses two types of load balancing: TCP and HTTP, with HTTP load balancing being the more common. NGINX serves as the reverse proxy server for HTTP traffic.

NGINX is one of the most popular open-source web servers in the world, giving customers greater performance and efficiency for their applications. Using NGINX in Gigality requires no extra deployment steps or pre-configuration. It offers built-in Layer 7 load balancing and content caching, providing a cost-effective and highly available platform for hosted applications. Thanks to its scalability, security, and efficient use of memory and CPU, NGINX is among the fastest web servers available.

Let's examine how HTTP balancing works in Gigality.

The balancer acts as a frontend that receives all HTTP requests and distributes them among the backends (application servers). It provides two-level balancing based on cookies.

The first level operates on a single node, and the second on a group of nodes connected by the same sticky session.

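The two-level scheme above could be sketched in NGINX terms roughly as follows. This is a hypothetical illustration, not Gigality's actual configuration: the upstream and server names are invented, and the `sticky cookie` directive shown here is an NGINX Plus feature (open-source NGINX would need `ip_hash` or a third-party module instead).

```nginx
# Hypothetical sketch only - not Gigality's real balancer config.
upstream backends {
    # Group of application-server nodes sharing sticky sessions
    server app-node-1:8080;
    server app-node-2:8080;

    # Pin each client to one node via a cookie
    # (NGINX Plus directive; assumption for illustration)
    sticky cookie srv_id expires=1h path=/;
}

server {
    listen 80;
    location / {
        proxy_pass http://backends;
    }
}
```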
When a user makes an HTTP request, the balancer provides them with two cookies:
    • C1 - node ID
    • C2 - group ID

The first cookie (node ID) routes the request to the required node (server). If that node suddenly dies, the balancer stops routing to it and instead favors a server that is still working. This active server is chosen with the help of the second cookie (group ID), from the group of nodes that shares the failed node's sticky session.
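The failover logic just described can be sketched in a few lines of Python. The node and group names here are hypothetical, and this is a simplified model of the two-cookie selection, not Gigality's actual implementation:

```python
# Sketch of the two-cookie failover logic (hypothetical names;
# not Gigality's actual balancer code).

# Each group shares one sticky session; C1 picks the node, C2 the group.
groups = {
    "g1": ["node-a", "node-b"],   # nodes sharing the same sticky session
}
alive = {"node-a": False, "node-b": True}  # suppose node-a has failed

def route(c1_node_id: str, c2_group_id: str) -> str:
    """Return the node that should serve the request."""
    if alive.get(c1_node_id):
        return c1_node_id                 # first level: the exact node
    # Second level: fall back to a live node from the same session group
    for node in groups[c2_group_id]:
        if alive[node]:
            return node
    raise RuntimeError("no live nodes in group " + c2_group_id)

print(route("node-a", "g1"))  # node-a is down -> falls back to node-b
```

With node-a marked dead, the request carrying C1=node-a is redirected to node-b, the surviving member of group g1.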

Note: Storage is not shared between load-balanced instances; it is shared only between replicated ones.
Related Articles

    • What type of load balancing options are available with Gigality PaaS?

      Load balancing is a process of traffic navigation and workload distribution across multiple components, which is performed by the dedicated type of nodes called load balancers. In Gigality PaaS such instance(s) can be manually added into the ...
    • What is shared load balancer and how it works under Gigality platform?

The Gigality Platform provides a Shared Load Balancer (resolver). It is an NGINX proxy server between the client side (a browser, for example) and your application, deployed to the Gigality Cloud. The Shared LB processes all incoming ...
    • How to configure NGINX load balancer in Gigality?

      Load Balancing is a process of distributing load across multiple components. This process is performed by a specific type of node called ‘load balancers’. In Gigality, load balancers can be added manually to the virtual environment. Nginx is one of ...
    • How do I use caching with NGINX balancer?

      Caching in NGINX is the process of storing data in front of web servers. For example, the files a user automatically requests by looking at a web-page can be stored in your NGINX cache directory. When a user returns to a page he’s recently looked at, ...
    • How to install a traffic distributor on a Gigality environment?

      The process of Traffic Distributor installation is fairly simple with Gigality - being specially packed for the Marketplace, it can be created in a few clicks and start working in just a matter of minutes. Herewith, the configuration of the solution ...