How should I use GlassFish & Payara Clustering and run highly available Java EE applications in the cloud?


Ensuring fault-tolerant 24/7 service delivery has been among the most discussed topics in cloud hosting for the last few years, and the most common solution is building a clustered infrastructure for your project.
To help our customers handle this non-trivial task and save time for other project-related activities, we are glad to present a dedicated high-availability solution designed to simplify Java EE application hosting: embedded Auto-Clustering for the GlassFish and Payara application servers.
The main advantage of this solution is the automatic interconnection of multiple application server instances upon application topology changes, which implements the commonly used clustering configuration out of the box.
The article below describes how GlassFish and Payara auto-clustering works, covers the specifics of the infrastructure topology, and shows how you can get the appropriate development and production environments up and running inside Gigality PaaS.

How does Auto-Clustering for GlassFish and Payara Work?

In the most general sense, any clustered solution can be defined as a set of interconnected instances that run the same stack and operate on the same data. In other words, the corresponding server should be horizontally scaled and share user sessions.
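As a rough illustration of this idea, consider the toy Java sketch below (this is not how GlassFish or Payara implement replication; all class and attribute names are invented for the example). Two "instances" are backed by a shared, replicated session store, so a session created on one node survives that node's failure:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Toy illustration only: two "instances" share a replicated session store,
 * so a session survives the failure of the node that created it.
 * In a real cluster, GMS (GlassFish) or Hazelcast (Payara) plays this role.
 */
public class SessionReplicationSketch {

    /** Stands in for the cluster-wide replicated in-memory session store. */
    static final Map<String, Map<String, String>> replicatedStore = new ConcurrentHashMap<>();

    static class Instance {
        final String name;
        boolean up = true;

        Instance(String name) { this.name = name; }

        /** Writes a session attribute; "replication" makes it visible cluster-wide. */
        void putSessionAttribute(String sessionId, String key, String value) {
            if (!up) throw new IllegalStateException(name + " is down");
            replicatedStore.computeIfAbsent(sessionId, id -> new ConcurrentHashMap<>())
                           .put(key, value);
        }

        /** Reads a session attribute from the shared store. */
        String getSessionAttribute(String sessionId, String key) {
            if (!up) throw new IllegalStateException(name + " is down");
            Map<String, String> session = replicatedStore.get(sessionId);
            return session == null ? null : session.get(key);
        }
    }

    public static void main(String[] args) {
        Instance worker1 = new Instance("worker1");
        Instance worker2 = new Instance("worker2");

        // worker1 handles the request and stores custom session data
        worker1.putSessionAttribute("session-42", "userName", "alice");

        // worker1 fails; the load balancer re-routes to worker2,
        // which still sees the replicated session data
        worker1.up = false;
        System.out.println(worker2.getSessionAttribute("session-42", "userName"));
    }
}
```

Naturally, real session replication also handles serialization, partial replication, and failure detection, which this sketch leaves out.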

Starting with Gigality 5.5.3, a new Auto-Clustering feature allows you to enable clusterization of GlassFish and Payara instances directly within the topology wizard:



Choose either the GlassFish or Payara application server on the Java tab of the wizard. Then, in the central part, locate and enable the Auto-Clustering switcher. Configure the remaining settings to your needs (consider horizontal scaling to get a reliable solution from the start).

Tip: The Auto-Clustering feature is also available for some other software templates (e.g. MySQL, MariaDB, and Couchbase).

Based on your environment's purpose, you may decide not to use Auto-Clustering (for example, during development). In this case, regular standalone server(s) will be created without configuring a cluster. For production, clustering is virtually mandatory to ensure high availability and a smooth, uninterrupted experience for clients. Using Auto-Clustering in Gigality is the simplest way to implement a reliable topology for your services without having to configure anything manually.

Herewith, the following adjustments take place:
  1. For 2+ GlassFish (Payara) instances, the environment topology is complemented with a load balancer (LB), intended to handle incoming requests and distribute them across the worker nodes.
  2. An extra Domain Administration Server (DAS) node is automatically added: a dedicated instance that performs centralized control of the cluster nodes and configures the interaction between them via SSH. Its integration implies a number of specifics:
    a. The administration server is linked to all workers within the application server layer via the DAS alias hostname, which the workers can use for further interaction.
    b. To enable proper node connectivity and control, the system automatically generates an SSH keypair for the DAS node and places it within a volume mounted to all the other cluster instances.
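For reference, what the DAS does for each worker roughly corresponds to the standard asadmin SSH-node workflow sketched below (for illustration only; the host, node, cluster, and instance names are placeholders, and the platform runs the equivalent steps for you automatically):

```shell
# Illustration only: from the DAS, register a worker host as an SSH node,
# create a cluster, add an instance on that node, and start the cluster
asadmin create-node-ssh --nodehost worker1.example.com node1
asadmin create-cluster cluster1
asadmin create-instance --node node1 --cluster cluster1 instance1
asadmin start-cluster cluster1
```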



Session Replication Implementation:

To ensure high availability of your cluster, Gigality PaaS automatically configures session replication across the worker nodes. This way, all user session data stored during request processing is distributed from the node that actually handled the request to the other application server instances.
Together with the automatically configured sticky sessions mechanism on the load balancer layer, session replication increases hosting reliability and improves the failover capabilities of your application within such a GlassFish or Payara cluster. The implemented replication mechanism differs slightly depending on the stack used, so let's consider each approach in more detail.
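Note that a web application typically has to opt in to session replication by being marked as distributable in its deployment descriptor, for example (a minimal Servlet 3.1 web.xml fragment):

```xml
<!-- web.xml: marks the application as safe to run on multiple nodes,
     allowing the container to replicate its HTTP sessions -->
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="3.1">
    <distributable/>
</web-app>
```

Session attributes should also be Serializable so the container can copy them between instances.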

GlassFish Session Replication with GMS:

Within a GlassFish cluster, session replication is powered by the Group Management Service (GMS), a built-in application server component that provides failover protection, in-memory replication, and transaction and timer services for cluster instances.
GMS uses TCP without multicast to detect cluster instances. When a new node joins the GlassFish cluster, the system re-detects all running workers and the DAS node. This auto-discovery mechanism is enabled by setting the GMS_DISCOVERY_URI_LIST property to the generate value.
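In plain GlassFish administration, this corresponds to creating a cluster with non-multicast discovery properties. A hedged sketch (the cluster name and listener port are placeholder values; on Gigality this is configured for you):

```shell
# Illustration only: create a cluster whose members discover each other
# over TCP instead of multicast; GlassFish generates the discovery URI list
asadmin create-cluster \
  --properties GMS_DISCOVERY_URI_LIST=generate:GMS_LISTENER_PORT=9090 \
  cluster1

# Confirm the cluster was created
asadmin list-clusters
```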

Payara Session Replication with Hazelcast:

Session replication inside a Payara cluster is based on Hazelcast, which has the extra benefit of being JCache-compliant and provides embedded Web and EJB session persistence. This in-memory data grid is automatically enabled on all Payara instances to discover your environment's cluster members over TCP without multicast.
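On a standalone Payara installation, the equivalent settings can be inspected and adjusted with asadmin. A hedged sketch (member addresses and ports are placeholders; within Gigality these values are generated automatically):

```shell
# Illustration only: switch the Hazelcast data grid to TCP/IP discovery
# with an explicit member list, then print the active configuration
asadmin set-hazelcast-configuration --clustermode tcpip \
  --tcpipmembers 10.0.0.10:5701,10.0.0.11:5701
asadmin get-hazelcast-configuration
```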



To manage Hazelcast settings, access the Administration Console and refer to the Hazelcast Configuration page.

Deploy an Example Application for HA Testing:

Now, let’s check the high availability of such an automatically composed cluster using the example of a scaled GlassFish server. To verify its fault tolerance, we’ll deploy a dedicated testing application that lets us add custom session data and view detailed information about the server handling the session. This way, stopping particular cluster instances lets us verify that already running user sessions continue to be processed even if the corresponding server fails.
So, let’s see it in practice.



1. Within the opened page, follow the go to the Administration Console reference and log in with the credentials delivered to you via email upon environment creation.

2. Switch to the Applications section and upload the clusterjsp.ear application via the Packaged File to Be Uploaded to the Server option.



3. Make sure the Availability option is enabled and set cluster1 as the application target, then click OK to proceed.


4. Now, open the environment in the browser and append /clusterjsp to the URL.


Provide any custom Name and Value for your own session attribute and click Add Session Data.

5. Switch back to the admin panel and navigate to the Clusters > cluster1 > Instances tab. Here, select and Stop the instance the session is running on (its hostname is circled in the image above).


6. Return to the application and reload the page with the appropriate Reload Page button.


As you can see, despite the session now being handled by another instance, our custom attribute still shows the proper output.