Official Power Up Hosting Blog

Everything about Linux, Windows, and hosting ;)

Selvakumar
Author

I am an Online Marketer and technology lover. I like to learn new things and share that with people.


(9 Steps) Set Up an Apache Active-Passive Cluster on CentOS 7

Selvakumar

Being online 24/7 is essential for many services, and it is not an easy thing to guarantee.

Sometimes a resource gets stuck or simply fails. This is where you need a standby resource that can bring the service back online instantly.

If your server goes down, the result is a loss of both money and reputation. So it is always necessary to take precautions against such incidents.

We are going to make this possible with the Pacemaker stack, a cluster resource manager.

It will manage all the services of the cluster, using the cluster's messaging and membership capabilities.

We will be using Corosync in this tutorial. You might ask: what is Corosync?

Here, Corosync is used as the cluster engine that provides those messaging and membership capabilities. Each resource, in turn, has a resource agent.

These resource agents are external programs that abstract the service being managed.

An active-passive cluster has two systems. One is the primary system, which runs all the services. The other is the passive system, which acts as the backup.

If the primary server goes down, the backup server takes over, so the availability of the service is ensured.

Even maintenance of a system is possible without interruption when using an active-passive cluster.

Pacemaker will manage a virtual IP, and users access the web application through that virtual IP.

Both the Apache service and the Pacemaker virtual IP live on the same host. When that host fails, Pacemaker switches the virtual IP over to the backup system.

So the user will not notice any interruption in the service.

Let us see the requirements for the active-passive cluster.

Here, we are going to walk through the CentOS 7 cluster configuration step by step. Some of the steps must be run on both servers, and some on only one server.

Set Up Name Resolution

Both hosts should be able to resolve the names of the two cluster nodes. Make sure of that by adding the following lines to the /etc/hosts file. Do this on both servers.

Open the /etc/hosts file using nano editor.

$ sudo nano /etc/hosts

After that, add the following lines at the end of the file.

                    /etc/hosts
your_first_server_ip webnode01.example.com webnode01
your_second_server_ip webnode02.example.com webnode02

Once you have added the line, just save and close the file.
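If you would rather script this step than edit the file by hand, a small helper like the following can append the entries (a sketch; the two placeholder IPs stand in for your real server addresses):

```shell
# Sketch: append the cluster name entries to a hosts file.
# The IP placeholders must be replaced with your real server IPs.
add_cluster_hosts() {
    local hosts_file="$1"
    cat >> "$hosts_file" <<'EOF'
your_first_server_ip webnode01.example.com webnode01
your_second_server_ip webnode02.example.com webnode02
EOF
}

# Usage (run on both servers): add_cluster_hosts /etc/hosts
```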

Installing Apache

Apache is the web server we are going to cluster in this active-passive setup on CentOS 7.

You have to install the Apache web server on both of the servers. Follow the same steps on both.

To install Apache, use the below command.

$ sudo yum install httpd

Apache has a server-status page, which the Apache resource agent uses to check the health of the Apache service.

You enable this status page by creating the /etc/httpd/conf.d/status.conf file.

Open the file in that path using nano editor.

$ sudo nano /etc/httpd/conf.d/status.conf

Paste the directives below into that file. They allow access to the Apache status page from localhost only; other hosts cannot access it.

/etc/httpd/conf.d/status.conf
<Location /server-status>
    SetHandler server-status
    Order Deny,Allow
    Deny from all
    Allow from 127.0.0.1
</Location>

After that, save and close the file. The Apache side of the cluster is now ready.
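A side note: CentOS 7 ships Apache 2.4, where the Order/Deny/Allow directives are deprecated and only keep working through mod_access_compat. If you prefer the 2.4-native syntax, the same localhost-only restriction can be written like this (an equivalent sketch):

```apache
<Location /server-status>
    SetHandler server-status
    # Apache 2.4 form of "Deny from all / Allow from 127.0.0.1"
    Require local
</Location>
```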

Pacemaker Installation

After the Apache installation, the next step is to install the Pacemaker.

You have to install the Pacemaker stack on both of the servers, together with pcs, the cluster shell used to manage it.

For that, use the below command.

$ sudo yum install pacemaker pcs

The pcs daemon has to be started. The pcs daemon synchronizes the Corosync configuration across all cluster nodes.

$ sudo systemctl start pcsd.service

The above command will initiate the pcs daemon.

You should also enable the service so that the daemon starts automatically at every boot.

$ sudo systemctl enable pcsd.service

Once you are done with the installation steps, a new user named hacluster will exist on your system.

This user cannot log in remotely. Set the same password for this user on both nodes; it is used to synchronize the configuration and to start services on the other nodes.

$ sudo passwd hacluster

Pacemaker Configuration

The hosts have to communicate with each other. For that, you have to allow the cluster traffic through FirewallD.

First, you should check whether the firewall is running.

$ sudo firewall-cmd --state

If you find that the firewall is not running, start it using the below command.

$ sudo systemctl start firewalld.service

You have to execute this on both hosts. Once the firewall is running, add the high-availability service to FirewallD.

$ sudo firewall-cmd --permanent --add-service=high-availability

Once you are done with the above step, then reload the FirewallD on both hosts.

$ sudo firewall-cmd --reload

You are almost done. Both hosts can communicate with each other.
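To confirm the rule is really in place, you can list the services the firewall allows and look for high-availability. A small helper that scans the firewall-cmd output (a sketch):

```shell
# Sketch: succeeds only when the high-availability service appears in the
# output of `firewall-cmd --list-services`, read on stdin.
ha_service_allowed() {
    grep -qw "high-availability"
}

# Usage: sudo firewall-cmd --list-services | ha_service_allowed && echo "rule active"
```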

You have to set up authentication between the two nodes. Execute the below command on webnode01.

$ sudo pcs cluster auth webnode01 webnode02
Username: hacluster

You will get the following output.

      Output
webnode01: Authorized
webnode02: Authorized

Next, we have to generate and synchronize the Corosync configuration on the nodes. Use the below command for that purpose.

$ sudo pcs cluster setup --name webcluster webnode01 webnode02

You will get the following output.

    Output
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop  pacemaker.service
Redirecting to /bin/systemctl stop  corosync.service
Killing any remaining services...
Removing all cluster configuration files...
webnode01: Succeeded
webnode02: Succeeded

The Corosync configuration will be created and synchronized to all of the nodes.

The configuration is stored at /etc/corosync/corosync.conf.
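For reference, the generated file looks roughly like this (an illustrative sketch; the exact contents depend on your pcs version):

```
totem {
    version: 2
    secauth: off
    cluster_name: webcluster
    transport: udpu
}

nodelist {
    node {
        ring0_addr: webnode01
        nodeid: 1
    }
    node {
        ring0_addr: webnode02
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}
```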

Initiate the Cluster

You can start the cluster by executing the below command on webnode01.

$ sudo pcs cluster start --all

Enable Corosync and Pacemaker on both nodes so that both services start at boot.

$ sudo systemctl enable corosync.service
$ sudo systemctl enable pacemaker.service

You can check the status using the below command on any of the hosts.

$ sudo pcs status

You should get the following output. Make sure both servers are online.

Output
. . .

Online: [ webnode01 webnode02 ]

Full list of resources:


PCSD Status:
webnode01: Online
webnode02: Online

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
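If you want to check this from a script rather than by eye, a tiny helper that scans the pcs status output for the node line can do it (a sketch, matching the node names used in this tutorial):

```shell
# Sketch: succeeds only when both cluster nodes are reported online in the
# `pcs status` output read on stdin.
nodes_online() {
    grep -q "Online: \[ webnode01 webnode02 \]"
}

# Usage: sudo pcs status | nodes_online && echo "both nodes online"
```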

Disable the Stonith and Ignore the Quorum

If you check the output of pcs status, you will see the warning "no stonith devices and stonith-enabled is not false".

The output will be like

Warning
. . .
WARNING: no stonith devices and stonith-enabled is not false
. . .

You might wonder why this matters. Let us see why you should care about it.

If the cluster resource manager is unable to determine the status of a node, fencing is used to bring the node back to a known status.

Resource-level fencing ensures that there is no data corruption in case of an outage, by restricting access to a resource when a node fails.

Resource-level fencing can be implemented with DRBD, for example, which marks the disk as outdated when the communication link goes down.

The other kind is node-level fencing. It makes sure that a node is not running any resources.

This is done by resetting the node, and in Pacemaker it is called STONITH: shoot the other node in the head.

One advantage here is that Pacemaker supports a variety of fencing devices.

We will disable the node level fencing. Use the below command for that.

$ sudo pcs property set stonith-enabled=false

Quorum

A cluster has quorum when more than half of its nodes are running. Pacemaker's default behavior is to stop all resources if the cluster does not have quorum.

This is not suitable for a cluster with only two nodes, where the failure of a single node means the loss of quorum. So we will instruct Pacemaker to ignore quorum.

For that, you have to set the No Quorum policy using the below command.

$ sudo pcs property set no-quorum-policy=ignore

Configuration of Virtual IP Address

From this point on, we use pcs to interact with the cluster. All commands are executed on a single node; it can be any node.

The Pacemaker cluster is now up and running, and we can start adding resources.

The first resource we will add is the virtual IP address.

For that, we will configure ocf:heartbeat:IPaddr2. It is a resource agent.

A resource agent name has two to three fields, separated by colons.

The first field is the resource class, which tells Pacemaker where to find the script for the resource.

The second field depends on the standard; OCF resources use it for the OCF namespace.

The last field is the name of the resource agent.

Resources can have two types of attributes: meta attributes and instance attributes.

Meta attributes do not depend on the resource type.

The instance attributes, on the other hand, are specific to each resource agent.

Here, the main instance attribute we will use is the virtual IP address itself; we will also pass the netmask in CIDR notation via cidr_netmask.

The cluster also has to perform resource operations such as start, stop, and monitor.

Operations are indicated with the keyword op.

You should add a monitor operation so that the cluster checks every 20 seconds whether the resource is healthy.

Let us create the virtual IP address resource with pcs resource create, using 127.0.0.2 as the address. The resource name will be Cluster_VIP.

$ sudo pcs resource create Cluster_VIP ocf:heartbeat:IPaddr2 ip=127.0.0.2 cidr_netmask=24 op monitor interval=20s

After that, check the status of the resources.

$ sudo pcs status

You will get the following output.

Output
...
Full list of resources:

Cluster_VIP    (ocf::heartbeat:IPaddr2):   Started webnode01
...

The virtual IP address is now active on webnode01.
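You can also verify this at the interface level: the IPaddr2 agent adds the address to a network interface, so it should show up in the ip addr output on webnode01. A sketch of such a check, assuming the 127.0.0.2 address used above:

```shell
# Sketch: succeeds when the virtual IP 127.0.0.2 appears in the
# `ip -4 addr show` output read on stdin.
vip_active() {
    grep -q "inet 127\.0\.0\.2/"
}

# Usage (on the active node): ip -4 addr show | vip_active && echo "VIP is up here"
```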

Adding the second resource

Use the below command to create the second resource for the cluster.

The resource agent for the Apache service is ocf:heartbeat:apache.

The name of the resource is WebServer, and its instance attributes are configfile and statusurl.

We will again set the monitoring interval to 20 seconds.

$ sudo pcs resource create WebServer ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf statusurl="http://127.0.0.1/server-status" op monitor interval=20s

To check the status of a resource, use the below command.

$ sudo pcs status

You will get the following output, which shows the web server running on webnode02.

Output
...
Full list of resources:

Cluster_VIP    (ocf::heartbeat:IPaddr2):   Started webnode01
WebServer  (ocf::heartbeat:apache):    Started webnode02
...

Configuration of Colocation Constraints

Pacemaker makes its decisions based on scores. Using those scores, it decides where a resource should run.

Scores are calculated per resource and per node, and the cluster resource manager takes its decisions using them.

It chooses the node with the **highest score** to run a resource.

If a resource has a negative score on a node, it cannot run there.

The decisions of the cluster can be manipulated using constraints. Constraints have a score, and here the score will be INFINITY.

Execute the below command to define the colocation constraints score to infinity.

$ sudo pcs constraint colocation add WebServer Cluster_VIP INFINITY

Here, the order of the resources in the constraint matters.

We specified that Apache (WebServer) must run on the same host as the virtual IP. So if Cluster_VIP is not active anywhere, WebServer cannot run anywhere either.

Now, check the status using the below command.

$ sudo pcs status

You will get the following output.

Output
...
Full list of resources:

 Cluster_VIP    (ocf::heartbeat:IPaddr2):   Started webnode01
 WebServer  (ocf::heartbeat:apache):    Started webnode01
...

Now both resources are running on webnode01. This CentOS 7 cluster setup ensures the high availability of the service.
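To see the failover in action, you can put the active node in standby with pcs cluster standby and watch both resources move to webnode02. The helper below is a sketch of how to confirm the move from the pcs status output:

```shell
# Sketch: a failover drill check. Reads `pcs status` text on stdin and
# succeeds only when both resources report "Started webnode02".
failed_over() {
    local out
    out=$(cat)
    printf '%s\n' "$out" | grep -q "Cluster_VIP.*Started webnode02" &&
        printf '%s\n' "$out" | grep -q "WebServer.*Started webnode02"
}

# Usage on the cluster:
#   sudo pcs cluster standby webnode01
#   sudo pcs status | failed_over && echo "failover successful"
#   sudo pcs cluster unstandby webnode01
```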

Conclusion

In this article, you have learned CentOS 7 cluster configuration step by step.

You also learned how to configure Corosync and Pacemaker.

If you have any questions, leave them in the comment.
