MySQL 8 Galera Cluster High Availability


In MySQL 8 Galera Cluster Installation we described how to set up a MySQL 8 Galera cluster that can survive any single node failure and keep operating fully, losing no database writes and continuing to serve reads.

In this article we describe how to make the cluster transparent to application clients: all requests go to a single endpoint (one IP address) and are forwarded from there to one of the Galera nodes.

 

Galera Cluster Solution Architecture

Run clustercheck on one of the nodes and check its status:

root@prod-db-01:~# clustercheck

HTTP/1.1 503 Service Unavailable

Content-Type: text/plain

Connection: close

Content-Length: 57

Percona XtraDB Cluster Node is not synced or non-PRIM.

We need to create a user in the Galera cluster so that clustercheck can use it to check node health.

CREATE USER 'clustercheckuser'@'localhost' IDENTIFIED BY 'clustercheckpassword!';

GRANT PROCESS ON *.* TO 'clustercheckuser'@'localhost';

Run clustercheck once again and verify the node is healthy.

root@prod-db-01:~# clustercheck

HTTP/1.1 200 OK

Content-Type: text/plain

Connection: close

Content-Length: 40

Percona XtraDB Cluster Node is synced.
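The two responses differ only in the HTTP status line (200 vs. 503), so a health-check consumer only needs to inspect the status code; this is exactly what HAProxy's httpchk option does against port 9200 further down. A minimal shell sketch of that decision, assuming the clustercheck output format shown above:

```shell
#!/bin/sh
# is_synced: read a clustercheck-style HTTP response on stdin and
# succeed only when the status code is 200 (node synced and primary).
is_synced() {
  read -r _proto status _rest
  [ "$status" = "200" ]
}

# Example, once xinetd exposes clustercheck on port 9200 (next section):
#   curl -s -i http://localhost:9200 | is_synced && echo "node is healthy"
```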

  1. Setup Xinetd for Galera clustercheck

On each node, set up and run:

  1. apt install xinetd

  2. systemctl enable xinetd

  3. systemctl start xinetd

  4. edit /etc/services and add:
    mysqlchk 9200/tcp
    Verify that port 9200 is not already used by another service; if it is, replace that service's entry with mysqlchk (or pick another free port).

  5. edit /etc/xinetd.d/mysqlchk and set:

service mysqlchk

{

  flags           = REUSE

  socket_type     = stream

  protocol        = tcp

  port            = 9200

  wait            = no

  user            = mysql

  server          = /usr/bin/clustercheck

  server_args     = clustercheckuser clustercheckpassword!

  log_on_failure  += USERID

  disable         = no

}

  6. systemctl restart xinetd


  7. Check the configuration: telnet localhost 9200

HTTP/1.1 200 OK

Content-Type: text/plain

Connection: close

Content-Length: 40

 

Percona XtraDB Cluster Node is synced.

Connection closed by foreign host.
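The port check from step 4 can be scripted instead of eyeballing /etc/services. A small sketch, assuming the standard whitespace-separated `name port/proto` layout of that file:

```shell
#!/bin/sh
# port_owner: print the service name registered for a given TCP port.
# $1 = port number, $2 = services file (defaults to /etc/services).
port_owner() {
  awk -v p="$1/tcp" '$1 !~ /^#/ && $2 == p { print $1 }' "${2:-/etc/services}"
}

# Example: no output means TCP port 9200 is free for mysqlchk.
#   port_owner 9200
```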




  2. Install and Setup HAProxy

On each node, set up and run:

  1. apt install haproxy

  2. edit /etc/haproxy/haproxy.cfg and set the cluster name and IPs according to your ENV

    global

        log 127.0.0.1   local0

        log 127.0.0.1   local1 notice

        maxconn 4096

        uid 99

        gid 99

        #daemon

        debug

        #quiet

 

defaults

        log     global

        mode    http

        option  tcplog

        option  dontlognull

        retries 3

        option redispatch

        maxconn 2000

        timeout connect 5000

        timeout client  50000

        timeout server  50000

 

frontend haproxy-monitoring

    bind *:80

    mode http

    stats enable

    stats show-legends

    stats refresh 5s

    stats uri /

    stats realm Haproxy\ Statistics

    stats auth monitor:xx56yhff9gkg

    stats admin if TRUE

 

frontend _xtradb-cluster

    bind *:3307

    mode tcp

    default_backend prod-xtradb-cluster

 

backend prod-xtradb-cluster

    mode tcp

    balance leastconn

    option  httpchk

    server prod-xtradb-01 10.10.10.1:3306 maxconn 1000 check port 9200
    server prod-xtradb-02 10.10.10.2:3306 maxconn 1000 check port 9200 backup

 

Port 3307 is the port the application will use to connect to MySQL. All connections will be routed to node-01, with node-02 serving as a failover. node-03 will be used for other application requests and for database backup procedures.
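For context: HAProxy keeps a server marked `backup` completely idle until all non-backup servers fail their health checks. The setup above deliberately keeps node-03 out of the application pool; if you ever wanted it as a last-resort failover target instead, a hypothetical extra line in the backend would do it (with several backup servers, HAProxy uses only the first available one unless `option allbackups` is set):

```
    server prod-xtradb-03 10.10.10.3:3306 maxconn 1000 check port 9200 backup
```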

 

  3. Enable and restart haproxy

    systemctl enable haproxy
    systemctl restart haproxy 

  4. Check that HAProxy works: connect to it on port 3307 from every node and verify that the query reaches the first node.

    mysql -u <user> -p -P 3307 -h 10.10.10.[1-3]

    select @@hostname;

 

  3. Install and Setup Keepalived

On each node, set up and run:

  1. apt install keepalived 

  2. vi /etc/keepalived/keepalived.conf
    # Global Settings for notifications

global_defs {

    notification_email {

        boc@best.com     # Email address for notifications

    }

    notification_email_from root@prod-xtradb-01 # The from address for the notifications

    smtp_server 127.0.0.1                       # SMTP server address

    smtp_connect_timeout 15

}

 

# Define the script used to check if haproxy is still working

vrrp_script chk_haproxy {

    script "/usr/bin/killall -0 haproxy"

    interval 2

    weight 2

}

 

# Configuration for Virtual Interface

vrrp_instance LB_VIP {

    interface eth0

    state MASTER        # set to BACKUP on the other nodes

    priority 101        # set to 100 on node-02, 99 on node-03

    virtual_router_id 51

 

    smtp_alert          # Enable Notifications Via Email

 

    authentication {

        auth_type AH

        auth_pass myP@ssword    # Password for accessing vrrpd. Same on all devices

    }

    unicast_src_ip 10.10.10.1 # Private IP address of master

    unicast_peer {

        10.10.10.2           # Private IP address of the backup haproxy

        10.10.10.3

   }

 

    # The virtual ip address shared between the two load balancers

    virtual_ipaddress {

        10.10.10.10

    }

 

    # Use the Defined Script to Check whether to initiate a fail over

    track_script {

        chk_haproxy

    }

}

 

On the second node set the priority to 100, on the third to 99, and set the state to BACKUP in this section:

vrrp_instance LB_VIP {

    interface eth0

    state MASTER        # set to BACKUP on the other nodes

    priority 101        # set to 100 on node-02, 99 on node-03

    virtual_router_id 51

 

In case you use a different NIC than eth0, use the one your machine actually uses.

  3. Enable and Restart Keepalived
    systemctl enable keepalived
    systemctl restart keepalived

  4. Check the VIP address on the first node
    Run ip addr and you should see the VIP on the first node. Stop HAProxy on the first node and the VIP should move to the second node; start it again and the VIP will return to the first node.

    eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

    link/ether 00:50:56:b5:f0:4c brd ff:ff:ff:ff:ff:ff

    inet 10.10.10.1/24 brd 10.10.10.255 scope global eth0

       valid_lft forever preferred_lft forever

    inet 10.10.10.10/32 scope global eth0

       valid_lft forever preferred_lft forever

    inet6 fe80::250:56ff:feb5:f04c/64 scope link

       valid_lft forever preferred_lft forever
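The /32 VIP in the output above is what distinguishes the active node, so the failover check can be scripted. A tiny sketch, assuming the VIP 10.10.10.10 configured earlier:

```shell
#!/bin/sh
# has_vip: succeed when the given VIP appears in `ip addr` output read
# from stdin (keepalived adds it as a /32 on the active node only).
has_vip() {
  grep -qF "inet $1/"
}

# Example:
#   ip addr | has_vip 10.10.10.10 && echo "this node holds the VIP"
```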



Done.
