Zarafa High Availability setup with MySQL master-slave


This article describes how you can create a manual High Availability system for Zarafa, MySQL and Apache. MySQL replication is used to replicate the database between the two servers.

Introduction

In this whitepaper we describe how you can create a High Availability system for Zarafa and MySQL between two servers. This HA system is based on an active-passive environment.

In this case we don't use DRBD to keep exact copies of both servers. Before you start configuring the HA system, you need to install Apache, Zarafa and an MTA on both servers. To monitor the servers we use the standard Linux tool Heartbeat. The standby server monitors the main server: when the main server is down for more than 30 seconds, the standby server will start Zarafa and take over the IP address of the main server. The database is replicated continuously between the two servers by the default MySQL master-slave replication.

To avoid inconsistent databases we advise you to manually reconfigure the HA environment after a failover.



Zarafa in High Availability environment

Before you start setting up a High Availability system, make sure you have a complete backup of your Zarafa database and solid experience with MySQL configuration. Zarafa is NOT responsible for lost data or corrupt databases.

Setup MySQL replication

In a master-slave replication environment the slave executes all queries that are run on the master server. The master server records all executed queries in binary logfiles; the slave server reads these logfiles and executes the queries on its own database. For an application like Zarafa it is not possible to perform write actions on the slave server. The slave server reads the binlog files continuously. When the master server fails, the slave can no longer connect to the master server and stops reading the binary logfiles.


Configuration

To configure MySQL for replication you need two servers, both with the same MySQL version installed. First configure the master server with the following steps:

  • Open the my.cnf file
  • Add the following to this file below the option [mysqld]:

server_id = 1
log-bin
binlog-do-db=zarafa
binlog-ignore-db=mysql
binlog-ignore-db=test
innodb_safe_binlog

  • Restart the mysqld after you edited the my.cnf file
  • Log in on the MySQL console with mysql -u root -p and enter the MySQL root password when prompted
  • Execute the following command:

 GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.168.0.1' IDENTIFIED BY 'secret';

This command gives the user replication access to the master server with the password secret; the IP address 192.168.0.1 is the address the slave server will connect from.
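
A quick way to verify that the replication user works is to connect from the slave server to the master with the MySQL client; this assumes that 192.168.0.1 in the GRANT statement is the slave's IP address and that master.zarafa.com resolves to the master server:

# run this on the slave server; it should print a single row with the value 1
mysql -h master.zarafa.com -u replication -psecret -e "SELECT 1;"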

  • Execute the command: mysql> FLUSH TABLES WITH READ LOCK;

This command will lock the databases on the master server.

  • Copy the databases of the master server to the slave server; make sure the mysqld on the slave server is not running.

cd /var/lib/mysql
tar -cvf /tmp/mysql-snapshot.tar .
scp /tmp/mysql-snapshot.tar root@slave-server:~
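
As an alternative to the tar snapshot, you could also dump only the zarafa database while the read lock is still held in the other MySQL session; this is a sketch, the paths are illustrative:

# on the master server, keep the session holding FLUSH TABLES WITH READ LOCK open
mysqldump -u root -p --databases zarafa > /tmp/zarafa-snapshot.sql
scp /tmp/zarafa-snapshot.sql root@slave-server:~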

  • Execute the command show master status; on the mysql prompt.

This command shows you the following output:

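The exact file name and position differ per installation; the output will look roughly like this:

mysql> show master status;
+-------------------+----------+--------------+------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+-------------------+----------+--------------+------------------+
| mysqld-bin.000001 |       98 | zarafa       | mysql,test       |
+-------------------+----------+--------------+------------------+

Note the values in the File and Position columns; you will need them when configuring the slave.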

  • Logon to the slave server
  • Stop the mysql server
  • Add the option server-id = 2 to the my.cnf file
  • Untar the copied databases of the master server
  • Start mysqld
  • Log in on the MySQL console
  • Execute the command: mysql> stop slave;
  • Execute the command:

change master to
 MASTER_HOST='master_host_name',
 MASTER_USER='replication_user_name',
 MASTER_PASSWORD='replication_password',
 MASTER_LOG_FILE='recorded_log_file_name',
 MASTER_LOG_POS=recorded_log_position;

Replace the host with the master server's hostname, and the username and password with the values you used in the GRANT command on the master server.

The recorded_log_file_name should be the value from the File column of the output of the show master status; command.

The recorded_log_position should be the value from the Position column of the output of the show master status; command.
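
Filled in with the example values used earlier in this article (the log file name and position must of course come from your own show master status; output), the command could look like this:

change master to
 MASTER_HOST='master.zarafa.com',
 MASTER_USER='replication',
 MASTER_PASSWORD='secret',
 MASTER_LOG_FILE='mysqld-bin.000001',
 MASTER_LOG_POS=98;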

  • Execute the command mysql> start slave;
  • Your MySQL replication should now work correctly.

You can check the status of the MySQL replication by executing the command show slave status; on the MySQL prompt.
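
In that output the most important fields are Slave_IO_Running and Slave_SQL_Running, which should both say Yes, and Seconds_Behind_Master, which shows the replication lag. For example (shortened, values are illustrative):

mysql> show slave status\G
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0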


Setup Heartbeat monitoring

Install the Heartbeat packages on both servers. In most distributions the Heartbeat packages are in the default repository.

To use Heartbeat it's very important that the hostnames of both servers are correctly configured and available in your hosts table.

/etc/hostname -> Full servername (e.g. master.zarafa.com)
/etc/hosts -> Both servers should be available in this file

For example:

10.23.200.20 master.zarafa.com master.zarafa.com
10.23.200.10 slave.zarafa.com slave.zarafa.com

You can check your hostname via the command uname -n or hostname.
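
For example, with the addresses used above, the following commands on the master server should give this kind of output:

hostname                      # should print master.zarafa.com
getent hosts slave.zarafa.com # should print 10.23.200.10  slave.zarafa.com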

To configure Heartbeat go to the Heartbeat configuration directory /etc/ha.d. Open or create the file ha.cf and add the following lines:

debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 10
deadtime 60
initdead 90
udpport 694
bcast eth1 # interface that's used for monitoring
auto_failback no
node master.zarafa.com # master hostname
node slave.zarafa.com # slave hostname

Copy the ha.cf to your slave server and switch the last two lines, so the slave node is the first line.
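
On the slave the last two lines of ha.cf would then read:

node slave.zarafa.com # slave hostname
node master.zarafa.com # master hostname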

Open or create the file haresources on the master and add the following line:

master.zarafa.com failover

This line sets the init-script that will be started when the master is down.

Open or create the file haresources on the slave and add the following line:

slave.zarafa.com


Open or create the file authkeys and add the following line:

auth 2
2 sha1 security!

Change the permissions of this file to 600 and copy this file to the slave server.
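
For example, using the configuration directory /etc/ha.d and the example slave hostname from above:

chmod 600 /etc/ha.d/authkeys
scp /etc/ha.d/authkeys root@slave.zarafa.com:/etc/ha.d/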

Now you have to create the init-script that will be started when the master server is down. Below you will find an example failover script that starts the Zarafa services and takes over the IP address of the master server.

#! /bin/sh
export PATH="${PATH:+$PATH:}/usr/sbin:/sbin"
case "$1" in
start)
 # take over the master's IP address as an alias on eth0 and start the Zarafa services
 ifconfig eth0:1 10.23.200.20 netmask 255.0.0.0
 /etc/init.d/zarafa-gateway start
 /etc/init.d/zarafa-ical start
 /etc/init.d/zarafa-monitor start
 /etc/init.d/zarafa-server start
 /etc/init.d/zarafa-spooler start
;;
stop)
 # release the alias IP address again and stop the Zarafa services
 /etc/init.d/networking restart
 /etc/init.d/zarafa-gateway stop
 /etc/init.d/zarafa-ical stop
 /etc/init.d/zarafa-monitor stop
 /etc/init.d/zarafa-server stop
 /etc/init.d/zarafa-spooler stop
;;
restart)
 $0 stop
 $0 start
 echo "."
;;
*)
 echo "Usage: /etc/init.d/failover {start|stop|restart}"
 exit 1
;;
esac
exit 0
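
Save the script under the name that matches the failover resource in the master's haresources file; judging from the usage line above the intended location is /etc/init.d/failover (Heartbeat looks for haresources scripts in /etc/ha.d/resource.d/ and /etc/init.d/). Don't forget to make it executable:

chmod +x /etc/init.d/failover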


Tips and hints

To start the HA cluster, we recommend starting the master server first and then the slave server. We strongly advise starting the Heartbeat monitoring manually, and only after both servers have started successfully.
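
With Heartbeat installed from the distribution packages this usually comes down to running the Heartbeat init-script by hand, first on the master and then on the slave (the exact script name can differ per distribution):

/etc/init.d/heartbeat start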

It's recommended to use exactly the same Zarafa version on both the master and the slave server.
