
MySQL Database Replication Across Clouds - Tutorial


Objective

To set up a MySQL database that's asynchronously replicated across multiple clouds.

Prerequisites

  • You must log in under a RightScale account with the 'actor', 'designer', 'security manager', and 'library' user roles in order to complete the tutorial.
  • If the cloud supports security groups, you will need to either create a new security group or update an existing one so that TCP port 3306 is open to all IPs (0.0.0.0/0). Note: iptables must be enabled on all servers to ensure that 3306 is only open for the relevant servers.
  • An existing deployment where the master and slave database servers are configured for the same cloud. You should also have a completed primary backup of the database, created when you initially set up the database by following the Database Manager for MySQL 5.1/5.5 (v13 Infinity) - Tutorial.
  • Important! - If you create the master and slave servers in different RightScale deployments, you must ensure that each deployment's "Tag Scope" field is set to the "Account" level. For more information on performing this action, see Use Tags Across Deployments.

Overview

This tutorial describes the steps for configuring a redundant MySQL 5.1/5.5 database setup that consists of a master database server and two slaves, where one of the slaves runs in a different cloud/region. For high-value applications that require maximum uptime, this protects your production database against a single cloud becoming a single point of failure. For example, if there is a major, unrecoverable failure in the primary cloud, you can fail over to the slave located in the other cloud for disaster recovery purposes.

diag-db_replication-v1.png

In the sample configuration above, the "warm" slave in Cloud Y is replicating with the master database server in Cloud X and can be promoted to become the new master database server in the unfortunate event that there is a catastrophic failure and both the master and slave database servers in Cloud X are no longer serviceable.

If you are replicating data between servers located in different clouds/regions, the servers must communicate with each other over the public network. Therefore, it's strongly recommended that you encrypt the data using SSL. This tutorial will explain how to create the required certificates and keys for setting up SSL and set the inputs accordingly for the servers in the deployment.

It's also important to realize that since replication is performed over the public network, additional data transfer charges may apply depending on the cloud's pricing guidelines. Similarly, you may see a slight decrease in overall network performance.

Check out the Outage-Proof Your Cloud Applications webinar to learn more about multi-cloud disaster recovery scenarios.

Steps

Create Credentials for SSL Certificates and Keys

Create the following SSL Certificates and Keys, which are required to set up encrypted replication between MySQL master and slave database servers.

  • CA SSL Certificate
  • Master SSL Certificate
  • Master SSL Key
  • Slave SSL Certificate
  • Slave SSL Key

If you have a running server that was launched with one of RightScale's ServerTemplates, you can SSH into the instance and run the following commands. Once you create the certificates and keys, create credentials for each one. See Create a New Credential.

CA SSL Certificate

Create a self-signed certificate authority (CA) certificate. Replace 'RightScale' with your company name and 'RightScale CA' with your department name.

# openssl genrsa -out ca-key.pem 2048
# openssl req -new -x509 -nodes -days 3600 -key ca-key.pem -out ca-cert.pem -subj '/C=US/O=RightScale/OU=RightScale CA'

# view ca-cert.pem

Use the contents of the files above to create the following credentials:

  • MYSQL_SSL_CA_CERT
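As an optional sanity check (not part of the RightScale procedure itself), you can inspect the subject and validity window of the CA certificate before storing it as a credential. The sketch below regenerates a throwaway CA with the same commands shown above and then prints its details:

```shell
# Throwaway CA, generated with the same commands shown above.
openssl genrsa -out ca-key.pem 2048
openssl req -new -x509 -nodes -days 3600 -key ca-key.pem -out ca-cert.pem \
  -subj '/C=US/O=RightScale/OU=RightScale CA'

# Print the subject and the notBefore/notAfter dates of the certificate.
openssl x509 -in ca-cert.pem -noout -subject -dates
```

The subject line should show the organization and department names you substituted, and the notAfter date should be roughly ten years out (3600 days).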

Master (Server) Certificate and Key

Create certificate and key for the master database server.

# openssl req -newkey rsa:2048 -days 3600 -nodes -keyout server-key.pem -out server-req.pem -subj '/C=US/O=RightScale/OU=RightScale'
# openssl rsa -in server-key.pem -out server-key.pem
# openssl x509 -req -in server-req.pem -days 3600 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem

# view server-key.pem
..
# view server-cert.pem

Use the contents of the files above to create the following credentials:

  • MYSQL_SSL_MASTER_CERT
  • MYSQL_SSL_MASTER_KEY

 

Slave (Client) Certificate and Key

Create certificate and key for a slave database server.

# openssl req -newkey rsa:2048 -days 3600 -nodes -keyout client-key.pem -out client-req.pem -subj '/C=US/O=RightScale/OU=RightScale'
# openssl rsa -in client-key.pem -out client-key.pem
# openssl x509 -req -in client-req.pem -days 3600 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem

# view client-key.pem
..
# view client-cert.pem

Use the contents of the files above to create the following credentials:

  • MYSQL_SSL_SLAVE_KEY
  • MYSQL_SSL_SLAVE_CERT
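Before storing the credentials, it's worth confirming that each certificate actually chains to your CA and matches its private key; a mismatched pair will cause the replication SSL handshake to fail. The sketch below regenerates a throwaway CA and client (slave) pair with the same commands as above and runs both checks; the same checks apply to the server (master) pair:

```shell
# Throwaway CA and client (slave) pair, using the same commands as above.
openssl genrsa -out ca-key.pem 2048
openssl req -new -x509 -nodes -days 3600 -key ca-key.pem -out ca-cert.pem \
  -subj '/C=US/O=RightScale/OU=RightScale CA'
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout client-key.pem \
  -out client-req.pem -subj '/C=US/O=RightScale/OU=RightScale'
openssl x509 -req -in client-req.pem -days 3600 -CA ca-cert.pem \
  -CAkey ca-key.pem -set_serial 01 -out client-cert.pem

# 1. The certificate must chain to the CA (prints "client-cert.pem: OK").
openssl verify -CAfile ca-cert.pem client-cert.pem

# 2. The certificate and key must share the same RSA modulus.
cert_mod=$(openssl x509 -noout -modulus -in client-cert.pem | openssl md5)
key_mod=$(openssl rsa -noout -modulus -in client-key.pem | openssl md5)
[ "$cert_mod" = "$key_mod" ] && echo "client cert and key match"
```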

Configure Security Groups

Update Existing Security Groups

If the existing database servers are using a cloud that supports security groups (e.g. AWS EC2), you must update their firewall permissions to allow ingress communication on TCP port 3306 from all IP addresses (0.0.0.0/0).

Each database server must have TCP port 3306 open to all IPs (0.0.0.0/0). The permission must be applied to all database servers, not just the master database server, because if you promote a slave to become the new master, it must have TCP port 3306 open for replication purposes. Remember, even though the firewall permission allows ingress communication to any IP address, iptables will ensure that the port is only open to the specific IP addresses of the slave database servers and any associated application servers. In this particular case, iptables is configured on the master database server to allow requests over the public network.
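For illustration only, the rule the sys_firewall scripts manage on the master has roughly the following shape; the IP address below is a hypothetical slave public IP, and you should not normally add such rules by hand since the ServerTemplate derives them from server tags automatically:

```shell
# Hypothetical slave public IP; on a real deployment the sys_firewall
# scripts discover this from server tags rather than a hardcoded value.
SLAVE_PUBLIC_IP="203.0.113.10"

# Shape of the rule: allow TCP 3306 only from that specific address,
# even though the security group has the port open to 0.0.0.0/0.
RULE="-A INPUT -p tcp --dport 3306 -s ${SLAVE_PUBLIC_IP} -j ACCEPT"
echo "iptables ${RULE}"
```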

If the database servers use more than one security group, the port permission only needs to be added to one of the security groups, but the permission must be applied to all database servers.

screen-MultiCloud-Replication-SG3306-v1.png

Create a New Security Group

If you are launching the second slave database server in a cloud that supports security groups, you must also create a new security group that has the same firewall permissions as the other database servers. (i.e. Open TCP port 3306 to all IPs.)

Create a Database Server

Follow these steps to add a database server to the deployment.

Use the same ServerTemplate that you used to create the other database servers to add a third server to the deployment, but this time select a different cloud/region than the other database servers. (Note: You cannot clone an existing server because you cannot change the selected cloud/region.) If the chosen cloud supports security groups, be sure to select one that has TCP port 3306 open to any IP address. For more information, see Add Server Assistant.

 

Important! If you create the master and slave servers in different RightScale deployments, you must ensure that each deployment's "Tag Scope" field is set to the "Account" level. For more information on performing this action, see Use Tags Across Deployments.

Configure Inputs

Set Inputs at the Deployment Level

Go to the deployment's Inputs tab (Manage > Deployments > your deployment > Inputs) and click Edit.

Rackspace only
If you use Rackspace for your database servers and backup storage (i.e., Cloud Files), the storage-related Chef recipes will use Rackspace Service Net (SNET) by default. SNET is Rackspace's internal private networking service for optimized communication between Rackspace Cloud Servers and Cloud Files. If SNET is not supported in your Rackspace environment, you must set the "Rackspace SNET Enabled for Backup" input to false; otherwise, all backup and restore operations that rely on Cloud Files will fail.

 

BLOCK DEVICE

Required

Secondary Backup Storage Cloud (default)

The cloud provider of the specified ROS container where the secondary backup will be stored.

  • s3 - Amazon S3
  • Cloud_Files - Rackspace Cloud Files (United States)
  • Cloud_Files_UK - Rackspace Cloud Files (United Kingdom)
  • google - Google Cloud Storage
  • azure - Microsoft Azure Blob Storage
  • swift - OpenStack Object Storage (Swift)
  • hp - Hewlett Packard Cloud Object Storage
  • SoftLayer_Dallas - SoftLayer's Dallas (USA) cloud
  • SoftLayer_Singapore - SoftLayer's Singapore cloud
  • SoftLayer_Amsterdam - SoftLayer's Amsterdam cloud

Example value: text: s3
Secondary Backup Secret (default)

The cloud credential required to store a file in the ROS location specified by the Secondary Backup Storage Cloud (default) input.

  • Amazon S3 - AWS Secret Access Key (e.g. cred: AWS_SECRET_ACCESS_KEY)
  • Rackspace Cloud Files - Rackspace Account API Key (e.g. cred: RACKSPACE_AUTH_KEY)
  • Google Cloud Storage - Google Secret Access Key (e.g. cred: GOOGLE_SECRET_ACCESS_KEY)
  • Microsoft Azure Blob Storage - Microsoft Primary Access Key (e.g. cred: AZURE_PRIMARY_ACCESS_KEY)
  • Swift - OpenStack Object Storage (Swift) Account Password (e.g. cred: SWIFT_ACCOUNT_PASSWORD)
  • HP - HP Secret Access Key (e.g. cred: HP_SECRET_ACCESS_KEY)
  • SoftLayer Object Storage - SoftLayer API Access Key (e.g. cred: SOFTLAYER_API_KEY)

Example values: cred: AWS_SECRET_ACCESS_KEY (S3) or cred: RACKSPACE_AUTH_KEY (Cloud Files)
Secondary Backup User (default)

The cloud credential required to store a file in the ROS location specified by the Secondary Backup Storage Cloud (default) input.

  • Amazon S3 - Amazon Access Key ID (e.g. cred: AWS_ACCESS_KEY_ID)
  • Rackspace Cloud Files - Rackspace login username (e.g. cred: RACKSPACE_USERNAME)
  • Google Cloud Storage - Google Access Key (e.g. cred: GOOGLE_ACCESS_KEY_ID)
  • Microsoft Azure Blob Storage - Azure Storage Account Name (e.g. cred: AZURE_ACCOUNT_NAME)
  • Swift - OpenStack Object Storage (Swift) Account ID (tenantID:username) (e.g. cred: SWIFT_ACCOUNT_ID)
  • HP - HP Object Storage Account ID (account number:tenantID) (e.g. cred: HP_ACCESS_KEY_ID)
  • SoftLayer Object Storage - Username of a SoftLayer user with API privileges (e.g. cred: SOFTLAYER_USER_ID)

Example values: cred: AWS_ACCESS_KEY_ID (S3) or cred: RACKSPACE_USERNAME (Cloud Files)

Secondary Backup Storage Container (1)

The name of the ROS container where the secondary backups will be saved to or restored from. If undefined, secondary backups will be saved to a container whose name matches the value of the 'Database Backup Lineage' input. If the container does not exist, a new container will be created using the lineage name in the default ROS region (S3: us-east; Cloud Files: Dallas). The script will fail if a container cannot be created, which may occur in ROS services where container names share a global namespace and a container with that name already exists (e.g. Amazon S3).

Tip: If you want the secondary container to be in a specific region for performance reasons, you should create the container before launching any servers.

Example value: text: my-container

 

Advanced

Secondary Backup Storage Cloud Endpoint URL (default)

If you set the 'Secondary Backup Storage Cloud (default)' input to 'swift', you must specify the endpoint URL of the ROS location where the secondary backup will be stored.

Example value: text: http://swift.example.com

DB

Advanced

Database Replication Network Interface

Defines the network interface to use for database replication. If the master and slave database servers are in the same cloud, you should establish replication over the private network. However, if a slave database server exists in a different cloud/region than the master database server, you must perform replication over the public network. In such cases, it's strongly recommended that you use SSL to encrypt the replicated data for security reasons.

The chosen network selection determines which IP address (private or public) is used to update the DNS record of the master database server, which is defined by the Database Master FQDN input.

  • private (default)
  • public

Example value: text: public

DB_MYSQL

Since you are setting up an additional slave database server in a different cloud/region than the master database server, database replication is performed over the public network. (Database Replication Network Interface = public) Therefore, it's strongly recommended that you use SSL to securely encrypt the transfer of data files.

All five of the following inputs are required for encrypted replication over the public network. Use a credential to store each value.

  • CA SSL Certificate - The name of your CA SSL certificate. Example value: cred: MYSQL_SSL_CA_CERT
  • Master SSL Certificate - The name of your master SSL certificate. Example value: cred: MYSQL_SSL_MASTER_CERT
  • Master SSL Key - The name of your master SSL key. Example value: cred: MYSQL_SSL_MASTER_KEY
  • Slave SSL Certificate - The name of your slave SSL certificate. Example value: cred: MYSQL_SSL_SLAVE_CERT
  • Slave SSL Key - The name of your slave SSL key. Example value: cred: MYSQL_SSL_SLAVE_KEY
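Once a server is operational with these inputs, you can confirm that MySQL was started with SSL support by checking the server's SSL variables (on the instance, run: mysql -e "SHOW VARIABLES LIKE 'have_ssl'"). The sketch below parses a hypothetical excerpt of that output rather than querying a live server, since the actual value depends on your build and configuration:

```shell
# Hypothetical output excerpt from:
#   mysql -e "SHOW VARIABLES LIKE 'have_ssl'"
have_ssl_output=$(cat <<'EOF'
Variable_name	Value
have_ssl	YES
EOF
)

# A value of YES means the server is able to accept SSL connections;
# DISABLED means it was compiled with SSL support but not configured.
echo "$have_ssl_output" | grep 'have_ssl' | grep -q 'YES' && echo "SSL enabled"
```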

Launch the Database Server

After configuring your inputs, launch your newly configured master database server.

  1. Go to the deployment's Servers tab and launch the database server. On the input confirmation page, there should not be any required inputs with missing values. If there are (highlighted in red), cancel the launch and add the missing values at the deployment level before launching the server again. Refer to the instructions in Launch a Server if you are not familiar with this process. Once no required inputs are missing for any boot scripts, click the Launch button at the bottom of the input confirmation page.

Restore the Master Database Server

This tutorial assumes that you already have a completed primary backup of the database. Once the server becomes operational, restore the database from the primary backup and change the server's role to become the "master" database server.

  1. Go to the "current" server's Scripts tab and run the db::do_primary_restore_and_become_master operational script. Check the server's Audit Entries tab and wait for the script to be 100% completed.
  2. Go back to the "current" server's Scripts tab and run the db::do_primary_backup_schedule_disable operational script to disable continuous primary backups. Later, after the setup is complete, you can re-enable continuous backups.

Launch the First Slave Database Server

  1. Launch the slave database server that's in the same cloud/region as the master database server. At the pre-launch inputs confirmation screen,  set the Init Slave at Boot input to true because you already have a primary backup available in the same cloud. Note: The Init Slave at Boot input is located under the DB category as an "Advanced" input.
  2. Click the Launch button.

Create a Secondary Backup

The next step is to create a secondary backup in an ROS location, which will be used by the slave database server that will be launched into a different cloud/region than the master database server.

Important! You must create a secondary backup from the current (running) master database server. You cannot use an older secondary backup that was created from another instance even if it uses the same lineage name.

  1. Go to the "current" Scripts tab of the master database server and run the db::do_secondary_backup operational script.
  2. Wait for the script to be completed and then check the ROS container to verify that the binary files were saved to the correct container. In the example screenshot below, the binary backup files are stored in the 'dean-publish-bucket' container in a new folder based on the Nickname (1) input and timestamp (dean-v13-20130301).
    screen-MultiCloud-Replication-S3-v1.png

Launch the Second Slave Database Server

Now that you have a replicated master-slave setup in the same cloud/region, you can create and launch a second slave database server in a different cloud/region for high availability and cloud failover purposes.

  1. Use the same ServerTemplate that was used to create the other database servers and add another server to the same deployment. Be sure to select a different cloud/region than the other database servers. If the cloud supports security groups, the selected security group must have TCP port 3306 open to any IPs.
  2. (Rackspace Open Cloud only) If you are launching the database server in the Rackspace Open Cloud, you must specify a value greater than 100 for the 'Total Volume Size (1)' input. The minimum volume size for Rackspace is 100GB, but the input's default is 10GB, so you may have to override this input at the Server level (under the server's Inputs tab).
  3. Launch the second slave database server. At the pre-launch inputs confirmation screen, leave the Init Slave at Boot input set to false because you do not have a primary backup available in the new cloud that the server can use for initialization purposes. Note: The Init Slave at Boot input is located under the DB category as an "Advanced" input.
  4. Click the Launch button.
  5. Once the server becomes operational, go to the "current" server's Scripts tab and run the db::do_secondary_init_slave operational script, which restores the database from the secondary backup (stored in the specified ROS container) that you created in an earlier step.
    Note: A primary backup (db::do_primary_backup) on this server is automatically executed after the db::do_secondary_init_slave script is complete.

Enable Scheduled Primary Backups

It is now safe to enable continuous primary backups of the database according to the defined backup policy.

  1. Go to the "current" server's Scripts tab and run the db::do_primary_backup_schedule_enable operational script.


For more information about configuring and modifying your scheduled backup policy, see the Database Manager for MySQL 5.1/5.5 (v13 Infinity) - Runbook.

(Optional) Update the DNS Record for the Slave Database Server

If you created a DNS record for the slave database server, you can set a value for the Database Slave DNS Record ID input and run the db::do_set_dns_slave_private_ip operational script.

Test Database Setup (recommended)

It's strongly recommended that you check the database setup to verify replication on both slaves.

Check Tags

screen-MultiCloud-Replication-Tags-v2.png

List Firewall Permissions

  1. Go to the Scripts tab of the "current" master database server and run the sys_firewall::do_list_rules operational script to list all of its firewall permissions.
  2. Go to the server's Audit Entries tab and view the results. You should see that the master database server is configured to accept requests from the public IP addresses of both slave database servers. (See screenshot below.) If there are any application servers connected to the master database server, you should also see their public IP addresses listed.
    screen-MultiCloud-Replication-IPList-v1.png

Check Replication Status

SSH into each database server and check its MySQL status. See Check Database Status of Master or Slave.

You should notice that the master database server is identified by its public (not private) IP address.
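On each slave, the key fields in the SHOW SLAVE STATUS output are Master_Host (which should be the master's public IP for the cross-cloud slave), Slave_IO_Running, and Slave_SQL_Running. The sketch below checks a hypothetical excerpt of that output with placeholder values; on a real slave you would pipe mysql -e "SHOW SLAVE STATUS\G" into the same checks:

```shell
# Hypothetical excerpt of: mysql -e "SHOW SLAVE STATUS\G"
# (the master host and lag values below are placeholders).
slave_status=$(cat <<'EOF'
Master_Host: 198.51.100.25
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0
EOF
)

# Healthy replication requires both the IO and SQL threads to report Yes.
io=$(echo "$slave_status" | grep -c 'Slave_IO_Running: Yes')
sql=$(echo "$slave_status" | grep -c 'Slave_SQL_Running: Yes')
[ "$io" -eq 1 ] && [ "$sql" -eq 1 ] && echo "replication healthy"
```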

Post Setup Maintenance Tasks

Clean-up Backups of Second Slave

By default, the second slave database server (in the secondary cloud/region) takes regular primary backups every hour, just like the first slave (in the primary cloud/region). However, since you already have an archive of backups from the first slave, you do not need to keep all of the second slave's backups. The primary purpose of the second slave is to be ready for promotion to master in the event of a catastrophic failure in the primary cloud; a running second slave is more useful than its history of backups. Therefore, you may want to save on cloud storage costs by removing the second slave's backups, keeping only the minimum number you feel is necessary.

Last Modified
15:17, 29 Aug 2013
