Take an existing MySQL database, port it over to an EC2 instance, and set up a redundant master-slave database with continuous backups to S3.
The RightScale Dashboard makes it easy to create a redundant MySQL database on EC2 with complete failover and recovery. Learn more about MySQL redundancy from the related post on the RightScale Blog.
To see how easy it is to build a redundant MySQL database on the cloud, follow the tutorial below.
Step 1: Add "Master" MySQL database to the deployment
Step 2: Add "Slave" MySQL database to the deployment
Step 3: Define Input parameters
Step 4: Launch "Master-DB"
Step 5: Create a backup of "Master-DB" on S3
Step 6: "What happens when the Master-DB is terminated?"
The first step is to add the Master and Slave databases as components to the Production deployment.
Go to Manage -> Deployments.
Select the Production deployment that you already created earlier in the Create a Production Deployment tutorial.
Under the Production deployment's Servers tab, add a server for the Master-DB. Select the 'EC2-US' cloud region and click the Add Server button.
Select the most recent MySQL Bootstrap ServerTemplate from the RightScale list and name it MySQL Master.
Select the production SSH key and production Security Group. You can assign additional security groups, if necessary.
Keep the default selections for Availability Zone and Elastic IP and click Add.
WARNING! You should use the same availability zone for all servers in your deployment, otherwise you will pay additional cross-zone traffic costs and see a decrease in your site's performance.
Now that the Master-DB has been added to the Production deployment, add the Slave-DB.
Click the Add EC2 Server action button.
This time select the most recent MySQL Additional ServerTemplate, and name it MySQL Slave. Be sure to select the production SSH key and production Security Group. Keep the default selections for Availability Zone and Elastic IP and click Add.
The Master-DB and Slave-DB are now components of the Production deployment.
Now we need to configure the Input parameters for the Production deployment. This is not a difficult process, but it must be completed with an eye for detail, particularly for newer users.
Select the Production deployment's Inputs tab. We will now configure the Input parameters for the Master-DB and Slave-DB. Click Edit.
There are five different types of inputs:
"Warning! Input contains leading/trailing whitespace."
As a best practice, no Input value should have any extra spaces before or after the value. If one exists, you might see a warning message in red suggesting that you delete any extra spaces.
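What the Dashboard is flagging amounts to a value differing from its whitespace-trimmed form. The following sketch (the sample value is made up) shows the equivalent trim with sed:

```shell
# Hypothetical input value with stray whitespace around it
VALUE="  my-bucket  "
# Strip leading and trailing whitespace, as the Dashboard warning suggests
TRIMMED=$(printf '%s' "$VALUE" | sed 's/^[[:space:]]*//;s/[[:space:]]*$//')
echo "$TRIMMED"
```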
Since we want to create backups of our database in S3, the templates require that you include your Amazon keys. For the first two parameters, select Cred (credential) as the input type and choose your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
| AWS_ACCESS_KEY_ID | Cred | Select your AWS Access Key ID |
| AWS_SECRET_ACCESS_KEY | Cred | Select your AWS Secret Access Key |
The next two parameters tell the Master-DB where to place the backups in S3. When backups are taken, the files are named with the prefix that you define, followed by a timestamp of when they were created, so that you can easily identify the most recent backup. You can define whatever prefix you want to use.
| BACKUP_S3_BUCKET | Text | Enter the name of an existing S3 bucket where you want the database backups to be stored. (Recall that as a best practice we recommend using all lowercase characters, and hyphens rather than underscores.) |
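To illustrate the naming convention described above, a backup file name is just a user-chosen prefix plus a creation timestamp. The exact format the backup RightScript produces may differ; the prefix and pattern below are assumptions for illustration only:

```shell
# Assumed prefix; the tutorial later uses "production"
PREFIX="production"
# UTC timestamp of when the backup is created
TIMESTAMP=$(date -u +%Y%m%d%H%M)
# Hypothetical backup file name: prefix, then timestamp
BACKUP_NAME="${PREFIX}-mysql-${TIMESTAMP}.gz"
echo "$BACKUP_NAME"
```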
The following four input variables define the admin usernames and passwords for the database and replication setup.
NOTE: Make sure that the database admin user (DBADMIN_USER) is not root.
| DBADMIN_USER | Text | Your user ID (do not use "root") |
| DBREPLICATION_USER | Text | Your user ID |
The DB_SERVER_USAGE parameter defines which MySQL configuration you want to use. Use the dropdown menu to select your instance type (dedicated or shared). The default is "dedicated."
In order to handle the possibility of a failure, we will need to use a DNS record to refer to the Master-DB and Slave-DB instances. Therefore, the server will need to access your DNSMadeEasy username and password.
| DNSMADEEASY_USER | Cred | Your user ID |
To make sure that the Slave-DB automatically initializes itself when it boots, set the INIT_SLAVE_AT_BOOT input parameter to True (the default). Set it to False if you do not want it to initialize at boot time. The only case where you would want to do this is when you don't already have a Master-DB instance running and you want to launch a slave; for example, when you are starting out with a "clean" install, completely from scratch.
Input the DNS-ID and DNS-Name values that you received during your DNSMadeEasy registration.
| MASTER_DB_DNSID | Text | Your DNS ID |
| MASTER_DB_DNSNAME | Text | Your DNS Name |
At this time you can set MON_PROCESSES to Ignore. In the future, if you want to monitor a specific process, you can set this Input parameter as Text and enter the exact process name as it appears in the Unix process table (ps -ef | more).
OPT_BACKUP_SLAVE_ROTATION_SIZE defines the number of revolving backups for the Slave-DB to keep. If you leave this parameter undefined, the default value of 6 is used: only the last 6 backups are saved to S3, and the oldest backup is overwritten to make room for each new one. By default, snapshots (backups) of a slave database are taken every 10 minutes. Setting OPT_COMPRESS_SLAVE_BACKUPS to "no" or "false" disables compression of these backups, which takes some of the CPU load off the slave; any other value enables compression, which decreases the storage size on S3. Set both of these to Ignore.
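The rotation policy itself is easy to picture: keep the newest N backups and delete the rest. The sketch below mimics that policy locally with plain files; the directory and file names are made up, and the real RightScripts manage objects in S3 rather than local files:

```shell
# Work in a throwaway directory containing 8 fake backup files
DEMO=$(mktemp -d)
cd "$DEMO"
ROTATION_SIZE=6
for i in 1 2 3 4 5 6 7 8; do touch "backup-$i.gz"; done
# Keep the $ROTATION_SIZE most recently modified backups, delete the rest
ls -1t backup-*.gz | tail -n +"$((ROTATION_SIZE + 1))" | xargs -r rm --
ls -1 backup-*.gz | wc -l
```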
NOTE: You will only have to go to S3 in the event that you have no database server running or all running databases are corrupted.
OPT_MYSQL_MAX_CONNECTIONS defines the number of maximum MySQL connections that you want to allow to the database at any given time. Once your deployment is operational, you might want to define a limit. The default value is 500.
The Slave-DB will need access to the Master-DB in order to take backups, so it will need the Master-DB's SSH key.
| PRIVATE_SSH_KEY | Dropdown Menu | key: (EC2 US) production |
Enter the Slave-DB DNS ID for the SLAVE_DB_DNSID.
| SLAVE_DB_DNSID | Text | The Slave-DB's DNS ID |
Set the SYSLOG_SERVER variable to syslog.rightscale.com.
When you have finished defining all of the new Input parameters, click Save.
Now we are ready to launch the Master-DB. Click the Servers tab. Press the green play button that corresponds to the MySQL Bootstrap instance. Verify that the input values are correct, and click Launch. (If you forgot to provide information for any required Inputs, they will be highlighted on the next screen in red. Before you can launch the instance, all required Inputs must have acceptable values.)
It should take approximately 5 minutes for the instance to boot. You should see the instance pending in the Events pane on the left hand side of the browser.
We expect the server to come up in a "stranded in booting" state. You can track the status under the Events pane.
In order to take the fastest and most efficient backups possible, backups are taken as Logical Volume Management (LVM) snapshots. The Bootstrap template is designed to retrieve the most recent backup from S3 upon launch. Since this is the first time we are launching the Master-DB, the template will not find such a backup in S3, so the startup process will fail. This is expected and OK. In the next step we will add a backup of the Master-DB to S3.
Once the instance comes up as "stranded in booting," use the SSH Console to SSH into the Master-DB and manually apply your *.sql dump file. See the sample code below, taking into account the following variable substitutions:
wget http://<S3BUCKET>.s3.amazonaws.com/<DUMPFILE>.gz
gunzip <DUMPFILE>.gz
mysqladmin -u root create <DBSCHEMA>
nohup mysql -u root <DBSCHEMA> < <DUMPFILE> &
% curl http://s3.amazonaws.com/gregdoe
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId> (more output here...)

# Or, in the case of a typo, or if the bucket really does not exist...
% curl http://s3.amazonaws.com/gregdoe
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist</Message><BucketName>gregdoe</BucketName><RequestId>CC5CE96B2A7B47A5</RequestId><HostId>qsm8vEL7/14LavaSDKjeX2Gj0affV7gmr/hWAqWxwAFigu9EKnWHmlaHNK8p15dS</HostId></Error>
Note: In the last example above, the bucket "gregdoe" did not exist. This is often a typo, and you should confirm that you have the correct bucket name in your curl/wget command line. You can double check your DUMPFILE URL from the S3 Browser in the Dashboard. Right click on your DUMPFILE in the S3 Browser and select "Properties".
If you are not sure of the DBSCHEMA name, you can look at your DUMPFILE after getting and unzipping it:
% head <DUMPFILE>
You should see the database name. For example: Database: phptestdb
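If you would rather not eyeball the header, the schema name can be pulled out with sed. This sketch fabricates a minimal mysqldump-style header (the sample file contents and the schema name "phptestdb" are made up to match the example above) and extracts the name following "Database:":

```shell
# Create a fake dump header like the comment block mysqldump writes
DUMPFILE=$(mktemp)
cat > "$DUMPFILE" <<'EOF'
-- MySQL dump 10.11
-- Host: localhost    Database: phptestdb
-- ------------------------------------------------------
EOF
# Pull the schema name out of the "Database:" comment line
DBSCHEMA=$(head "$DUMPFILE" | sed -n 's/.*Database: \([A-Za-z0-9_]*\).*/\1/p' | head -1)
echo "$DBSCHEMA"
```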
* The wget command above uses a virtual hosted-style request. This is fine, but if your S3 bucket name contains uppercase letters, you might have difficulties. This is because of a backwards-compatibility issue: S3 has always been case sensitive, but DNS is not, so there is ambiguity in mapping bucket names to host names. If you run into this error, you will establish a connection without a problem but not find the file. You will see something similar to this:
wget http://GregDoe-Bups.s3.amazonaws.com/sample_mysql_dump.gz
--15:52:01-- http://gregdoe-bups.s3.amazonaws.com/sample_mysql_dump.gz
Resolving gregdoe-bups.s3.amazonaws.com... 126.96.36.199
Connecting to gregdoe-bups.s3.amazonaws.com|188.8.131.52|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
15:52:01 ERROR 404: Not Found.
There are two easy ways to get around this: use a path-style request instead (for example, http://s3.amazonaws.com/<BucketName>/<file>), which preserves the case of the bucket name, or create a bucket with an all-lowercase name and upload your dump file there.
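The case problem exists only because the virtual hosted-style request puts the bucket name into a (case-insensitive) DNS host name, while the path-style request keeps it in the URL path, where case is preserved. A sketch building both URL forms for the same made-up bucket and file:

```shell
BUCKET="GregDoe-Bups"         # made-up bucket name containing uppercase letters
KEY="sample_mysql_dump.gz"    # made-up object name
# Virtual hosted-style: bucket name becomes part of the host name
VHOST_URL="http://${BUCKET}.s3.amazonaws.com/${KEY}"
# Path-style: bucket name stays in the path, so its case is preserved
PATH_URL="http://s3.amazonaws.com/${BUCKET}/${KEY}"
echo "$VHOST_URL"
echo "$PATH_URL"
```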
Once you've applied your MySQL database, you should now set the application password. This can be done from the SSH console window you already have open for your Master DB instance. You will need to set the application user and password before you take the initial database backup. You only need to do this once because the values will be stored in the database and will be available to the next Master-DB that pulls from S3.
The following script shows how you can grant a particular user full read/write permissions on all databases. You may want to customize this script for your specific needs. Be sure to replace the default "user" and "password" values with your own information. You will also need to change the script's permissions in order to execute it. For example, if your script name is "setapplperms", perform steps like the following:
vi setapplperms        # type or cut/paste the bash script contents from the window below, and save your changes
chmod 755 setapplperms
./setapplperms         # when ready, run the script
Note: If you are not comfortable with creating and editing a script from a Unix secure shell (using the vi editor, for example), you can simply type in or cut/paste the commands below and run them from your SSH command prompt. In this case, ignore all lines beginning with the comment character "#" and remember to substitute your own "user" and "password" values. Also note that the window below has a horizontal scroll bar; when copying text from it, be sure to get the entire line.
#!/bin/bash -e
# Copyright (c) 2007 by RightScale Inc., all rights reserved

# Set your Application username and password here
DBAPPLICATION_USER=user
DBAPPLICATION_PASSWORD=password

# Enable remote administration
echo "GRANT ALL PRIVILEGES ON *.* TO '$DBAPPLICATION_USER'@'%' IDENTIFIED BY '$DBAPPLICATION_PASSWORD';" | mysql -u root
echo "GRANT ALL PRIVILEGES ON *.* TO '$DBAPPLICATION_USER'@'localhost' IDENTIFIED BY '$DBAPPLICATION_PASSWORD';" | mysql -u root
# Withhold the SUPER privilege from the application user
echo "REVOKE SUPER ON *.* FROM '$DBAPPLICATION_USER'@'%';" | mysql -u root
echo "REVOKE SUPER ON *.* FROM '$DBAPPLICATION_USER'@'localhost';" | mysql -u root
echo "FLUSH PRIVILEGES;" | mysql -u root
The next step is to create a backup of the Master-DB to S3 for the first time.
Go to the running instance page (Shortcut: Click the RightScale logo).
Click on MySQL Master under the Production deployment. Under the server's Scripts tab execute the "DB backup" RightScript by clicking the Run action icon.
You will see this task pending in the Events pane. It will take a few minutes to complete depending on the size of the database.
When it is "done," navigate to the S3 bucket where you put the backups and verify that a backup was saved. Go to the S3 Browser (Clouds -> AWS Global -> S3 Browser).
Click on the bucket that you created in the Create an S3 Bucket step.
You should now see a backup file of your Master-DB. Notice that the filename has the "production" prefix that you specified in the Filename Conventions inputs section along with a timestamp.
Your setup is almost complete. Now that there is a backup of the Master-DB in your S3 bucket, let's relaunch the Master-DB instance. This time it will no longer become stuck in the "stranded in booting" state because the template will now find a backup of the Master-DB database in your S3 bucket. But, since our Master-DB instance is currently running, we'll want to stop/terminate it. Click the Stop button for the MySQL Master instance.
Once the instance has been terminated, launch the Master again (You can also use the Relaunch button from the server's home page). If everything is properly configured, it will reach the "operational" state.
After the Master is up, click the Launch button for the MySQL Slave instance.
Once the MySQL Slave instance is operational, it will begin taking regular backups of the Master-DB and saving them to your S3 bucket at 10-minute intervals. Your Production Deployment now looks similar to the following:
Congratulations! You just set up a redundant MySQL database on EC2 that is saving regular backups on S3. Now that your servers are up, use the Manager for MySQL-S3 Runbook as a user guide for managing your MySQL setup.
© 2006-2014 RightScale, Inc. All rights reserved.
RightScale is a registered trademark of RightScale, Inc. All other products and services may be trademarks or servicemarks of their respective owners.