
Database Manager for MySQL 5.1/5.5 (v13.5 LTS)




Long Term Support

Stable, tested ServerTemplate assets

Overview


Configures a MySQL 5.5 database server. This ServerTemplate provides a high-availability master/slave database configuration that can be used as the backbone for a variety of applications and workloads.

The ServerTemplate supports multiple clouds, using either instance-based storage or attachable volumes. Instance-based snapshot backups are automatically uploaded to your choice of remote object storage (Amazon S3, Rackspace Cloud Files, SoftLayer Object Storage, etc.). It also includes iptables management for clouds that do not support cloud-level firewall services like security groups.

Note: For a complete list of the ServerTemplate's features and support, view the detailed description under its Info tab.

Technical Overview

Software Application Versions

  • MySQL 5.1 / MySQL 5.5.30

Database Filesystem

The ServerTemplate's recipes create a filesystem for storing data and taking LVM snapshots. If the database server is launched into a cloud that supports volumes, such as EBS volumes for EC2, volume storage is used. However, if the cloud does not support volumes (e.g., Rackspace First Generation), the filesystem is created on the instance's local ephemeral drive. In both cases, an LVM volume is created for data storage so that the filesystem functions the same regardless of whether the data is stored on the ephemeral drive, a single mounted volume, or a stripe of mounted volumes.

By default, an LVM will be created where 90% of the filesystem is used for data storage and 10% is reserved for taking LVM snapshots or restoring a database.


Note: Primary backups may fail if there is not enough space available to complete an LVM snapshot. Change the 'Percentage of the LVM used for data (1)' input accordingly depending on the size of the database and the number of database modifications that are made between backups.
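The 90/10 split from the note above works out as follows for a hypothetical 500 GB volume group. The arithmetic below is illustrative only; the `lvcreate` invocation in the comment shows how LVM can express such a split and is not the exact command the recipes run:

```shell
# Illustrative sizing for the default 90/10 LVM split on a
# hypothetical 500 GB volume group.
total_gb=500
data_gb=$(( total_gb * 90 / 100 ))   # space for the data logical volume
snap_gb=$(( total_gb - data_gb ))    # reserve for LVM snapshots/restores
echo "data: ${data_gb} GB, snapshot reserve: ${snap_gb} GB"

# With LVM itself, a 90% data volume could be created along these lines
# (requires root; shown for illustration only):
#   lvcreate -l 90%VG -n lvol0 vg-data
```

If backups fail for lack of snapshot space, shrinking the data percentage increases this reserve.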

Security and Firewall Permissions

The way you access your MySQL database server depends on your cloud provider. For Amazon EC2, CloudStack, and other clouds that support security groups, you must use security group(s) with the appropriate permissions set. See Deployment Prerequisites (Linux). For example, the database servers need to allow connections from the application servers over TCP port 3306 (the default MySQL port), as well as connections for master-slave database replication. In all clouds, regardless of whether they support security groups, iptables is used to set IP-specific port permissions. This provides finer-grained firewall control than security groups alone because access permissions are IP-specific, not group-specific.


For example, the database server only listens for requests from servers defined in its iptables configuration file. Remember that in most cloud infrastructures you cannot control which private IP address is assigned to an instance. (e.g., in EC2, instances are randomly assigned a private IP address at launch time.)

Before an application (e.g., PHP) can perform an action on a database (e.g., create a new record), the servers must first be granted access at the network level; only then can an application-level request succeed.
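At the network level, such a grant amounts to an iptables rule like the one constructed below. The IP address is a hypothetical application server private IP, and the script only prints the rule rather than applying it, since applying it requires root privileges on the database server:

```shell
# Construct (but do not apply) an iptables rule that accepts MySQL
# traffic from a single application server's private IP address.
app_ip="10.0.0.12"   # hypothetical application server private IP
rule="iptables -A INPUT -p tcp -s ${app_ip} --dport 3306 -j ACCEPT"
echo "$rule"
```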


Once the database server has updated its permissions to allow access between the application and database tiers, the application can connect to the database using the required information. The application locates the "master" database server using the 'Database Master FQDN' input. It then accesses the database defined by the 'Database Schema Name' input using the username and password of a database user with the appropriate access privileges, as specified by the 'Database Application Username' and 'Database Application Password' inputs.

Similarly, the Slave-DB will start replicating with the Master-DB by logging into the MySQL database as a user with replication privileges using credentials specified by the 'Database Replication Username' and 'Database Replication Password' inputs.
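Put together, the application-level connection corresponds to a mysql client invocation like the one constructed below. All values are hypothetical stand-ins for the inputs named above; the command is only printed, since running it requires a live database server and the application password:

```shell
# Construct (but do not run) the application's connection command.
# All values are hypothetical stand-ins for the corresponding inputs:
master_fqdn="master-db.example.com"   # 'Database Master FQDN'
app_user="app_user"                   # 'Database Application Username'
schema="app_test"                     # 'Database Schema Name'
# -p (with no attached value) prompts for 'Database Application Password'
cmd="mysql -h ${master_fqdn} -u ${app_user} -p ${schema}"
echo "$cmd"
```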



The following scenario explains how iptables and tags are used to configure secure firewall permissions in the cloud.

  1. A master database server (Master-DB) is launched and becomes operational. Since this particular instance is launched in a cloud that supports security groups, it uses a security group ('3tier-db') that accepts traffic over the private network on TCP port 3306. In this scenario, the database server accepts requests to its private IP address from instances launched with the '3tier-app' and '3tier-db' security groups. These permissions allow the Master-DB to accept inbound requests from application servers and slave database servers (Slave-DB).
  2. Connectivity between an application server and the Master-DB server is established by using tags. When an application server is launched, the 'db::request_appserver_allow' (boot) script makes a request to the Master-DB to accept its requests. The Master-DB server is identified by the "master" database server tag ('rs_dbrepl:master_instance_uuid=<RS_UID>'). If the application server has the appropriate tag ('appserver:active=true') in the same deployment, a script is run on the Master-DB server, which updates its iptables to allow connections on TCP port 3306 over the private network from the application server's private IP address. The Master-DB server will have an audit entry for the 'completed: db::do_appservers_allow' script that confirms the modification to its firewall permissions. In this scenario, both application servers are granted access to the Master-DB server.

    The Master-DB server will automatically look for other application servers within its deployment that have the appropriate "app" server tag and update its iptables accordingly, so that as the pool of application servers changes over time, each application server will have access to the Master-DB server. By default, the 'db::do_appservers_allow' script is periodically executed every 15 minutes to make sure that all application servers can connect to the Master-DB server. You can see the default setting in the database ServerTemplate's 'Reconverge List' input.
  3. Similarly, when you launch a Slave-DB server, the 'db::do_primary_init_slave' operational script will make a request to the Master-DB server to allow connections on TCP port 3306 from the Slave-DB server's private IP address for replication purposes. The values for the 'Database Replication Username' and 'Database Replication Password' inputs are used for authentication purposes. You can see details about the Master-DB server's iptables under the 'completed: sys_firewall::setup_rule' audit entry after successfully initializing the Slave-DB with the Master-DB.
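On the MySQL side, the replication user referenced by those inputs needs the standard REPLICATION SLAVE privilege. A grant along the following lines is what makes step 3 possible; the username, host, and password shown are hypothetical stand-ins:

```sql
-- Hypothetical grant on the Master-DB giving the replication user
-- (the 'Database Replication Username'/'Password' inputs) the standard
-- REPLICATION SLAVE privilege; the host is the Slave-DB's private IP.
GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'10.0.0.20'
  IDENTIFIED BY 'repl_password';
FLUSH PRIVILEGES;
```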


If you need to open other ports on a database server besides the default TCP ports (22, 80, 443, and 3306), you must create the appropriate rules using the "Firewall" inputs, as described in the Base ServerTemplate for Linux (Chef) documentation.

Note: For more information about iptables, refer to the Linux supporting documentation for this tool.

Database Dump Retrieval

The ServerTemplate contains scripts that can retrieve a MySQL database dump file from one of the following Remote Object Storage (ROS) locations:

  • Amazon S3
  • Rackspace Cloud Files (US or UK)
  • Google Cloud Storage
  • Microsoft Azure Blob Storage
  • OpenStack Object Storage (swift)
  • SoftLayer Object Storage

Use the following inputs to provide the required information to retrieve the dump file from the ROS container. 

  • Dump Container
  • Dump Prefix
  • Dump Storage Account Provider
  • Dump Storage Account ID*
  • Dump Storage Account Secret*
  • Database Schema Name

* If the dump file is a 'private' object in the ROS container, you must provide valid cloud storage credentials for authentication purposes.


For example, if my dump file (app_test-20121603072614.gz) is a 'private' object located in an Amazon S3 bucket called 'projectx-db', my inputs would look like the following.

  • Dump Container - text:projectx-db
  • Dump Prefix - text:app_test   (Make sure the name of the dump file does not have a dash (-) in its prefix. Rename the dump file, if necessary.)
  • Dump Storage Account Provider - text:s3
  • Dump Storage Account ID* - cred:AWS_ACCESS_KEY_ID
  • Dump Storage Account Secret* - cred:AWS_SECRET_ACCESS_KEY
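The no-dash rule for the prefix makes sense if the object name is split on its first dash to separate the prefix from the timestamp (an assumption based on the example filename above, not a confirmed implementation detail). The standard shell parameter expansion below shows why a dash inside the prefix would break such a split:

```shell
# The dump object name is '<prefix>-<timestamp>.gz'; splitting on the
# first dash recovers the prefix. (That the scripts split this way is
# an assumption inferred from the example filename.)
dump="app_test-20121603072614.gz"
prefix="${dump%%-*}"   # strip everything from the first dash onward
echo "$prefix"

# A prefix containing a dash would be truncated at the wrong place:
bad="app-test-20121603072614.gz"
echo "${bad%%-*}"
```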

Block Devices

The ServerTemplates support the use of multiple block devices where each block device can consist of a single volume or stripe of volumes. Although most users will only use one block device, some will find it useful to use two block devices. For example, you might use the first block device, which consists of a stripe of volumes for the main database, and the second block device for storing transactional log files in a single volume. Be sure to name the inputs accordingly depending on the block device. Related block device inputs will include the block device number in parentheses. (e.g. 'Total Volume Size (1)') By default, the ServerTemplates include inputs for up to two block devices, but if you need additional block devices, see Increase the Number of Block Devices.

Backup and Storage

The Database Manager for MySQL ServerTemplates published by RightScale support the use of primary and secondary storage locations for creating backups of databases.

  • Primary Backups - By default, a cron job is created that takes backups of the database on a regular basis. If you have a mirrored setup, primary backups will be taken once per hour on the slave database server and only once per day on the master database server. Primary backups are saved as volume snapshots. However, if volume snapshots are not supported by the cloud provider, you must use an ROS service for your primary backups. 
  • Secondary Backups - (Optional) If desired, you can set up the database server(s) to take a secondary backup to a supported ROS service. Secondary backups are always saved as binary dump files to a specified ROS container. You cannot use volume snapshots for secondary backups. Typically, you take a secondary backup manually when migrating your database to a different cloud/region, since volume snapshots are generally cloud/region-specific. However, you can also set up a cron job for periodically taking secondary backups, which is useful in cloud-level disaster recovery scenarios where you can't restore your database because there is a problem accessing or provisioning the primary backups that are saved as volume snapshots.





Primary Backup Storage

Primary backups are designed to take the most efficient backup of your database based upon the best storage solution that's supported by the cloud provider in which your server is running. When you launch a database server and run an operational script to initialize it as either a master or slave, the operational script schedules a cron job to save backups as the primary storage type, which is automatically selected for you based on your cloud provider. For clouds that support volume-based storage and volume snapshots, primary backups are always saved as volume snapshots.

Below is a list of each supported cloud and its default primary backup storage solution. For clouds that do not support volume snapshots, you can use any ROS solution listed under the 'Primary Backup Storage Cloud (default)' input.

  • Volume Snapshots
    • Amazon EC2
    • Azure
    • CloudStack (w/ volumes -- Xen hypervisor)
    • Datapipe
    • Rackspace Open Cloud
  • Remote Object Storage (ROS)
    • CloudStack (w/o volumes -- KVM hypervisor)
    • Google
    • HP
    • OpenStack (Folsom) 
    • Rackspace First Generation
    • Rackspace Private
    • SoftLayer

By default, the primary backup cron job runs once per hour on each "slave" server and once per day on the "master" server.

For clouds that support volumes, use the 'Nickname' input to name the volumes that are attached to the servers. By default, the input is set to "data_storage," so if you have multiple deployments within a RightScale account, you should use a unique name for this input for organizational purposes. Otherwise, you will have to filter by tag to find the volumes that are specific to a deployment.

Secondary Backup Storage

Secondary backup storage is designed to take backups to a remote cloud/region in the case of a cloud outage or database migration to another cloud provider. The Database Manager for MySQL 5.1/5.5 ServerTemplates also let you select a secondary backup location from the following available public-cloud remote object storage (ROS) providers:

  • Amazon S3
  • Rackspace Cloud Files (US)
  • Rackspace Cloud Files (UK)
  • Google
  • Azure
  • Swift (OpenStack)
  • HP
  • SoftLayer_Dallas
  • SoftLayer_Singapore
  • SoftLayer_Amsterdam

Data is stored as gzip files (objects) in named ROS containers. Files can be stored as private or public objects; by default, they are stored as private objects that are only accessible to servers launched with the same cloud credentials.

While secondary backups are typically slower and more resource-intensive than volume-based primary backups, they are recommended and useful in disaster-recovery or cloud migration scenarios where you need to launch the database servers in a different cloud/region.

By default, secondary backups are performed manually (as needed) by running the db::do_secondary_backup operational script, as described in the Database Manager for MySQL 5.1/5.5 (v13.5 LTS) - Runbook.

Continuous Backup Policy

Primary Backups

By default, a cron job is configured on each database server once it's initialized as either a master or slave to take continuous backups.

  • Primary backups are automatically taken once per hour on all slave database servers.
  • Primary backups are automatically taken once per day on the master database server.

The hour and minute of the backup are randomly generated at launch time, unless you use the following inputs to specify a more convenient time in standard crontab format.

  • Master Backup Cron Hour
  • Master Backup Cron Minute
  • Slave Backup Cron Hour
  • Slave Backup Cron Minute
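In standard crontab format, those inputs fill the minute and hour fields of the schedule entry. The sketch below shows the kind of entries that result; the randomized defaults mimic the launch-time behavior described above, and the backup command path is a placeholder, not the actual script the ServerTemplate schedules:

```shell
# Build crontab-style schedule entries for the default backup policy.
# The hour/minute are randomized the way a launch-time default might
# be; /usr/bin/db_backup is a placeholder command.
master_min=$(( RANDOM % 60 ))
master_hour=$(( RANDOM % 24 ))
# Master: once per day at a fixed hour and minute.
master_entry="${master_min} ${master_hour} * * * /usr/bin/db_backup"
# Slave: once per hour, at the same minute past every hour.
slave_entry="${master_min} * * * * /usr/bin/db_backup"
echo "$master_entry"
echo "$slave_entry"
```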

Secondary Backups

By default, secondary backups are taken manually. However, if you want to create a cron job for taking secondary backups, use the following inputs to configure a time for taking secondary backups. Note: You do not have to create a cron job for taking secondary backups on both the master and slave database servers.

Warning! Unlike primary backups, which are saved as volume snapshots and have an enforced backup retention policy that keeps the number of archived snapshots manageable, there is no automatic clean-up of secondary backups. Therefore, if you have a large database, you may want to define a process for deleting older secondary backups in order to reduce your monthly ROS data storage charges. You are responsible for manually deleting any secondary backups that you no longer wish to keep.

  • Master Secondary Backup Cron Hour
  • Master Secondary Backup Cron Minute
  • Slave Secondary Backup Cron Hour
  • Slave Secondary Backup Cron Minute
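Because nothing prunes secondary backups automatically, any retention policy has to be scripted. The sketch below operates on a plain-text listing of hypothetical dump object names, keeping only the newest N; the actual delete command depends on your ROS provider's tooling and is left as a comment (`head -n -N` is GNU coreutils behavior):

```shell
# Keep only the newest N secondary backups from a listing of
# hypothetical dump object names; everything older is a deletion
# candidate. Lexicographic sort is chronological here because the
# names share a prefix and end in a timestamp.
backups="app_test-20140103.gz
app_test-20140101.gz
app_test-20140104.gz
app_test-20140102.gz"
keep=2
to_delete=$(printf '%s\n' "$backups" | sort | head -n -"$keep")
printf '%s\n' "$to_delete"
# for obj in $to_delete; do <ros-delete-command> "$obj"; done
```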

MySQL Tuning Parameters

The most common customization of a MySQL database server is tuning the MySQL parameters to better suit your own database. The MySQL ServerTemplate uses a default set of MySQL tuning parameters that is typically sufficient for most common MySQL databases. However, as you become more familiar with the performance characteristics of your database, you may want to modify some of these parameters accordingly.

Note: The locations below apply to the following revision of the ServerTemplate: [rev 8]

MySQL Tuning Parameters (Default)

The default MySQL tuning parameters.


'my.cnf' file

The complete MySQL configuration file.

To learn how to override the default settings, see Override Chef Cookbooks.
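For orientation, a few of the parameters most commonly tuned in a my.cnf file are shown below. The values are hypothetical examples, not the ServerTemplate's defaults (those live in the locations listed above), except for the log expiry value, which matches the two-day default mentioned later in this document:

```ini
# Hypothetical examples of commonly tuned my.cnf parameters;
# these values are NOT the ServerTemplate defaults.
[mysqld]
innodb_buffer_pool_size = 1G
max_connections         = 300
query_cache_size        = 64M
expire_logs_days        = 2
```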

Secure Replication using SSL

This ServerTemplate supports replication between master and slave database servers located in different clouds/regions. In such use cases, replication must be performed over the public network. Therefore, it's strongly recommended that you set up secure replication using SSL to encrypt the transfer of the data and logs. 

Use the following inputs to configure both the master and the slave database servers to support SSL network connections.

  • CA SSL Certificate
  • Master SSL Certificate
  • Master SSL Key
  • Slave SSL Certificate
  • Slave SSL Key
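On the slave side, those certificate and key inputs correspond to MySQL 5.5's standard SSL replication options. A statement along these lines, run on the slave, is how such a configuration is typically expressed; the host, credentials, and certificate paths below are illustrative stand-ins:

```sql
-- Hypothetical CHANGE MASTER TO statement run on the slave, mapping
-- the SSL inputs to MySQL 5.5's standard SSL replication options.
-- Host, credentials, and certificate paths are illustrative.
CHANGE MASTER TO
  MASTER_HOST     = 'master-db.example.com',
  MASTER_USER     = 'repl_user',
  MASTER_PASSWORD = 'repl_password',
  MASTER_SSL      = 1,
  MASTER_SSL_CA   = '/etc/mysql/certs/ca.crt',
  MASTER_SSL_CERT = '/etc/mysql/certs/slave.crt',
  MASTER_SSL_KEY  = '/etc/mysql/certs/slave.key';
START SLAVE;
```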

To learn how to set up a multi-cloud database setup using SSL, see MySQL Database Replication Across Clouds - Tutorial.


Logging

By default, log data files expire after two (2) days (as defined by the 'MySQL Expire Logs Days' input). Logs are either saved locally on the ephemeral disk (default) or sent to a dedicated logging server or service. See Logging with rsyslog (v13.5 LTS).

Last modified
09:16, 7 Oct 2014







© 2006-2014 RightScale, Inc. All rights reserved.
RightScale is a registered trademark of RightScale, Inc. All other products and services may be trademarks or servicemarks of their respective owners.