
Database Manager for PostgreSQL 9.1 (v13.5 LTS) - Runbook


After successfully setting up your PostgreSQL database server in the cloud using the Database Manager for PostgreSQL 9.1 (v13.5 LTS) - Tutorial, you may need to perform the following common administrative operations.

NOTE: When performing the following operations, any inputs included in the block_device cookbook, if set or changed after the server launch, will not take effect until you manually run the block_device::default recipe (under "Boot Scripts") for the server. See RightScale Cookbook Design Conventions for more information.

Common Operational Tasks

Create a Primary Backup

If the primary backup schedule is enabled, a cron job is configured to periodically take primary backups. However, you can also manually take a primary backup at any time. For example, you may want to take a primary backup of your database before you perform some modifications to its schema.

If volume snapshots are supported by the cloud provider of the instances, primary backups are saved as volume snapshots. If volume snapshots are not supported, primary backups are saved as binary backup files to a specified container in a supported Remote Object Storage (ROS) service.

Correct values must be specified for the following inputs. Typically, these inputs were set before you launched the server. However, if you made any changes to the "current" server's inputs prior to launching the instance, you will need to run the block_device::default script to resolve any cookbook dependencies and apply any new/modified input values.

Input Name Description Example Value
Primary Backup Storage Cloud (default) If volume snapshots are not supported.

The cloud provider of the specified ROS container where the primary backup will be stored. 

  • s3 - Amazon S3
  • Cloud_Files - Rackspace Cloud Files (United States)
  • Cloud_Files_UK - Rackspace Cloud Files (United Kingdom)
  • google - Google Cloud Storage
  • azure - Microsoft Azure Blob Storage
  • swift - OpenStack Object Storage (Swift)
  • hp - Hewlett Packard Cloud Object Storage
  • SoftLayer_Dallas - SoftLayer's Dallas (USA) cloud
  • SoftLayer_Singapore - SoftLayer's Singapore cloud
  • SoftLayer_Amsterdam - SoftLayer's Amsterdam cloud
text:  s3
Primary Backup Storage Cloud Endpoint URL (default) If volume snapshots are not supported.

The endpoint URL for the primary backup storage cloud. You must specify this value for Swift-based (OpenStack) ROS services.

This URL is used to set the default endpoint for making API requests to the specified ROS service. It is typically not required for public clouds because the endpoint is already known. However, if you are using a private cloud (e.g., OpenStack) where you've set up a local object storage service (e.g., Swift), you must provide this value so that the script knows where to make the API request.

Example: http://endpoint_ip:5000/v2.0/tokens

text: http://endpoint_ip:5000/v2.0/tokens
Primary Backup Secret (default) If volume snapshots are not supported.

Required cloud credential to store a file in the specified ROS location. 

  • Amazon S3 - AWS Secret Access Key (e.g. cred: AWS_SECRET_ACCESS_KEY)
  • Rackspace Cloud Files - Rackspace Account API Key (e.g. cred: RACKSPACE_AUTH_KEY)
  • Google Cloud Storage - Google Secret Access Key (e.g. cred: GOOGLE_SECRET_ACCESS_KEY)
  • Microsoft Azure Blob Storage - Microsoft Primary Access Key (e.g. cred: AZURE_PRIMARY_ACCESS_KEY)
  • Swift - OpenStack Object Storage (Swift) Account Password (e.g. SWIFT_ACCOUNT_PASSWORD)
  • HP - HP Secret Access Key (e.g. cred: HP_SECRET_ACCESS_KEY)
  • SoftLayer Object Storage - SoftLayer API Access Key (e.g. cred: SOFTLAYER_API_KEY)
cred:  AWS_SECRET_ACCESS_KEY
Primary Backup User (default) If volume snapshots are not supported.

Required cloud credential to store a file in the specified ROS location. 

  • Amazon S3 - Amazon Access Key ID (e.g. cred: AWS_ACCESS_KEY_ID)
  • Rackspace Cloud Files - Rackspace login username (e.g. cred: RACKSPACE_USERNAME)
  • Google Cloud Storage - Google Access Key (e.g. cred: GOOGLE_ACCESS_KEY_ID)
  • Microsoft Azure Blob Storage - Azure Storage Account Name (e.g. cred: AZURE_ACCOUNT_NAME)
  • Swift - OpenStack Object Storage (Swift) Account ID (tenantID:username)  (e.g. SWIFT_ACCOUNT_ID)
  • HP - HP Object Storage Account ID (account number:tenantID) (e.g. cred: HP_ACCESS_KEY_ID)
  • SoftLayer Object Storage - Username of a SoftLayer user with API privileges (e.g. cred: SOFTLAYER_USER_ID)
cred:  AWS_ACCESS_KEY_ID
Database Backup Lineage

The name that will be associated with your primary database backups. Although you will most likely want to use the same lineage name, you can specify a different value to create a backup with a different lineage name, which is useful if you want to create a backup that is not associated with your original lineage.

 

If volume snapshots are not supported.

Primary backups are saved to an ROS container that matches the value specified for this input. If the container does not exist, a new container will be created using the lineage name in the default ROS region. (S3: us-east, Cloud Files: Dallas)  The script will fail if a container cannot be created, which may occur in ROS services where container names use a global namespace and a container with that name already exists. (e.g. Amazon S3)

Tip: If you want the container to be in a specific region for performance reasons, you should create the container before launching any servers.
text: mylineage

 

  1. Run the block_device::default operational script on all the database servers in the deployment so that any changes that you made to the inputs above will be applied to the instance before you create a primary backup.
  2. Run the db::do_primary_backup operational script on the database server where you're going to take the primary backup. It's recommended that you use the slave database server to create the primary backup (if available) so that you do not affect the performance of the master database server.

 


Create a Secondary Backup

You can manually take a secondary backup to a specified ROS location (such as Amazon S3 or Rackspace Cloud Files), which you can use for cloud failover scenarios or cloud migrations. 

Specify values for the following inputs at the deployment level. However, if you want to apply the changes to a running server, you must update the values under the "current" server's Inputs tab.

Input Name Description Example Value
Secondary Backup Storage Cloud (default)

The cloud provider of the specified ROS container where the secondary backup will be stored.

  • s3 - Amazon S3
  • Cloud_Files - Rackspace Cloud Files (United States)
  • Cloud_Files_UK - Rackspace Cloud Files (United Kingdom)
  • google - Google Cloud Storage
  • azure - Microsoft Azure Blob Storage
  • swift - OpenStack Object Storage (Swift)
  • hp - Hewlett Packard Cloud Object Storage
  • SoftLayer_Dallas - SoftLayer's Dallas (USA) cloud
  • SoftLayer_Singapore - SoftLayer's Singapore cloud
  • SoftLayer_Amsterdam - SoftLayer's Amsterdam cloud
text:  s3
Secondary Backup Secret (default)

Required cloud credential to store a file in the specified ROS location.

  • Amazon S3 - AWS Secret Access Key (e.g. cred: AWS_SECRET_ACCESS_KEY)
  • Rackspace Cloud Files - Rackspace Account API Key (e.g. cred: RACKSPACE_AUTH_KEY)
  • Google Cloud Storage - Google Secret Access Key (e.g. cred: GOOGLE_SECRET_ACCESS_KEY)
  • Microsoft Azure Blob Storage - Microsoft Primary Access Key (e.g. cred: AZURE_PRIMARY_ACCESS_KEY)
  • Swift - OpenStack Object Storage (Swift) Account Password (e.g. SWIFT_ACCOUNT_PASSWORD)
  • HP - HP Secret Access Key (e.g. cred: HP_SECRET_ACCESS_KEY)
  • SoftLayer Object Storage - SoftLayer API Access Key (e.g. cred: SOFTLAYER_API_KEY)
cred:  AWS_SECRET_ACCESS_KEY
Secondary Backup User (default)

Required cloud credential to store a file in the specified ROS location. 

  • Amazon S3 - Amazon Access Key ID (e.g. cred: AWS_ACCESS_KEY_ID)
  • Rackspace Cloud Files - Rackspace login username (e.g. cred: RACKSPACE_USERNAME)
  • Google Cloud Storage - Google Access Key (e.g. cred: GOOGLE_ACCESS_KEY_ID)
  • Microsoft Azure Blob Storage - Azure Storage Account Name (e.g. cred: AZURE_ACCOUNT_NAME)
  • Swift - OpenStack Object Storage (Swift) Account ID (tenantID:username)  (e.g. SWIFT_ACCOUNT_ID)
  • HP - HP Object Storage Account ID (account number:tenantID) (e.g. cred: HP_ACCESS_KEY_ID)
  • SoftLayer Object Storage - Username of a SoftLayer user with API privileges (e.g. cred: SOFTLAYER_USER_ID)
cred:  AWS_ACCESS_KEY_ID
Secondary Backup Storage Container (1)  The name of the ROS container where the secondary backups will be saved to or restored from. If undefined, secondary backups will be saved to a container name that matches the value specified for the 'Database Backup Lineage' input. If the container does not exist, a new container will be created using the lineage name in the default ROS region. (S3: us-east, Cloud Files: Dallas) The script will fail if a container cannot be created, which may occur in ROS services where container names use a global namespace and a container with that name already exists. (e.g. Amazon S3) text:  mycontainer
Database Backup Lineage

The name that will be associated with your secondary database backups. Although you will most likely want to use the same lineage name that you're using to take primary backups, you can specify a different value to create a backup with a different lineage name, which is useful if you want a backup that is not associated with your original lineage. 

 

If a value is not specified for the 'Secondary Backup Storage Container (1)' input, backups will be saved to a container that matches the lineage name. If the container does not exist, a new container will be created using the lineage name in the default ROS region. (S3: us-east, Cloud Files: Dallas) The script will fail if a container cannot be created, which may occur in ROS services where container names use a global namespace and a container with that name already exists. (e.g. Amazon S3)

text: mylineage
Secondary Backup Storage Cloud Endpoint URL (default) If you set the 'Secondary Backup Storage Cloud (default)' input to 'swift', you must specify the endpoint URL of the ROS location where the secondary backup will be stored. text: http://endpoint_ip:5000/v2.0/tokens

 

  1. Run the block_device::default operational script on all the database servers in the deployment.  
  2. Run the db::do_secondary_backup operational script on the database server where you're going to take the secondary backup. It's recommended that you use the slave database server to create the secondary backup (if available) so that you do not affect the performance of the master database server.

Initialize a Slave Database Server

Prerequisite: An operational master database server

You can either launch a server that initializes itself as a slave of the master database server at boot time (by setting the Init Slave at Boot input to "true"), or you can set it up manually at runtime by using one of the operational scripts described below.

When you initialize a slave of a master database server, the most recently completed primary/secondary backup is used to set up the slave database server. Typically, you will use the same database backup lineage name as the master database server, and the slave will be initialized from the most recent backup of the "master" server. You can use the override inputs to initialize the database server using a different backup.

Set values for the following input on both master and slave database servers.

Input Name Description Example Value
Database Backup Lineage The database lineage name. This input will be used to find the most recently completed primary backup. text: mylineage
Database Restore Backup Lineage Override

Use this input to initialize the slave database server with a different backup lineage than what is defined for the 'Database Backup Lineage' input.

Leave this input set to "No value/ignore" if you do not want to override the default input.

text:  myotherlineage
Database Restore Backup Timestamp Override

By default, the most recently completed primary/secondary backup will be used to set up the database on the slave database server. Use this input to use an older backup instead by specifying the desired timestamp based on the snapshot's tag. For example, if the snapshot's tag is 'rs_backup:timestamp=1350681693', specify 1350681693 for this input.

Leave this input set to "No value/ignore" if you do not want to override the default input.

text: 1350681693

Database Replication Password

Database Replication Username

Username and password of a database user with replication permissions on the PostgreSQL server. The replication username and password are used for replication between the "master" and "slave" database servers.

cred: DBREPLICATION_PASSWORD

cred: DBREPLICATION_USER
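The 'Database Restore Backup Timestamp Override' values above are Unix (epoch) seconds taken from the snapshot's rs_backup:timestamp tag. A quick way to check which backup a given tag selects (a standalone sketch, not part of the cookbook) is to convert the value to UTC:

```python
from datetime import datetime, timezone

# rs_backup:timestamp tags hold Unix (epoch) seconds; convert one to a
# human-readable UTC time to confirm which backup the override selects.
tag = "rs_backup:timestamp=1350681693"
epoch = int(tag.split("=", 1)[1])
taken_at = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(taken_at.strftime("%Y-%m-%d %H:%M:%S UTC"))  # 2012-10-19 21:21:33 UTC
```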

In order to initialize a running server to become a "slave" of a running "master" database server, you must have a completed primary or secondary backup that was taken from the current master database server.

  • If necessary, take a primary/secondary backup on the "master" database server. Wait until the backup is 100% complete before running one of the scripts below on the server that you're going to initialize as a new slave database server, choosing the script that matches the type of backup you just created.
  • To initialize a slave by using a primary backup, run the db::do_primary_init_slave operational script on the operational database server that you want to become a slave of the running master database server.
  • To initialize a slave by using a secondary backup, run the db::do_secondary_init_slave operational script on the operational database server that you want to become a slave of the running master database server. 

 


Enable Continuous Primary Backups or Change the Primary Backup Policy

Run the following script to enable primary backups. The script will add a cron job that's configured to take periodic primary backups from the master and slave database servers according to the defined backup policy. The default settings of the primary backup policy are highlighted below.

Frequency

  • For a master database server, once per day, at a randomly assigned hour
  • For a slave database server, once per hour, with an offset of 30 minutes from the master backup time


Retention

  • Only keep 60 backups total
  • Keep 14 daily backups
  • Keep 6 weekly backups
  • Keep 12 monthly backups
  • Keep 2 yearly backups


You may find it useful to use the frequency ("Cron") inputs to control exactly when primary backups are created, so that they are taken during non-peak times. It's recommended that you stagger backup start times and avoid running backups at the start of an hour, which helps prevent all backups in the cloud from starting at the same time.

The retention inputs only apply to servers launched in clouds where volume snapshots are supported. If primary backups are saved to an ROS container, the retention inputs do not apply. If you do not want to keep all primary backups, you must either delete them manually or find an alternative method. For more information on configuring a backup policy along with specific implications for Amazon EC2 environments, see Archiving of EBS Snapshots.

Note: The backup policy scripts follow standard cron format. To back up every hour, use "*". To back up every 4 hours, use "*/4", etc. For more information about proper cron syntax, see this article.
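To sanity-check a crontab field before entering it into the inputs below, the expansion can be sketched in a few lines of Python. This is a simplified illustration that handles "*", "*/N", and comma lists only; it is not the cookbook's actual parser:

```python
def expand_cron_field(field, minimum=0, maximum=23):
    """Expand a simple crontab field ("*", "*/4", "11,23") into the
    concrete values it matches within [minimum, maximum]."""
    values = []
    for part in field.split(","):
        part = part.strip()
        step = 1
        if "/" in part:  # step syntax, e.g. "*/4"
            part, step_text = part.split("/")
            step = int(step_text)
        candidates = range(minimum, maximum + 1) if part == "*" else [int(part)]
        values.extend(v for v in candidates if (v - minimum) % step == 0)
    return sorted(set(values))

print(expand_cron_field("*/4"))   # [0, 4, 8, 12, 16, 20]
print(expand_cron_field("11,23")) # [11, 23]
```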

Set values for the following inputs on both master and slave database servers.

Input Name Description Example Value
Database Backup Lineage The database lineage name for which you want to enable continuous primary backups. text: mylineage
Master Backup Cron Hour

For the backups of the master database server, the hour of day to run the backup cron job, in crontab format. For example, specify 11,23 if you want backups to run daily at 11:00 AM and 11:00 PM. Or specify 23 to take a backup at 11:00 PM.

  • Once per day (default)
text:  11,23
Master Backup Cron Minute

For backups of the master database server, used with "Master Backup Cron Hour" to determine when to run the backup cron job. For example, if "Master Backup Cron Hour" is 23 and "Master Backup Cron Minute" is 15, the master database backups run daily at 11:15 PM. Leave blank to randomly assign the minute value at launch time.

text:  15
Slave Backup Cron Hour

If this input is blank, the backup cron job creates primary backups of the slave database server once per hour, either at a randomly assigned minute or one specified in "Slave Backup Cron Minute." If you enter a value here, enter it in crontab format—for example, */2 to run the backup every 2 hours.

  • Once per hour (default)
text:  */2
Slave Backup Cron Minute The minute of the hour when the cron job runs to back up the slave database server. By default, slave server backups occur once per hour, at a minute randomly assigned at launch time. A 30 minute offset from the master server's backup cron minute is also applied. However, you can assign a specific minute here, if needed. text:  45
Backup Max Snapshots Quantity of chronologically recent backups to keep for the lineage, in addition to those maintained by your backup rotation. (Default is 60.) text:  60
Keep Daily Backups

Quantity of daily backups to keep for the lineage. (Default is 14.) A daily backup is the last completed backup snapshot of the day with a timestamp closest to the end of the day (23:59:59).

text:  14

Keep Monthly Backups Quantity of monthly backups to keep for the lineage. (Default is 12.) A monthly backup is the last completed backup snapshot for a given month, with a date and timestamp closest to 23:59:59 on the last day of the month. text:  12
Keep Weekly Backups Quantity of weekly backups to keep for the lineage. (Default is 6.) A weekly backup is the last completed backup snapshot for a given week, with a date and timestamp closest to 23:59:59 on Saturday. text:  6
Keep Yearly Backups Quantity of yearly backups to keep for the lineage. (Default is 2.) A yearly backup is the last completed backup snapshot for a given year, with a date and timestamp closest to 23:59:59 on December 31. text:  2
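The daily/weekly/monthly/yearly selection rule described above can be illustrated with a short sketch (an illustration of the stated rule, not the actual rotation script): for each period, the retained backup is the completed snapshot whose timestamp is closest to the end of that period.

```python
from datetime import datetime

# Completed backup snapshots (illustrative sample data).
snapshots = [
    datetime(2013, 5, 1, 3, 0),
    datetime(2013, 5, 1, 22, 30),
    datetime(2013, 5, 2, 1, 15),
]

# The "daily backup" for each day is the snapshot with the latest
# timestamp that day (i.e., closest to 23:59:59).
daily = {}
for snap in snapshots:
    day = snap.date()
    if day not in daily or snap > daily[day]:
        daily[day] = snap

print(daily[datetime(2013, 5, 1).date()])  # 2013-05-01 22:30:00
```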

 

  1. Run the block_device::default operational script on both master and slave database servers.
  2. Run the db::do_primary_backup_schedule_enable operational script on both master and slave database servers.

 


Disable Continuous Primary Backups

Run the following script to disable primary backups. The script will remove the cron job that's configured to take periodic primary backups from the master and slave database servers according to the defined backup policy. 

Set values for the following input on both master and slave database servers.

Input Name Description Example Value
Database Backup Lineage The database lineage name for which you want to disable continuous primary backups. text: mylineage

 

  1. Run the db::do_primary_backup_schedule_disable operational script on both master and slave database servers.

 


Restore a Database from a Primary Backup

If you have a completed primary backup of the database that was created using the ServerTemplate's scripts, you can restore your database setup by launching a new "master" or standalone database server with a previously completed primary backup. 

  1. Make sure you have a completed primary backup. The completed backup can either be taken from a master or slave database server. 
  2. (Optional) When you restore a database from a backup, you have the option to increase the total size of the volume stripe or local disk; however, you cannot change the volume stripe count. If you want to grow the size of the database, specify a larger value for the Total Volume Size (1) input.
Input Name Description Example Value
Total Volume Size (1)

Specify the total size, in GB, of the volume or striped volume set used for primary storage. If dividing this value by the stripe volume quantity does not yield a whole number, then each volume's size is rounded up to the nearest whole integer. For example, if "Number of Volumes in the Stripe" is 3 and you specify a "Total Volume Size" of 5 GB, each volume will be 2 GB.

If deploying on a CloudStack-based cloud that does not allow custom volume sizes, the smallest predefined volume size is used instead of the size specified here. This input is ignored for clouds where volume snapshots are not supported. (e.g., Rackspace First Generation).

Important! The value for this input does not describe the actual amount of space that's available for data storage because a percent (default: 10%) is reserved for taking LVM snapshots. Use the 'Percentage of the LVM used for data (1)' input to control how much of the volume stripe is used for data storage. Be sure to account for additional space that will be required to accommodate the growth of your database.

text:100

 

NOTE: For Rackspace Open Cloud, the minimum volume size is 100 GB
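The sizing arithmetic described for 'Total Volume Size (1)' can be sketched as follows. This is an illustration only; the variable names are paraphrased, not actual cookbook attributes, and the 90% data percentage reflects the default 10% LVM snapshot reserve:

```python
import math

total_volume_size_gb = 5  # "Total Volume Size (1)" input
stripe_count = 3          # "Number of Volumes in the Stripe" input
data_percentage = 90      # 100% minus the default 10% LVM snapshot reserve

# Each volume's size is rounded up to the nearest whole integer.
per_volume_gb = math.ceil(total_volume_size_gb / stripe_count)

# Space actually available for data, after the LVM snapshot reserve.
usable_for_data_gb = total_volume_size_gb * data_percentage / 100

print(per_volume_gb)       # 2 (each of the 3 volumes is 2 GB)
print(usable_for_data_gb)  # 4.5 (GB available for data)
```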

  1. Launch a new database server. Make sure the INIT_SLAVE_AT_BOOT input is set to False.
  2. Once it becomes operational, run the appropriate script below to restore the database. By default, the most recently completed backup based on the "Database Backup Lineage" input will be used. To use a backup from a different lineage and/or an older backup, use the "override" inputs in the table below.
Input Name Description Example Value
Database Backup Lineage

The prefix that will be used to name new primary and secondary backups.

Warning! If you are restoring a database, you may want to change this input so that primary/secondary backups of this database will not conflict with the original backups that may apply to a production environment.

text: mylineage2
Database Master DNS Record ID

The record ID or hostname used to identify your master database server to your DNS provider. See Deployment Prerequisites (Linux) for more information.

Examples:

  • DNSMadeEasy: 1234567  (Dynamic DNS ID)
  • Route53: Z3DSDFSDFX:master-db.example.com
  • DynDNS: db-master.example.com
  • CloudDNS: 3334445:A-1234567  (<Domain ID>:<Record ID>)

Warning! Be sure to use a different DNS A record, if necessary. For example, if you are setting up a staging deployment using a backup of the production deployment, you will want to create and use new DNS A records so that you do not interfere with the "master" database server in a production deployment.
text: 1234567
Database Restore Lineage Override Use to restore databases from a backup lineage other than the one specified in "Database Backup Lineage." If you specify this input and leave "Database Restore Timestamp Override" set to "ignore," then the most recent backup for this lineage is applied. text: mynewlineage
Database Restore Timestamp Override

If previous snapshots exist for the lineage specified in "Database Backup Lineage Override" (or, otherwise, "Database Backup Lineage") and you do not want to restore the most recently created snapshot on startup, specify the value of the rs_backup:timestamp tag of the snapshot to restore. (For example, if the snapshot has the following tag, rs_backup:timestamp=1323217180, specify 1323217180.)

Note: Snapshot time stamps are based on Unix (epoch) time.

text:1323217180

 

Restore the Database and Create a Master-DB server

  • Run the db::do_primary_restore_and_become_master operational script to restore and initialize the database using a primary backup. The script also updates the DNS record that points to the "master" database server, and creates a new primary backup. 

    Warning! Performing this action will replace any existing master database. Make sure that the current database specified by the FQDN is no longer running/active before performing these operations.


Restore the Database 

  • Run the db::do_primary_restore operational script to restore and initialize the database using a primary backup. Use this script to create a generic database server without assigning it a specific master/slave role. For example, you might want to launch a server with a specific backup for troubleshooting or testing purposes. 

 

Troubleshooting: If the restore script fails, you may need to reset the database server, or manually restore the database if the server cannot be terminated. See the Restore a Database from a Primary Backup or Reset the Database Server sections.


Restore a Database from a Secondary Backup

If you have a completed secondary backup of the database, you can restore your database setup by launching a new "master" or standalone database server. 

  1. Make sure you have a completed secondary backup stored in an object storage location. 
  2. (Optional) When you restore a database from a backup, you have the option to increase the total size of the volume stripe or local disk; however, you cannot change the volume stripe count. If you want to grow the size of the database, specify a larger value for the Total Volume Size (1) input.
Input Name Description Example Value
Secondary Backup Storage Cloud (default)

The Remote Object Storage (ROS) service where the secondary backup file will be retrieved from.

  • s3 - Amazon S3
  • Cloud_Files - Rackspace Cloud Files (United States)
  • Cloud_Files_UK - Rackspace Cloud Files (United Kingdom)
  • google - Google Cloud Storage
  • azure - Microsoft Azure Blob Storage
  • swift - OpenStack Object Storage (Swift)
  • hp - Hewlett Packard Cloud Object Storage
  • SoftLayer_Dallas - SoftLayer's Dallas (USA) cloud
  • SoftLayer_Singapore - SoftLayer's Singapore cloud
  • SoftLayer_Amsterdam - SoftLayer's Amsterdam cloud
text: s3
Secondary Backup Secret (default)

In order to retrieve a "private" object within the specified Remote Object Storage (ROS) location, you must provide proper cloud authentication credentials. For security reasons, it's recommended that you create and use credentials for these values instead of entering the text value.

  • Amazon S3 - AWS Secret Access Key (e.g. cred: AWS_SECRET_ACCESS_KEY)
  • Rackspace Cloud Files - Rackspace Account API Key (e.g. cred: RACKSPACE_AUTH_KEY)
  • Google Cloud Storage - Google Secret Access Key (e.g. cred: GOOGLE_SECRET_ACCESS_KEY)
  • Microsoft Azure Blob Storage - Microsoft Primary Access Key (e.g. cred: AZURE_PRIMARY_ACCESS_KEY)
  • Swift - OpenStack Object Storage (Swift) Account Password (e.g. SWIFT_ACCOUNT_PASSWORD)
  • HP - HP Secret Access Key (e.g. cred: HP_SECRET_ACCESS_KEY)
  • SoftLayer Object Storage - SoftLayer API Access Key (e.g. cred: SOFTLAYER_API_KEY)
cred: AWS_SECRET_ACCESS_KEY
Secondary Backup User (default) 

In order to retrieve a "private" object within the specified Remote Object Storage (ROS) location, you must provide proper cloud authentication credentials. For security reasons, it's recommended that you create and use credentials for these values instead of entering the text value.

  • Amazon S3 - Amazon Access Key ID (e.g. cred: AWS_ACCESS_KEY_ID)
  • Rackspace Cloud Files - Rackspace login username (e.g. cred: RACKSPACE_USERNAME)
  • Google Cloud Storage - Google Access Key (e.g. cred: GOOGLE_ACCESS_KEY_ID)
  • Microsoft Azure Blob Storage - Azure Storage Account Name (e.g. cred: AZURE_ACCOUNT_NAME)
  • Swift - OpenStack Object Storage (Swift) Account ID (tenantID:username)  (e.g. SWIFT_ACCOUNT_ID)
  • HP - HP Object Storage Account ID (account number:tenantID) (e.g. cred: HP_ACCESS_KEY_ID)
  • SoftLayer Object Storage - Username of a SoftLayer user with API privileges (e.g. cred: SOFTLAYER_USER_ID)
cred: AWS_ACCESS_KEY_ID
Secondary Backup Storage Container (1)

The name of the Remote Object Storage (ROS) container where the secondary backup file will be retrieved from. Specify the bucket/container name.

text: my-container
Total Volume Size (1)

Specify the total size, in GB, of the volume or striped volume set used for primary storage. If dividing this value by the stripe volume quantity does not yield a whole number, then each volume's size is rounded up to the nearest whole integer. For example, if "Number of Volumes in the Stripe" is 3 and you specify a "Total Volume Size" of 5 GB, each volume will be 2 GB.

If deploying on a CloudStack-based cloud that does not allow custom volume sizes, the smallest predefined volume size is used instead of the size specified here. This input is ignored for clouds that do not support volume storage (e.g., Rackspace First Generation).

Important! The value for this input does not describe the actual amount of space that's available for data storage because a percent (default: 10%) is reserved for taking LVM snapshots. Use the 'Percentage of the LVM used for data (1)' input to control how much of the volume stripe is used for data storage. Be sure to account for additional space that will be required to accommodate the growth of your database.

text: 100
  1. Launch a new database server. Make sure the INIT_SLAVE_AT_BOOT input is set to False.
  2. Once it becomes operational, run the appropriate script below to restore the database. By default, the most recently completed backup based on the "Database Backup Lineage" input will be used. To use a backup from a different lineage and/or an older backup, use the "override" inputs in the table below.

 

Input Name Description Example Value
Database Backup Lineage

The prefix that will be used to name new primary and secondary backups.

Warning! If you are restoring a database, you may want to change this input so that primary/secondary backups of this database will not conflict with the original backups that may apply to a production environment.

text: mylineage2
Database Master DNS Record ID

The record ID or hostname used to identify your master database server to your DNS provider. See Deployment Prerequisites (Linux) for more information.

Examples:

  • DNSMadeEasy: 1234567  (Dynamic DNS ID)
  • Route53: Z3DSDFSDFX:master-db.example.com
  • DynDNS: db-master.example.com
  • CloudDNS: 3334445:A-1234567  (<Domain ID>:<Record ID>)

Warning! Be sure to use a different DNS A record, if necessary. For example, if you are setting up a staging deployment using a backup of the production deployment, you will want to create and use new DNS A records so that you do not interfere with the "master" database server in a production deployment.
text: 1234567
Database Restore Lineage Override Use to restore databases from a backup lineage other than the one specified in "Database Backup Lineage." If you specify a value for this input and leave "Database Restore Timestamp Override" set to 'No value/Ignore' then the most recent backup for this lineage is applied. (i.e. The most recently completed secondary backup will be used.) text: mylineage1
Database Restore Timestamp Override

If you are restoring the database using a binary backup located in an object storage container (e.g. Amazon S3), you can specify a particular timestamp to use an older backup. If this input is set to 'No value/Ignore', the most recently completed backup will be used (default).

For example, the name of the secondary backup is based upon the "Database Backup Lineage" name and a timestamp in the following format: <Database Backup Lineage>-yyyymmdd. (e.g. mylineage1-20121126)

text: 20121126
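The naming convention above (<Database Backup Lineage>-yyyymmdd) can be reproduced with a short sketch, illustrative only:

```python
from datetime import date

# Build a secondary backup name per the documented convention:
# <Database Backup Lineage>-yyyymmdd
lineage = "mylineage1"
backup_date = date(2012, 11, 26)
backup_name = "%s-%s" % (lineage, backup_date.strftime("%Y%m%d"))
print(backup_name)  # mylineage1-20121126
```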

Restore the Database and Create a Master-DB server

  • Run the db::do_secondary_restore_and_become_master operational script to restore and initialize the database using a secondary backup. The script also updates the DNS record that points to the "master" database server, and creates a new primary backup. 

    Warning! Performing this action will replace any existing master database. Make sure that the current database specified by the FQDN is no longer running/active before performing these operations.


Restore the Database 

  • Run the db::do_secondary_restore operational script to restore and initialize the database using a secondary backup. Use this script to create a generic database server without assigning it a specific master/slave role. For example, you might want to launch a server with a specific backup for troubleshooting or testing purposes. 

 

Troubleshooting: If the restore script fails, you may need to reset the database server or, if the server cannot be terminated, manually restore the database. See the Restore a Database from a Secondary Backup or Reset the Database Server sections below.
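The restore recipes above are normally run from the server's Scripts tab. On a RightLink v5 instance you can also trigger a recipe from an SSH session with the rs_run_recipe utility; the command is echoed here rather than executed, and flag syntax may vary by RightLink version:

```shell
# Hedged sketch: trigger an operational recipe from the instance itself.
# Run as root on the server; rs_run_recipe ships with RightLink v5.
RECIPE="db::do_secondary_restore"   # or db::do_secondary_restore_and_become_master
RESTORE_CMD="rs_run_recipe --name ${RECIPE}"
echo "${RESTORE_CMD}"               # shown here instead of executed
```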

 


Reset the Database Server

If you need to restore a database that was previously initialized on the server, or return your PostgreSQL server to its initial state (with no associated databases), you must run the db::do_force_reset operational recipe. This recipe deletes all databases, tables, and records from your server, along with any attached storage volumes; existing primary and secondary database backups are unaffected, however, and you can restore them in the future. Generally, you will run this recipe in test environments only and never in production.

In order to run the db::do_force_reset operational recipe, you must set the "Force Reset Safety" input to "off"; to do this, you must select the input's "override dropdown?" option first at the server level ("current" server's Inputs tab).

Warning! This script does not always succeed and should never be used on a production server. Perform this action only in test environments.

(Optional) Once the database has been reset on the new database server, run the db::do_init_and_become_master operational script to make it a "master" database, update the dynamic DNS record that points to the "master" database, and initiate a primary backup. Once the backup is complete you can launch a new slave database server for redundancy and failover purposes.


Promote Slave to Master

Note: RightScale does not support automatic failover from a master to a slave server; we strongly recommend performing this step only manually, for reasons described further in How do I set up auto-failover on MySQL databases?

In a disaster recovery scenario or other case where it is necessary to promote your slave server to the master role, do this by running the db::do_promote_to_master operational recipe. In addition to promoting the slave server to the master role, this will also register the existing master server as a slave, and perform the necessary DNS updates to associate the "Database Master FQDN" (master server fully qualified domain name) input value with your new master server. For more information on the implications of assigning a database server a master or slave role, see Database Manager for PostgreSQL 9.1 (v13.5 LTS) - Tutorial.

Warning! When running the db::do_promote_to_master operational recipe on a slave to promote it to master, if the current master is in a decommissioning (instead of operational or terminated) state, the slave may become unresponsive (hang) during recipe execution. To prevent this, do not run db::do_promote_to_master on a slave when the master is in a decommissioning state. If there is more than one slave database server, the additional slave database servers will not automatically become a slave of the new master database server. Each additional slave must be individually updated to replicate with the new master, if necessary. See Initialize a Slave Database Server.

Force Promote Slave to Master 

You can also promote a slave server to the master role without running the normal checks or making the normal changes to any current master. To do so, set the Force Promote to Master input to 'true'. Force Promote to Master is an advanced input under the DB category on the Inputs tab; its default value is 'false'.

Warning! Setting this input to 'true' will promote a slave to master with no replication until a new slave is brought up. Make sure you understand the potential consequences before changing this value. 


Terminate Database Servers

The method that you will use to shut down (terminate) your PostgreSQL database servers depends on the storage types supported by your cloud provider:

  • To terminate a database server with attached volumes on a cloud supporting volume storage, run the db::do_delete_volumes_and_terminate_server operational recipe. This deletes storage volumes attached to the server, then shuts down the server. In order to run this recipe, you must first set the "Terminate Safety" input to "off"; to do this, you must select the input's "override dropdown?" option first at the server level ("current" server's Inputs tab). Terminating a database server with attached volumes without running this recipe will not delete the storage volumes attached to the server; you must locate and delete the unused volumes manually as a separate step in this case.

Note: If you are running a database server on a cloud where volumes are not supported (e.g., Rackspace First Generation), you do not have to use this script to properly terminate the server because volumes are not used.


Warning! To avoid data loss, you should run a manual primary backup before terminating a master database server. Running the db::do_delete_volumes_and_terminate_server recipe does not create a backup for you.


Register Slave Server with DNS

If you have previously configured a DNS record for the slave database server, you can run the following script to update its associated DNS record with your DNS provider. Although setting up a DNS record for the slave database server is an optional step, some users find it helpful to have a hostname that points to the slave database server's private IP address. (e.g. my-slave.example.com) 

Set the following inputs accordingly.

Input Name Description Example Value
Database Slave DNS Record ID

The unique identifier that is associated with the DNS A record of a slave server. The unique identifier is assigned by the DNS provider when you create a dynamic DNS A record. This ID is used to update the associated A record with the private IP address of a slave server when this recipe is run. If you are using DNS Made Easy as your DNS provider, a 7-digit number is used (e.g., 4403234).

See Deployment Prerequisites (Linux) for more information.

Examples:

  • DNSMadeEasy: 1234567  (Dynamic DNS ID)
  • Route53: Z3DSDFSDFX:master-db.example.com
  • DynDNS: db-master.example.com
  • CloudDNS: 3334445:A-1234567  (<Domain ID>:<Record ID>)
text:  1234567
Database Slave DNS FQDN The fully qualified domain name for a slave database server. Example: my-slave.example.com text:  db-slave.example.com
DNS Service Provider

Select the DNS provider that you used to create the DNS records for the database servers.

  • DNSMadeEasy
  • DynDNS
  • Route53 (Amazon Route 53)
  • CloudDNS
text:  DNSMadeEasy
DNS Password

The password used to log into your DNS provider.

  • DNSMadeEasy - DME Password
  • DynDNS - DynDNS Password
  • Amazon Route 53 - AWS Secret Access Key
  • Rackspace CloudDNS - Rackspace Password 
cred: DNS_PASSWORD
DNS User

The username used to log into your DNS provider.

  • DNSMadeEasy - DME Username
  • DynDNS - DynDNS Username
  • Amazon Route 53 - AWS Access Key ID
  • Rackspace CloudDNS - Rackspace Username 
cred: DNS_USER

 

  1. If you've made any changes to inputs related to your DNS settings, such as changing the DNS provider, username, or password, you must run the sys_dns::default boot script to set the inputs on the Chef node appropriately so that the subsequent script uses the correct login credentials for the DNS provider.
    • Run the sys_dns::default boot script on all database servers.
  2. On the slave database server, run the db::do_set_dns_slave operational script.
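Once db::do_set_dns_slave has run, you can verify the record from any host with dig; the hostname below is the example "Database Slave DNS FQDN" value from the table above, and the command is echoed rather than executed here:

```shell
# Sketch: verify the slave's DNS A record resolves (assumes dig is installed,
# e.g. from bind-utils or dnsutils).
SLAVE_FQDN="db-slave.example.com"
DIG_CMD="dig +short ${SLAVE_FQDN}"
echo "${DIG_CMD}"   # run this to confirm the record returns the slave's private IP
```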

Grow the Size of the Database

If you are using mountable volumes to store your database on a running database server, you will eventually need to increase the size of those volumes before the database outgrows them and you run out of space.

Note: Although you can change the total size of a volume stripe without incurring any downtime, you cannot change the number of volumes in a stripe without downtime.

 

Increase the size of each volume in the stripe

Follow the instructions below to grow the total size of the database volume stripe without having any downtime. 

diag-db_original-v1.png

  1. (Optional) Lower the TTL of the DNS record that points to the master/principal database server so that the application servers will connect to the new database server as soon as possible.
  2. (Recommended) Before you begin the upgrade process it's strongly recommended that you manually take a primary backup of the database. Run the db::do_primary_backup operational script on the slave database server.
  3. At the deployment level, specify the new value for the Total Volume Size (1) input in order to affect new servers that are launched in the deployment (assuming you've followed best practices and are not setting this value at the server level). Remember, the size of each volume is automatically determined by dividing the total volume (stripe) size by the number of stripes (Number of Volumes in the Stripe (1)). 
  4. Clone the existing slave database server and rename accordingly. (db-3)

  5. Launch the new slave server (db-3). At the Inputs Confirmation page, be sure to set Init Slave at Boot input to 'true' and click the Launch (not Save and Launch) button. Notice that larger volumes are attached to this server.
    diag-db3-v1.png

  6. Once it becomes operational it will start to replicate with the master. Wait until it catches up to the master before proceeding to the next step. See Check Database Status of Master or Slave.

  7. Promote the new slave (db-3) to become the new master by running the db::do_promote_to_master operational script. The DNS record that points to the master database server will be updated accordingly and any application servers will start to connect to the new master database server once the TTL has expired. A new primary backup will also be initiated.
    diag-db_promote-v1.png

  8. While you wait for the primary backup to be completed, you can set up your new slave. Clone the new master (db-3) and rename it accordingly (db-4).

  9. Before you launch db-4, edit its preferences. You'll most likely want to change its availability zone for high-availability reasons.

  10. Once the most recent backup is 100% complete, launch db-4 and set the Init Slave at Boot input to 'true' at the Inputs Confirmation page and click the Launch (not Save and Launch) button.

  11. Once db-4 becomes operational, check the replication status of your new master-slave database servers and make sure the application servers are properly connecting to the new master database. Perform any additional tests, as necessary.

  12. Wait for any active sessions still connected to the old master database server to expire.

  13. Once you are satisfied with the upgrade you can safely terminate the original master and slave database servers. (db-1 and db-2)
    diag-db_shutdown-v1.png

  14. (Optional) If you previously changed the TTL for the DNS record of the master database server, you can change the TTL back to its original value.
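The per-volume sizing rule mentioned in step 3 can be sketched as simple arithmetic (example values, not defaults):

```shell
# Each volume in the stripe = Total Volume Size / Number of Volumes in the Stripe.
TOTAL_VOLUME_SIZE_GB=100   # example "Total Volume Size (1)" value
STRIPE_COUNT=4             # example "Number of Volumes in the Stripe (1)" value
PER_VOLUME_GB=$(( TOTAL_VOLUME_SIZE_GB / STRIPE_COUNT ))
echo "Each volume: ${PER_VOLUME_GB} GB"
```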

 

Increase the number of volumes in each stripe

Follow the instructions below to change the number of volumes in the database volume stripe.
Note: You cannot perform the following upgrade without having site downtime. (i.e. The master database will not be available during the upgrade process.)

  1. Put up the site maintenance page on your site. See Enable or Disable Maintenance Mode.
  2. Take a secondary backup. Run the db::do_secondary_backup operational script on the slave database server. Wait for the backup to be 100% complete before proceeding to the next step. See Create a Secondary Backup for more details.
  3. At the deployment level, specify the new values for the Number of Volumes in the Stripe (1) and Total Volume Size (1) inputs in order to affect new servers that are launched in the deployment (assuming you've followed best practices and are not setting this value at the server level). Remember, the size of each volume is automatically determined by dividing the total volume (stripe) size by the number of stripes.
  4. Clone the existing slave database server and rename accordingly. (db-3) 
  5. Launch the new slave server (db-3). Notice that the number of volumes attached to this server is different from that of the original database servers.
  6. Once it becomes operational, run the db::do_secondary_restore_and_become_master operational script. The DNS record that points to the master database server will be updated accordingly and any application servers will start to connect to the new master database server once the TTL has expired. A new primary backup will also be initiated.
  7. While you wait for the primary backup to be completed, you can set up your new slave. Clone the new master (db-3) and rename it accordingly (db-4).

  8. Before you launch db-4, edit its preferences. You'll most likely want to change its availability zone for high-availability reasons.

  9. Once the most recent backup is 100% complete, launch db-4 and set the Init Slave at Boot input to 'true' at the Inputs Confirmation page and click the Launch (not Save and Launch) button.

  10. Once db-4 becomes operational, check the replication status of your new master-slave database servers and make sure the application servers are properly connecting to the new master database. Perform any additional tests, as necessary.

  11. Wait for any active sessions still connected to the old master database server to expire.

  12. Take down the site maintenance page on your site.  See Enable or Disable Maintenance Mode.
  13. Once you are satisfied with the upgrade you can safely terminate the original master and slave database servers. (db-1 and db-2)

 


Add or Remove a Firewall Rule

When iptables is enabled, which is the default behavior in all Linux-based v13 ServerTemplates, TCP ports 22, 80, and 443 are configured to be open to any IP address in order to enable minimum functionality and access. If you want to add or remove a firewall rule on a running (operational) server by opening or closing a port, you can set the following inputs accordingly and run the sys_firewall::setup_rule operational script.

If you want the firewall rules to be set at boot time, you can either add the Chef recipe to the end of the boot script list or update the sys_firewall::default recipe to change the list of default firewall permissions by explicitly opening up additional ports. However, you should only consider overriding the default recipe if you want to change the default behavior for all of your servers that use that cookbook.

Note: If the cloud provider supports security groups, you must also open or close the appropriate ports in the security group resource.

  1. Go to the current server's Inputs tab and set the following inputs accordingly.
     
Input Name Description Example Value
Firewall Rule Port Specify the port number to open or close. text:  8080
Firewall Rule

Defines whether you are creating or removing a firewall permission for the specified port (Firewall Rule Port) over the specified IP protocol (Firewall Rule Protocol), as restricted by the specified IP range (Firewall Rule IP Address).

  • enable (default) - Enable access by adding a firewall permission that allows (ingress) access.
  • disable - Disable access by removing an existing firewall permission.
text:  enable
Firewall Rule IP Address

Use CIDR notation to define the range of IP addresses that will either be allowed or denied access to the specified port (Firewall Rule Port) over the specified IP protocol (Firewall Rule Protocol).

Leave this value set to "any" (default) to allow access from any IP address (0.0.0.0/0). Use an exclamation point (!) before the IP address specification to deny access (i.e. "blacklist") from a specific IP address (e.g. !192.1.2.3) or IP range (e.g. !192.3.0.0/24).

text: any

text:  192.1.2.0/24

Firewall Rule Protocol

Specify the Internet protocol for the specified port (Firewall Rule Port).

  • tcp (default)
  • udp
  • both
text:  tcp

 

  1. Run the sys_firewall::setup_rule operational script to add the firewall permission to the running server(s).

List Current Firewall Rules

For troubleshooting and security purposes, you may want to list a server's current firewall rules to make sure that a server has the expected IP/port permissions. This script is especially useful if you want to check the firewall rules across all servers in a deployment to validate that all of them have the same iptables rules. 

  1. Go to the running server's Scripts tab and run the sys_firewall::do_list_rules operational script.
  2. Go to the server's Audit Entries tab to view the output. The output will look similar to the following example.
22:25:03: ==================== do_list_rules : Firewall rules Begin ==================
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
FWR        all  --  0.0.0.0/0            0.0.0.0/0           

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FWR (1 references)
target     prot opt source               destination         
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED 
ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:22 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:443 
ACCEPT     tcp  --  10.123.456.22        0.0.0.0/0           tcp dpt:8000 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:80 
REJECT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp flags:0x16/0x02 reject-with icmp-port-unreachable 
REJECT     udp  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable 
==================== do_list_rules : Firewall rules End ====================

 

If you want to perform the same action via SSH, follow the steps below.

  1. SSH into the running server. (Requires 'server_login' user role privileges.)
  2. Switch to the 'root' user.

Note: When using newer images (>5.8/13.4), ensure that you have the 'server_superuser' permission on the RightScale account where the server is running in order to gain root privileges using the sudo command (Settings > Account Settings > Users).

# sudo -i
  1. Type the following Unix command.
# /sbin/iptables -L
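To check a single port instead of reading the full listing, you can filter the FWR chain. The command below is parameterized and echoed rather than executed, since it requires root on the instance:

```shell
# Sketch: grep the FWR chain for a specific destination port.
PORT=8080
CHECK_CMD="/sbin/iptables -L FWR -n | grep -w dpt:${PORT}"
echo "${CHECK_CMD}"   # exit status 0 from grep means a rule for the port exists
```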

Enable or Disable Iptables

Iptables is typically enabled by default ('Firewall' = enabled). However, you can use the following script to enable or disable Iptables on an instance.

Warning! You should only perform this action if you fully understand its implications. For example, if the cloud provider does not support cloud-level firewall services such as security groups, you could permanently lock yourself out of the instance if you disable Iptables.

To enable Iptables, follow the steps below.

  1. Set the 'Firewall' input to 'enabled'.
  2. Run the sys_firewall::default (boot script).

 

To disable Iptables, follow the steps below.

  1. Set the 'Firewall' input to 'disabled'.
  2. Run the sys_firewall::default (boot script).

Enable or Disable System Security Updates

Typically, ServerTemplates are configured with frozen software repositories that are locked down to a specific date to ensure that the same versions of software and packages are installed on a server at launch time. You also have the option to configure the server so that you can easily apply security patches from one of the related system software repositories as they become available. (Currently, only the Epel and Ubuntu Precise (v12.04) repositories are checked for security updates.) System security updates are disabled by default at the ServerTemplate level, as defined by the 'Enable security updates' input. As a best practice, you should determine whether or not you want to reserve the ability to apply security updates as an operational script before you launch the server. Changing this setting after a server is operational is not recommended.

To enable security updates, follow the steps below.

Warning! Once security updates are enabled, they cannot be disabled.

  1. Set the 'Enable security updates' input to 'enable' at the deployment level, or at the (next) server level if you do not want this change to be applied to all future servers launched in the deployment.
  2. Launch or relaunch the server, if possible. Otherwise, you must update the input setting under the current server's Inputs tab and run the rightscale::setup_security_updates boot script.

Apply System Security Updates

If the server is enabled for system security updates (Enable security updates = enable), a server tag will be added to the server when a security update becomes available ('rs_monitoring:security_updates_available=true'). By default, a triggered alert sends an email notification to the account owner as a reminder that a security update is available on a particular server. If a security update is available, follow the steps below to download and apply the security update.

  1. Check to make sure that a security update is available. All affected servers will have the following server tag: rs_monitoring:security_updates_available=true 
  2. Run the rightscale::do_security_updates operational script. You can apply the update on a per-server basis under the "current" server's Scripts tab, or, if you want to apply the update to some or all servers in a deployment, run the script at the deployment level instead (under the deployment's Scripts tab).
  3. A reboot may be required to apply the security update. If you see the following reboot tag on the server ('rs_monitoring:reboot_required=true'), you must manually reboot the server at your convenience (View Server > More Actions > Reboot) to complete the security update.
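From an SSH session on the instance, RightLink's rs_tag utility can list the server's tags so you can check for the markers above; the command is echoed here rather than executed:

```shell
# Sketch: list this server's tags via RightLink (requires RightLink v5+).
TAG_CMD="rs_tag --list"
echo "${TAG_CMD}"   # look for rs_monitoring:security_updates_available=true
                    # and rs_monitoring:reboot_required=true in the output
```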


Show Replication Mode

Replication between master and slave database servers can be asynchronous (default) or synchronous. Run the following script to check the current replication mode.

  1. Go to the Scripts tab of the "master" database server and run the db_postgres::do_show_slave_sync_mode operational script. Note: The script cannot be successfully executed on a "slave" database server.

Set Asynchronous Replication to Sync Mode

By default, data is replicated between master and slave database servers asynchronously, however it can be changed to synchronous mode, if preferred.

Important! Before changing the database replication mode, please read PostgreSQL documentation on Synchronous Replication.

  1. (Optional) Before you change the database replication mode, you may want to check the current replication mode. See Show Replication Mode.
  2. Go to the Scripts tab of the "master" database server and run the db_postgres::do_set_slave_sync_mode operational script. Note: The script cannot be successfully executed on a "slave" database server.

Set Synchronous Replication to Async Mode

Important! Before changing the database replication mode, please read PostgreSQL documentation on Synchronous Replication.

  1. (Optional) Before you change the database replication mode, you may want to check the current replication mode. See Show Replication Mode.
  2. Go to the Scripts tab of the "master" database server and run the db_postgres::do_set_slave_async_mode operational script. Note: The script cannot be successfully executed on a "slave" database server.
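Under the hood, PostgreSQL 9.1 replication is synchronous when synchronous_standby_names is non-empty on the master. A quick manual check is sketched below, assuming local psql access as the postgres user; the command is echoed rather than executed here:

```shell
# Sketch: inspect the replication-mode setting the recipes above manage.
SHOW_CMD="sudo -u postgres psql -c 'SHOW synchronous_standby_names;'"
echo "${SHOW_CMD}"   # an empty result means asynchronous replication
```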
Last modified
17:12, 29 Jan 2014


© 2006-2014 RightScale, Inc. All rights reserved.
RightScale is a registered trademark of RightScale, Inc. All other products and services may be trademarks or servicemarks of their respective owners.