
Database Manager for PostgreSQL 9.1 (v13 Infinity) - Tutorial

 


Objective

To set up two PostgreSQL 9.1 database servers running in an asynchronously replicated (master/slave) configuration in a single deployment in a public or private cloud environment.

Prerequisites

  • You must log in under a RightScale account with "actor," "designer," "library," and "security manager" user roles in order to complete the tutorial.
  • For clouds that support security groups, you must have a security group defined with TCP port 22 open for SSH access, and any other ports required by the server (for example, the default PostgreSQL port, TCP port 5432), for the required security groups and IP addresses. Also, remember that iptables is installed and enabled by default on all servers.
  • We strongly recommend that you set up credentials for password values and any other sensitive data included as Chef recipe inputs. Also, some default input values assume that predefined credentials exist, such as the PostgreSQL user name and password credentials described in Create Credentials for Common Inputs.
  • This tutorial assumes that you already have an existing PostgreSQL dump file that you're going to use to initially launch the PostgreSQL database server. If you do not have one, you can use the provided sample dump file instead. If you prefer to launch a blank PostgreSQL database, or restore databases from a previous backup, see the Database Manager for PostgreSQL 9.1 Runbook.
  • In PostgreSQL, do not set a password for the "postgres" user; otherwise, the recipes will fail since we do not pass a password to the PostgreSQL commands that we use for administrating the server.

Overview

This tutorial describes the steps for launching PostgreSQL database servers running in an asynchronously replicated (master/slave) configuration in the cloud.

Create Credentials

Prerequisite: Requires 'actor' user role privileges in the RightScale account.

In order to securely pass sensitive information to a script at runtime, you can use Credentials as a means of variable substitution. Later in this tutorial you will select these credentials when you define your inputs.

Create the following credentials.  For more information on setting up credentials, see Create a New Credential.

  • DBADMIN_USER - Username of a database user with admin-level privileges.
  • DBADMIN_PASSWORD - Matching password for DBADMIN_USER.
  • DBAPPLICATION_USER - Username of a database user with user-level privileges. Note: The username cannot start with a number.
  • DBAPPLICATION_PASSWORD - Matching password for DBAPPLICATION_USER.
  • DBREPLICATION_USER - Username of a database user with replication permissions on the server. Note: The username cannot start with a number.
  • DBREPLICATION_PASSWORD - Matching password for DBREPLICATION_USER.
  • DNS_USER* - Username that's used to log into your DNS provider and access your DNS records.
  • DNS_PASSWORD* - Password for DNS_USER.

If you use Amazon Route 53 as your DNS provider, you do not need to set up separate DNS user name and password credentials because your AWS credentials are used for authentication purposes. 

Depending on your cloud provider and backup storage selections, you may need to create additional credentials.

Amazon AWS

If you are using Amazon to make snapshot/binary backups of your database, you will need to use the following credentials. Fortunately, these credentials were automatically created when you added your AWS credentials to the RightScale account.

Note: These credentials are not listed under Design > Credentials.

  • AWS_ACCESS_KEY_ID - Amazon access key ID for authentication.
  • AWS_SECRET_ACCESS_KEY - Amazon secret key corresponding to AWS_ACCESS_KEY_ID.


Rackspace Cloud Files

If you are using Rackspace Cloud Files for storing binary database backups, you will need to create the following credentials.

  • RACKSPACE_USERNAME - The username used to log into Rackspace's Cloud Control Panel. Use this credential for the "Backup Primary User" and/or the "Secondary Backup User" input if you are using Rackspace Cloud Files for primary and/or secondary backups. 
  • RACKSPACE_AUTH_KEY - The Rackspace account API key. Use this credential for the "Backup Primary Secret" and/or the "Secondary Backup Secret" input if you are using Rackspace Cloud Files for primary and/or secondary backups.

Steps

Upload the Database Dump File

The ServerTemplate contains scripts that can retrieve a database dump file from a container in one of the supported Remote Object Storage (ROS) providers (e.g. Amazon S3, Rackspace Cloud Files). See Database Dump Retrieval.

Create a new bucket/container and upload your database dump file. The file can remain a 'private' object because your cloud credentials can be used (as inputs) for authentication purposes to retrieve the file. Make sure the uploaded file maintains the .gz file extension.

Warning! The prefix of the PostgreSQL dump filename cannot contain a dash (-). For example, if your dump file is named 'my-app-201205030022.gz', you must rename it to 'my_app-201205030022.gz', replacing the dash in the prefix with an underscore (_); otherwise, the script (do::do_dump_import) that imports the database dump file into the instance will fail.
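The renaming rule above can be sketched in Python. This is a hypothetical helper for illustration, not part of the ServerTemplate's scripts: it rewrites dashes in the prefix while leaving the trailing timestamp separator intact.

```python
import re

def safe_dump_name(filename):
    """Replace dashes in the dump file's prefix with underscores.

    The trailing '-<timestamp>.gz' separator is kept; only dashes inside
    the prefix itself are rewritten. (Hypothetical helper, not part of
    the actual do::do_dump_import recipe.)
    """
    m = re.match(r"^(.*)-(\d+\.gz)$", filename)
    if m is None:  # no '-<timestamp>.gz' suffix: rewrite every dash
        return filename.replace("-", "_")
    prefix, suffix = m.groups()
    return prefix.replace("-", "_") + "-" + suffix

print(safe_dump_name("my-app-201205030022.gz"))  # my_app-201205030022.gz
```

Running the helper against the example filename from the warning shows the dash in the prefix replaced while the timestamp suffix is untouched.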


If you are setting up a database server for testing purposes or if you do not have your own dump file, you can use the following sample PostgreSQL dump file to complete the tutorial. The sample is a gzip (.gz) file.

app_test-20121603072614.gz

Create a Database Server

Follow these steps to add a database server to the deployment.

  1. Go to the MultiCloud Marketplace (Design > MultiCloud Marketplace > ServerTemplates) and import the most recently published revision of the "Database Manager for PostgreSQL 9.1 Beta (v13.x)" ServerTemplate into your RightScale account. (Note: This ServerTemplate was deprecated from the MultiCloud Marketplace and can no longer be imported.)
  2. From the imported ServerTemplate's show page, click the Add Server button.
  3. Select the cloud for which you will configure a server. 
  4. Select the deployment for the new server.
  5. Next, the Add Server Assistant wizard will walk you through the remaining steps that are required to create a server based on the selected cloud.
    • Server Name - Provide a nickname for your new database server (e.g., postgresql-db1). Do not include "master" or "slave" in the name, because a database server's role can change in the future.
    • Select the appropriate cloud-specific resources (e.g. SSH Key, Security Group, etc.) that are required in order to launch a server into the chosen cloud. The required cloud resources may differ depending on the type of cloud infrastructure. If the cloud supports multiple datacenters/zones, select a specific zone. Later, when you create the other database server you will use a different datacenter/zone to ensure high-availability. For more information, see Add Server Assistant.
    • Important! If you are not using volumes to store the database, you must select an instance type with disk space at least twice as large as your database, because LVM snapshots are performed locally on the instance before they are gzipped and saved to the specified ROS location. Also, although these ServerTemplates will work with any instance size, you may experience degraded performance with small instance sizes (such as EC2 micro or Rackspace 256 MB) due to lack of system resources. We do not recommend smaller instance types for production use.
  6. Click Confirm, review the server's configuration and click Finish to create the server.
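The disk-space guidance in the steps above can be expressed as a simple rule of thumb. This is illustrative only; actual snapshot and gzip overhead varies with workload.

```python
def min_local_disk_gb(db_size_gb):
    """Rule of thumb from the tutorial: when the database lives on the
    instance's local disk (no volumes), the LVM snapshot is taken locally
    before being gzipped and uploaded, so budget at least twice the
    database size in instance disk space. (Illustrative, not exact.)"""
    return 2 * db_size_gb

print(min_local_disk_gb(30))  # a 30 GB database needs at least 60 GB of local disk
```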

 

Configure Inputs

The next step is to define the properties of your database server or servers by entering values for inputs. It is simplest and best to do this at the deployment level. For a detailed explanation of how inputs are defined and used in Chef recipes and RightScripts, see Understanding Inputs.

The inputs that you need to provide values for will depend on which options you're going to use. The ServerTemplate is very flexible and supports a variety of different configurations. You must provide values for the required inputs based on the chosen options.

  • Where will the contents of the database be stored?
    • On volumes attached to the instance.
    • On the local/ephemeral drive.
  • If you're using volumes, are you going to use a stripe?
    • Yes - Use a stripe of multiple volumes.
    • No - Use a single volume. 
    • INPUTS: Block Device Mount Directory (1), Nickname (1), Number of Volumes in the Stripe (1), Total Volume Size (1)
  • Are you building a master-slave database setup?
    • Yes
    • No
    • INPUTS: Database Master FQDN, Database Master DNS Record ID, Database Slave FQDN, Database Slave DNS Record ID, Database Replication Password, Database Replication Username
  • Will there be replication between master-slave database servers across different clouds/regions? If yes, you should set up SSL.
    • Yes
    • No
    • INPUTS: CA SSL Certificate, Master SSL Certificate, Master SSL Key, Slave SSL Certificate, Slave SSL Key
  • What are you going to use to take "primary" backups of the database?
    • Volume Snapshots
    • Binary Dumps to an ROS container (e.g. S3 bucket or Cloud Files container)
    • INPUTS: Backup Primary Secret (default), Backup Primary User (default), Primary Backup Storage Cloud (default),
      Primary Backup Storage Cloud Endpoint URL (default)
  • Are you going to take "secondary" backups of the database? If yes, which ROS provider will you use?
    • Amazon S3, Rackspace Cloud Files (US or UK), Google Cloud Storage, Azure Blob Storage, Swift-based Storage, SoftLayer Object Storage
    • INPUTS: Secondary Backup Storage Cloud (default), Secondary Backup Secret (default), Secondary Backup User (default), Secondary Backup Storage Container (1), Secondary Backup Storage Cloud Endpoint URL (default)
  • Which DNS provider are you using for dynamic DNS at the database level?
    • DNS Made Easy
    • DynDNS
    • Amazon Route 53
    • Rackspace Cloud DNS
    • INPUTS: DNS Service Provider, DNS Password, DNS User, Database Master FQDN, Database Master DNS Record ID, Database Slave DNS Record ID, Cloud DNS region

Set Inputs at the Deployment Level

Go to the deployment's Inputs tab (Manage > Deployments > your deployment > Inputs) and click Edit.

Although you can enter values for missing inputs as text values, it's strongly recommended that you set up credentials for passing sensitive information to scripts such as passwords or any other sensitive data.

Rackspace only
If you use Rackspace for your database servers and backup storage (i.e., Cloud Files) the storage-related Chef recipes will use Rackspace Service Net (SNET) by default. SNET is Rackspace's internal private networking service for optimized communication between Rackspace Cloud Servers and Cloud Files. If SNET is not supported in your Rackspace environment, you must set the "Rackspace SNET Enabled for Backup" input to false; otherwise, all backup and restore operations that rely on Cloud Files will fail.

Block Device

If the cloud supports the use of mountable volumes (e.g. AWS EBS Volumes, CloudStack volumes, etc.), primary backups will be saved as volume snapshots. It's strongly recommended that you use volumes to store the contents of the PostgreSQL database for efficiency and performance reasons.

However, if the cloud does not support mountable volumes (e.g. Rackspace First Generation), primary backups must be saved to a Remote Object Storage location. In such cases, the contents of the PostgreSQL database will be stored locally on the instance's ephemeral drive. Backups of the database will be stored as binary dump files to the specified object storage container.

Required

Input Name Description Example Value
Number of Volumes in the Stripe (1) To use striped volumes with your databases, specify a volume quantity. The default is 1, indicating no volume striping. Ignored for clouds that do not support volume-based storage (e.g. Rackspace Legacy/First Generation). text:  1
Total Volume Size (1)

Specify the total size, in GB, of the volume or striped volume set used for primary storage. If dividing this value by the stripe volume quantity does not yield a whole number, then each volume's size is rounded up to the nearest whole integer. For example, if "Number of Volumes in the Stripe" is 3 and you specify a "Total Volume Size" of 5 GB, each volume will be 2 GB.

If deploying on a CloudStack-based cloud that does not allow custom volume sizes, the smallest predefined volume size is used instead of the size specified here. This input is ignored for clouds that do not support volume storage (e.g., Rackspace Legacy/First Generation).

Important: The value for this input is not the actual amount of space available for data storage, because a percentage of the volume group is reserved for taking LVM snapshots (by default, 90% is used for data and the remaining 10% for snapshots). Use the 'Percentage of the LVM used for data (1)' input to control how much of the volume stripe is used for data storage. Be sure to account for additional space that will be required to accommodate the growth of your database.

text:  10

NOTE: For Rackspace Open Cloud, the minimum volume size is 100 GB

Percentage of the LVM used for data (1) The percentage of the total Volume Group extents (LVM) used for data storage. The remaining percentage is reserved for taking LVM snapshots. (e.g. 75 percent: 3/4 used for data storage and 1/4 reserved for overhead and snapshots)

WARNING! If the database experiences a large amount of writes/changes, LVM snapshots may fail. In such cases, use a more conservative value for this input. (e.g. 50%) 
text: 90%
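The sizing arithmetic described by these inputs can be sketched as follows. This is a hypothetical helper illustrating the round-up and LVM reservation rules above, not code from the ServerTemplate:

```python
import math

def volume_sizes(total_gb, stripe_count, data_pct=90):
    """Sketch of the volume sizing rules described above.

    Each volume in the stripe is the total size divided by the stripe
    count, rounded UP to a whole GB. Only `data_pct` percent of the
    volume group is usable for data; the remainder is reserved for
    LVM snapshots. (Hypothetical helper for illustration.)
    """
    per_volume_gb = math.ceil(total_gb / stripe_count)
    usable_data_gb = total_gb * data_pct / 100.0
    return per_volume_gb, usable_data_gb

# "Total Volume Size" = 5 GB across a 3-volume stripe -> 2 GB per volume
print(volume_sizes(5, 3)[0])   # 2
# With the default 90% LVM data percentage, a 10 GB stripe leaves 9 GB for data
print(volume_sizes(10, 1)[1])  # 9.0
```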

 

Advanced

Input Name Description Example Value
Primary Backup Storage Cloud (default)

Input is ignored if volumes are supported.

If the instance is launched into a cloud that does not support volumes, you must specify which ROS solution to use for storing primary backups. Backups are saved as binary dump files to a container that matches the value specified for the Backup Lineage Name input. If a matching container does not exist, one will be created.

  • s3 - Amazon S3 
  • cloudfiles - Rackspace Cloud Files (United States)
  • cloudfilesuk - Rackspace Cloud Files (United Kingdom)
  • google - Google Cloud Storage
  • hp - HP Cloud Storage
  • azure - Microsoft Azure Blob Storage
  • swift - OpenStack Object Storage (Swift)
  • SoftLayer_Dallas - SoftLayer's Dallas (USA) cloud
  • SoftLayer_Singapore - SoftLayer's Singapore cloud
  • SoftLayer_Amsterdam - SoftLayer's Amsterdam cloud

No value/Ignore

text: s3

Primary Backup Secret (default)

 

Input is ignored if volumes are supported.

Required cloud credential to store a file in the ROS location specified by the Primary Backup Storage Cloud (default) input.

  • Amazon S3 - AWS Secret Access Key (e.g. cred: AWS_SECRET_ACCESS_KEY)
  • Rackspace Cloud Files - Rackspace Account API Key (e.g. cred: RACKSPACE_AUTH_KEY)
  • Google Cloud Storage - Google Secret Access Key (e.g. cred: GOOGLE_SECRET_ACCESS_KEY)
  • HP - HP Secret Access Key (e.g. cred: HP_SECRET_ACCESS_KEY)
  • Microsoft Azure Blob Storage - Microsoft Primary Access Key (e.g. cred: AZURE_PRIMARY_ACCESS_KEY)
  • swift - OpenStack Object Storage (Swift) Account Password (e.g. SWIFT_ACCOUNT_PASSWORD)
  • SoftLayer Object Storage - SoftLayer API Access Key (e.g. cred: SOFTLAYER_API_KEY)

No value/Ignore

cred:  AWS_SECRET_ACCESS_KEY

Primary Backup User (default)

Input is ignored if volumes are supported.

Required cloud credential to store a file in the ROS location specified by the Primary Backup Storage Cloud (default) input.

  • Amazon S3 - Amazon Access Key ID (e.g. cred: AWS_ACCESS_KEY_ID)
  • Rackspace Cloud Files - Rackspace login username (e.g. cred: RACKSPACE_USERNAME)
  • Google Cloud Storage - Google Access Key ID (e.g. cred: GOOGLE_ACCESS_KEY)
  • HP - HP Object Storage Account ID (account number:tenantID) (e.g. cred: HP_ACCESS_KEY_ID)
  • Microsoft Azure Blob Storage - Azure Storage Account Name (e.g. cred: AZURE_ACCOUNT_NAME)
  • swift - OpenStack Object Storage (Swift) Account ID (tenantID:username)  (e.g. SWIFT_ACCOUNT_ID)
  • SoftLayer Object Storage - Username of a SoftLayer user with API privileges (e.g. cred: SOFTLAYER_USER_ID)

No value/Ignore

cred:  AWS_ACCESS_KEY_ID

Secondary Backup Storage Cloud (default)

The cloud provider of the specified ROS container where the secondary backup will be stored.

  • s3 - Amazon S3 
  • cloudfiles - Rackspace Cloud Files (United States)
  • cloudfilesuk - Rackspace Cloud Files (United Kingdom)
  • google - Google Cloud Storage
  • hp - HP Cloud Storage
  • azure - Microsoft Azure Blob Storage
  • swift - OpenStack Object Storage (Swift)
  • SoftLayer_Dallas - SoftLayer's Dallas (USA) cloud
  • SoftLayer_Singapore - SoftLayer's Singapore cloud
  • SoftLayer_Amsterdam - SoftLayer's Amsterdam cloud
text:  cloudfiles

Secondary Backup Secret (default)

Required cloud credential to store a file in the ROS location specified by the Secondary Backup Storage Cloud (default) input.

  • Amazon S3 - AWS Secret Access Key (e.g. cred: AWS_SECRET_ACCESS_KEY)
  • Rackspace Cloud Files - Rackspace Account API Key (e.g. cred: RACKSPACE_AUTH_KEY)
  • Google Cloud Storage - Google Secret Access Key (e.g. cred: GOOGLE_SECRET_ACCESS_KEY)
  • HP - HP Secret Access Key (e.g. cred: HP_SECRET_ACCESS_KEY)
  • Microsoft Azure Blob Storage - Microsoft Primary Access Key (e.g. cred: AZURE_PRIMARY_ACCESS_KEY)
  • swift - OpenStack Object Storage (Swift) Account Password (e.g. SWIFT_ACCOUNT_PASSWORD)
  • SoftLayer Object Storage - SoftLayer API Access Key (e.g. cred: SOFTLAYER_API_KEY)

cred: RACKSPACE_AUTH_KEY

Secondary Backup User (default)

Required cloud credential to store a file in the ROS location specified by the Secondary Backup Storage Cloud (default) input.

  • Amazon S3 - Amazon Access Key ID (e.g. cred: AWS_ACCESS_KEY_ID)
  • Rackspace Cloud Files - Rackspace login username (e.g. cred: RACKSPACE_USERNAME)
  • Google Cloud Storage - Google Access Key ID (e.g. cred: GOOGLE_ACCESS_KEY)
  • HP - HP Object Storage Account ID (account number:tenantID) (e.g. cred: HP_ACCESS_KEY_ID)
  • Microsoft Azure Blob Storage - Azure Storage Account Name (e.g. cred: AZURE_ACCOUNT_NAME)
  • swift - OpenStack Object Storage (Swift) Account ID (tenantID:username)  (e.g. SWIFT_ACCOUNT_ID)
  • SoftLayer Object Storage - Username of a SoftLayer user with API privileges (e.g. cred: SOFTLAYER_USER_ID)

cred:  RACKSPACE_USERNAME

Secondary Backup Storage Container (1) Name of the ROS container to use for secondary backups. text:  postgresqlbackups
Block Device Mount Directory (1)

Input is ignored if volumes are not supported.

For cloud providers supporting volume-based storage, the mount point for your backup volume or volumes. (Default is /mnt/storage.)

text:  /mnt/storage
Nickname (1)

Input is ignored if volumes are not supported.

For cloud providers supporting volume-based storage, the nickname will be used to name the created volumes and snapshots along with an epoch timestamp. (e.g. data_storage-201203100927) By default, this input is set to 'data_storage' however it's recommended that you create a nickname that describes your application or deployment, which will make it easier to identify the created volumes and snapshots.

text:  my_deployment
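The volume/snapshot naming convention described for the Nickname input can be sketched as below. The timestamp format is inferred from the example (data_storage-201203100927) and may differ from what the actual recipes produce:

```python
from datetime import datetime

def volume_label(nickname, when=None):
    """Build a volume/snapshot label like 'data_storage-201203100927':
    the configured nickname plus a YYYYMMDDHHMM timestamp. (The format
    is a guess from the documented example, not the recipe source.)"""
    when = when or datetime.now()
    return "%s-%s" % (nickname, when.strftime("%Y%m%d%H%M"))

print(volume_label("my_deployment", datetime(2012, 3, 10, 9, 27)))
# my_deployment-201203100927
```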

 

DB

Input Name Description Example Value

Database Admin Password

Database Admin Username

Username and password of a database user with administrator privileges. The admin username and password are used for tasks that require administrator access to the database.

cred:  DBADMIN_PASSWORD

cred:  DBADMIN_USER

Database Application Password

Database Application Username

Username and password of a database user with user-level privileges. The application username and password allow the application to access the database in a restricted fashion.

cred:  DBAPPLICATION_PASSWORD

cred:  DBAPPLICATION_USER

Database Backup Lineage

The name associated with your primary and secondary database backups. It's used to associate them with your database environment for maintenance, restore, and replication purposes. Backup snapshots will automatically be tagged with this value. (e.g. rs_backup:lineage=postgresqlbackup) Backups are identified by their lineage name.

Note: For servers running on Rackspace Legacy/First Gen, this value also indicates the Cloud Files container to use for storing primary backups. If a Cloud Files container with this name does not already exist, one will automatically be created.

text:  postgresqlbackup
Database Master FQDN The fully qualified domain name that points to the master database server. Slave database servers and application servers will use the FQDN to locate the "master" database server. Typically, the DNS record will point to the Master-DB server's private IP address. text:  master-db.example.com
Database Master DNS Record ID

The record ID or hostname used to identify your master database server to your DNS provider. See Deployment Prerequisites (Linux) for more information.

Examples:

  • DNSMadeEasy: 1234567  (Dynamic DNS ID)
  • Route53: Z3DSDFSDFX:master-db.example.com
  • DynDNS: db-master.example.com
  • Cloud DNS: 3334445:A-1234567  (<Domain ID>:<Record ID>)
text:  1234567

Database Replication Password

Database Replication User

Username and password of a database user with replication permissions on the PostgreSQL server. The replication username and password are used for replication between the "master" and "slave" database servers.

cred:  DBREPLICATION_PASSWORD

cred:  DBREPLICATION_USER

Database DNS TTL Limit The TTL limit for the database servers' dynamic DNS records. It's recommended that you use a low TTL for your database servers' DNS records to promote quick failover. The default is 60 (seconds). If you are using Rackspace's Cloud DNS service for Rackspace cloud servers, set this value to 300 (the lowest allowable TTL for Cloud DNS).

text: 60

text: 300 (Cloud DNS only)

Force Promote to Master

Determines whether the slave checks for a currently running master database server before promoting itself, and whether the current master is demoted to a slave after the promotion. This input applies to scripts and cookbooks that use the db::do_promote_to_master operational script.

  • false (default) - Slave verifies that there is a running master database server before it promotes itself to become the new master. The old master will become a slave of the new master after the promotion.
  • true - Slave will not check with the master before being promoted to assume the master role. If there is a running master database server, it will not become a slave of the new master after the promotion. You will not have database replication until a new slave database server is launched.
text: false

 

SYS_DNS

Input Name Description Example Value
DNS Service Provider

Select the DNS provider that you used to create the DNS records for the database servers.

  • DNSMadeEasy
  • DynDNS
  • Route53 (Amazon Route 53)
  • Cloud DNS
text:  DNSMadeEasy
DNS Password

The password/key required to update the DNS record of a master/slave database server with the specified DNS service provider.

  • DNSMadeEasy - DME Password
  • DynDNS - DynDNS Password
  • Amazon Route 53 - AWS Secret Access Key
  • Rackspace CloudDNS - Rackspace Password
cred:  DNS_PASSWORD

DNS User

The username required to update the DNS record of a master/slave database server with the specified DNS service provider.

  • DNSMadeEasy - DME Username
  • DynDNS - DynDNS Username
  • Amazon Route 53 - AWS Access Key ID
  • Rackspace CloudDNS - Rackspace Username

cred:  DNS_USER

Cloud DNS region

If 'CloudDNS' is the chosen 'DNS Service Provider', select the appropriate cloud region based on the location of the Rackspace cloud servers.

Note: This input is ignored unless you are using CloudDNS.

text:  Chicago

Launch the Master Database Server

After configuring your inputs, launch your newly configured master database server.

  1. Go to the deployment's Servers tab and launch the database server. When you view the input confirmation page, there should not be any missing values (highlighted in red) for inputs that are required by any of the server's boot scripts. If there are any inputs highlighted in red, cancel the launch and add the missing values at the deployment level before launching the server again. Refer to the instructions in Launch a Server if you are not familiar with this process. Click the Launch (not 'Save and Launch') button at the bottom of the input confirmation page.

Initialize the Master Database Server

Wait for the server to reach the "operational" state before you run a script to initialize the database server.

  1. Go to the "current" server's Scripts tab and run the db::do_init_and_become_master operational script to initialize it as the "Master" database server.
  2. (Optional) You can go to the "current" server's Audit Entries tab to track the status of the operation.


The script performs the following actions:

  • Registers the server as the "master" database server and assigns the appropriate replication privileges and machine tags. (e.g. rs_dbrepl:master_instance_uuid=01-AVMV4MFHJQOK0 and rs_dbrepl:master_active=20130313200931-bob)
  • For cloud providers with volume support, it creates and mounts either a single volume or group of striped volumes for data storage, based on the inputs configured for your primary database backups.
  • Creates a database backup to primary storage.
  • Schedules a cron job to run backups to primary storage once every four hours on the server. (For information on modifying the default backup schedule, see the Database Manager for PostgreSQL 9.1 Beta (v13 Infinity) - Runbook.)
  • Updates the dynamic DNS record for the "Master" database with the DNS provider. The DNS record is updated with the server's IP address. By default, it will use the instance's private IP address ('Database Replication Network Interface' = private). The default TTL of the "master" DNS record must also be set to 60 seconds or less ('Database DNS TTL Limit' = 60) unless you are using Rackspace's Cloud DNS service, where the lowest allowable TTL is 300 seconds.

Disable Scheduled Primary Backups

Since you have not yet loaded an actual database onto the server, there is no reason to create a primary backup of the database.

Go to the "current" server's Scripts tab and run the db::do_primary_backup_schedule_disable operational script to disable your scheduled backups (cron jobs). 

Later, once you have imported your database you will reverse this action and enable continuous backups.

Set Up the Database

After initializing the master database server and disabling scheduled backups, you will need to add your database (or databases) and records to it.

  1. Go to the "current" server's Scripts tab and run the db::do_dump_import operational script to import a PostgreSQL dump file from an ROS location.


Note: If you use a previous backup snapshot instead of a PostgreSQL dump file or initialize a blank PostgreSQL database, refer to the Database Manager for PostgreSQL 9.1 Beta (v13 Infinity) - Runbook for instructions.

Input Name Description Example Value
Dump Container Name of the ROS container that contains the PostgreSQL database dump file.  text:  postgresqldumps
Database Schema Name

Name of the PostgreSQL database schema to restore from the PostgreSQL dump file identified by the "Dump Prefix" input. This name is set when you import the dump file into PostgreSQL. The name is only defined within the PostgreSQL instance and not within the actual dump file. As a result the name is somewhat arbitrary but should be descriptive.

Important!
Be sure to record this value. You will need to specify it again when you set up the application server tier so that the application servers can connect to the correct database schema.

text:  my_db_schema

 

For the 'app_test-20121603072614.gz' PostgreSQL dump file:

text: app_test

Dump Prefix

The prefix of the PostgreSQL dump file (without the associated .gz extension) to retrieve from the Remote Object Store location specified in "Dump Container." You can specify either the entire file name including the timestamp or just the file prefix without the timestamp, which selects the most recent dump file with that prefix.

Example: If your dump file is named "mydb-20121603072614.gz," you could specify either "mydb-20121603072614" or "mydb."

For the provided sample dump file:

text:  app_test
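The prefix-matching behavior described for the Dump Prefix input can be sketched as follows. This is a hypothetical helper, not the actual do::do_dump_import implementation; it assumes the timestamp suffixes sort lexically in chronological order:

```python
def latest_dump(object_names, prefix):
    """Given the object names in the dump container, return the dump
    matching `prefix`: an exact '<prefix>.gz' name matches itself;
    otherwise the lexically greatest '<prefix>-<timestamp>.gz' is
    treated as the most recent. (Hypothetical helper illustrating the
    documented behavior, not the recipe source.)"""
    exact = prefix + ".gz"
    if exact in object_names:
        return exact
    candidates = [n for n in object_names
                  if n.startswith(prefix + "-") and n.endswith(".gz")]
    return max(candidates) if candidates else None

dumps = ["mydb-20120101000000.gz", "mydb-20120301000000.gz", "otherdb-20120301000000.gz"]
print(latest_dump(dumps, "mydb"))  # mydb-20120301000000.gz
```

Specifying the full filename prefix (e.g. "mydb-20120101000000") selects that exact dump instead of the newest one.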

Dump Storage Account ID

Required cloud credential to retrieve a private file from the specified ROS location. Set to 'Ignore' if the file is publicly accessible.

  • Amazon S3 - Amazon Access Key ID (e.g. cred: AWS_ACCESS_KEY_ID)
  • Rackspace Cloud Files - Rackspace login username (e.g. cred: RACKSPACE_USERNAME)
  • Google Cloud Storage - Google Access Key ID (e.g. cred: GOOGLE_ACCESS_KEY)
  • Microsoft Azure Blob Storage - Azure Storage Account Name (e.g. cred: AZURE_ACCOUNT_NAME)
  • swift - OpenStack Object Storage (Swift) Account ID (tenantID:username)  (e.g. SWIFT_ACCOUNT_ID)
  • SoftLayer Object Storage - Username of a SoftLayer user with API privileges (e.g. cred: SOFTLAYER_USER_ID)

cred:  AWS_ACCESS_KEY_ID

cred:  RACKSPACE_USERNAME

 

Dump Storage Account Secret

Required cloud credential to retrieve a private file from the specified ROS location. Set to 'Ignore' if the file is publicly accessible.

  • Amazon S3 - AWS Secret Access Key (e.g. cred: AWS_SECRET_ACCESS_KEY)
  • Rackspace Cloud Files - Rackspace Account API Key (e.g. cred: RACKSPACE_AUTH_KEY)
  • Google Cloud Storage - Google Secret Access Key (e.g. cred: GOOGLE_SECRET_ACCESS_KEY)
  • Microsoft Azure Blob Storage - Microsoft Primary Access Key (e.g. cred: AZURE_PRIMARY_ACCESS_KEY)
  • swift - OpenStack Object Storage (Swift) Account Password (e.g. SWIFT_ACCOUNT_PASSWORD)
  • SoftLayer Object Storage - SoftLayer API Access Key (e.g. cred: SOFTLAYER_API_KEY)

cred:  AWS_SECRET_ACCESS_KEY

cred:  RACKSPACE_AUTH_KEY

Dump Storage Account Provider

The remote object storage provider where your PostgreSQL dump file is stored.

  • s3 - Amazon S3 
  • cloudfiles - Rackspace Cloud Files (United States)
  • cloudfilesuk - Rackspace Cloud Files (United Kingdom)
  • google - Google Cloud Storage
  • azure - Microsoft Azure Blob Storage
  • SoftLayer_Dallas - SoftLayer's Dallas (USA) cloud
  • SoftLayer_Singapore - SoftLayer's Singapore cloud
  • SoftLayer_Amsterdam - SoftLayer's Amsterdam cloud
text:  s3

Create a Primary Backup

You are now ready to create the first primary backup of the database. You will need a completed backup in order to initialize a slave database server.

  1. Go to the "current" server's Scripts tab and run the db::do_primary_backup operational script to manually generate a primary backup of your database server.

Enable Scheduled Primary Backups

It is now safe to enable continuous backups of the database server.

  1. Go to the "current" server's Scripts tab and run the db::do_primary_backup_schedule_enable operational script.


For more information about configuring and modifying your scheduled backup policy, see the Database Manager for PostgreSQL 9.1 Beta (v13 Infinity) - Runbook.

Add a Slave Database Server

Although you can run PostgreSQL in single-server mode and a separate slave server for replication is not required, a slave is strongly recommended for failover purposes. Create a slave server in your deployment.

  1. Clone the Master-DB server. See Clone a Server.
  2. Rename the server accordingly. (e.g. postgresql-db2) Remember, you do not want to include the word "slave" in the nickname because this server may become the "master" server during a failover scenario. You don't want the server's nickname to potentially cause any confusion.
  3. Under the server's Info tab, click Edit and change the server's availability zone. To ensure high availability, it's strongly recommended that you launch the Slave-DB server in a different availability zone than the Master-DB.  Note: Cross-zone data transfer costs may apply.

Launch the Slave Database Server

Make sure the following conditions are true before you launch the second database server.

  • The master database server state is "operational."
  • The initial primary backup of the master database server is 100% complete. If you are using a cloud that supports snapshots for backups, you can track the status in the dashboard (Clouds > region > Snapshots). The time required to complete the initial primary backup will vary based on factors such as storage type, volume size, etc.


You are now ready to launch a "slave" database server for failover and redundancy purposes. 

  1. Go to the deployment's Servers tab and launch the server that will be the slave database server. 
  2. When you view the input confirmation page, change the value for the following input because you now have an operational master database server and a completed database backup. 

DB (advanced)

Input Name Description Example Value
Init Slave at Boot

Set to 'True' to have the instance initialize itself at boot as a "slave" of the running master database server. 

text:  true
  1. If there are any required inputs that are missing values (highlighted in red), cancel the launch and add the missing values at the deployment level before launching the server again. Refer to the instructions in Launch a Server if you are not familiar with this process.
  2. Click the Launch (not Save and Launch) button at the bottom of the input confirmation page so that you do not override this input at the server level; you may not want this server to become a slave the next time it is launched or relaunched.

The scripts perform the following actions:

  • Assigns the "slave" role to the server.
  • Uses the most recently completed database backup (default) to initially populate the database, reducing the time needed for the slave to synchronize with the master.
  • Sends a request to the master server to allow connections from the slave server's private IP address and opens the default PostgreSQL client port (TCP port 5432) on the master server's firewall (i.e., iptables) for this purpose.
  • Schedules a cron job to run primary backups of the database once per hour (default). 

(Optional) Update the DNS Record for the Slave Database Server

If you created a DNS record for the slave database server, you can set a value for the Database Slave DNS Record ID input and run the db::do_set_dns_slave operational script.

Last modified
08:48, 14 Oct 2014



© 2006-2014 RightScale, Inc. All rights reserved.
RightScale is a registered trademark of RightScale, Inc. All other products and services may be trademarks or servicemarks of their respective owners.