To set up two PostgreSQL 9.1 database servers running in an asynchronously replicated (master/slave) configuration in a single deployment in a public or private cloud environment.
This tutorial describes the steps for launching PostgreSQL database servers running in an asynchronously replicated (master/slave) configuration in the cloud.
Prerequisite: Requires 'actor' user role privileges in the RightScale account.
In order to securely pass sensitive information to a script at runtime, you can use Credentials as a means of variable substitution. Later in this tutorial you will select these credentials when you define your inputs.
Create the following credentials. For more information on setting up credentials, see Create a New Credential.
* If you use Amazon Route 53 as your DNS provider, you do not need to set up separate DNS user name and password credentials because your AWS credentials are used for authentication purposes.
Depending on your cloud provider and backup storage selections, you may need to create additional credentials.
Amazon AWS
If you are using Amazon to make snapshot/binary backups of your database, you will need to use the following credentials. Fortunately, these credentials were automatically created when you added your AWS credentials to the RightScale account.
Note: These credentials are not listed under Design > Credentials.
Rackspace Cloud Files
If you are using Rackspace Cloud Files for storing binary database backups, you will need to create the following credentials.
The ServerTemplate contains scripts that can retrieve a database dump file from a container in one of the supported Remote Object Storage (ROS) providers (e.g. Amazon S3, Rackspace Cloud Files). See Database Dump Retrieval.
Create a new bucket/container and upload your database dump file. The file can remain a 'private' object because your cloud credentials can be used (as inputs) for authentication purposes to retrieve the file. Make sure the uploaded file maintains the .gz file extension.
Warning! The filename of the PostgreSQL dump file cannot contain a dash (-) in its prefix. For example, if your dump file is named 'my-app-201205030022.gz', you must manually rename it to 'my_app-201205030022.gz', replacing the dash with an underscore (_); otherwise, the script (do::do_dump_import) that imports the database dump file into the instance will fail.
If you are setting up a database server for testing purposes or if you do not have your own dump file, you can use the following sample PostgreSQL dump file to complete the tutorial. The sample is a gzip (.gz) file.
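As a sanity check before uploading, you can normalize a dump's filename in shell so that its prefix contains no dashes. This is a minimal sketch with an illustrative filename; creating the dump itself requires a running PostgreSQL server and is shown only as a comment:

```shell
#!/bin/sh
# Creating the dump requires a live server, e.g.:
#   pg_dump my_db | gzip > my-app-201205030022.gz

DUMP="my-app-201205030022.gz"   # illustrative name with a dash in its prefix

base="${DUMP%.gz}"              # my-app-201205030022
stamp="${base##*-}"             # 201205030022 (timestamp after the last dash)
prefix="${base%-*}"             # my-app (prefix before the timestamp)
safe="$(printf '%s' "$prefix" | tr - _)-${stamp}.gz"

echo "$safe"                    # prints: my_app-201205030022.gz
# Then rename the real file before uploading:  mv "$DUMP" "$safe"
```

The trailing dash before the timestamp is expected and is left intact; only dashes in the prefix are replaced.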
Follow these steps to add a database server to the deployment.
The next step is to define the properties of your database server or servers by entering values for inputs. It is simplest and best to do this at the deployment level. For a detailed explanation of how inputs are defined and used in Chef recipes and RightScripts, see Understanding Inputs.
The inputs that you need to provide values for will depend on which options you're going to use. The ServerTemplate is very flexible and supports a variety of different configurations. You must provide values for the required inputs based on the chosen options.
Go to the deployment's Inputs tab (Manage > Deployments > your deployment > Inputs) and click Edit.
Although you can enter values for missing inputs as text values, it's strongly recommended that you set up credentials for passing sensitive information to scripts such as passwords or any other sensitive data.
Rackspace only
If you use Rackspace for your database servers and backup storage (i.e., Cloud Files) the storage-related Chef recipes will use Rackspace Service Net (SNET) by default. SNET is Rackspace's internal private networking service for optimized communication between Rackspace Cloud Servers and Cloud Files. If SNET is not supported in your Rackspace environment, you must set the "Rackspace SNET Enabled for Backup" input to false; otherwise, all backup and restore operations that rely on Cloud Files will fail.
If the cloud supports the use of mountable volumes (e.g. AWS EBS Volumes, CloudStack volumes, etc.), primary backups will be saved as volume snapshots. It's strongly recommended that you use volumes to store the contents of the PostgreSQL database for efficiency and performance reasons.
However, if the cloud does not support mountable volumes (e.g. Rackspace First Generation), primary backups must be saved to a Remote Object Storage location. In such cases, the contents of the PostgreSQL database will be stored locally on the instance's ephemeral drive. Backups of the database will be stored as binary dump files to the specified object storage container.
Required
Input Name | Description | Example Value |
Number of Volumes in the Stripe (1) | To use striped volumes with your databases, specify a volume quantity. The default is 1, indicating no volume striping. Ignored for clouds that do not support volume-based storage (e.g. Rackspace Legacy/First Generation). | text: 1 |
Total Volume Size (1) | Specify the total size, in GB, of the volume or striped volume set used for primary storage. If dividing this value by the stripe volume quantity does not yield a whole number, each volume's size is rounded up to the nearest whole number. For example, if "Number of Volumes in the Stripe" is 3 and you specify a "Total Volume Size" of 5 GB, each volume will be 2 GB. If deploying on a CloudStack-based cloud that does not allow custom volume sizes, the smallest predefined volume size is used instead of the size specified here. This input is ignored for clouds that do not support volume storage (e.g., Rackspace Legacy/First Generation). Important: The value for this input does not describe the actual amount of space that's available for data storage because a percentage (by default, 10%) is reserved for taking LVM snapshots. Use the 'Percentage of the LVM used for data (1)' input to control how much of the volume stripe is used for data storage. Be sure to account for additional space that will be required to accommodate the growth of your database. | text: 10 NOTE: For Rackspace Open Cloud, the minimum volume size is 100 GB |
Percentage of the LVM used for data (1) | The percentage of the total Volume Group extents (LVM) that is used for data storage. The remaining percentage is reserved for taking LVM snapshots. (e.g. at 75%, 3/4 is used for data storage and the remaining 1/4 for overhead and snapshots) WARNING! If the database experiences a large amount of writes/changes, LVM snapshots may fail. In such cases, use a more conservative value for this input. (e.g. 50%) | text: 90% |
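The per-volume and usable-space arithmetic described above can be sketched with shell arithmetic. The values mirror the examples in the table; per-volume size is rounded up with integer ceiling division:

```shell
#!/bin/sh
TOTAL_GB=5        # "Total Volume Size"
STRIPE_COUNT=3    # "Number of Volumes in the Stripe"
DATA_PCT=90       # "Percentage of the LVM used for data"

# Each volume is rounded up to a whole number of GB: ceil(5 / 3) = 2
PER_VOLUME_GB=$(( (TOTAL_GB + STRIPE_COUNT - 1) / STRIPE_COUNT ))

# Space actually available for data; the remainder is reserved for LVM snapshots
USABLE_GB=$(( PER_VOLUME_GB * STRIPE_COUNT * DATA_PCT / 100 ))

echo "per-volume: ${PER_VOLUME_GB} GB, usable for data: ${USABLE_GB} GB"
```

Note that because of the round-up, the provisioned total (here, 3 × 2 = 6 GB) can exceed the requested "Total Volume Size".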
Advanced
Input Name | Description | Example Value |
Primary Backup Storage Cloud (default) | Input is ignored if volumes are supported. If the instance is launched into a cloud that does not support volumes, you must specify which ROS solution to use for storing primary backups. Backups are saved as binary dump files to a container that matches the value specified for the Backup Lineage Name input. If a matching container does not exist, one will be created. | No value/Ignore text: s3 |
Primary Backup Secret (default) | Input is ignored if volumes are supported. Required cloud credential to store a file in the ROS location specified by the Primary Backup Storage Cloud (default) input. | No value/Ignore cred: AWS_SECRET_ACCESS_KEY |
Primary Backup User (default) | Input is ignored if volumes are supported. Required cloud credential to store a file in the ROS location specified by the Primary Backup Storage Cloud (default) input. | No value/Ignore cred: AWS_ACCESS_KEY_ID |
Secondary Backup Storage Cloud (default) | The cloud provider of the specified ROS container where the secondary backup will be stored. | text: cloudfiles |
Secondary Backup Secret (default) | Required cloud credential to store a file in the ROS location specified by the Secondary Backup Storage Cloud (default) input. | cred: RACKSPACE_AUTH_KEY |
Secondary Backup User (default) | Required cloud credential to store a file in the ROS location specified by the Secondary Backup Storage Cloud (default) input. | cred: RACKSPACE_USERNAME |
Secondary Backup Storage Container (1) | Name of the ROS container to use for secondary backups. | text: postgresqlbackups |
Block Device Mount Directory (1) | Input is ignored if volumes are not supported. For cloud providers supporting volume-based storage, the mount point for your backup volume or volumes. (Default is /mnt/storage.) | text: /mnt/storage |
Nickname (1) | Input is ignored if volumes are not supported. For cloud providers supporting volume-based storage, the nickname will be used to name the created volumes and snapshots along with an epoch timestamp. (e.g. data_storage-201203100927) By default, this input is set to 'data_storage' however it's recommended that you create a nickname that describes your application or deployment, which will make it easier to identify the created volumes and snapshots. | text: my_deployment |
Input Name | Description | Example Value |
Database Admin Password Database Admin Username | Username and password of a database user with administrator privileges. The admin username and password are used for tasks that require administrator access to the database. | cred: DBADMIN_PASSWORD cred: DBADMIN_USER |
Database Application Password Database Application Username | Username and password of a database user with user-level privileges. The application username and password allow the application to access the database in a restricted fashion. Note: The username cannot start with a number. | cred: DBAPPLICATION_PASSWORD cred: DBAPPLICATION_USER |
Database Backup Lineage | The name associated with your primary and secondary database backups. It's used to associate them with your database environment for maintenance, restore, and replication purposes. Backup snapshots will automatically be tagged with this value. (e.g. rs_backup:lineage=postgresqlbackup) Backups are identified by their lineage name. Note: For servers running on Rackspace Legacy/First Gen, this value also indicates the Cloud Files container to use for storing primary backups. If a Cloud Files container with this name does not already exist, one will automatically be created. | text: postgresqlbackup |
Database Master FQDN | The fully qualified domain name that points to the master database server. Slave database servers and application servers will use the FQDN to locate the "master" database server. Typically, the DNS record will point to the Master-DB server's private IP address. | text: master-db.example.com |
Database Master DNS Record ID | The record ID or hostname used to identify your master database server to your DNS provider. See Deployment Prerequisites (Linux) for more information. | text: 1234567 |
Database Replication Password Database Replication User | Username and password of a database user with replication permissions on the PostgreSQL server. The replication username and password are used for replication between the "master" and "slave" database servers. | cred: DBREPLICATION_PASSWORD cred: DBREPLICATION_USER |
Database DNS TTL Limit | The TTL limit of the database servers' dynamic DNS records. It's recommended that you use a low TTL for your database servers' DNS records to promote quick failover. The default is 60 (seconds). If you are using Rackspace's Cloud DNS service for Rackspace cloud servers, set this value to 300 (the lowest allowable TTL for Cloud DNS). | text: 60 text: 300 (Cloud DNS only) |
Force Promote to Master | Determines whether the slave checks for a currently running master database server and demotes that master to a slave after a database server promotion. This input applies to scripts and cookbooks that use the db::do_promote_to_master operational script. | text: false |
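The admin, application, and replication users referenced by the credentials above can be prepared as SQL ahead of time. This is a hypothetical sketch: the role names and passwords are placeholders for your credential values, and applying the file requires a running PostgreSQL 9.1 server:

```shell
#!/bin/sh
# Write the role-creation SQL; names and passwords below are placeholders for
# your DBADMIN_*, DBAPPLICATION_*, and DBREPLICATION_* credential values.
cat > create_db_users.sql <<'SQL'
CREATE ROLE db_admin WITH LOGIN SUPERUSER PASSWORD 'admin_password';
-- Application username must not start with a number:
CREATE ROLE app_user WITH LOGIN PASSWORD 'app_password';
-- PostgreSQL 9.1 supports the REPLICATION role attribute:
CREATE ROLE repl_user WITH LOGIN REPLICATION PASSWORD 'repl_password';
SQL

# Apply against a live server (not run here):
#   psql -U postgres -f create_db_users.sql
echo "wrote create_db_users.sql"
```

Keeping the SQL in a file makes it easy to substitute the placeholder passwords from credentials at runtime rather than hard-coding them.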
Input Name | Description | Example Value |
DNS Service Provider | Select the DNS provider that you used to create the DNS records for the database servers.
| text: DNSMadeEasy |
DNS Password | The password/key required to update the DNS record of a master/slave database server with the specified DNS service provider.
| cred: DNS_PASSWORD |
DNS User | The username required to update the DNS record of a master/slave database server with the specified DNS service provider.
| cred: DNS_USER |
Cloud DNS region | If 'CloudDNS' is the chosen 'DNS Service Provider', select the appropriate cloud region based on the location of the Rackspace cloud servers. Note: This input is ignored unless you are using CloudDNS. | text: Chicago |
After configuring your inputs, launch your newly configured master database server.
Wait for the server to reach the "operational" state before you run a script to initialize the database server.
The script performs the following actions:
Since you have not yet loaded an actual database onto the server, there is no reason to create a primary backup of the database.
Go to the "current" server's Scripts tab and run the db::do_primary_backup_schedule_disable operational script to disable your scheduled backups (cron jobs).
Later, once you have imported your database, you will reverse this action and enable continuous backups.
After initializing the master database server and disabling scheduled backups, you will need to add your database (or databases) and records to it.
Note: If you use a previous backup snapshot instead of a PostgreSQL dump file or initialize a blank PostgreSQL database, refer to the Database Manager for PostgreSQL 9.1 (v13.5 LTS) - Runbook for instructions.
Input Name | Description | Example Value |
Dump Container | Name of the ROS container that contains the PostgreSQL database dump file. | text: postgresqldumps |
Database Schema Name | Name of the PostgreSQL database schema to restore from the PostgreSQL dump file identified by the "Dump Prefix" input. This name is set when you import the dump file into PostgreSQL; it is defined only within the PostgreSQL instance, not within the actual dump file, so the name is somewhat arbitrary but should be descriptive. | text: my_db_schema For the 'app_test-20121603072614.gz' PostgreSQL dump file: text: app_test |
Dump Prefix | The prefix of the PostgreSQL dump file (without the associated .gz extension) to retrieve from the Remote Object Store location specified in "Dump Container." You can specify either the entire file name including the timestamp or just the file prefix without the timestamp, which selects the most recent dump file with that prefix. Example: If your dump file is named "mydb-20121603072614.gz," you could specify either "mydb-20121603072614" or "mydb." | For the provided sample dump file: text: app_test |
Dump Storage Account ID | Required cloud credential to retrieve a private file from the specified ROS location. Set to 'Ignore' if the file is publicly accessible. | cred: AWS_ACCESS_KEY_ID cred: RACKSPACE_USERNAME |
Dump Storage Account Secret | Required cloud credential to retrieve a private file from the specified ROS location. Set to 'Ignore' if the file is publicly accessible. | cred: AWS_SECRET_ACCESS_KEY cred: RACKSPACE_AUTH_KEY |
Dump Storage Account Provider | The remote object storage provider where your PostgreSQL dump file is stored. | text: s3 |
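The "Dump Prefix" behavior described above, where a bare prefix selects the most recent matching dump, can be illustrated with a quick shell sketch. The two filenames are hypothetical, and the actual script's selection logic may differ in detail:

```shell
#!/bin/sh
# Simulate a container holding two dumps in the same lineage
workdir=$(mktemp -d)
cd "$workdir"
touch app_test-20120101000000.gz app_test-20121603072614.gz

# A bare prefix ("app_test") matches all dumps in the lineage; sorting the
# names lexically puts the newest timestamp last
latest=$(ls app_test-*.gz | sort | tail -n 1)

echo "$latest"    # prints: app_test-20121603072614.gz
```

Specifying the full name including the timestamp instead pins the import to one exact dump file.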
You are now ready to create the first primary backup of the database. You will need a completed backup in order to initialize a slave database server.
It is now safe to enable continuous backups of the database server.
For more information about configuring and modifying your scheduled backup policy, see the Database Manager for PostgreSQL 9.1 (v13.5 LTS) - Runbook.
Although you can run PostgreSQL in single-server mode and a separate slave server is not required for replication, running a slave is strongly recommended for failover purposes. Create a slave server in your deployment.
Make sure the following conditions are true before you launch the second database server.
You are now ready to launch a "slave" database server for failover and redundancy purposes.
DB (advanced)
Input Name | Description | Example Value |
Init Slave at Boot | Set to 'True' to have the instance initialize at boot as a "slave" of the running master database server. | text: true |
The scripts perform the following actions:
If you created a DNS record for the slave database server, you can set a value for the Database Slave DNS Record ID input and run the db::do_set_dns_slave operational script.
© 2006-2014 RightScale, Inc. All rights reserved.
RightScale is a registered trademark of RightScale, Inc. All other products and services may be trademarks or servicemarks of their respective owners.