
GlusterFS Setup

Prerequisites

GlusterFS requires a security group with specific ports opened to itself. Ensure that TCP ports 111, 24007, 24008, and 24009 through (24009 + the number of bricks across all volumes) are open on all GlusterFS servers.

You will need to open the ports above both to the GlusterFS security group itself (add the security group as its own ingress source) and to the security group of the clients accessing the GlusterFS nodes.
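
For concreteness, the Python sketch below computes the full port list for a given brick count; the four-brick total in the example is hypothetical:

    # Compute the TCP ports GlusterFS 3.2 needs open between its nodes.
    def gluster_ports(total_bricks):
        base = [111, 24007, 24008]  # portmapper and glusterd management ports
        # Brick ports start at 24009; the range given above is inclusive.
        bricks = range(24009, 24009 + total_bricks + 1)
        return base + list(bricks)

    # Example: two replicated volumes of two bricks each (4 bricks total).
    print(gluster_ports(4))
    # [111, 24007, 24008, 24009, 24010, 24011, 24012, 24013]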

Steps

Import and Clone the ServerTemplate

  1. Import the GlusterFS 3.2 Server ServerTemplate from the Marketplace.
  2. Clone the ServerTemplate to create an editable copy.
  3. Commit the revision.

Add the Server to a Deployment

  1. Use the committed, cloned ServerTemplate to add a server to your deployment.
  2. Clone the server enough times to fulfill your 'Replica Count' input (one server per brick).

Launch All the Nodes (Bricks)

Configure Inputs

Configure the following inputs at the deployment level:

  Storage Path: Location of the brick data on the node. The default will try to use any local ephemeral storage. Recommended value: text:/mnt/ephemeral/glusterfs
  Volume Name: Sets the name of the exported GlusterFS volume. Recommended value: text:glusterfs
  Replica Count: Number of replica instances (one per brick) that will be started and set up to form a cluster. You must launch this number of instances to initially set up the cluster. Recommended value: text:2
  Volume Type: The type of volume to build. Replicated and Distributed are options, though only Replicated is currently supported with expected behavior. Recommended value: text:Replicated
  Replace Brick: If a node is to be replaced, this is the 'bricknum' tag value (the brick number) of the node being replaced. Recommended value: text:Ignore
  Force Brick Replace: If a node is completely unresponsive, this option forces a configuration change without migrating the data from the dead node. Recommended value: text:No
  1. Click Save.
  2. Launch the servers. Wait for the servers to reach the Operational state before proceeding to the next step.

Create the GlusterFS Cluster

  1. Click on the 'Scripts' tab in your deployment.
  2. Run the following recipe on any one of the instances (only on a single one): glusterfs::server_create_cluster. It finds the other servers tagged 'spare=true' and initializes the GlusterFS volume, as sketched below.
  3. Click on the Servers tab or the Audit tab and wait until all the recipes have completed. On the Servers tab, you can tell that the recipe has completed when every instance's 'spare=true' tag has been removed and replaced with a 'bricknum=n' tag, where 'n' is the brick number within the volume.
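
Behind the scenes, creating the cluster in GlusterFS 3.2 comes down to probing the peer nodes and then creating and starting the volume. The following Python sketch is illustrative only; the hostnames are hypothetical, and the real recipe derives the peer list from the 'spare=true' tags and the deployment inputs:

    import subprocess

    peers = ["gluster-1", "gluster-2"]       # hypothetical node hostnames
    brick_path = "/mnt/ephemeral/glusterfs"  # matches the Storage Path input
    volume, replica = "glusterfs", 2         # match Volume Name / Replica Count

    # Probe every other node from the node running the recipe.
    for host in peers[1:]:
        subprocess.check_call(["gluster", "peer", "probe", host])

    # Create and start a replicated volume with one brick per node.
    bricks = ["%s:%s" % (host, brick_path) for host in peers]
    subprocess.check_call(["gluster", "volume", "create", volume,
                           "replica", str(replica)] + bricks)
    subprocess.check_call(["gluster", "volume", "start", volume])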

Create the GlusterFS Client (optional)

  1. Import the Base ServerTemplate for Linux (v13.1) from the Marketplace: http://www.rightscale.com/library/se...-/lineage/8160
  2. Clone the ServerTemplate to create an editable copy.
  3. Add the following recipe to the cloned ServerTemplate: glusterfs::client_mount_volume

GlusterFS Client Inputs

  Mount point: Location on the local filesystem where you want to access the mounted GlusterFS volume. Recommended value: text:/mnt/glusterfs
  Mount Options: Sets any specific or custom mount options you may have. Recommended value: text:Ignore
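
For reference, mounting the volume with the native GlusterFS client comes down to a single mount call, which is roughly what the glusterfs::client_mount_volume recipe automates. The Python sketch below is illustrative, assuming a hypothetical server hostname and the recommended input values above:

    import os
    import subprocess

    server = "gluster-1"            # hypothetical: any node in the cluster
    volume = "glusterfs"            # the Volume Name input
    mount_point = "/mnt/glusterfs"  # the Mount point input

    os.makedirs(mount_point, exist_ok=True)
    subprocess.check_call(["mount", "-t", "glusterfs",
                           "%s:/%s" % (server, volume), mount_point])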

Recover a GlusterFS Brick (optional)

Eventually, a situation will arise that requires you to migrate a brick to another node (instance). If the source node that you want to migrate from is still active:

  1. Clone the source server, and launch the clone.
  2. Once the clone is Operational and you see the 'spare=true' tag, run the glusterfs::server_live_migrate recipe on any single node. Make sure that the 'Replace Brick' input is set to the correct brick number (the 'bricknum' tag value).
  3. (optional) After the recipe successfully completes, you may want to initiate a 'self-heal' of the cluster. This is done on any connected client by running `find` over the mount (see the sketch after this list). The process is described in detail here: http://www.gluster.org/community/doc...l_on_Replicate
  4. If the recipe fails to complete, or the source node is dead, follow the same procedure above but set the 'Force Brick Replace' input to true. This forces a configuration change without attempting to migrate any data.
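
The self-heal in step 3 works because reading a file's metadata from a client mount prompts the replicate translator to repair that file; the `find`-based procedure simply stats every entry. A rough Python equivalent, assuming the recommended client mount point:

    import os

    mount_point = "/mnt/glusterfs"  # the Mount point input on the client

    # Walking the mount and stat()ing every entry triggers self-heal on
    # replicated volumes, the same effect as the find-based procedure.
    for dirpath, dirnames, filenames in os.walk(mount_point):
        for name in dirnames + filenames:
            os.lstat(os.path.join(dirpath, name))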

Firewalls (optional)

On clouds without native security groups, you may enable firewall support by adding the sys_firewall::do_list_rules and sys_firewall::setup_rule recipes, placing them after the sys_firewall::default recipe. Open the same ports as described under 'Prerequisites' above; the sketch below illustrates what the resulting rules amount to.
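
As a rough illustration, the Python sketch below prints one iptables command per required port. The brick count is a hypothetical example, and in practice you would restrict the rules' source to your GlusterFS servers and clients rather than accepting traffic from anywhere:

    total_bricks = 2  # hypothetical: bricks across all volumes
    ports = [111, 24007, 24008] + list(range(24009, 24009 + total_bricks + 1))

    for port in ports:
        print("iptables -A INPUT -p tcp --dport %d -j ACCEPT" % port)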
