A major benefit of using a workflow solution is the consistency and availability properties provided by the system. A simple litmus test for identifying the need for a workflow-based solution is answering the question, "Where does the state live?" If it is OK for the state to live on a single system, then running a regular script or program there is probably a simpler solution. Another way to ask the question is this: "Is there a system whose failure makes the operation irrelevant?" If such a system exists, then it should be the one driving the operation and keeping the state. For example, if the operation consists of cleaning the temporary files of an instance, then that instance failing makes the operation irrelevant. Therefore, it is OK for the instance to be responsible for scheduling the operation.
For distributed operations, there is often no simple answer to where state can be kept or where a conventional program can be run. The operation must continue regardless of individual system failures. For example, if the operation consists of analyzing the metrics data on all the instances in a deployment and taking action if certain values reach a given threshold, then no single instance is a good place to run the operation. A separate "management server" isn't a good fit either, as it can fail as well. In this case, using a workflow-based solution makes sense: the workflow engine is in charge of keeping the state and making sure that the operation is either carried out or gives proper feedback in case of failure. This is not the only reason to use a workflow solution, but state bookkeeping is one of its main benefits. (Managing concurrent execution of activities is another big one, described in the Cloud Workflow Processes section.)
The state of a process is kept in variables and references. References were already covered in Cloud Workflow Resources; they contain collections of resources. A Cloud Workflow variable may contain a number, a string, a boolean, a time value, the special null value, an array, or a hash. Variables are initialized directly via literal values in code, from resource fields, or from the results of a computation. A variable name must start with $ and may contain letters, numbers, and underscores. As with references, the naming convention is to use lower_case. For example:
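One possible sketch (the exact API call and filter syntax are illustrative assumptions, not the canonical form):

```
@servers = rs.servers.get(filter: ["deployment_href==/deployments/123"])
$names = @servers.name[]
```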
The above code initializes the $names variable to an array containing the names of all servers in deployment with href /deployments/123.
Accessing a variable that has not been initialized is not an error, it just returns the value null.
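A hash can be assigned directly with a literal; for instance (the field names below are illustrative):

```
$object = { "name": "my_server", "cpu_count": 2 }
```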
The above will initialize the $object variable with the given object (hash).
The value stored at a given key of an object can be read using the bracket operator [ ], such as:
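For instance, assuming a hash stored in $object (the key name is illustrative):

```
$name = $object["name"]
```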
The value stored at a given index of an array field value can also be read using the bracket operator [ ]. For example:
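For instance, with an array of names stored in $names (the variable and index are illustrative):

```
$first_name = $names[0]
```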
Note: Array indices are 0-based.
d"year/month/day [hours:minutes:seconds] [AM|PM]"
Note: Date-time values do not include a timezone. If needed, the timezone information must be stored separately.
If no time is specified then midnight is used (00:00:00). If no AM/PM value is specified then AM is assumed unless the hour value is greater than 12.
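Putting the format together, a date-time literal might look like this (the specific values are illustrative):

```
$timestamp = d"2014/05/27 10:30:00 AM"
```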
The language supports a range of operators for dealing with variables, including arithmetic operators to manipulate numbers, operators to concatenate strings, arrays, and objects, as well as logical operators. Some examples are listed below:
Concatenation / Merging:
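A sketch of concatenation and merging with the + operator (the values are illustrative, assuming the semantics described above):

```
$text   = "Hello, " + "World"       # string concatenation
$all    = [1, 2] + [3, 4]           # array concatenation
$merged = { "a": 1 } + { "b": 2 }   # hash merging
```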
Naturally variables can be used wherever values can. The complete list of available operators can be found in the Operators section.
RCL supports two kinds of variables and references: local variables and references are accessible only from a limited scope (defined below), while global variables and references are accessible throughout the execution of the process that defines them.
A block in RCL is code contained inside a define, a sub, an if, or a loop expression (while, foreach, or map).
Both local references and variables are scoped to their containing block and all children blocks. This means that a variable initialized in a parent block can be read and modified by child blocks. Consider the following:
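A minimal sketch, using a sub block as the child block (an assumption for illustration):

```
define main() do
  $variable = 1
  sub do
    # Child block: reads and modifies the parent's variable
    $variable = 2
  end
  # Back in the parent block, $variable is now 2
end
```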
The scope of $variable in the example above covers the child block, so modifying that variable there affects the value in the parent block (or, more exactly, both the child and parent blocks have access to the same variable).
Note that the scope is limited to the block where a variable or a reference is first defined and its child blocks. In particular, the value cannot be read from a parent block:
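For example (a minimal sketch):

```
define main() do
  sub do
    $inner = 42
  end
  # $inner was defined only inside the child block,
  # so reading it here yields null
end
```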
The only local variables and references that are defined when a definition starts are the ones passed as arguments. Consider the following:
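A sketch along these lines, reusing the names discussed below (the API calls and filter values are illustrative assumptions):

```
define main() do
  @web_servers = rs.servers.get(filter: ["name==web"])
  @other_servers = rs.servers.get(filter: ["name==db"])
  call launch_servers(@web_servers)
end

define launch_servers(@servers) do
  # Only @servers is in scope here: it was passed as an
  # argument. @other_servers is not accessible.
  @servers.launch()
end
```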
In the example above, the launch_servers definition does not have access to the @other_servers local reference because it is not listed as an argument.
Sometimes it is useful to retrieve a value defined in an inner block or in one of the sub definitions called via call. Global references and global variables can be used for that purpose as their scope is the entire process rather than the current block. Global references are prefixed with @@, while global variables are prefixed with $$:
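A minimal sketch (the rs.servers.get() and size() calls are assumptions for illustration):

```
define main() do
  call count_servers()
  # @@servers and $$count were set inside count_servers()
  # but remain visible here
end

define count_servers() do
  @@servers = rs.servers.get()
  $$count = size(@@servers)
end
```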
Global references and variables exist for the lifetime of the process, independently of where they are set:
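For example (a minimal sketch):

```
define main() do
  sub do
    $$setting = "enabled"
  end
  # Unlike a local variable, $$setting remains defined here:
  # its lifetime is the whole process, not just the sub block
end
```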
Note: The best practice is to use return values to retrieve values from a different definition via the return and retrieve attributes, as shown in the Cloud Workflows section.
Special care needs to be taken when using global references or variables in a process that involves concurrency. For more details, see Cloud Workflow Processes.
© 2006-2014 RightScale, Inc. All rights reserved.
RightScale is a registered trademark of RightScale, Inc. All other products and services may be trademarks or servicemarks of their respective owners.