 doc/mgmt_manual.md | 143 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 143 insertions(+)
New commits:

commit 519042c2751772e24192210cc40afd29e8d50f4a
Author: Jeff Darcy <jdarcy@redhat.com>
Date:   Thu May 12 16:30:21 2011 -0400
Added management manual.
diff --git a/doc/mgmt_manual.md b/doc/mgmt_manual.md
new file mode 100644
index 0000000..bfcbbe9
--- /dev/null
+++ b/doc/mgmt_manual.md
@@ -0,0 +1,143 @@

= CloudFS Management Manual =

The CloudFS management system consists of two parts: a very simple web-based
management daemon called cloudfsd, and scripts to perform various discrete
functions. Most of the management functionality is in scripts that can be
called either from cloudfsd or directly from the command line; some
functionality is implemented in cloudfsd itself, and there are a couple of
command-line-only scripts. Because CloudFS is a distributed system, cloudfsd
must be running on all servers even when you use the CLI, since the scripts
use its HTTP interface to perform actions on other nodes. If cloudfsd is not
running, or if other nodes are unavailable, some scripts might appear to work
while leaving configuration data or operational state inconsistent.

This manual is divided into sections representing the major types of entities
that can be managed via cloudfsd. Descriptions are given primarily in terms of
the web interface, with CLI equivalents pointed out where necessary.

== Main Page ==

You can access the main web interface by connecting to port 8080 (the default)
on a node running cloudfsd. You will be presented with options to do one of
the following:

 * Manage Servers
 * Manage Volumes
 * Manage Tenants

== Managing Servers ==

The simplest entities to manage are servers. If you click on "Manage Servers"
you will be shown a list of servers that are currently members of the
CloudFS/GlusterFS cluster, including the node where cloudfsd is running. There
is also a form that you can use to add a node; this starts the GlusterFS
daemons (but not cloudfsd) on that node and invokes GlusterFS to have the node
join the cluster.

The CLI equivalents for these functions are "gluster peer status" to see
servers, and "cfs_add_node" to add one.

== Managing Volumes ==

When you click on "Manage Volumes" you will be shown a page containing several
sections:

 * A list of current volumes. For each volume, there are links to
   perform various actions on that volume, followed by a list of "bricks"
   which are part of the volume.

 * A form to add "bricks" (server directories) from which volumes may
   be composed.

 * A form to create a volume, including selection of bricks and other
   parameters.

In the existing-volume list, the following actions are possible:

 * Manage tenant access to the volume (TBD).

 * Start the volume.

 * Stop the volume.

 * Remove the volume.

In the brick-addition form, you can add one or more bricks. To add a single
brick, simply type in the server and path separated by a colon (e.g.
"server1:/bricks/xyz"). To add multiple bricks, you can use various kinds of
wild-card patterns within either the server or the path part of the input:

 * Character ranges, such as [a-j]

 * Numeric ranges, such as [5-11]

 * Lists of alternatives, such as {foo,bar}

Wild-card expansion is repeated until no more expansions are possible, so a
specification like server[1-3]:/{big,small}_bricks/volume[1-4] would expand
to 24 bricks (3 servers x 2 directories x 4 volume directories), as
illustrated below.
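The expansion itself is done by cloudfsd's scripts, but the way the counts
multiply is easy to see with ordinary shell brace expansion, which behaves
similarly in spirit (note that bash writes numeric ranges as {1..3} rather
than the [1-3] form accepted by the brick form). A minimal illustration,
assuming a bash shell:

    # Count the cross product of 3 servers x 2 directories x 4 volume
    # directories. Bash's native {a,b} and {1..N} expansion mirrors the
    # [1-3]/{big,small} patterns accepted by the brick-addition form.
    echo server{1..3}:/{big,small}_bricks/volume{1..4} | wc -w
    # Output: 24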
In the volume-creation form, you can select which bricks will be part of a
volume, plus the following parameters:

 * Volume name

 * Distribution type (plain/replicated/striped)

 * Replica or stripe count

Once these parameters have been selected, GlusterFS is invoked to create the
"base" volume, and then CloudFS takes additional configuration steps based on
that.

The CLI equivalents for these functions are:

 * "cfs_list_volumes" to list volumes and associated bricks

 * "cfs_add_directory" to add bricks/directories

 * "cfs_add_volume" to create a new volume

 * "cfs_rm_volume" to remove a volume

 * "cfs_start_volume" and "cfs_stop_volume" to start/stop a volume

== Managing Tenants ==

When you click on "Manage Tenants" you will be shown a screen with two parts:

 * A list of current tenants. For each tenant there is a link to
   manage which volumes that tenant can access and a link to delete the
   tenant.

 * A form to add a new tenant.

The tenant list also includes, for each tenant, the credentials the tenant uses
to access CloudFS volumes. This is currently a plain-text password, which is
Very Bad, but very soon it will be a certificate location instead. The form
for adding a tenant lets you specify a name and a credential.

The CLI equivalents for these functions are:

 * "cfs_list_tenants" to list tenants (including which volumes are
   enabled for each)

 * "cfs_add_tenant" to add a tenant

 * "cfs_delete_tenant" to delete a tenant

== Managing Tenant Access To Volumes ==

There are two ways to manage the relationships between volumes and tenants:

 * From the volume-management page, manage the list of tenants allowed
   to use a particular volume (TBD).

 * From the tenant-management page, manage the list of volumes to which
   a particular tenant has access.

In either case, the management takes the form of checkboxes that indicate
which volume/tenant connections are valid, plus an "Update" button to have
any changes take effect.
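To close, here is a rough command-line walkthrough that strings together the
scripts named throughout this manual, one per management task. Everything
beyond the script names themselves (the hostname server4, the brick path
/bricks/xyz, the volume name myvol, the tenant name tenant1, and the password
shown) is a made-up placeholder, and the exact arguments each cfs_* script
expects are assumptions here; check each script's own usage before relying on
this, and treat it as a sketch rather than copy-and-paste commands.

    # Rough end-to-end sequence using the scripts named in this manual.
    # Argument forms are assumptions for illustration only.
    gluster peer status                      # list current cluster members
    cfs_add_node server4                     # assumed: new server's hostname
    cfs_add_directory server4:/bricks/xyz    # assumed: server:/path brick spec
    cfs_add_volume myvol server4:/bricks/xyz # assumed: volume name plus brick(s)
    cfs_start_volume myvol                   # assumed: volume name
    cfs_list_volumes                         # list volumes and their bricks
    cfs_add_tenant tenant1 secretpw          # assumed: tenant name plus credential
    cfs_list_tenants                         # list tenants and enabled volumes
    cfs_stop_volume myvol                    # assumed: volume name
    cfs_rm_volume myvol                      # assumed: volume name
    cfs_delete_tenant tenant1                # assumed: tenant name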