cluster: STABLE3 - man pages: fenced, fence_tool, fence_node, fence_ack_manual

David Teigland teigland at fedoraproject.org
Mon Dec 21 23:10:22 UTC 2009


Gitweb:        http://git.fedorahosted.org/git/cluster.git?p=cluster.git;a=commitdiff;h=aa2747f28b0541947dc0b718484cf5278f46c2e1
Commit:        aa2747f28b0541947dc0b718484cf5278f46c2e1
Parent:        495647eca88ccfb0505264ee9864d620517e70ff
Author:        David Teigland <teigland at redhat.com>
AuthorDate:    Mon Dec 21 18:02:01 2009 -0600
Committer:     David Teigland <teigland at redhat.com>
CommitterDate: Mon Dec 21 18:04:20 2009 -0600

man pages: fenced, fence_tool, fence_node, fence_ack_manual

Update for cluster3.

Signed-off-by: David Teigland <teigland at redhat.com>
---
 fence/man/fence.8            |   30 -----
 fence/man/fence_ack_manual.8 |   55 ++++----
 fence/man/fence_node.8       |  133 ++++++++++++++++---
 fence/man/fence_tool.8       |   70 ++++++----
 fence/man/fenced.8           |  297 +++++++++++++++++++++++-------------------
 5 files changed, 350 insertions(+), 235 deletions(-)

diff --git a/fence/man/fence.8 b/fence/man/fence.8
deleted file mode 100644
index 85b1dba..0000000
--- a/fence/man/fence.8
+++ /dev/null
@@ -1,30 +0,0 @@
-.TH fence 8
-
-.SH NAME
-fence \- I/O Fencing reference guide
-
-.SH SYNOPSIS
-Overview of related manual pages
-.SH DESCRIPTION
-The I/O Fencing documentation has been split into a number of sections.  Please
-refer to the table below to determine which man page coincides with the
-command/feature you are looking for.
-
-.TP 20
-fence
-I/O Fencing overview (this man page)
-.TP
-fenced
-I/O Fencing daemon
-.TP
-fence_tool
-Manages fenced
-.TP
-fence_node
-Runs the fence agent configured (per cluster.conf) for the given node.
-.TP
-fence_*
-Fence agents run by fenced.
-
-.SH SEE ALSO
-gfs(8)
diff --git a/fence/man/fence_ack_manual.8 b/fence/man/fence_ack_manual.8
index e2ba505..6b4cd14 100644
--- a/fence/man/fence_ack_manual.8
+++ b/fence/man/fence_ack_manual.8
@@ -1,36 +1,39 @@
-.TH fence_ack_manual 8
+.TH FENCE_ACK_MANUAL 8 2009-12-21 cluster cluster
 
 .SH NAME
-fence_ack_manual - program run by an operator as a part of manual I/O Fencing
+fence_ack_manual \- a program to override fenced fencing operations
 
 .SH SYNOPSIS
-.B
-fence_ack_manual
-[\fIOPTION\fR]...
+.B fence_ack_manual
+[OPTIONS]
+.I nodename
 
 .SH DESCRIPTION
-fence_ack_manual is run by an operator on the same node that fence_manual(8) 
-was run after the operator has reset a node which required fencing.  A message 
-in the system log indicates to the operator that they must reset a machine and 
-then run fence_ack_manual.  Running fence_ack_manual allows the cluster to 
-continue with recovery of the fenced machine.  The victim may be disconnected 
-from storage rather than resetting it.
+When
+.BR fenced (8)
+fails to fence a node, it retries indefinitely.
+.BR fence_ack_manual (8)
+tells fenced to stop retrying and consider the node fenced.
+
+.P
+It is important that this only be done after the node has been manually
+turned off or prevented from writing to shared storage.
+Without this manual action and verification, the storage that fencing
+protects may become corrupted.
+
+.P
+When fenced fences a node that has no fence devices defined in the cluster
+configuration, the fencing operation fails.  This failure will be repeated
+indefinitely until fence_ack_manual is run by an operator to indicate
+the node is in a safe state to proceed.
+(Defining no fencing devices for a node is the equivalent of using the
+fence_manual agent in previous versions.)
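
As a sketch of the workflow above (the node name node2 is illustrative):

  # fenced logs repeated fencing failures for node2; after manually
  # powering node2 off and verifying it is down, an operator runs:
  fence_ack_manual node2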
 
 .SH OPTIONS
 .TP
-\fB-h\fP
-Print out a help message describing available options, then exit.
-.TP
-\fB-O\fP
-Run without prompting for user confirmation.
-.TP
-\fB-n\fP \fInodename\fP
-Name of node that has been reset or disconnected from storage.
-.TP
-\fB-s\fP \fIIPaddress\fP
-IP address of the machine which has been reset or disconnected from storage.  (Deprecated; use -n instead.)
-.TP
-\fB-V\fP
-Print out a version message, then exit.
+.B \-h
+Print a help message describing available options, then exit.
+
 .SH SEE ALSO
-fence(8), fence_node(8)
+.BR fenced (8)
+
diff --git a/fence/man/fence_node.8 b/fence/man/fence_node.8
index f109283..2cf2d00 100644
--- a/fence/man/fence_node.8
+++ b/fence/man/fence_node.8
@@ -1,34 +1,127 @@
-.TH fence_node 8
+.TH FENCE_NODE 8 2009-12-21 cluster cluster
 
 .SH NAME
-fence_node - A program which performs I/O fencing on a single node.
+fence_node \- a utility to run fence agents
 
 .SH SYNOPSIS
-.B
-fence_node
-[\fIOPTION\fR]...
+.B fence_node
+[OPTIONS]
+.I nodename
 
 .SH DESCRIPTION
-\fBfence_node\fP is a program that reads the fencing settings from
-cluster.conf (through libccs/ccsd) for the given node and then runs the
-configured fencing agent against the node.
+This utility runs a fence agent against
+.IR nodename .
+The agent and args are taken from the running cluster configuration based on
+.BR cluster.conf (5).
+
+.P
+.B fence_node
+is a wrapper around the libfence functions: fence_node() and unfence_node().
+These libfence functions use libccs to read the node fencing configuration,
+which means that corosync (with cman and ccs) must be running to use
+.BR fence_node (8).
+
+.P
+The
+.BR fenced (8)
+daemon is the main user of libfence:fence_node(), and the
+configuration details for that function are given in the
+.BR fenced (8)
+man page.
+
+.SS Fencing vs. Unfencing
+
+The main use for unfencing is with storage/SAN (non-power) agents.
+
+.P
+When using power-based fencing agents, the fencing action itself is
+supposed to turn a node back on after first turning the power off (this happens
+automatically with a "reboot" action, and needs to be configured
+explicitly as "off" + "on" otherwise).
+
+.P
+When using storage-based fencing agents, the fencing action is not allowed
+to re-enable a node after disabling it.  Re-enabling a fenced node is only
+safe once the node has been rebooted.  A natural way to re-enable a fenced
+node's access to storage is for that node to re-enable the access itself
+during its startup process.  The cman init script calls fence_node -U
+(nodename defaults to local nodename when unfencing).  Unfencing a node
+without an <unfence> configuration (see below) is a no-op.
+
+.P
+The basic differences between fencing and unfencing:
+.P
+.BR Fencing
+.IP 1. 3
+libfence: fence_node(), command line: fence_node nodename
+.IP 2. 3
+Turns off or disables a node.
+.IP 3. 3
+Agents run with the default action of "off", "disable" or "reboot".
+.IP 4. 3
+Performed by a cluster node against another node that fails (by the fenced daemon).
+.P
+.BR Unfencing
+.IP 1. 3
+libfence: unfence_node(), command line: fence_node -U nodename
+.IP 2. 3
+Turns on or enables a node.
+.IP 3. 3
+Agents run with the explicit action of "on" or "enable".
+.IP 4. 3
+Performed by a cluster node "against" itself during startup (by the cman init script).
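
As a brief usage sketch of the two invocations (the node name node1 is
illustrative):

  # fence node1 using the agent and args from the running cluster
  # configuration (default action "off", "disable" or "reboot"):
  fence_node node1

  # unfence the local node, as the cman init script does at startup;
  # a no-op unless an <unfence> section is configured for the node:
  fence_node -U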
 
 .SH OPTIONS
 .TP
-\fB-h\fP
-Help.  Print out the usage syntax.
+.B \-U
+Unfence the node (nodename defaults to the local node name).
 .TP
-\fB-O\fP
-Force a connection to CCS.  This overrides the usual
-requirement that the cluster be quorate to get information from ccs.
+.B \-v
+Show fence agent results, \-vv to also show agent args.
 .TP
-\fB-V\fP
-Print version information.
-
-.SH EXAMPLES
+.B \-h
+Print a help message describing available options, then exit.
 .TP
-To fence a node called ``bellerophon'':
-prompt> fence_node bellerophon
+.B \-V
+Print program version information, then exit.
+
+.SH FILES
+
+The Unfencing/unfence_node() configuration is very similar to the
+Fencing/fence_node() configuration shown in
+.BR fenced (8).
+Unfencing is only performed for a node with an <unfence> section:
+
+.nf
+  <clusternode name="node1" nodeid="1">
+          <fence>
+          </fence>
+          <unfence>
+          </unfence>
+  </clusternode>
+.fi
+
+The <unfence> section does not contain <method> sections like the <fence>
+section does.  It contains <device> references directly, which mirror the
+corresponding device sections for <fence>, with the notable addition of
+the explicit action of "on" or "enable".  The same <fencedevice> is
+referenced by both fence and unfence <device> lines, and the same per-node
+args should be repeated.
+
+.nf
+  <clusternode name="node1" nodeid="1">
+          <fence>
+          <method name="1">
+          <device name="myswitch" foo="x"/>
+          </method>
+          </fence>
+
+          <unfence>
+          <device name="myswitch" foo="x" action="on"/>
+          </unfence>
+  </clusternode>
+.fi
 
 .SH SEE ALSO
-fence(8), ccs(7)
+.BR fenced (8)
+
diff --git a/fence/man/fence_tool.8 b/fence/man/fence_tool.8
index 625fbe0..237745d 100644
--- a/fence/man/fence_tool.8
+++ b/fence/man/fence_tool.8
@@ -1,42 +1,60 @@
-.TH fence_tool 8
+.TH FENCE_TOOL 8 2009-12-21 cluster cluster
 
 .SH NAME
-fence_tool - A program to join and leave the fence domain
+fence_tool \- a utility for the fenced daemon
 
 .SH SYNOPSIS
-.B
-fence_tool
-<\fBjoin | leave | ls | dump\fP> 
-[\fIOPTION\fR]...
+.B fence_tool
+[COMMAND] [OPTIONS]
 
 .SH DESCRIPTION
-\fBfence_tool\fP is a program used to join or leave the default fence
-domain.  It communicates with the fenced daemon.  Before telling fenced
-to join the domain, fence_tool waits for the cluster to have quorum,
-making it easier to cancel the command if the cluster is inquorate.
-
-The dump option will read fenced's ring buffer of debug messages and print
-it to stdout.
+This utility controls and queries the
+.BR fenced (8)
+daemon with the following commands:
+.TP
+.B join
+join the fence domain.
+.TP
+.B leave
+leave the fence domain.
+.TP
+.B dump
+print the fenced internal debug buffer on stdout.
+.TP
+.B ls
+display internal fenced state.
+.P
+The leave command will not be sent to fenced if fence_tool detects that
+any instances of gfs or dlm are in use.
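
A typical sequence using these commands might look like the following
sketch (the timeout values are illustrative):

  # join the fence domain, waiting up to 60 seconds for quorum
  # and for the join to complete:
  fence_tool join -q 60 -w 60

  # show domain state, including per-node information:
  fence_tool ls -n

  # print the internal debug buffer:
  fence_tool dump

  # leave the fence domain (not sent if gfs or dlm are in use):
  fence_tool leave -w 60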
 
 .SH OPTIONS
 .TP
-\fB-m\fP <n>
-Delay join up to n seconds for all nodes in cluster.conf to be cluster members.
+.B \-n
+Show all node information in ls.
+.TP
+.BI \-t " seconds"
+Retry cman connection for this many seconds.
+0 none, -1 indefinite. Default 0.
 .TP
-\fB-w\fP
-Wait until the join or leave is completed.
+.BI \-q " seconds"
+Delay join up to this many seconds for the cluster to have quorum.
+0 none, -1 indefinite. Default 0.
 .TP
-\fB-h\fP
-Help.  Print out the usage syntax.
+.BI \-m " seconds"
+Delay join up to this many seconds for all nodes in cluster.conf to be
+cluster members.
+0 none, -1 indefinite. Default 0.
 .TP
-\fB-V\fP
-Print version information.
+.BI \-w " seconds"
+Wait up to this many seconds for the result of join or leave.
+0 none, -1 indefinite. Default 0.
 .TP
-\fB-t\fP <n>
-Maximum time in seconds to wait for quorum or -w (default: 300 seconds)
+.B \-h
+Print a help message describing available options, then exit.
 .TP
-\fB-Q\fP
-Fail command immediately if the cluster is not quorate, don't wait.
+.B \-V
+Print program version information, then exit.
 
 .SH SEE ALSO
-fenced(8), fence(8), fence_node(8)
+.BR fenced (8)
+
diff --git a/fence/man/fenced.8 b/fence/man/fenced.8
index 8cdcf1a..84c3a46 100644
--- a/fence/man/fenced.8
+++ b/fence/man/fenced.8
@@ -1,15 +1,13 @@
-.TH fenced 8
+.TH FENCED 8 2009-12-21 cluster cluster
 
 .SH NAME
-fenced - the I/O Fencing daemon
+fenced \- the I/O Fencing daemon
 
 .SH SYNOPSIS
-.B
-fenced
-[\fIOPTION\fR]...
+.B fenced
+[OPTIONS]
 
 .SH DESCRIPTION
-
 The fencing daemon, fenced, fences cluster nodes that have failed.
 Fencing a node generally means rebooting it or otherwise preventing it
 from writing to storage, e.g. disabling its port on a SAN switch.  Fencing
@@ -21,81 +19,82 @@ Software related to sharing storage among nodes in a cluster, e.g. GFS,
 usually requires fencing to be configured to prevent corruption of the
 storage in the presence of node failure and recovery.  GFS will not allow
 a node to mount a GFS file system unless the node is running fenced.
-Fencing happens in the context of a cman/openais cluster.  A node must be
-a cluster member before it can run fenced.
 
-Once started, fenced waits for the 'fence_tool join' command to be run,
-telling it to join the fence domain: a group of nodes managed by the
-openais/cpg/groupd cluster infrastructure.  In most cases, all nodes will
-join the fence domain after joining the cluster.
+Once started, fenced waits for the
+.BR fence_tool (8)
+join command to be run, telling it to join the fence domain: a group of
+nodes that will fence group members that fail.  When the cluster does not
+have quorum, fencing operations are postponed until quorum is restored.
+If a failed fence domain member is reset and rejoins the cluster before
+the remaining domain members have fenced it, the fencing is no longer
+needed and will be skipped.
+
+fenced uses the corosync cluster membership system, its closed process
+group library (libcpg), and the cman quorum and configuration libraries
+(libcman, libccs).
 
-Fence domain members are aware of the membership of the group, and are
-notified when nodes join or leave.  If a fence domain member fails, one of
-the remaining members will fence it.  If the cluster has lost quorum,
-fencing won't occur until quorum has been regained.  If a failed node is
-reset and rejoins the cluster before the remaining domain members have
-fenced it, the fencing will be bypassed.
+The cman init script usually starts the fenced daemon and runs fence_tool
+join and leave.
 
 .SS Node failure
 
-When a domain member fails, fenced runs an agent to fence it.  The
-specific agent to run and the parameters the agent requires are all read
-from the cluster.conf file (using libccs) at the time of fencing.  The
-fencing operation against a failed node is not considered complete until
-the exec'ed agent exits.  The exit value of the agent indicates the
-success or failure of the operation.  If the operation failed, fenced will
-retry (possibly with a different agent, depending on the configuration)
-until fencing succeeds.  Other systems such as DLM and GFS will not begin
-their own recovery for a failed node until fenced has successfully
-completed fencing it.  So, a delay or problem in fencing will result in
-other systems like DLM/GFS being blocked.  Information about fencing
-operations will also appear in syslog.
+When a fence domain member fails, fenced runs an agent to fence it.  The
+specific agent to run and the agent parameters are all read from the
+cluster.conf file (using libccs) at the time of fencing.  The fencing
+operation against a failed node is not considered complete until the
+exec'ed agent exits.  The exit value of the agent indicates the success or
+failure of the operation.  If the operation failed, fenced will retry
+(possibly with a different agent, depending on the configuration) until
+fencing succeeds.  Other systems such as DLM and GFS wait for fencing to
+complete before starting their own recovery for a failed node.
+Information about fencing operations will also appear in syslog.
 
 When a domain member fails, the actual fencing operation can be delayed by
-a configurable number of seconds (cluster.conf:post_fail_delay or -f).
+a configurable number of seconds (cluster.conf post_fail_delay or -f).
 Within this time, the failed node could be reset and rejoin the cluster to
 avoid being fenced.  This delay is 0 by default to minimize the time that
-other systems are blocked (see above).
+other systems are blocked.
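
As a configuration sketch (the ten-second value is illustrative, not a
recommendation):

  <fence_daemon post_fail_delay="10"/>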
 
 .SS Domain startup
 
-When the domain is first created in the cluster (by the first node to join
-it) and subsequently enabled (by the cluster gaining quorum) any nodes
-listed in cluster.conf that are not presently members of the cman cluster
-are fenced.  The status of these nodes is unknown, and to be on the side
-of safety they are assumed to be in need of fencing.  This startup fencing
-can be disabled, but it's only truly safe to do so if an operator is
-present to verify that no cluster nodes are in need of fencing.
-
-This example illustrates why startup fencing is important.  Take a three
-node cluster with nodes A, B and C; all three have a GFS fs mounted.  All
-three nodes experience a low-level kernel hang at about the same time.  A
-watchdog triggers a reboot on nodes A and B, but not C.  A and B boot back
-up, form the cluster again, gain quorum, join the fence domain, *don't*
-fence node C which is still hung and unresponsive, and mount the GFS fs
-again.  If C were to come back to life, it could corrupt the fs.  So, A
-and B need to fence C when they reform the fence domain since they don't
-know the state of C.  If C *had* been reset by a watchdog like A and B,
-but was just slow in rebooting, then A and B might be fencing C
-unnecessarily when they do startup fencing.
+When the fence domain is first created in the cluster (by the first node
+to join it) and subsequently enabled (by the cluster gaining quorum), any
+nodes listed in cluster.conf that are not presently members of the
+corosync cluster are fenced.  The status of these nodes is unknown, and to
+be safe they are assumed to need fencing.  This startup fencing can be
+disabled, but it's only truly safe to do so if an operator is present to
+verify that no cluster nodes are in need of fencing.
+
+The following example illustrates why startup fencing is important.  Take
+a three node cluster with nodes A, B and C; all three have a GFS file
+system mounted.  All three nodes experience a low-level kernel hang at
+about the same time.  A watchdog triggers a reboot on nodes A and B, but
+not C.  A and B reboot, form the cluster again, gain quorum, join the
+fence domain, _don't_ fence node C which is still hung and unresponsive,
+and mount the GFS fs again.  If C were to come back to life, it could
+corrupt the fs.  So, A and B need to fence C when they reform the fence
+domain since they don't know the state of C.  If C _had_ been reset by a
+watchdog like A and B, but was just slow in rebooting, then A and B might
+be fencing C unnecessarily when they do startup fencing.
 
 The first way to avoid fencing nodes unnecessarily on startup is to ensure
 that all nodes have joined the cluster before any of the nodes start the
 fence daemon.  This method is difficult to automate.
 
 A second way to avoid fencing nodes unnecessarily on startup is using the
-cluster.conf:post_join_delay setting (or -j option).  This is the number
+cluster.conf post_join_delay setting (or -j option).  This is the number
 of seconds fenced will delay before actually fencing any victims after
 nodes join the domain.  This delay gives nodes that have been tagged for
 fencing a chance to join the cluster and avoid being fenced.  A delay of
 -1 here will cause the daemon to wait indefinitely for all nodes to join
 the cluster and no nodes will actually be fenced on startup.
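
As a configuration sketch of the -1 case described above:

  <fence_daemon post_join_delay="-1"/>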
 
-To disable fencing at domain-creation time entirely, the -c option can be
-used to declare that all nodes are in a clean or safe state to start.  The
-clean_start cluster.conf option can also be set to do this, but
-automatically disabling startup fencing in cluster.conf can risk file
-system corruption.
+To disable fencing at domain-creation time entirely, the cluster.conf
+clean_start setting (or -c option) can be used to declare that all nodes
+are in a clean or safe state to start.  This setting/option should not
+generally be used since it risks not fencing a node that needs it, which
+can lead to corruption in other applications (like GFS) that depend on
+fencing.
 
 Avoiding unnecessary fencing at startup is primarily a concern when nodes
 are fenced by power cycling.  If nodes are fenced by disabling their SAN
@@ -105,15 +104,63 @@ access, then unnecessarily fencing a node is usually less disruptive.
 
 If a fencing device fails, the agent may repeatedly return errors as
 fenced tries to fence a failed node.  In this case, the admin can manually
-reset the failed node, and then use fence_ack_manual to tell fenced to
-continue without fencing the node.
+reset the failed node, and then use
+.BR fence_ack_manual (8)
+to tell fenced to continue without fencing the node.
+
+.SH OPTIONS
+Command line options override corresponding settings in cluster.conf.
+
+.TP
+.B \-D
+Enable debugging to stderr and don't fork.
+.TP
+.B \-L
+Enable debugging to log file.
+.TP
+.BI \-g " num"
+groupd compatibility mode, 0 off, 1 on.  Default 0.
+.TP
+.BI \-r " path"
+Register a directory that needs to be empty for the daemon to start.  Use
+a dash (\-) to skip default directories /sys/fs/gfs, /sys/fs/gfs2,
+/sys/kernel/dlm.
+.TP
+.B \-c
+All nodes are in a clean state to start. Do not perform startup fencing.
+.TP
+.B \-s
+Skip startup fencing of nodes with no defined fence methods.
+.TP
+.BI \-j " secs"
+Post-join fencing delay.
+.TP
+.BI \-f " secs"
+Post-fail fencing delay.
+.TP
+.BI \-R " secs"
+Number of seconds to wait for a manual override after a failed fencing
+attempt before the next attempt.
+.TP
+.BI \-O " path"
+Location of a FIFO used for communication between fenced and fence_ack_manual.
+.TP
+.B \-h
+Print a help message describing available options, then exit.
+.TP
+.B \-V
+Print program version information, then exit.
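
For example, to run the daemon in the foreground with debugging enabled
and a 30-second post-join delay (the value is illustrative):

  fenced -D -j 30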
+
+.SH FILES
+.BR cluster.conf (5)
+is usually located at /etc/cluster/cluster.conf.  It is not read directly.
+Other cluster components load the contents into memory, and the values are
+accessed through the libccs library.
 
-.SH CONFIGURATION FILE
 Fencing daemon behavior can be controlled by setting options in the
-cluster.conf file under the section <fence_daemon> </fence_daemon>.  See
-above for complete descriptions of these values.  The delay values are in
-seconds; -1 secs means an unlimited delay.  The values shown are the
-defaults.
+cluster.conf file under the section <fence_daemon />.  See above for
+complete descriptions of these values.  The delay values are in seconds;
+-1 secs means an unlimited delay.  The values shown are the defaults.
 
 Post-join delay is the number of seconds the daemon will wait before
 fencing any victims after a node joins the domain.
@@ -137,17 +184,16 @@ fenced and fence_ack_manual.
   <fence_daemon override_path="/var/run/cluster/fenced_override"/>
 
 Override-time is the amount of time to wait for administrator intervention
-after fencing has failed.  The default is 5 seconds.
-
-  <fence_daemon override_time="10"/>
+between fencing attempts following fence agent failures.
 
+  <fence_daemon override_time="3"/>
 
 .SS Per-node fencing settings
 
-The per-node fencing configuration can become complex and is largely
-specific to the hardware being used.  The general framework begins like
-this:
+The per-node fencing configuration is partly dependent on the specific
+agent/hardware being used.  The general framework begins like this:
 
+.nf
   <clusternodes>
 
   <clusternode name="node1" nodeid="1">
@@ -160,40 +206,41 @@ this:
           </fence>
   </clusternode>
 
-  ...
   </clusternodes>
+.fi
 
 The simple fragment above is a valid configuration: there is no way to
 fence these nodes.  If one of these nodes is in the fence domain and
 fails, fenced will repeatedly fail in its attempts to fence it.  The admin
 will need to manually reset the failed node and then use fence_ack_manual
-to tell fenced to continue on without fencing it (see override above).
+to tell fenced to continue without fencing it (see override above).
 
 There is typically a single method used to fence each node (the name given
 to the method is not significant).  A method refers to a specific device
 listed in the separate <fencedevices> section, and then lists any
 node-specific parameters related to using the device.
 
+.nf
   <clusternodes>
 
   <clusternode name="node1" nodeid="1">
           <fence>
-             <method name="single">
-                <device name="myswitch" hw-specific-param="x"/>
-             </method>
+          <method name="1">
+          <device name="myswitch" foo="x"/>
+          </method>
           </fence>
   </clusternode>
 
   <clusternode name="node2" nodeid="2">
           <fence>
-             <method name="single">
-                <device name="myswitch" hw-specific-param="y"/>
-             </method>
+          <method name="1">
+          <device name="myswitch" foo="y"/>
+          </method>
           </fence>
   </clusternode>
 
-  ...
   </clusternodes>
+.fi
 
 .SS Fence device settings
 
@@ -201,9 +248,11 @@ This section defines properties of the devices used to fence nodes.  There
 may be one or more devices listed.  The per-node fencing sections above
 reference one of these fence devices by name.
 
+.nf
   <fencedevices>
-          <fencedevice name="myswitch" ipaddr="1.2.3.4" .../>
+          <fencedevice name="myswitch" agent="..." something="..."/>
   </fencedevices>
+.fi
 
 .SS Multiple methods for a node
 
@@ -211,86 +260,68 @@ In more advanced configurations, multiple fencing methods can be defined
 for a node.  If fencing fails using the first method, fenced will try the
 next method, and continue to cycle through methods until one succeeds.
 
+.nf
   <clusternode name="node1" nodeid="1">
           <fence>
-             <method name="first">
-                <device name="powerswitch" hw-specific-param="x"/>
-             </method>
-
-             <method name="second">
-                <device name="storageswitch" hw-specific-param="1"/>
-             </method>
+          <method name="1">
+          <device name="myswitch" foo="x"/>
+          </method>
+          <method name="2">
+          <device name="another" bar="123"/>
+          </method>
           </fence>
   </clusternode>
 
+  <fencedevices>
+          <fencedevice name="myswitch" agent="..." something="..."/>
+          <fencedevice name="another" agent="..."/>
+  </fencedevices>
+.fi
+
 .SS Dual path, redundant power
 
 Sometimes fencing a node requires disabling two power ports or two i/o
 paths.  This is done by specifying two or more devices within a method.
+fenced will run the agent for the device twice, once for each device line,
+and both must succeed for fencing to be considered successful.
 
+.nf
   <clusternode name="node1" nodeid="1">
           <fence>
-             <method name="single">
-                <device name="sanswitch1" hw-specific-param="x"/>
-                <device name="sanswitch2" hw-specific-param="x"/>
-             </method>
+          <method name="1">
+          <device name="sanswitch1" port="11"/>
+          <device name="sanswitch2" port="11"/>
+          </method>
           </fence>
   </clusternode>
+.fi
 
 When using power switches to fence nodes with dual power supplies, the
 agents must be told to turn off both power ports before restoring power to
 either port.  The default off-on behavior of the agent could result in the
 power never being fully disabled to the node.
 
+.nf
   <clusternode name="node1" nodeid="1">
           <fence>
-             <method name="single">
-                <device name="nps1" hw-param="x" action="off"/>
-                <device name="nps2" hw-param="x" action="off"/>
-                <device name="nps1" hw-param="x" action="on"/>
-                <device name="nps2" hw-param="x" action="on"/>
-             </method>
+          <method name="1">
+          <device name="nps1" port="11" action="off"/>
+          <device name="nps2" port="11" action="off"/>
+          <device name="nps1" port="11" action="on"/>
+          <device name="nps2" port="11" action="on"/>
+          </method>
           </fence>
   </clusternode>
+.fi
 
 .SS Hardware-specific settings
 
-Find documentation for configuring specific devices at
-.BR
-http://sources.redhat.com/cluster/
-
-.SH OPTIONS
-Command line options override corresponding values in cluster.conf.
-.TP
-\fB-j\fP \fIsecs\fP
-Post-join fencing delay
-.TP
-\fB-f\fP \fIsecs\fP
-Post-fail fencing delay
-.TP
-\fB-c\fP 
-All nodes are in a clean state to start.
-.TP
-\fB-O\fP
-Path of the override FIFO.
-.TP
-\fB-T\fP
-Amount of time to wait for administrator intervention after 
-fencing has failed, in seconds.
-.TP
-\fB-D\fP
-Enable debugging code and don't fork into the background.
-.TP
-\fB-V\fP
-Print the version information and exit.
-.TP
-\fB-h\fP 
-Print out a help message describing available options, then exit.
-
-.SH DEBUGGING
-The fenced daemon keeps a circular buffer of debug messages that can be
-dumped with the 'fence_tool dump' command.
+Find documentation for configuring specific devices in the device
+agent's man page.
 
 .SH SEE ALSO
-fence_tool(8), cman(8), groupd(8), group_tool(8)
+.BR fence_tool (8),
+.BR fence_ack_manual (8),
+.BR fence_node (8),
+.BR cluster.conf (5)
 

