deploying additional storage nodes

Stefan Negrea snegrea at redhat.com
Thu Jul 18 18:46:28 UTC 2013



----- Original Message -----
> From: "Jay Shaughnessy" <jshaughn at redhat.com>
> To: rhq-devel at lists.fedorahosted.org
> Sent: Thursday, July 18, 2013 10:20:57 AM
> Subject: Re: deploying additional storage nodes
> 
> 
> After thinking about this I think I prefer option 3, which delays
> incorporating the new storage node until after discovery.  This
> eliminates the need for storage node installation to talk to the
> database directly, taking away those "cons".   It also eliminates the
> need to contact a running server, which means the servers don't need to
> be up (this whole thing may be part of a maintenance period), and we
> don't need to deal with establishing a connection (failure point),
> getting the host/port/creds (new cmd line options and failure point).
> 
> In option 3 we can actually fail to install the new storage node, revert
> the new storage node install, fix issues with agent connectivity, or
> whatever, without immediately mucking with the RHQ topology.  It defers
> integrating the storage node until 1) C* is actually running, 2) the
> required agent is actually running, 3) the server is running, and 4) the
> storage node is discovered and merged into inventory.

I think this is the best argument so far in favour of any of the options. Also, option 3 is more aligned with our proposed architecture, where complete management of storage nodes is done exclusively via agents. Making even the install configuration follow the same pattern just reinforces that decision.

However, the one thing I have not yet seen mentioned for option 3 is the initial install. That process would be completely different because there is no server the agent can connect to in order to receive the storage node configuration. So one of the major cons of option 2 applies to option 3 as well, and that problem will be even more visible in option 3 than it would be in option 2.

So, I am inclined to side with option 3 if we can find a clean and simple approach for the initial storage node installation.


> 
> When all of that is true we can take the necessary steps to incorporate
> it into the storage node cluster, setting global config values,
> propagating the SN addresses, etc.  This is somewhat consistent with
> what we already do for the initial SN, I think, which gets installed
> before the RHQ server.
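
To make the deferred workflow concrete, here is a rough sketch of what that server-side step could look like once all four conditions hold. Every type and method name below (StorageNodeIntegrator, ClusterTopology, NodeOperations) is a made-up placeholder, not an actual RHQ API; the only point is the ordering: verify readiness, propagate addresses, push the shared settings, restart.

// Illustrative sketch only -- the class and collaborator names are
// hypothetical and not part of the RHQ code base.
import java.net.InetAddress;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class StorageNodeIntegrator {

    /** Hypothetical view of the existing storage cluster kept by the RHQ server. */
    public interface ClusterTopology {
        List<InetAddress> existingNodeAddresses();
        /** Shared, cluster-wide settings (CQL port, gossip port, etc.). */
        Map<String, String> clusterWideSettings();
    }

    /** Hypothetical agent-mediated operations the server can invoke on one node. */
    public interface NodeOperations {
        boolean isCassandraRunning(InetAddress node);
        boolean isAgentAvailable(InetAddress node);
        void updateAddressList(InetAddress node, List<InetAddress> allNodes);
        void applySettings(InetAddress node, Map<String, String> settings);
        void restart(InetAddress node);
    }

    private final ClusterTopology topology;
    private final NodeOperations ops;

    public StorageNodeIntegrator(ClusterTopology topology, NodeOperations ops) {
        this.topology = topology;
        this.ops = ops;
    }

    /**
     * Called only after the new storage node has been discovered and committed
     * into inventory. Nothing here requires the installer to have touched the
     * RHQ database or a running server at install time.
     */
    public void integrate(InetAddress newNode) {
        // Conditions 1) and 2): C* and the agent on the new node must be running.
        if (!ops.isCassandraRunning(newNode) || !ops.isAgentAvailable(newNode)) {
            throw new IllegalStateException("New storage node is not ready: " + newNode);
        }

        // Propagate the new node's address to every existing node (and give the
        // new node the full list) so inter-node authentication will accept it.
        List<InetAddress> all = new ArrayList<>(topology.existingNodeAddresses());
        all.add(newNode);
        for (InetAddress node : all) {
            ops.updateAddressList(node, all);
        }

        // Push the shared cluster-wide settings to the new node only, then
        // restart it so the settings take effect and it joins the cluster.
        ops.applySettings(newNode, topology.clusterWideSettings());
        ops.restart(newNode);
    }
}
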
> 
> All of this discussion is for installing an additional storage node
> *only*, but I think with this approach there is no difference when
> installing an additional HA server along with a new SN.
> 
> 
> 
> On 7/18/2013 10:21 AM, Larry O'Leary wrote:
> > On Wed, 2013-07-17 at 16:29 -0400, John Sanda wrote:
> >> On Jul 17, 2013, at 4:05 PM, Alan Santos <asantos at redhat.com> wrote:
> >>
> >>> On Jul 17, 2013, at 3:52 PM, John Sanda <jsanda at redhat.com> wrote:
> >>>
> >>>> I raised the question of whether or not a storage node machine accessing
> >>>> the RHQ relational database is acceptable. That question is part of a
> >>>> larger discussion which I want to open up to the list for
> >>>> comments/questions.
> >>>>
> >>>> The storage installer provides a few options which are shared,
> >>>> cluster-wide settings. These settings have to be the same for each node
> >>>> in the cluster. Changing those settings will have to be done through
> >>>> RHQ in order to ensure the change is properly applied to each node in
> >>>> the cluster. Allowing the user to set these options when installing
> >>>> additional storage nodes is error prone at best. The storage installer
> >>>> can instead obtain those settings from the server (or database) to
> >>>> ensure the cluster settings are consistent with existing nodes.
> >>>>
> >>>> There is an additional requirement that has to be taken into
> >>>> consideration. We have to implement inter-node authentication that will
> >>>> prevent arbitrary Cassandra nodes from joining the RHQ storage node
> >>>> cluster. This will be implemented in part by each storage node keeping
> >>>> a list of storage node IP addresses. The server and agents will take
> >>>> care of updating that list. This means that even with the correct cluster
> >>>> settings, a new node will not be able to join the cluster until the RHQ
> >>>> server propagates the node's IP address to existing storage nodes.
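
As an aside, the per-node address list could be enforced with something like the sketch below. This assumes each node keeps its allowed peers in a local file that the agent rewrites; the file name and class are illustrative only, and in practice the check would be wired into Cassandra's internode authentication hook rather than standing alone.

// Hypothetical sketch of an allow-list check for inter-node authentication.
import java.io.IOException;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.Set;

public class AllowListAuthenticator {

    private final Set<InetAddress> allowedPeers = new HashSet<>();

    /** Load one peer address per line; the agent rewrites this file whenever
     *  the RHQ server propagates a topology change. */
    public AllowListAuthenticator(Path allowListFile) throws IOException {
        for (String line : Files.readAllLines(allowListFile, StandardCharsets.UTF_8)) {
            line = line.trim();
            if (!line.isEmpty() && !line.startsWith("#")) {
                allowedPeers.add(InetAddress.getByName(line));
            }
        }
    }

    /** A peer may join only if its address is already on the list. */
    public boolean authenticate(InetAddress remotePeer) {
        return allowedPeers.contains(remotePeer);
    }
}

The node would consult authenticate() for every incoming gossip connection, which is exactly why a new node cannot join until the server has pushed its address out to the existing nodes.
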
> >>>>
> >>>> To successfully deploy a new node so that it becomes an active member of
> >>>> the storage node cluster, it needs to have the correct cluster-wide
> >>>> settings, and node IP addresses have to be propagated across the whole
> >>>> cluster. Note that this only applies when deploying additional storage
> >>>> nodes beyond the first one, after that node and the RHQ server are already
> >>>> up and running and collecting data. Here are the options being considered.
> >>>>
> >>>> 1) Require the storage installer to have access to the RHQ relational
> >>>> database
> >>>> pros:
> >>>> - Can easily determine if this is the first or an additional storage
> >>>> node installation
> >>>>
> >>>> - Changes in installation steps for the user are minimal
> >>>>
> >>>> - Can reliably ensure that the storage node is properly configured at
> >>>> installation time
> >>>>
> >>>> cons:
> >>>> - Requires access to the database
> >>>>
> >>>> - Requires opening up additional ports
> >>>>
> >>> Those two are pretty significant 'cons'
> >>>
> >>>> - Cannot easily propagate IP addresses for inter-node authentication
> >>>
> >>> That seems reason enough not to eliminate this option.
> >> I am lukewarm on this option because (in addition to the ports) it does
> >> not offer a good way to propagate the IP address changes. The server
> >> would have to continually poll the database to check for new storage node
> >> entities.
> >>>
> >>>
> >>>> 2) Have the storage installer connect to the RHQ server via our remote
> >>>> APIs. This would require the rhqctl to take server connection params
> >>>> (like the CLI). Reasonable defaults like localhost/7080 could be used.
> >>>> pros:
> >>>> - Can easily propagate IP addresses for inter-node authentication
> >>>>
> >>>> - Can reliably ensure that the storage node is properly configured at
> >>>> installation time (provided the installer can connect to the server)
> >>>>
> >>>> cons:
> >>>> - Requires access to a running RHQ server
> >>> That seems reasonable.
> >>>
> >>>
> >>>> - Installation steps are now a bit more complicated
> >>> It's not clear to me what's more complicated.  You need to point the node
> >>> at an RHQ server; that sounds like an additional command line parameter.
> >>>
> >> That is a fair point. I called it out because sometimes what I as a
> >> developer perceive as straightforward for the user turns out to be hard.
> >>
> >>>
> >>>> - Cannot easily determine if this is the first or an additional storage
> >>>> node installation. For the first storage node, we won't have a server
> >>>> to connect to. For subsequent nodes we should. I am not sure if there
> >>>> is a good way to make that distinction.
> >>> You could have two commands, e.g. 'rhq install-first-storage' and 'rhq
> >>> install-additional-storage'. Perhaps the way the first storage node is
> >>> installed warrants additional thought, and 'rhq install storage' only
> >>> installs additional nodes.
> >> In a discussion I had earlier today, Jay or mazz suggested something along
> >> these lines. We could implement an rhqctl add command.
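
For what it's worth, the distinction John describes (no server reachable for the first node, a server reachable for additional ones) could be made concrete with a simple probe like the sketch below. The default of localhost:7080 comes from the option 2 description; everything else here is an assumption, not existing rhqctl logic.

// Hypothetical sketch: decide between "first" and "additional" storage node
// installs by probing for a running RHQ server.
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class InstallModeProbe {

    public enum Mode { FIRST_NODE, ADDITIONAL_NODE }

    /**
     * If an RHQ server answers on the given host/port (e.g. localhost:7080),
     * assume this is an additional node whose cluster settings should come
     * from that server; otherwise fall back to a first-node install.
     */
    public static Mode detect(String serverHost, int serverPort) {
        try {
            URL url = new URL("http", serverHost, serverPort, "/");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setConnectTimeout(2000);
            conn.setReadTimeout(2000);
            conn.setRequestMethod("GET");
            conn.getResponseCode(); // any HTTP answer means a server is there
            conn.disconnect();
            return Mode.ADDITIONAL_NODE;
        } catch (IOException unreachable) {
            return Mode.FIRST_NODE;
        }
    }

    public static void main(String[] args) {
        System.out.println(detect("localhost", 7080));
    }
}

The obvious downside, already noted above, is that a server that happens to be down during an additional-node install would be misread as a first-node install, so an explicit command or flag may still be the safer choice.
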
> >>
> >>>
> >>>
> >>>> 3) Do not have the storage installer talk to either the database or the
> >>>> server. Essentially leave it as is which means when the new storage
> >>>> node is deployed it will not join the cluster. When the node is
> >>>> committed into inventory, the server will detect that the node is not
> >>>> part of the cluster, and then proceed to propagate IP addresses (for
> >>>> inter-node authentication), configure the cluster settings on the new
> >>>> node, and finally restart the new node for the settings to take effect.
> >>>>
> >>>> pros:
> >>>> - Keeps the install process simple for the user
> >>>>
> >>>> - Does not require access to the database
> >>>>
> >>>> - Does not require access to a running RHQ server
> >>>>
> >>>> cons:
> >>>> - Storage node deployment now involves a more complex workflow
> >>>>
> >>>> - If there are deployment problems, the user will find out about it
> >>>> later rather than sooner
> >>> This is interesting. It sounds more user-friendly, but I suspect that's
> >>> going to depend heavily on how well things work. IOW, the happy path
> >>> sounds happy, but the bad path sounds really bad.
> >>>
> >>> What happens if storage nodes are mistakenly spun up or test processes
> >>> are left up?
> >> I am not sure how we can handle or prevent a user accidentally installing
> >> a storage node. Maybe I misunderstand the question. This is not really
> >> about the happy vs bad/ugly paths. If anything, this option puts you
> >> right on the not so happy path. Independent of this discussion and as
> >> part of our storage node management capabilities, we have to deal with
> >> (and be able to resolve) issues such as a node failing to join the
> >> cluster.
> >>
> >> Even with option 2, we still have to provide the same error handling
> >> because there will still be the possibility for the user to manually
> >> configure the cluster settings. Furthermore, we plan to support
> >> cluster-wide config changes, e.g., changing the ports for client requests
> >> or for gossip. The steps to address those types of situations will be
> >> very similar to what is being described here.
> > Why does a new node have to join right away? Wouldn't it make sense that
> > a new node simply gets discovered by an agent and then the admin
> > registers the node from the UI or CLI? The plug-in could then take care
> > of configuring the node. The only thing the node would need to do is
> > explicitly identify itself as an RHQ storage node.
> >
> > Granted, this doesn't take care of the potential configuration issues
> > if a user/admin tries to configure it themselves, but they would at least
> > be aware that something isn't working as they import/register it from
> > the UI.
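
One way to read "explicitly identify itself" is a marker the discovery code can look for, sketched below. The property name and the check are made up for illustration; this is not the actual rhq-cassandra plugin discovery code.

// Hypothetical sketch: the storage node's JVM is started with a marker system
// property, and agent-side discovery only treats Cassandra processes carrying
// that marker as RHQ storage nodes.
import java.util.Arrays;
import java.util.List;

public class StorageNodeMarkerCheck {

    private static final String CASSANDRA_MAIN =
        "org.apache.cassandra.service.CassandraDaemon";
    private static final String MARKER = "-Drhq.storage.node=true"; // assumed name

    /** True only for Cassandra processes that explicitly carry the RHQ marker. */
    public static boolean isRhqStorageNode(List<String> processCommandLine) {
        return processCommandLine.contains(MARKER)
            && processCommandLine.stream().anyMatch(arg -> arg.contains(CASSANDRA_MAIN));
    }

    public static void main(String[] args) {
        List<String> cmdline = Arrays.asList(
            "java", "-Xms512m", MARKER, CASSANDRA_MAIN);
        System.out.println(isRhqStorageNode(cmdline)); // true
    }
}

This would keep arbitrary Cassandra installs (test processes, accidental spin-ups) from ever showing up as importable RHQ storage nodes in the first place.
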
> 

