deploying additional storage nodes

Larry O'Leary loleary at redhat.com
Thu Jul 18 22:36:56 UTC 2013


On Thu, 2013-07-18 at 14:01 -0400, John Sanda wrote:
> On Jul 18, 2013, at 10:21 AM, Larry O'Leary <loleary at redhat.com> wrote:
> 
> > On Wed, 2013-07-17 at 16:29 -0400, John Sanda wrote:
> >> On Jul 17, 2013, at 4:05 PM, Alan Santos <asantos at redhat.com> wrote:
> >> 
> >>> 
> >>> On Jul 17, 2013, at 3:52 PM, John Sanda <jsanda at redhat.com> wrote:
> >>> 
> >>>> I raised the question of whether or not it is acceptable for a storage node machine to access the RHQ relational database. That question is part of a larger discussion that I want to open up to the list for comments/questions.
> >>>> 
> >>>> The storage installer provides a few options which are shared, cluster-wide settings. These settings have to be the same for each node in the cluster. Changing those settings will have to be done through RHQ in order to ensure the change is properly applied to each node in the cluster. Allowing the user to set these options when installing additional storage nodes is error prone at best. The storage installer can instead obtain those settings from the server (or database) to ensure the cluster settings are consistent with existing nodes.
> >>>> 
> >>>> There is an additional requirement that has to be taken into consideration. We have to implement inter-node authentication that will prevent arbitrary Cassandra nodes from joining the RHQ storage node cluster. This will be implemented in part by each storage node keeping a list of storage node IP addresses. The server and agents will take care of updating that list. This means that even with the correct cluster settings, a new node will not be able to join the cluster until the RHQ server propagates the node's IP address to existing storage nodes.
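For illustration, a minimal sketch of the kind of check each storage node could perform when a peer opens an internode connection; the class name, file format, and enforcement point are hypothetical, not the actual implementation:

    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.HashSet;
    import java.util.Set;

    // Hypothetical sketch: each storage node keeps a local list of peer IP
    // addresses that are allowed to join the cluster. The RHQ server and
    // agents are responsible for keeping that list up to date.
    public class NodeAuthChecker {

        private final Set<String> allowedAddresses = new HashSet<String>();

        public NodeAuthChecker(String authFile) throws Exception {
            // assume one IP address per line, written by the agent on behalf of the server
            for (String line : Files.readAllLines(Paths.get(authFile), StandardCharsets.UTF_8)) {
                String address = line.trim();
                if (!address.isEmpty()) {
                    allowedAddresses.add(address);
                }
            }
        }

        // Reject internode (gossip) connections from hosts that the RHQ server
        // has not yet propagated to this node's list.
        public boolean isAuthorized(InetAddress peer) {
            return allowedAddresses.contains(peer.getHostAddress());
        }
    }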
> >>>> 
> >>>> To successfully deploy a new node such that it becomes an active member of the storage node cluster, it needs the correct cluster-wide settings, and its IP address has to be propagated across the whole cluster. Note that this only applies when deploying additional storage nodes beyond the first, after that first node and the RHQ server are already up and running and collecting data. Here are the options being considered.
> >>>> 
> >>>> 1) Require the storage installer to have access to the RHQ relational database
> >>>> pros:
> >>>> - Can easily determine if this is the first or an additional storage node installation
> >>>> 
> >>>> - Changes to the installation steps for the user are minimal
> >>>> 
> >>>> - Can reliably ensure that the storage node is properly configured at installation time
> >>>> 
> >>>> cons:
> >>>> - Requires access to the database
> >>>> 
> >>>> - Requires opening up additional ports
> >>>> 
> >>> 
> >>> Those two are pretty significant 'cons'
> >>> 
> >>>> - Cannot easily propagate IP addresses for inter-node authentication
> >>> 
> >>> 
> >>> That seems reason enough to eliminate this option.
> >> 
> >> I am lukewarm on this option because (in addition to the ports) it does not offer a good way to propagate the IP address changes. The server would have to continually poll the database to check for new storage node entities.
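To make that polling concern concrete, a rough sketch of what option 1 would force on the server; the DAO, table semantics, and polling interval are hypothetical:

    import java.util.List;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Hypothetical sketch: the server periodically polls the relational database
    // for storage node rows that the installer inserted but that have not yet
    // been announced to the rest of the cluster.
    public class StorageNodePoller {

        private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
        private final StorageNodeDao dao; // hypothetical DAO over the storage node table

        public StorageNodePoller(StorageNodeDao dao) {
            this.dao = dao;
        }

        public void start() {
            scheduler.scheduleWithFixedDelay(new Runnable() {
                public void run() {
                    List<String> pending = dao.findAddressesNotYetAnnounced();
                    for (String address : pending) {
                        // push the new address to every existing node's auth list,
                        // then mark the row as announced
                        announceToCluster(address);
                    }
                }
            }, 0, 30, TimeUnit.SECONDS);
        }

        private void announceToCluster(String address) {
            // omitted: schedule agent operations against each existing storage node
        }
    }

    // hypothetical, just enough for the sketch to compile
    interface StorageNodeDao {
        List<String> findAddressesNotYetAnnounced();
    }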
> >>> 
> >>> 
> >>> 
> >>>> 
> >>>> 2) Have the storage installer connect to the RHQ server via our remote APIs. This would require rhqctl to take server connection params (like the CLI does). Reasonable defaults like localhost/7080 could be used.
> >>>> pros:
> >>>> - Can easily propagate IP addresses for inter-node authentication
> >>>> 
> >>>> - Can reliably ensure that the storage node is properly configured at installation time (provided the installer can connect to the server)
> >>>> 
> >>>> cons:
> >>>> - Requires access to a running RHQ server
> >>> 
> >>> That seems reasonable. 
> >>> 
> >>> 
> >>>> - Installation steps are now a bit more complicated
> >>> 
> >>> It's not clear to me what's more complicated. You need to point the node at an RHQ server; that sounds like an additional command line parameter.
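Something along these lines, for example (the flag names are illustrative, not actual rhqctl options):

    # install an additional storage node and tell the installer which RHQ server
    # to pull the shared cluster settings from (could default to localhost/7080)
    rhqctl install --storage --rhq-server-host=rhq01.example.com --rhq-server-port=7080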
> >>> 
> >> 
> >> That is a fair point. I called it out because sometimes what I, as a developer, perceive as straightforward for the user turns out to be hard.
> >> 
> >>> 
> >>> 
> >>>> - Cannot easily determine if this is the first or an additional storage node installation. For the first storage node, we won't have a server to connect to. For subsequent nodes we should. I am not sure if there is a good way to make that distinction.
> >>> 
> >>> You could have two commands, e.g. 'rhq install-first-storage' and 'rhq install-additional-storage'. Perhaps the way the first storage node is installed warrants additional thought, and 'rhq install storage' only installs additional nodes.
> >> 
> >> In a discussion I had earlier today, Jay or mazz suggested something along these lines. We could implement an rhqctl add command.
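One possible shape for that distinction, with hypothetical command names building on the suggestions above:

    # first storage node: no RHQ server to connect to yet, so install standalone
    rhqctl install --storage

    # additional nodes: an 'add' style command that contacts the running RHQ
    # server so the node gets the cluster settings and is added to the auth lists
    rhqctl add --storage --rhq-server-host=rhq01.example.com --rhq-server-port=7080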
> >> 
> >>> 
> >>> 
> >>> 
> >>>> 3) Do not have the storage installer talk to either the database or the server. Essentially leave it as is, which means that when the new storage node is deployed, it will not join the cluster. When the node is committed into inventory, the server will detect that the node is not part of the cluster and then proceed to propagate IP addresses (for inter-node authentication), configure the cluster settings on the new node, and finally restart the new node for the settings to take effect.
> >>>> 
> >>>> pros:
> >>>> - Keeps the install process simple for the user
> >>>> 
> >>>> - Does not require access to the database
> >>>> 
> >>>> - Does not require access to a running RHQ server
> >>>> 
> >>>> cons:
> >>>> - Storage node deployment now involves a more complex workflow
> >>>> 
> >>>> - If there are deployment problems, the user will find out about it later rather than sooner
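As a rough sketch of what the server-side workflow described in option 3 might look like (all manager and method names are made up for illustration):

    // Hypothetical sketch of the option 3 workflow, run on the RHQ server when
    // a newly committed storage node resource is found to not yet be part of
    // the cluster.
    public class StorageNodeJoinWorkflow {

        private final StorageClusterManager clusterManager; // hypothetical collaborator

        public StorageNodeJoinWorkflow(StorageClusterManager clusterManager) {
            this.clusterManager = clusterManager;
        }

        public void onNodeCommitted(String newNodeAddress) {
            if (clusterManager.isClusterMember(newNodeAddress)) {
                return; // node already joined, nothing to do
            }
            // 1) propagate the new node's IP to every existing node's auth list
            clusterManager.addToAuthLists(newNodeAddress);
            // 2) push the shared, cluster-wide settings down to the new node
            clusterManager.applyClusterSettings(newNodeAddress);
            // 3) restart the new node so the settings take effect and it can join
            clusterManager.restartNode(newNodeAddress);
            // 4) verify the join and surface the failure to the user if it did not work
            if (!clusterManager.isClusterMember(newNodeAddress)) {
                clusterManager.notifyJoinFailure(newNodeAddress);
            }
        }
    }

    // hypothetical interface, just enough for the sketch to compile
    interface StorageClusterManager {
        boolean isClusterMember(String address);
        void addToAuthLists(String address);
        void applyClusterSettings(String address);
        void restartNode(String address);
        void notifyJoinFailure(String address);
    }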
> >>> 
> >>> This is interesting. It sounds more user friendly, but I suspect that's going to depend heavily on how well things work. In other words, the happy path sounds happy, but the bad path sounds really bad.
> >>> 
> >>> What happens if storage nodes are mistakenly spun up or test processes are left up? 
> >> 
> >> I am not sure how we can handle or prevent a user accidentally installing a storage node. Maybe I misunderstand the question. This is not really about the happy vs bad/ugly paths. If anything, this option puts you right on the not so happy path. Independent of this discussion and as part of our storage node management capabilities, we have to deal with (and be able to resolve) issues such as a node failing to join the cluster. 
> >> 
> >> Even with option 2, we still have to provide the same error handling because there will still be the possibility for the user to manually configure the cluster settings. Furthermore, we plan to support cluster-wide config changes, e.g., changing the ports for client requests or for gossip. The steps to address those types of situations will be very similar to what is being described here.
> > 
> > Why does a new node have to join right away? Wouldn't it make sense for
> > a new node to simply be discovered by an agent and then have the admin
> > register the node from the UI or CLI? The plug-in could then take care
> > of configuring the node. The only thing the node would need to do is
> > explicitly identify itself as an RHQ storage node.
> > 
> > Granted, this doesn't take care of the potential configuration issues
> > if a user/admin tries to configure it themselves, but they would at
> > least be aware that something isn't working when they import/register
> > it from the UI.
> 
> No, I do not think it makes sense for the admin to register the new node. I think that is the job of a Cassandra admin, and we neither expect nor want the RHQ/JON admin to have to be a Cassandra admin. I think the simpler we can make it for the user to deploy storage nodes, the better. With that said, maybe there are security issues to be considered. MANAGE_SETTINGS permission is required for managing storage nodes, but maybe it is possible for a user without the appropriate permissions to install a storage node and have it become a member of the cluster. I think we need to take a closer look at that.
> 
> As for configuration issues, we will have checks in place to determine if there are problems and let the user know for example if a node failed to join the cluster for some reason.

That reply scares me. So you are now saying that I have to hire a DBA, a
Cassandra administrator, and an RHQ admin? For simplicity, Cassandra
should not be a focus point for anyone, and therefore I should not have
to define separate Cassandra admin and RHQ admin roles. They should be
one and the same. Perhaps that is the case, but this reply leads me to
believe that you expect me to become a Cassandra administrator to use
RHQ going forward.

-- 
Larry O'Leary
https://plus.google.com/u/0/112645929986009801513


