Hadoop plugin questions; JMX

Heiko W. Rupp hrupp at redhat.com
Thu Oct 20 19:01:01 UTC 2011


Am 20.10.2011 um 19:57 schrieb Elias Ross:
> There's really only a few MBeans per Hadoop instance. So I don't know
> if you would necessarily have a dozen or so resources?

TBH, I don't recall the details. I think I used the distribution coming
from Apache at the time (but I may be wrong).

> <plugin name="hadoop">
>    <server name="Hadoop" discovery="JMXDiscoveryComponent"
> class="JMXServerComponent">
>       <server name="Hadoop Name Node">
>           <service name="FSNamesystemState" >
>           <service name="NameNodeActivity" >
> 
> Still, I think it would be nice to group them all together. Is it
> possible to nest servers like this? How can I use process scanning in

Yes, you can nest servers like that.

> conjunction with the JMX discovery process?

That is not a big deal, as you can see e.g. in the AS4 or Tomcat plugins.
What unfortunately does not work is having a process scan
within an embedded server:

<server name="foo" >
   <server name="bar">

Here process scans work on foo, but not on bar.
This is also one reason why the Hadoop plugin looks the way it does today.
Basically the foo server does the process scans, and if it
finds results for one of the scans, it adds itself to the set of
discovered resources to form the umbrella "Hadoop subsystem":

   <server name="Hadoop" discovery="HadoopDiscovery" class="HadoopComponent">
        <process-scan name="TaskTracker" query="process|basename|match=^java.*,arg|org.apache.hadoop.mapred.TaskTracker|match=.*"/>
        <process-scan name="JobTracker" query="process|basename|match=^java.*,arg|org.apache.hadoop.mapred.JobTracker|match=.*"/>
        <process-scan name="NameNode" query="process|basename|match=^java.*,arg|org.apache.hadoop.hdfs.server.namenode.NameNode|match=.*"/>
        <process-scan name="SecondaryNameNode" query="process|basename|match=^java.*,arg|org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode|match=.*"/>
        <process-scan name="DataNode" query="process|basename|match=^java.*,arg|org.apache.hadoop.hdfs.server.datanode.DataNode|match=.*"/>

        <service name="TaskTracker" discovery="HadoopServiceDiscovery" class="HadoopServiceComponent"


The names of the embedded services then just match the names of the process scans, so when the
discovery for the parent <server name="Hadoop"> is done, the children are discovered by basically asking the parent
for its process scan results

(in HadoopServiceDiscovery)

    List<ProcessScanResult> parentProcessScans =
        resourceDiscoveryContext.getParentResourceContext().getNativeProcessesForType();

and then checking for this match:

    ResourceType resourceType = resourceDiscoveryContext.getResourceType();
    String rtName = resourceType.getName();
    ...
    if (psr.getProcessScan().getName().equals(rtName)) {
        // add the service to the discovered results
    }
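
Put together, the child discovery boils down to something like this (a sketch along the lines
of HadoopServiceDiscovery, not a verbatim copy; same imports as the sketch above, plus
org.rhq.core.domain.resource.ResourceType, and the resource key/name/description are
simplified for illustration):

    public Set<DiscoveredResourceDetails> discoverResources(ResourceDiscoveryContext context) {
        Set<DiscoveredResourceDetails> details = new HashSet<DiscoveredResourceDetails>();

        // Ask the parent "Hadoop" server for its process scan results
        List<ProcessScanResult> parentProcessScans =
            context.getParentResourceContext().getNativeProcessesForType();

        ResourceType resourceType = context.getResourceType();
        String rtName = resourceType.getName();   // e.g. "NameNode" or "TaskTracker"

        for (ProcessScanResult psr : parentProcessScans) {
            // A child service is discovered when the name of the parent's process scan
            // matches the name of this service's resource type
            if (psr.getProcessScan().getName().equals(rtName)) {
                details.add(new DiscoveredResourceDetails(
                    resourceType,
                    rtName,                             // resource key (simplified)
                    rtName,                             // resource name
                    null,                               // version
                    rtName + " process",                // description (illustrative)
                    context.getDefaultPluginConfiguration(),
                    psr.getProcessInfo()));             // the matched process
            }
        }
        return details;
    }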


>> You could tweak the pluginGen plugin generator to not get its input
>> from command line / stdin, but from going out to jmx and reading
>> objects that match an objectname query. And then use the
>> existing templating to create the artifacts.
> 
> I might do that. I might just work on the part to generate the XML
> itself and stop there :-)

I'd say that is mostly solved - have a look at the "generate()" method at PluginGen.java:283+.
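
For the "go out to jmx and read objects that match an objectname query" part, plain JMX is
enough to collect the input for the existing templating. A quick standalone sketch - the
service URL and the "hadoop:*" domain are placeholders and depend on how the Hadoop JVMs
expose JMX:

    import java.util.Set;

    import javax.management.MBeanAttributeInfo;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class MBeanLister {
        public static void main(String[] args) throws Exception {
            // Placeholder URL - point it at the JMX port of the Hadoop process in question
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:10004/jmxrmi");
            JMXConnector connector = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection server = connector.getMBeanServerConnection();
                // All MBeans in the hadoop domain (the domain name varies by Hadoop version)
                Set<ObjectName> names = server.queryNames(new ObjectName("hadoop:*"), null);
                for (ObjectName name : names) {
                    System.out.println(name);
                    for (MBeanAttributeInfo attr : server.getMBeanInfo(name).getAttributes()) {
                        // These attribute names/types are what the generator would turn
                        // into entries in the plugin descriptor
                        System.out.println("  " + attr.getName() + " : " + attr.getType());
                    }
                }
            } finally {
                connector.close();
            }
        }
    }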

  Heiko

-- 
Reg. address: Red Hat GmbH, Technopark II, Haus C,
Werner-von-Siemens-Ring 14, D-85630 Grasbrunn
Commercial register: Amtsgericht München HRB 153243
Managing directors: Brendan Lane, Charlie Peters, Michael Cunningham, Charles Cachera


