Need advice about agent plugin design
by Steven North
I am trying to design an RHQ/JON agent plugin to manage a software
resource with the following characteristics:
- there is the software itself (the installation);
- there are a variable number of "bundles" of configuration information,
about 250KB in size each, which need to be read from and written to the
agent; and
- there are "log" files which can 10-50MB in size each which need to be
read from the agent.
I think I am pretty clear on how to handle the software itself--just
like in any number of other agent plugins.
I am not sure how to handle the configuration bundles and the large log
files.
We might want to have the RHQ/JON server manage different versions of
these configuration files and distribute them to multiple remote agents.
Is there some existing domain object that would handle the read/write
aspect of the configuration bundles (zip files)? Could the "package"
concept be used for these? Would we need to create a new domain object
on the server side for these bundles? If so, is there an example of
this kind of thing?
For the log files, I see some mention of the SupportFacet. Would this
be appropriate for retrieving large log files? Is there an example of this?
We expect to access the configuration bundles and the log files using
remote client operations because we have a separate GUI tool to
build/edit the configuration bundles and to correlate and analyze the
log files. Is there an example of using a remote client to pull files
from and push files to remote agents?
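Whichever facet or domain object turns out to be the right fit, the
transfer itself needs to stream in chunks so the agent never holds a
whole 10-50MB log in memory. A minimal sketch of what I mean, in plain
java.io with placeholder names (not any RHQ API):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class LogStreamer {
    private static final int CHUNK_SIZE = 64 * 1024; // 64KB chunks keep memory usage flat

    // Copies a large log file to the given sink (e.g. a stream back to the
    // server) without ever buffering more than one chunk in memory.
    public static long stream(String logPath, OutputStream sink) throws IOException {
        InputStream in = new FileInputStream(logPath);
        try {
            byte[] chunk = new byte[CHUNK_SIZE];
            long total = 0;
            int read;
            while ((read = in.read(chunk)) != -1) {
                sink.write(chunk, 0, read);
                total += read;
            }
            sink.flush();
            return total;
        } finally {
            in.close();
        }
    }
}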
Thanks in advance for any advice you can give or examples you can point to.
Steve
10 years, 2 months
MeasurementUnits (EPOCH_MILLISECONDS, EPOCH_SECONDS)
by Jiri Kremser
Hi,
In a plugin descriptor, a metric definition can have the unit type "epoch_milliseconds" or "epoch_seconds" (rhq-configuration.xsd allows it). What kind of metric could be represented by epoch_milliseconds? Shouldn't exact moments in time (which is what epoch_milliseconds values represent) be modeled as traits instead?
I am asking because the values of this type are not formatted (https://bugzilla.redhat.com/show_bug.cgi?id=857144).
I think these two unit types should be removed from MeasurementUnits and the XML schema. However, there might be some plugins out there using them, so what about deprecating them instead? Are there any edge cases where these unit types do make sense?
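For context, if the units were kept, the value formatter would need a date-style special case rather than the usual numeric scaling; roughly something like this (purely illustrative, not the actual RHQ formatting code):

import java.text.SimpleDateFormat;
import java.util.Date;

public class EpochFormat {
    // Epoch values only make sense when rendered as timestamps, unlike the
    // other MeasurementUnits, which are scaled and suffixed numerically.
    public static String format(double value, boolean inSeconds) {
        long millis = inSeconds ? (long) (value * 1000d) : (long) value;
        return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date(millis));
    }
}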
JK
10 years, 3 months
changes for OOB calculations
by John Sanda
The current implementation for calculating OOBs involves a complex query that reads the 1hr metric data table to get data from the last hour. Since the metric data tables, including the 1hr table, are being ported to Cassandra, the implementation for calculating OOBs necessarily has to change. Even with CQL, querying Cassandra is significantly different from SQL; first and foremost, there are no joins. Stefan and I have been reviewing some different design options, and we wanted to solicit feedback.
* Option 1 - Leverage the indexes we already have in place to get 1hr data
We already have some indexes in place in the Cassandra design that we could leverage to get the 1hr data.
pros:
Does not require any additional schema changes, and avoids the overhead of updating and maintaining another index. It also minimizes changes to the code base for calculating OOBs as well as for calculating baselines and aggregates.
cons:
Fetching the 1hr data will involve multiple queries to Cassandra. This is suboptimal and could become a performance problem as the number of schedules with 1hr data for the previous hour increases.
* Option 2 - Put a new index in place
We could implement a new index that optimizes querying for 1hr data from the previous hour.
pros:
The index will allow us to load all of the data much more efficiently, with a single query. Additional queries would only be necessary for paging the data, and with row caching enabled, reads after the initial one will come directly from memory, making them very fast. Fewer code changes are required to support this than for the later options.
cons:
The index would be implemented as a custom index, which means another column family/table to maintain. Whenever we insert new data into the 1hr table, we also have to update the index. The index will take up additional disk space and will divert CPU cycles away from other Cassandra work. Querying will be substantially faster than option 1, but loses out to options 3 and 4.
* Option 3 - Avoid querying for 1hr data altogether
OOBs are calculated when the data purge job runs; aggregates and baselines are calculated just before them. As Stefan astutely pointed out, at that point we already have in memory the 1hr data needed for calculating the OOBs.
pros:
Avoids the query/index overhead of options 1 and 2.
cons:
Will require a good deal of implementation change. The code that currently generates aggregates basically does it in one big batch operation, and the same holds for the Cassandra implementation. We need to have that code return the generated 1hr aggregates so that they can be made available to MeasurementOOBManagerBean. That is simple enough; however, the simple approach is not a scalable one. As the number of schedules increases, so does the number of 1hr aggregates we are holding in memory. A safer solution is to do it in chunks: first generate aggregates for the first N schedules and pass those results on to MeasurementOOBManagerBean, then repeat for the next N schedules, and so on. This will involve a fair amount of change, and, like options 1 and 2, all of the work (aggregation, baselines, OOBs) is still done serially in a single thread, unlike option 4.
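A rough sketch of that chunked flow (all names here are hypothetical stand-ins for the aggregation code and MeasurementOOBManagerBean, and List<Double> stands in for the real aggregate objects):

import java.util.List;

public class ChunkedOobJob {

    interface Aggregator {
        // Generates and returns the 1hr aggregates for the given schedules.
        List<Double> generateOneHourAggregates(List<Integer> scheduleIds);
    }

    interface OobCalculator {
        // Stand-in for what MeasurementOOBManagerBean would consume.
        void calculateOOBs(List<Double> oneHourAggregates);
    }

    private static final int CHUNK_SIZE = 500; // schedules per pass, tunable

    public void run(Aggregator aggregator, OobCalculator oobCalculator,
                    List<Integer> scheduleIds) {
        // Process N schedules at a time so we never hold every 1hr aggregate
        // in memory at once.
        for (int i = 0; i < scheduleIds.size(); i += CHUNK_SIZE) {
            List<Integer> chunk = scheduleIds.subList(i,
                Math.min(i + CHUNK_SIZE, scheduleIds.size()));
            List<Double> oneHourData = aggregator.generateOneHourAggregates(chunk);
            oobCalculator.calculateOOBs(oneHourData);
        }
    }
}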
* Option 4 - Avoid querying for 1hr data altogether and do the calculations concurrently
The primary difference between this and option 3 is that this one would be implemented with message passing. Since we currently cannot use CDI due to portal-war, that means JMS. Once portal-war is gone, it would be worth considering CDI events with EJB async methods as a more lightweight approach.
pros:
Avoids the query/index overhead of options 1 and 2. Components will be more granular and very loosely coupled, making it easier to write unit tests. It is difficult to write tests for some of the existing metrics code, in part because automated tests were not written alongside that code. This approach also provides much better throughput than the other options and the existing implementation. To illustrate, suppose the container maintains a pool of 10 threads to run MDBs. If there is enough work to do during a given run of the data purge job, we can easily pipeline it across those threads, resulting in a level of throughput that will help us keep up with the larger inventories that produce lots of metrics and that provided the impetus to migrate our metrics storage to Cassandra in the first place. Lastly, the JMS solution should scale very nicely for users who run multiple RHQ servers.
cons:
Of all the options this involves the most implementation change. I am not up to speed on the pros/cons of JMS in AS 7, but that is something we would definitely have to consider. I am also not sure what, if any, issues there are with JMS and Arquillian. If JMS functionality is not well supported by Arquillian, then automated integration testing will be more challenging; if that turns out to be the case, the CDI events + async EJB approach might be more favorable.
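To give a feel for the shape of option 4, here is a bare-bones MDB sketch. This is the standard javax.ejb/javax.jms API; the queue name and message properties are made up for illustration:

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination",
                              propertyValue = "queue/OobCalculation") // made-up queue name
})
public class OobCalculationMDB implements MessageListener {

    // Each message carries one chunk of schedule ids whose 1hr aggregates were
    // just generated; the container's MDB pool processes chunks concurrently.
    public void onMessage(Message message) {
        try {
            int startScheduleId = message.getIntProperty("startScheduleId"); // hypothetical property
            int endScheduleId = message.getIntProperty("endScheduleId");     // hypothetical property
            // ... calculate OOBs for schedules in [startScheduleId, endScheduleId] ...
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}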
- John
11 years, 4 months
options for installing metrics db (cassandra)
by John Sanda
There are different production scenarios in which Cassandra will have to be installed, most notably a fresh RHQ install. With PostgreSQL and Oracle, we make it the user's responsibility to have the database set up and configured properly. With Cassandra, though, we want RHQ to handle things to the greatest extent possible. I wrote up a doc[1] on the wiki that describes the various options for how we might install Cassandra, as well as the different scenarios in which it will need to be installed. Any feedback is appreciated. Also bear in mind that UX is a key component here.
[1] https://docs.jboss.org/author/display/RHQ/Metrics+DB+Installation
Thanks
- John
11 years, 4 months
Re: JMX plugin test failure
by Thomas Segismont
Hi,
Following up on the investigation of both matters:
1. I have built Sigar from source[1] as discussed with Stefan yesterday.
I ran the ProcState test again (on my box) and it also failed.
2. I have found two JIRA issues [2] [3] about the ProcCredName error
"Numerical result out of range". The first one says they incremented the
buffer size, and the second says they should look at the
_SC_GETPW_R_SIZE_MAX value from sysconf to determine it. When I look at
the latest source code [4], there is no runtime call to sysconf, just a
defined constant.
3. In the first JIRA issue, Doug MacEachern attached a test case [5]
which I tried on jon10 and jon11. On jon11 ProcCredName is read
correctly; not on jon10. So I think the issue we see in the JMX plugin
tests does not come from my ProcessInfo changeset but rather from the
Sigar native library on that system.
4. I wanted to see how pure C code would behave, so I wrote a C source
file [6] (derived from Sigar code). The '1000' parameter is my user id
on my box; I changed it to 600 on jon{10,11} (the hudson user id). It
compiles and runs on my box and on jon11, but not on jon10. From the
output [7] you can see that gnu/stubs-64.h is missing, whereas
gnu/stubs-32.h is in place, yet jon10 is 64-bit Linux. How can we get
this fixed?
Thomas
[1] branches 1.6 and master on GitHub: https://github.com/hyperic/sigar
[2] https://jira.hyperic.com/browse/SIGAR-27
[3] https://jira.hyperic.com/browse/SIGAR-231
[4] http://pastebin.test.redhat.com/125297
[5] https://jira.hyperic.com/secure/attachment/10639/credname.java
[6] http://pastebin.test.redhat.com/125303
[7] http://pastebin.test.redhat.com/125310
On 30/01/2013 11:38, Thomas Segismont wrote:
> The two problems come out differently:
> * when calling getProcCredName (found in JMX plugin tests by John)
> * when calling getProcState (found in Apache plugin integration tests by
> me)
>
> John quickly reviewed the test case I wrote yesterday and found nothing
> strange. So the test case is likely to be good.
>
> Thomas
>
> On 30/01/2013 00:55, Charles Crouch wrote:
>> So whatever you've found, Thomas, looks different from the problem
>> hitting master:
>>
>> (5:47:26 PM) ccrouch: ok once again from the top :-)
>> (5:47:40 PM) ccrouch: plugin test: passes on ....
>> (5:47:44 PM) jsanda: jon11
>> (5:48:02 PM) jsanda: fails on jon10 and jon12
>> (5:47:59 PM) ccrouch: and thomas' test fails on jon10 and jon11 ?
>> (5:48:14 PM) jsanda: appears that way
>> http://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/RHQ/job/sigar-test/2...
>>
>> http://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/RHQ/job/sigar-test/3...
>>
>> (5:48:24 PM) ccrouch: ok, so that just means its a bad test :-/
>> (5:48:27 PM) ccrouch: or a different bug
>> (5:48:43 PM) jsanda: seems that way
>>
>> ----- Original Message -----
>>> Sure thing. I'll run the test and reply to the thread. Thanks again
>>> for your help.
>>>
>>> - John
>>> On Jan 29, 2013, at 3:46 PM, Thomas Segismont <tsegismo@redhat.com>
>>> wrote:
>>>
>>>> I'm really confused by that change I made. The bug I found in
>>>> Sigar#getProcState came up while working on it (debugging the
>>>> Apache plugin integration tests). Now you say the JMX plugin tests
>>>> have been failing since the changeset was merged into master...
>>>>
>>>> And still:
>>>> * I had written a test on ProcessInfo before changing the
>>>> implementation, and it found no regression
>>>> * The changeset only affects our Java code (no Sigar version
>>>> change), and both problems (getProcState and getProcCredName) seem
>>>> to come from the Sigar native libraries
>>>> * The JMX plugin tests pass if you run the cassandra-backend job
>>>> on jon11 ...
>>>>
>>>> I'll work on that tomorrow as it's indeed becoming a huge problem
>>>> for the 4.6 release.
>>>>
>>>> By the way, can you please run the test case for getProcState and
>>>> reply on the RHQ mailing list?
>>>>
>>>> Thanks
>>>> Thomas
>>>>
>>>> On 29/01/2013 20:53, John Sanda wrote:
>>>>> Hi Thomas,
>>>>>
>>>>> Thanks for looking at that Sigar issue with me earlier. I tried
>>>>> calling
>>>>> freshSnapshot() to work around the NPE, but as you expected, we
>>>>> still
>>>>> hit it. Would you mind doing further investigation into this since
>>>>> it is
>>>>> related to the work you have been doing? As for why it is not
>>>>> failing in
>>>>> the rhq-master job, you will note from the conversation below that
>>>>> it
>>>>> may be because jon11, where rhq-master runs, is 32 bit, whereas
>>>>> jon10
>>>>> and jon12 are both 64 bit. Here is the chat we had a bit earlier
>>>>> in
>>>>> #jboss-on:
>>>>>
>>>>>
>>>>> 14:30jsanda: ccrouch: made some headway with the jmx plugin test
>>>>> failure
>>>>> and it looks like it might be 4.6 blocker
>>>>> ccrouch: grr
>>>>> 14:30jsanda: earlier i enabled logging in the plugin tests
>>>>> 14:30jsanda: and saw that an NPE was getting thrown
>>>>> 14:31jsanda: it's in in the ProcessInfo code where
>>>>> tsegismont|dinner
>>>>> made changes to deal with the stale state
>>>>> 14:31jsanda: we were discussing it earlier
>>>>> 14:31jsanda: he thinks we might be hitting
>>>>> 14:31jsanda: https://jira.hyperic.com/browse/SIGAR-231
>>>>> 14:32jsanda: as for why it's not failing on jon11 where rhq-master
>>>>> runs,
>>>>> stefan_n just pointed out that jon11 is 32 bit whereas jon10 and
>>>>> jon12
>>>>> are 64 bit
>>>>> 14:35jsanda: ccrouch: the test started failing consistently after
>>>>> these
>>>>> changes -
>>>>> http://git.fedorahosted.org/cgit/rhq/rhq.git/commit/?id=3ded44d88b2700c95...
>>>>>
>>>>> 14:37jsanda: but at this point, i'd rather let tsegismont|dinner
>>>>> take
>>>>> the reigns on this since it is related to the stuff he's been
>>>>> working on
>>>>> 14:37jsanda: and for now, i'll just disable the test in the
>>>>> cassandra-backend branch so that i can hopefully get a good build
>>>>> 14:39stefan_n: ccrouch, *jsanda*, I think it's a 4.6 blocker
>>>>> because we
>>>>> cannot prove that it does not happen on other 32bit Linux
>>>>> platforms (eg.
>>>>> RHEL 5) beyond the RHEL6 32bit is passing on
>>>>> 14:39stefan_n: and 64bit is already broken
>>>>
>>>
>>>
>
11 years, 4 months
Sigar getProcState bug on other platforms?
by Thomas Segismont
Hi,
Last week I may have found a bug in Sigar[1]. I think it does not only
affect Linux amd64, so I am asking for your help to determine on which
other platforms it may be present.
I pushed a very simple project to GitHub[2]. You can fork it or simply
download the source zip file.
Then open a terminal, go to the project directory and run:
# mvn clean package
# java -cp target/sigar/sigar.jar:target/classes/ test/ProcStateTest
If it runs silently, then your platform may not be affected. Otherwise,
could you send me the error message and your platform type?
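For reference, the heart of the check is essentially just the following
call (a minimal sketch using the standard Sigar API; the actual test in
the GitHub project may differ in details):

import org.hyperic.sigar.ProcState;
import org.hyperic.sigar.Sigar;

public class ProcStateCheck {
    public static void main(String[] args) throws Exception {
        Sigar sigar = new Sigar();
        try {
            long pid = sigar.getPid(); // the current JVM's process id
            // On affected platforms, this call is where the failure shows up.
            ProcState state = sigar.getProcState(pid);
        } finally {
            sigar.close();
        }
    }
}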
Of course, if you think it's not a Sigar bug and that I missed
something, please tell me. I have had no response in the Sigar user
forum so far.
Thanks for your help!
Regards,
Thomas
[1] http://communities.vmware.com/message/2183651#2183651
[2] https://github.com/tsegismont/sigar-procstate-test
11 years, 4 months
Modifications on ProcessInfo implementation
by Thomas Segismont
Hi all,
While working on a fix for BZ 885664 (OpenSSHD and MySQL availability
checks may report stale data), Lukas and I started discussing how
ProcessInfo is currently implemented.
ProcessInfo uses SIGAR to gather information on platform processes. It
behaves like a cache of the SIGAR call results. So, when a user gets an
existing ProcessInfo instance and invokes one of its methods, generally
no new SIGAR call is made.
This behavior (and the corresponding API) is not really documented, and
it has led to bugs like BZ 885664 (the cached ProcState instance
reports that the underlying process is up even when it no longer exists).
Lukas suggested I should try to make the class less error-prone to use
while fixing the bug, so I have updated the implementation as follows:
* created a public inner class, ProcessInfoSnapshot, which groups the
non-static process property accessors (like state and CPU usage) and the
operations on these properties (like the isRunning method)
* added two new methods, priorSnapshot and freshSnapshot, to get the
latest ProcessInfoSnapshot or retrieve a new one, respectively
* kept static process properties and associated operations in the top
level type
* kept the previous API but marked it deprecated.
If you want to see the changeset:
http://git.fedorahosted.org/cgit/rhq/rhq.git/commit/?h=bug/885664&id=c406...
As an example, to check if a process is alive with the new API:
processInfo.freshSnapshot().isRunning();
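If you don't want to dig through the changeset, the new shape is roughly
the following (a simplified sketch, not the actual source; the real
snapshot holds state, CPU usage, and so on):

// Simplified sketch only; the real class wraps SIGAR calls.
public class ProcessInfo {

    // Groups the non-static, cached process properties and their operations.
    public class ProcessInfoSnapshot {
        private final boolean running;

        ProcessInfoSnapshot(boolean running) {
            this.running = running;
        }

        public boolean isRunning() {
            return running;
        }
    }

    private ProcessInfoSnapshot snapshot = new ProcessInfoSnapshot(false);

    // Returns the last snapshot taken, without making new SIGAR calls.
    public ProcessInfoSnapshot priorSnapshot() {
        return snapshot;
    }

    // Makes fresh SIGAR calls and caches the results in a new snapshot.
    public ProcessInfoSnapshot freshSnapshot() {
        snapshot = new ProcessInfoSnapshot(queryOperatingSystem());
        return snapshot;
    }

    private boolean queryOperatingSystem() {
        return true; // placeholder for the actual sigar.getProcState(pid) calls
    }
}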
What do you think?
Regards
Thomas
11 years, 4 months
injecting overlord in tests
by John Sanda
In lots of our server integration tests we obtain and use the overlord. I added a small utility to eliminate the boilerplate as well as the round trips to the database. Instead of doing,
public void myTest() {
    SubjectManagerLocal subjectManager = LookupUtil.getSubjectManager();
    Subject overlord = subjectManager.getOverlord();
    // ...
}
you can alternatively do,
public class MyTest {
    @Inject @Overlord
    private Subject overlord;
}
This is more succinct, removes the dependency on SubjectManagerLocal for the frequent case where you only need the overlord, and eliminates the round trips to the database.
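For the curious, the annotation plus a reflective injection pass is about all there is to it. Here is one plausible shape, simplified and not necessarily the actual utility:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

public class OverlordInjector {

    @Target(ElementType.FIELD)
    @Retention(RetentionPolicy.RUNTIME)
    public @interface Overlord {
    }

    // Sets every field annotated with @Overlord on the test instance. The
    // overlord Subject is fetched once and cached by the caller, which is
    // what eliminates the repeated database round trips.
    public static void inject(Object test, Object overlord) throws IllegalAccessException {
        for (Field field : test.getClass().getDeclaredFields()) {
            if (field.isAnnotationPresent(Overlord.class)) {
                field.setAccessible(true);
                field.set(test, overlord);
            }
        }
    }
}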
- John
11 years, 4 months
RHQ Charting Working Meeting Recording
by mike thompson
If you weren't able to attend the RHQ Charting meeting, here is a YouTube recording:
https://www.youtube.com/watch?feature=player_embedded&v=iFfh7DhCFv0
Here are the charting issues addressed in the last meeting:
1) Trendlines - being able to visualize the trend in the data without getting lost in the noise
2) Identification of individual values (through a heavy visual cap on the bar top) as opposed to the stacked bar aggregations
3) Eliminating gradients that suggested the data was changing within the bar in a certain direction
4) Simplified x-axis with more human-readable times displayed on logical boundaries such as every 15 minutes or 1 hour, instead of every 10th bar showing its exact time
5) Hovers to provide detailed insight into each bar, with explicit values showing (and bar durations, e.g. 20-minute bars)
6) More exact availability intervals, and hovers that provide availability start and end times along with duration calculations. This answers questions such as "When was my downtime period, exactly?" The duration information provides quick answers to questions like "Oh, we were down for 50 minutes." instead of having to look at the availability stoplights and calculate the interval (which would only be as granular as the bar duration shown).
7) Out-of-bounds charting (when baselines are available)
8) Performance++: with the addition of availability and out-of-bounds data on top of the metric data, performance issues were beginning to show up. By using parallel requests we are now able to incur no additional time on the metrics query and still reap the benefits of the additional data (avail + OOB). This also opens the door to augmenting the charts with additional data in the future without much of a performance hit.
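The parallel-request idea, in plain Java terms (illustrative only; the real code lives in the GWT client and uses async callbacks):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelChartData {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        try {
            // Fire all three requests at once: total latency is the slowest
            // single request, not the sum of all three.
            Future<String> metrics = pool.submit(fetch("metrics"));
            Future<String> avail = pool.submit(fetch("availability"));
            Future<String> oobs = pool.submit(fetch("oob"));
            render(metrics.get(), avail.get(), oobs.get());
        } finally {
            pool.shutdown();
        }
    }

    private static Callable<String> fetch(final String what) {
        return new Callable<String>() {
            public String call() {
                return what + "-data"; // placeholder for the real remote call
            }
        };
    }

    private static void render(String... data) {
        System.out.println(java.util.Arrays.toString(data));
    }
}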
-- Mike
11 years, 4 months