Cassandra + ElasticSearch
by Elias Ross
So I got ElasticSearch to store and load data from a Cassandra node.
Apparently none of the shared storage solutions are officially
supported by ElasticSearch anymore, perhaps for performance
reasons[*], but it solves at least one of the problems of managing
that data within an RHQ cluster.
[*] I do think that if you have E.S. itself talking to Cassandra on
the local host, the performance would be about the same as a local
filesystem anyway.
I think the next problem is how to integrate ElasticSearch within RHQ
and LogStash with the agent.
I don't think that's actually that hard. You can bootstrap E.S. within
a unit test quite easily. Running LogStash within the RHQ agent is
likely pretty straightforward.
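For reference, here is a minimal sketch of bootstrapping an embedded
E.S. node in a test, using the 1.x-era Java API (the data path and
settings here are hypothetical):

import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.node.Node;
import static org.elasticsearch.node.NodeBuilder.nodeBuilder;

// Start a local (non-clustered) node; data lands in a scratch directory.
Node node = nodeBuilder()
    .local(true)
    .settings(ImmutableSettings.settingsBuilder()
        .put("path.data", "target/es-data")) // hypothetical test path
    .node();
Client client = node.client();
// ... index and search as the test requires ...
node.close();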
The other questions about how to organize, query and display this data
seem more pertinent. Also, how to handle existing log events, etc.
Anyway, if there is enough interest, I'll post the Cassandra piece.
10 years, 1 month
RHQ 4.11 patches
by Elias Ross
The following didn't make it into 4.10 but I hope will be looked at.
Bug 1073201 - Agent does a discovery loop if the inventory contains an
unknown resource
Bug 1075757 - Group metric graphs for resources with quotes in name do
not display graph - JSON bug
Do I need to set up GitHub pull requests, or are patches in Bugzilla still fine?
10 years, 1 month
RHQ 4.10 available
by Heiko W.Rupp
Hello,
It is my pleasure to announce the release of RHQ 4.10
The biggest changes are:
• Reduced agent footprint
• Further improved metric charts
• Fine-grained bundle permissions
The release also includes lots of other improvements and bug fixes.
You can find the full release notes at:
https://docs.jboss.org/author/display/RHQ/Release+Notes+4.10
The documentation of the REST API will be uploaded to SF.net in the next few days;
until then, you can use the 4.9 documentation, which is still valid.
Many thanks go again to the contributors:
* Elias Ross
* Michael Burman
Heiko on behalf of the RHQ team
10 years, 1 month
Scalability issues in RHQ 4.9 - A summary
by Elias Ross
I've encountered the following scalability issues:
Bug 1025918 Uninventoring resources is slow; should not take more than a second
This was fixed by improving the query. It is basically only an issue
when the number of resources is in the 100,000+ range.
Bug 1064563 - Separate metrics compression and OOB from actual data purge tasks
This is something fairly simple to do. These jobs don't need to be run
hourly. They also don't belong with the metrics compression process.
The deletes don't scale well, and there's also no way to turn them off
completely.
Bug 1073093 - Admin -> Metrics/Alert/Drift query does not scale well
when alert definition count high
The problem happens with a large number of alert definitions. Fixed by
adding a couple of indexes; somebody still needs to add them to the
Hibernate mappings/upgrade script.
Bug 1073558 - Purge process for deleting alert definitions can lead to
high database contention and timeouts
Not addressed. I'm thinking there are a couple of options here:
1) Purge a few definitions at a time, not every one.
2) Figure out if some indexes need to be added.
3) Drop some of the foreign key constraints. They are not really
needed and hurt overall performance. My DBAs say that "This schema
was designed using the old way."
Bug 1070473 - When updating resource-type based alerts, ORA-00060:
deadlock detected when deleting rhq_config
It's possible in the UI to have multiple 'save' clicks cause two
transactions to run in contention. This can really hit hard when you
happen to be doing this during a purge.
No bugs for these:
1) Event purging is slow. Hopefully addressed by moving events to
Cassandra in a later release; for now, addressed by adding indexes.
Can be caused by too many rows in rhq_event_source, and by foreign
key checks.
2) Trait purging is slow, and can cause database transactions to
halt, meaning the agents can't report metrics and the server can run
out of database connections. The workaround is to stop the purge from
happening. Again, this should be addressed by moving this data to
Cassandra.
3) Aggregation is slow (can run > 1 hour given enough metrics). Being
addressed well enough in 4.11, I assume.
4) OOB calculation is slow. There was Bug 1059412, but it doesn't
necessarily scale that well. Moving OOBs to Cassandra, though, may help.
...
I'd like to help with 1073558 (purge alert definitions), but I'm not
sure of the recommended approach. My suggestion would be to only purge
~1000 or so per transaction.
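As a rough sketch of what I mean (hypothetical JPA code; the entity
name and 'deleted' flag are assumptions, not the actual RHQ schema):

// Purge flagged alert definitions ~1000 at a time, one short
// transaction per batch, to keep lock time and contention down.
// 'em' is a resource-local EntityManager.
final int BATCH = 1000;
int purged;
do {
    em.getTransaction().begin();
    List<Integer> ids = em.createQuery(
            "SELECT ad.id FROM AlertDefinition ad WHERE ad.deleted = true",
            Integer.class)
        .setMaxResults(BATCH)
        .getResultList();
    if (!ids.isEmpty()) {
        em.createQuery("DELETE FROM AlertDefinition ad WHERE ad.id IN (:ids)")
            .setParameter("ids", ids)
            .executeUpdate();
    }
    em.getTransaction().commit();
    purged = ids.size();
} while (purged == BATCH);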
I wouldn't mind working on some Cassandra features either. I could
work on moving events to Cassandra, but I'm not sure how interested I
am in dealing with the data migration steps or whatnot.
Anyway, any advice would be welcome here.
10 years, 2 months
Auto-import server plugin
by Libor Zoubek
Hello,
I remember several people, including me, talking about an auto-import
feature for RHQ. For sure, this feature can be implemented by anyone
just by hitting the CLI or REST API with their own script.
I wrote a server plugin doing the same. It's a scheduled job running
every 5 minutes; it would be better if the plugin could listen for new
resources as they appear. Do we have such a feature?
The plugin has several settings:
- auto-import platforms (true/false) - enable auto-import for new platforms
- subnet filter (longString) - the user can define subnets (e.g.
192.168.1.0/24); only agents connecting from matching subnets are
auto-imported (see the sketch below)
- children (true/false) - auto-import platform child resources
When the plugin is deployed, everything is disabled by default.
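The subnet matching itself is small; a sketch of a hypothetical
helper (IPv4 only):

import java.io.FileFilter;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.nio.ByteBuffer;

// Returns true if addr falls within the CIDR block, e.g. "192.168.1.0/24".
static boolean inSubnet(String cidr, InetAddress addr) throws UnknownHostException {
    String[] parts = cidr.split("/");
    int prefix = Integer.parseInt(parts[1]);
    int net = ByteBuffer.wrap(InetAddress.getByName(parts[0]).getAddress()).getInt();
    int ip = ByteBuffer.wrap(addr.getAddress()).getInt();
    int mask = (prefix == 0) ? 0 : (0xFFFFFFFF << (32 - prefix));
    return (net & mask) == (ip & mask);
}

The plugin would run this against the connecting agent's address for
each configured subnet.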
Do you guys have any ideas about this plugin? Or at least an ACK to
push it to master so it gets built & deployed with RHQ?
--
Libor Zoubek
10 years, 2 months
Cassandra event storage - design ideas
by Elias Ross
I've come up with a simple (untested) design for moving events to Cassandra.
The goals are:
0) Replace what's there.
1) Deal with high and low volume situations. Since RHQ doesn't handle
a large number of events very well (> 100,000), they are mostly used
for low-volume situations today; you would not currently want to
capture debug messages for all your application servers. In some
cases, though, you might have thousands of event definitions, and
eventually a solution for all log messages is needed.
2) Allow events to be searched in the same ways, e.g. severity, location, etc.
3) Deal with data retention. How to remove data after a certain time.
Some other goals (probably outside the scope of this, but worth consideration):
0) Administrator type problems: "Bring me all errors across all systems."
1) Make it easier to add lots of log files for monitoring. For
example, /var/log/*.log
2) Automate log parsing of dates, severity, log category (where available)
3) Query all the events with 'NullPointerException' in them, for
instance. Probably possible by adding extra columns like 'exception'
or 'http_status', and adding an index or something like that based on
what the user wants.
3b) Design to allow full text search to work. (Simple approach:
create a separate table, with a primary key of the hashed token,
mapping back to event(s) and using the TTL of the event; see the
sketch after this list.)
4) Basically, make RHQ function more like 'Splunk-lite'.
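For goal 3b, the token table might look something like this (a
hypothetical sketch; the names and types are guesses):

CREATE TABLE rhq.event_text (
  token bigint, -- hash of one tokenized word from the event detail
  hour int,
  event_definition int,
  time bigint,
  PRIMARY KEY ((token), hour, event_definition, time)
);

Each event insert would also write one row per distinct token, with
the same TTL as the event; a search hashes the word and reads back the
matching event keys.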
Mostly this design is taken from:
http://www.ebaytechblog.com/2012/08/14/cassandra-data-modeling-best-pract...
CREATE TABLE rhq.events (
hour int, -- whole hours since 1970
event_definition int,
time bigint, -- could be timeuuid
type varchar,
location varchar,
-- category varchar, -- host?
severity varchar,
detail varchar,
PRIMARY KEY ((hour, event_definition), time) -- 'event_id'
);
If there is a large TTL and you want to select 'every event', maybe
this is needed? It tracks the earliest and latest events, and is
updated when events are inserted. (Is this worth it?)
CREATE TABLE rhq.event_range (
event_definition int,
start_hour int,
end_hour int,
PRIMARY KEY (event_definition)
);
Why partition this way? For large date ranges, there are still only a
few hundred rows to query. And even if thousands of events are
recorded per hour, Cassandra rows can be fairly wide, so this
shouldn't be a problem.
time - Past-the-hour relative time, with a unique counter: the number
of milliseconds since the current hour (32 bits), plus 32 bits for an
incremented counter. This is so multiple events with the same
millisecond timestamp don't overwrite each other. Since one agent is
talking to one server, the 'time' will be unique. (A sketch follows
the column notes below.)
type - EventDefinition.getType() -- is this useful?
location - Log file path, etc. Indexed by default.
-- category - Log category, subsystem, etc. Indexed by default. (Maybe
not in v1, as it's not captured today.)
severity - INFO, WARN, etc.
detail - Full log text, etc.
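A sketch of how 'hour' and 'time' might be computed (hypothetical
helper code):

import java.util.concurrent.atomic.AtomicInteger;

private static final AtomicInteger COUNTER = new AtomicInteger();
private static final long MILLIS_PER_HOUR = 3600L * 1000L;

// Partition key: whole hours since 1970.
static int hourBucket(long timestampMillis) {
    return (int) (timestampMillis / MILLIS_PER_HOUR);
}

// Clustering key: millis past the hour in the upper 32 bits, an
// incrementing counter in the lower 32 bits, so two events in the
// same millisecond never collide.
static long eventTime(long timestampMillis) {
    long millisPastHour = timestampMillis % MILLIS_PER_HOUR;
    return (millisPastHour << 32) | (COUNTER.getAndIncrement() & 0xFFFFFFFFL);
}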
The tables RHQ_EVENT and RHQ_EVENT_SOURCE are integrated into this one
table (column family.)
Indexes: (are they useful?)
CREATE INDEX events_location
ON events(location);
Should identify the log name. You can then query all events for
'/var/log/messages'. Possible problems: The number of locations may
grow, especially if the log file has timestamps in the name or a
counter. Also doesn't really allow for partial selection, e.g.
'/var/log/*'. (Can you query Cassandra for all unique indexed values?
Create a separate table for this?)
CREATE INDEX events_severity
ON events(severity);
Useful to find all errors, but unclear if DEBUG or INFO (for example)
should be indexed. Might create a separate column for WARN and ERROR
severity only, e.g. 'high_severity'. May be too high volume.
Deletion:
TTL columns are used for event data. This could be assigned per event
definition, but probably makes sense to simply use a global setting
(as it is) for now.
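With the DataStax Java driver, the retention would just ride along on
each insert; a sketch (assumes the events table above and a Cassandra
version that accepts a bind marker for TTL):

PreparedStatement insert = session.prepare(
    "INSERT INTO rhq.events (hour, event_definition, time, type, location, severity, detail) "
  + "VALUES (?, ?, ?, ?, ?, ?, ?) USING TTL ?");
// hour/time computed as above; ttlSeconds comes from the global setting
session.execute(insert.bind(hour, eventDefinitionId, time, type,
    location, severity, detail, ttlSeconds));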
SQL operations:
@NamedQuery(name = Event.DELETE_BY_RESOURCES, query = "DELETE FROM Event ev "
    + " WHERE ev.source IN ( SELECT evs FROM EventSource evs WHERE evs.resource.id IN ( :resourceIds ) )"),
Maybe not needed, since events are deleted by TTL anyway. To do this,
you would find all the event definitions, find the first time events
appear for this source (bounded by the TTL), then delete incrementally
by hour.
@NamedQuery(name = Event.DELETE_BY_EVENT_IDS, query = "DELETE FROM Event e WHERE e.id IN ( :eventIds )"),
hour,event_definition,time is used here as the key for deleting the column/row.
@NamedQuery(name = Event.DELETE_ALL_BY_RESOURCE, query = "" //
@NamedQuery(name = Event.DELETE_ALL_BY_RESOURCE_GROUP, query = "" //
Similar to DELETE_BY_RESOURCES. Just find the event_definition_ids.
@NamedQuery(name = Event.FIND_EVENTS_FOR_RESOURCE_ID_AND_TIME, query = "SELECT ev FROM Event ev "
    + " JOIN ev.source evs JOIN evs.resource res WHERE res.id = :resourceId AND ev.timestamp BETWEEN :start AND :end "),
Find the event_def_ids, then find the applicable hours, query those
rows and filter by exact timestamp.
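In CQL terms that works out to one query per hour bucket and event
definition, something like (a sketch against the events table above):

SELECT time, type, location, severity, detail
FROM rhq.events
WHERE hour = ? AND event_definition = ?
  AND time >= ? AND time <= ?;

The caller loops over each hour between :start and :end, converting
the boundary timestamps to the encoded 'time' values first.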
@NamedQuery(name = Event.FIND_EVENTS_FOR_RESOURCE_ID_AND_TIME_SEVERITY, query = "SELECT ev FROM Event ev "
Find the event_def_ids, then find the applicable hours, query those
rows and filter by exact timestamp and severity. Severity is indexed.
@NamedQuery(name = Event.GET_DETAILS_FOR_EVENT_IDS, query = "SELECT "
    + " new org.rhq.core.domain.event.composite.EventComposite(ev.detail, res.id, res.name, res.ancestry, res.resourceType.id, ev.id, ev.severity, evs.location, ev.timestamp) "
    + " FROM Event ev JOIN ev.source evs JOIN evs.resource res WHERE ev.id IN (:eventIds) AND evs.id = ev.source"
    + " AND res.id = evs.resource "), //
Given a list of event_ids, return the resource details. Since the
event_id will contain the event_def_id as part of its key... Should be
possible, but need to look into this.
@NamedQuery(name = Event.QUERY_EVENT_COUNTS_BY_SEVERITY, query = "" //
Might need to create a special table... Each severity column has a
list of events. Or simply rely on the index of severity. Counts are
hard to do in Cassandra, so dropping these might be wise.
CREATE TABLE rhq.event_severity (
resource_id int,
severity varchar,
event_id stuff,
PRIMARY KEY ((resource_id),severity,event_id)
);
@NamedQuery(name = Event.QUERY_EVENT_COUNTS_BY_SEVERITY_GROUP, query = "" //
Use the same approach as above, but with the list of resources.
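With that table, a per-severity count for one resource is a plain (if
not cheap) CQL count, and the group variant loops over member
resources; a sketch:

SELECT COUNT(*) FROM rhq.event_severity
WHERE resource_id = ? AND severity = ?;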
10 years, 2 months
Need advice about agent plugin design
by Steven North
I am trying to design an RHQ/JON agent plugin to manage a software
resource with the following characteristics:
- there is the software itself (the installation);
- there are a variable number of "bundles" of configuration
information, about 250KB in size each, which need to be read from and
written to the agent; and
- there are "log" files which can 10-50MB in size each which need to be
read from the agent.
I think I am pretty clear on how to handle the software itself--just
like any number of other agents.
I am not sure how to handle the configuration bundles and the large log
files.
We might want to have the RHQ/JON server manage different versions of
these configuration files and distribute them to multiple remote agents.
Is there some existing domain object that would handle the read/write
aspect of the configuration bundles (zip files)? Could the "package"
concept be used for these? Would we need to create a new domain object
on the server side for these bundles? If so, is there an example of
this kind of thing?
For the log files, I see some mention of the SupportFacet. Would this
be appropriate for retrieving large log files? Is there an example of this?
We expect to access the configuration bundles and the log files using
remote client operations because we have a separate GUI tool to
build/edit the configuration bundles and to correlate and analyze the
log files. Is there an example of using a remote client to pull files
from and push files to remote agents?
Thanks in advance for any advice you can give or examples you can point to.
Steve
10 years, 2 months
managing storage node snapshots
by John Sanda
Snapshots are generated weekly during scheduled maintenance and when nodes are (un)deployed. A snapshot consists of hard links to SSTable files, so a snapshot takes up little disk space. But when an SSTable is deleted during compaction, the snapshot's hard link keeps the data on disk, so it does consume disk space. This can add up over time. There is currently nothing in place for managing snapshots. Here are a few possible options:
1) Move snapshots older than X to a specified location
2) Move all snapshots to a specified location
3) Delete snapshots older than X
4) Move N snapshots (from oldest to youngest) to a specified location
5) Delete N snapshots (from oldest to youngest)
This could be done as a recurring operation. We could also introduce some new metrics to monitor snapshot disk usage, similar to what we already have for the data directories. If the disk usage exceeds a threshold, we can fire an alert and perform one of the above actions (a sketch of option 3 follows). I think that this is another good step we can take toward providing storage node disk management.
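As a sketch of option 3, assuming the standard Cassandra layout of
<data_dir>/<keyspace>/<table>/snapshots/<tag> (hypothetical code, not
a proposed implementation):

import java.io.File;
import java.io.FileFilter;

// Delete snapshot directories older than maxAgeMillis.
static void purgeOldSnapshots(File dataDir, long maxAgeMillis) {
    long cutoff = System.currentTimeMillis() - maxAgeMillis;
    for (File keyspace : listDirs(dataDir)) {
        for (File table : listDirs(keyspace)) {
            for (File snapshot : listDirs(new File(table, "snapshots"))) {
                if (snapshot.lastModified() < cutoff) {
                    deleteRecursively(snapshot); // hypothetical helper
                }
            }
        }
    }
}

static File[] listDirs(File dir) {
    File[] dirs = dir.listFiles(new FileFilter() {
        public boolean accept(File f) { return f.isDirectory(); }
    });
    return dirs != null ? dirs : new File[0];
}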
Thoughts?
- John
10 years, 2 months
migrating storage node data directories
by John Sanda
There are various reasons you might want to migrate your storage node data: improving I/O performance, increasing capacity, or addressing failing hardware. Data can be migrated by adding/removing nodes, but oftentimes it is simpler and more desirable to just move the RHQ data directory. This can be accomplished, but doing so involves some manual steps which can be error prone.
What do people think about providing support for moving storage node data? I have attached a screenshot of the storage node configuration UI. We currently expose only some heap settings. We could add a section here for the data directory that would list the paths of the commit log, data, and saved caches directories. Changing the location of the data directory would result in the following:
* make sure that the target directory is writable
* update cassandra.yaml with the new directory
* restart the storage node
* run repair as necessary
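For reference, these are the three cassandra.yaml settings involved
(the paths shown are just examples):

data_file_directories:
    - /new/disk/rhq/data
commitlog_directory: /new/disk/rhq/commitlog
saved_caches_directory: /new/disk/rhq/saved_caches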
This encapsulates all of the work involved, making it easy for users to migrate data if/when necessary. Questions, comments, objections?
- John
10 years, 2 months
missed metrics aggregations
by John Sanda
Metrics aggregation is kicked off from the DataPurgeJob that runs at the start of every hour. It computes and stores aggregate metrics for the previously completed time slice(s). For instance, if aggregation runs at 10:02, then raw data stored between 09:00 and 10:00 will get rolled up into 1 hour metrics. I will describe scenarios in which missed aggregations can occur followed by possible solutions. Any feedback is welcomed/appreciated.
Missed aggregation scenarios:
* server outage
Suppose the server goes down at 08:46 and does not come back up until 09:45. We miss the regularly scheduled aggregation for the 08:00 - 09:00 time slice.
* failed aggregation
While aggregation runs, suppose the storage cluster goes down. We will fail to store aggregate metrics.
* Late measurement reports
Suppose an agent loses its connection to the server at 09:30. The agent will spool measurement data. Then the agent reconnects to the server at 10:15, after aggregation has finished. The agent sends one or more measurement reports with data from the 09:00 hour. That data will not be aggregated.
Problems with missed aggregations:
It can lead to skewed or inaccurate aggregate metrics which in turn can affect baselines and OOBs. Another issue is that rows in the metrics_index table which otherwise would have been purged can wind up living on indefinitely.
Solutions:
* Ignore missed aggregations
We already handle the case of server outages. If we choose to ignore the other scenarios, then we only need to make sure that rows in the metrics_index table get purged. We can accomplish this easily by setting TTLs (see the sketch after this list).
* Retry missed/failed aggregations
There are a couple of different ways we could go about doing this. I will save the details for a separate discussion as it can get rather involved. Suffice it to say, we can implement functionality to handle the scenarios of late measurement reports and failed runs. This would obviously be more complex than ignoring missed/failed aggregations, but arguably more robust.
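On the TTL idea from the first option: rather than TTLing each write,
a table-level default would age the index rows out automatically. A
sketch, assuming a Cassandra version that supports the property and a
retention chosen to outlive the longest aggregation window:

ALTER TABLE rhq.metrics_index WITH default_time_to_live = 604800; -- 7 days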
I guess the first question is, do we need to worry about missed/failed aggregations?
- John
10 years, 2 months