Resource Subcategories - Simplified
by Stefan Negrea
Hello Everybody,
I've been working for a while on adding resource subcategories to the AS7 plugin to simplify the resource tree. While on the surface this seems like a simple change (just add subcategories to the plugin descriptor), it is actually very complicated. The current subcategory design is outdated due to the addition of the run-inside (multiple parents) feature. I've been trying to fix the current subcategory design from different angles, but none worked so far. Mazz helped me along with reviews of my design iterations. This morning, while on a call with Mazz, we had a small epiphany: solve everything by completely removing the current implementation.
So here are the changes that I am currently working on:
1) Do not require parent resources to pre-declare all subcategories for their children. This is currently done with a top-level <subcategories> tag.
2) The <subcategories> tag will be deprecated for the next release and will eventually be dropped in subsequent releases.
3) While deprecated, the <subcategories> tag will simply be ignored.
4) Subcategories will be declared only on the resources themselves, via the subCategory attribute.
5) For hierarchical subcategories, allow a pipe-delimited syntax: subCategory="Subsystems|Test".
6) In the UI, use the camel case syntax to make names more readable, e.g. TestSubsystems is displayed as Test Subsystems (see the sketch after this list).
7) Drop the subcategory entities and related tables. For now they are only deprecated; the functionality will be removed completely later.
8) Work on a database migration task to fold the subcategory text into the resource types.
9) Update the current RHQ plugins to remove the <subcategories> tag.
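For illustration only, here is a minimal Java sketch of items 5 and 6: splitting a pipe-delimited subCategory value into its hierarchy levels and expanding camel case for display. The class and method names are hypothetical and not part of the actual patch.

import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: "Subsystems|TestSubsystems" -> [Subsystems, Test Subsystems]
public class SubCategoryDisplay {

    // Splits subCategory="A|B|C" into its hierarchy levels, expanding each one.
    public static List<String> levels(String subCategory) {
        List<String> result = new ArrayList<String>();
        for (String level : subCategory.split("\\|")) {
            result.add(toDisplayName(level));
        }
        return result;
    }

    // Inserts a space before each interior capital letter ("TestSubsystems" -> "Test Subsystems").
    public static String toDisplayName(String camelCase) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < camelCase.length(); i++) {
            char c = camelCase.charAt(i);
            if (i > 0 && Character.isUpperCase(c) && !Character.isUpperCase(camelCase.charAt(i - 1))) {
                sb.append(' ');
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(levels("Subsystems|TestSubsystems")); // [Subsystems, Test Subsystems]
    }
}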
Why the changes:
1) The current implementation is almost unused; none of its complicated structure is fully taken advantage of.
2) The current implementation is broken in many ways; the run-inside feature really made the design obsolete and almost unfixable.
3) The current validation for subcategories (that they are declared on the parent) is not really needed. Only the resource actually placed in a subcategory needs to know about that subcategory.
4) The UI already operates under the assumptions of the simplified model: if a resource belongs to a subcategory, the resource tree is built accordingly.
5) A simpler plugin structure makes it easier for community members to implement and fix plugins.
Removing the current subcategory implementation is relatively easy because very little of it is actually used; it is mostly a liability. A couple of fixes applied over time around transactional boundaries made the code very brittle. And after all this is done, there is absolutely no change from a user perspective.
To summarize the change:
1) Deprecate the <subcategories> tag
2) Drop the <subcategories> tag in future releases
3) Set subcategories only via the subCategory attribute
4) Pipe-delimit hierarchies of subcategories and use camel case to improve legibility
5) Clean up the backend and existing plugins completely
6) No change for users
BZ:
https://bugzilla.redhat.com/show_bug.cgi?id=1069545
Working pull request:
https://github.com/rhq-project/rhq/pull/22
Thank you,
Stefan Negrea
Software Engineer
RHQ trait storage
by Elias Ross
I was thinking about how to do trait storage.
https://planet.jboss.org/post/modeling_metric_data_in_cassandra
This document doesn't really explain the major problem with how to
deal with repeated inserts.
I came up with three approaches for consideration:
1. Select and insert
The schema:
CREATE TABLE IF NOT EXISTS measurement_data_trait (
    schedule_id int,
    value varchar,
    timestamp timestamp, -- when it was inserted
    PRIMARY KEY (schedule_id)
);
CREATE TABLE IF NOT EXISTS measurement_data_trait_history (
    schedule_id int,
    timestamp timestamp,
    value varchar,
    PRIMARY KEY (schedule_id, timestamp)
);
The approach is to:
1) Select the current value. (*)
2) If it is different, insert into the history table. Use TTL columns to delete old records.
3) In any case, update the measurement_data_trait table with the most current value.
Benefits:
1) Easy to select the most recent value
2) Logic is straightforward
Problems:
1) Select-then-insert is slow.
2) Hard to do transactionally and with a lot of traits.
3) Race condition exists where insert may happen twice. (Not really likely.)
(*) Probably the most efficient way is to submit the select asynchronously, then, when it completes, do the update and insert. This does mean that, potentially, an insert or update is lost if the server is shut down. The mitigating factor, of course, is that trait reports are periodic and effectively retried.
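To make approach 1 concrete, here is a rough sketch using the DataStax Java driver and the two tables above. The class name, session setup, TTL value, and error handling are all assumptions for illustration, not a definitive implementation.

import java.util.Date;

import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;

// Approach 1: asynchronously select the current value, insert into history
// only when the value changed, and always upsert the "most recent" row.
public class TraitWriter {

    private final Session session;
    private final PreparedStatement selectCurrent;
    private final PreparedStatement upsertCurrent;
    private final PreparedStatement insertHistory;

    public TraitWriter(Session session) {
        this.session = session;
        selectCurrent = session.prepare(
            "SELECT value FROM measurement_data_trait WHERE schedule_id = ?");
        upsertCurrent = session.prepare(
            "INSERT INTO measurement_data_trait (schedule_id, value, timestamp) VALUES (?, ?, ?)");
        insertHistory = session.prepare(
            "INSERT INTO measurement_data_trait_history (schedule_id, timestamp, value) "
            + "VALUES (?, ?, ?) USING TTL 2592000"); // 30 days, assumed retention
    }

    public void store(final int scheduleId, final String value, final Date time) {
        ResultSetFuture future = session.executeAsync(selectCurrent.bind(scheduleId));
        Futures.addCallback(future, new FutureCallback<ResultSet>() {
            public void onSuccess(ResultSet rs) {
                Row row = rs.one();
                String current = (row == null) ? null : row.getString("value");
                if (!value.equals(current)) {
                    // first report or value changed: record it in the history table
                    session.executeAsync(insertHistory.bind(scheduleId, time, value));
                }
                // always refresh the "most recent value" row
                session.executeAsync(upsertCurrent.bind(scheduleId, value, time));
            }

            public void onFailure(Throwable t) {
                // a lost write is tolerable here: trait reports are periodic and retried
            }
        });
    }
}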
2. Insert then cleanup
Insert into the history table every time a new value is reported, and also update the most current value. Then periodically remove any duplicates.
Benefits:
1) Insert only, no round trips
Problems:
1) Basically defers the 'select' as before, so the same amount of work, really
2) Extra disk space/storage. Cassandra data compression may mitigate
disk usage, but still lots more writes. (Worst case would be a trait
reporting the same value every 30 seconds.)
3) Cannot really update the 'when the trait changed' timestamp --
since we are only inserting
Possible alternatives:
1) Defer compression to when data is selected.
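For illustration, here is one possible shape for the periodic cleanup pass in approach 2, assuming the history table above and the 2.x DataStax Java driver (getDate() returns java.util.Date for timestamp columns there). The class name and per-schedule iteration are illustrative.

import java.util.Date;

import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

// Approach 2 cleanup: scan one schedule's history in clustering (timestamp)
// order and delete consecutive rows that repeat the same value.
public class TraitHistoryCleaner {

    private final Session session;

    public TraitHistoryCleaner(Session session) {
        this.session = session;
    }

    public void dedupe(int scheduleId) {
        // rows come back in timestamp order (the clustering order)
        ResultSet rs = session.execute(
            "SELECT timestamp, value FROM measurement_data_trait_history WHERE schedule_id = ?",
            scheduleId);
        String previous = null;
        for (Row row : rs) {
            String value = row.getString("value");
            if (value.equals(previous)) {
                // same value as the prior report: this row adds no information
                Date ts = row.getDate("timestamp");
                session.execute(
                    "DELETE FROM measurement_data_trait_history WHERE schedule_id = ? AND timestamp = ?",
                    scheduleId, ts);
            } else {
                previous = value;
            }
        }
    }
}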
3. Map values to timestamps
Similar to above, but using a set of timestamps.
CREATE TABLE IF NOT EXISTS measurement_data_trait_history (
    schedule_id int,
    value varchar,
    timestamps set<timestamp>,
    PRIMARY KEY (schedule_id, value)
);
Benefits:
1) Insert only
Problems:
1) The number of timestamps may be ever increasing, so a process is
needed to clean them up. TTL expiration doesn't work if the value
flip-flops back and forth between two values.
2) Therefore, there needs to be another cleanup process to deal with
this. This could be done using a counter that increases for every
update, and once it reaches a certain value forces a "garbage
collection" of sorts.
Thoughts?
Is TeraByte units as max enough?
by mike thompson
I noticed at least in the UI that we don’t support numbers above TB. Seems like these days there could be some PetaByte storage configurations out there. But maybe I’m wrong.
Comments? (your chance to speak up)
—Mike
Events migration plan
by Elias Ross
Here's an outline for how RHQ might store and handle events in Cassandra.
Part 1 - Use Elasticsearch as storage
The idea is to focus on improving event storage but keep the UI
elements the same and EJB interfaces behaving the same.
1. Create a plugin for storing events in Elasticsearch using Cassandra as the underlying store.
I've already done the work for this, but it needs more testing.
The alternative here is to use Elasticsearch natively, similar to how
storage nodes are created and managed, meaning some sort of RHQ-based
installer or wrapper. This would mean additional work on creating
agent plugins and whatever else was done for Cassandra.
2. Create a module for JBoss to bootstrap Elasticsearch.
This should be fairly easy to do, but I do foresee some additional
memory usage. There are quite a lot of jar files that are 'shaded'
into it, and I wonder if that could be avoided.
Google Guice is used to wire the system, and I'm not sure how well it
works inside JBoss EAP. Also, plugins and other classloader-related
things may need some coercion.
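As a very rough sketch, bootstrapping an embedded Elasticsearch 1.x node from Java might look like the following; the wrapper class, settings, and paths are assumptions, and the real JBoss module wiring would be more involved.

import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.node.Node;
import org.elasticsearch.node.NodeBuilder;

// Hypothetical service that a JBoss module could wrap to run Elasticsearch in-process.
public class EmbeddedElasticsearch {

    private Node node;

    public Client start() {
        node = NodeBuilder.nodeBuilder()
            .clusterName("rhq")                                         // assumed cluster name
            .settings(ImmutableSettings.settingsBuilder()
                .put("path.data", "/var/lib/rhq/elasticsearch")         // assumed data location
                .put("http.enabled", true))
            .node();                                                    // builds and starts the node
        return node.client();
    }

    public void stop() {
        if (node != null) {
            node.close();
        }
    }
}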
3. Create a series of classes for storing RHQ events in Elasticsearch
using the same log format as Kibana. Ensure that at least log files
are stored in this way and are searchable.
This isn't critical as Kibana won't be used yet. But it makes sense
for searching in the future.
4. Create EJB classes for retrieving events from Elasticsearch
(deletion, search, purging, etc.), identical to the existing methods.
Do we keep the existing web service API (REST-based) operational and
use that, or use the Java API directly?
5. Event migration tools.
6. Testing and release.
Part 2 - UI improvements or Kibana integration
The idea is to integrate Kibana in some form with the native RHQ UI.
Part 3 - Agent event changes
Does the agent continue to use the existing remoting API or something
that may be better suited for high volume traffic?
Part 4 - Log file importation improvements
Log file discovery should happen more easily. For example, all the
logs from JBoss or Tomcat applications should be directly imported if
those applications are in inventory. Also, any system logs
(/var/log/messages etc.) should appear as well.
Anything to do with log parsing (especially parsing dates, log
severity, messages, exceptions as one message) should work
automatically.
Log rotation detection, compression, log tailing, should be handled
automatically.
Auto-discovery of Postgres table resources
by Thomas Segismont
Hi,
What do you think about auto-discovery of Postgres table resources?
In my opinion it's not a good idea: most serious applications involve a
great number of tables, and monitoring them all is probably not desired.
I'd like to disable auto-discovery and implement manual import. Please
shout if you disagree.
Thanks,
Thomas
metrics data loss
by John Sanda
Currently there exists the possibility of numeric data loss when merging measurement reports. If there is an error storing raw data, we log the error but do nothing else. Suppose, for example, that while the server is storing a set of raw data, the storage cluster goes down halfway through. In this scenario it is likely that the latter half of that data is lost. There has been some recent discussion about the potential for data loss, and I want to open it up to the list for additional thoughts, opinions, etc. I will briefly summarize a few options for dealing with data loss.
* option 1 - do nothing
The case can be made that loss of metric data may not be as significant as losing inventory or configuration data, for example. If the data loss is limited to a single measurement report, or a subset thereof, then it probably is not very significant, since we are dealing with the loss of a single data point for some number of schedules. Of course, some dropped metrics here and some dropped metrics there can quickly add up to a substantial amount of data loss, and that would be bad.
* option 2 - Rely on agent/server comm layer guaranteed delivery
MeasurementServerService.mergeMeasurementReport(MeasurementReport report) has guaranteed delivery semantics. If the call fails for whatever reason, the agent will retry it. The agent also spools the report to disk so that if it gets disconnected from the server, it can retry after reconnecting. The downside of the guaranteed delivery is that the agent continually retries. If storing raw data failed because the storage cluster is overloaded, this could exacerbate the problem. I have actually experienced this in test environments where I was putting a heavy write load on the server and storage cluster: my server would be down or in maintenance mode for a while, and then when it came back up, all my agents hammered the server with spooled measurement reports.
There is another aspect to consider in terms of efficiency. Suppose an agent sends 10,000 raw data points to the server and an error occurs after storing 9,995 of them. The agent will resend the report and the server will store all 10,000 again. This is less than optimal, which brings me to option 3.
* option 3 - Do not overwhelm the server and only retry failed data
The server can report back to the agent the raw data that it failed to store. The agent can spool that data to disk and resend it at some point in the future. There are a few possible approaches: the agent could retry on some fixed interval, or it could use an initial delay with an increasing back off, e.g., 2 minutes, 4 minutes, 8 minutes, etc. This option requires the most work, but I think it is the most robust.
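A sketch of what the agent-side retry with increasing back off could look like; RawData, spool(), and sendRawData() are placeholders rather than existing RHQ APIs.

import java.util.Set;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Option 3, agent side: the server reports back which raw data it failed to
// store, and the agent reschedules just that subset with an exponential
// back off (2, 4, 8, ... minutes, capped).
// Entry point: scheduleRetry(failedFromServer, INITIAL_DELAY_MINUTES)
public class FailedRawDataRetry {

    static final long INITIAL_DELAY_MINUTES = 2;
    static final long MAX_DELAY_MINUTES = 60;

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public void scheduleRetry(final Set<RawData> failed, final long delayMinutes) {
        spool(failed); // persist to disk first so an agent restart does not lose the data
        scheduler.schedule(new Runnable() {
            public void run() {
                Set<RawData> stillFailing = sendRawData(failed); // server returns what it could not store
                if (!stillFailing.isEmpty()) {
                    long nextDelay = Math.min(delayMinutes * 2, MAX_DELAY_MINUTES);
                    scheduleRetry(stillFailing, nextDelay);
                }
            }
        }, delayMinutes, TimeUnit.MINUTES);
    }

    // --- placeholders for the real agent facilities ---
    static class RawData { }
    private void spool(Set<RawData> data) { /* write to the agent's on-disk spool */ }
    private Set<RawData> sendRawData(Set<RawData> data) { /* comm-layer call */ return java.util.Collections.emptySet(); }
}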
What do others think? Are there other options that should be considered?
- John