Auto-import server plugin
by Libor Zoubek
Hello,
I remember several people, including me, talking about an auto-import feature for RHQ. Of course, this feature can be implemented by anyone just by hitting the CLI or REST API with their own script.
I wrote a server plugin that does the same. It is a scheduled job running every 5 minutes - it would be better if the plugin could listen for new resources as they appear. Do we have such a feature?
The plugin has several settings:
- auto-import platforms (true/false) - enable auto-import for new platforms
- subnet filter (longString) - the user can define subnets (e.g. 192.168.1.0/24); only agents connecting from matching subnets are auto-imported (see the sketch below)
- children (true/false) - auto-import platform child resources.
When the plugin is deployed, everything is disabled by default.
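To illustrate the subnet filter, here is a minimal sketch of the kind of CIDR check involved; the class and method names below are made up for this example and are not the actual plugin code:

import java.net.InetAddress;
import java.net.UnknownHostException;

// Hypothetical helper showing how a "subnet filter" value such as
// "192.168.1.0/24" can be matched against an agent's address (IPv4 only).
public class SubnetFilter {

    public static boolean matches(String cidr, String address) throws UnknownHostException {
        String[] parts = cidr.split("/");
        int prefixLength = Integer.parseInt(parts[1]);
        int network = toInt(InetAddress.getByName(parts[0]));
        int candidate = toInt(InetAddress.getByName(address));
        int mask = (prefixLength == 0) ? 0 : -1 << (32 - prefixLength);
        return (network & mask) == (candidate & mask);
    }

    private static int toInt(InetAddress addr) {
        byte[] b = addr.getAddress();
        return ((b[0] & 0xFF) << 24) | ((b[1] & 0xFF) << 16) | ((b[2] & 0xFF) << 8) | (b[3] & 0xFF);
    }

    public static void main(String[] args) throws UnknownHostException {
        System.out.println(matches("192.168.1.0/24", "192.168.1.42")); // true - agent would be auto-imported
        System.out.println(matches("192.168.1.0/24", "10.0.0.5"));     // false - agent is left alone
    }
}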
Do you guys have any ideas about this plugin? Or at least an ACK to push it to master so that it gets built and deployed with RHQ?
--
Libor Zoubek
10 years, 1 month
Need advice about agent plugin design
by Steven North
I am trying to design an RHQ/JON agent plugin to manage a software
resource with the following characteristics:
- there is the software itself (the installation);
- there are a variable number of "bundles" of configuration information, about 250KB in size each, which need to be read from and written to the agent; and
- there are "log" files, which can be 10-50MB in size each, which need to be read from the agent.
I think I am pretty clear on how to handle the software itself--just like any number of other agent plugins.
I am not sure how to handle the configuration bundles and the large log
files.
We might want to have the RHQ/JON server manage different versions of
these configuration files and distribute them to multiple remote agents.
Is there some existing domain object that would handle the read/write
aspect of the configuration bundles (zip files)? Could the "package"
concept be used for these? Would we need to create a new domain object
on the server side for these bundles? If so, is there an example of
this kind of thing?
For the log files, I see some mention of the SupportFacet. Would this
be appropriate for retrieving large log files? Is there an example of this?
We expect to access the configuration bundles and the log files using
remote client operations because we have a separate GUI tool to
build/edit the configuration bundles and to correlate and analyze the
log files. Is there an example of using a remote client to pull files
from and push files to remote agents?
Thanks in advance for any advice you can give or examples you can point to.
Steve
10 years, 1 month
missed metrics aggregations
by John Sanda
Metrics aggregation is kicked off from the DataPurgeJob that runs at the start of every hour. It computes and stores aggregate metrics for the previously completed time slice(s). For instance, if aggregation runs at 10:02, then raw data stored between 09:00 and 10:00 will get rolled up into 1 hour metrics. I will describe scenarios in which missed aggregations can occur, followed by possible solutions. Any feedback is welcome and appreciated.
Missed aggregation scenarios:
* server outage
Suppose the server goes down at 08:46 and does not come back up until 09:45. We miss the regularly scheduled aggregation for the 08:00 - 09:00 time slice.
* failed aggregation
While aggregation runs, suppose the storage cluster goes down. We will fail to store aggregate metrics.
* late measurement reports
Suppose an agent loses its connection to the server at 09:30. The agent will spool measurement data. Then the agent reconnects to the server at 10:15, after aggregation has finished, and sends one or more measurement reports with data from the 09:00 hour. That data will not be aggregated.
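To make the timing concrete, here is a small sketch (not the actual RHQ code) of the arithmetic involved: flooring a timestamp to its one-hour slice and detecting that data in a late report belongs to a slice that has already been rolled up.

import java.util.concurrent.TimeUnit;

// Illustrative only - not the actual aggregation code.
public class TimeSlices {

    static final long HOUR = TimeUnit.HOURS.toMillis(1);

    // Start of the one-hour time slice that contains the given timestamp.
    static long sliceStart(long timestampMillis) {
        return timestampMillis - (timestampMillis % HOUR);
    }

    // Data is "late" if it belongs to a slice earlier than the one currently
    // being collected; the roll-up for that earlier slice has already run.
    static boolean arrivedAfterAggregation(long measurementTimestamp, long now) {
        return sliceStart(measurementTimestamp) < sliceStart(now);
    }

    public static void main(String[] args) {
        long midnight = 0L;                                   // some arbitrary day
        long nineForty = midnight + 9 * HOUR + 40 * 60000L;   // data spooled while the agent was disconnected
        long tenFifteen = midnight + 10 * HOUR + 15 * 60000L; // the report finally arrives
        System.out.println(arrivedAfterAggregation(nineForty, tenFifteen)); // true: the 09:00 slice was already rolled up
    }
}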
Problems with missed aggregations:
It can lead to skewed or inaccurate aggregate metrics which in turn can affect baselines and OOBs. Another issue is that rows in the metrics_index table which otherwise would have been purged can wind up living on indefinitely.
Solutions:
* Ignore missed aggregations
We already handle the case of server outages. If we choose to ignore the other scenarios, then we only need to make sure that rows in the metrics_index table get purged. We can accomplish this easily by setting TTLs (a sketch of what such a write could look like follows after this list).
* Retry missed/failed aggregations
There are a couple of different ways we could go about doing this. I will save the details for a separate discussion as it can be rather involved. Suffice it to say, we can implement functionality to handle the scenarios of late measurement reports and failed runs. This would obviously be more complex than ignoring missed/failed aggregations, but arguably more robust.
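To illustrate the TTL approach from the first option, here is a sketch of what such a write could look like; the column names are assumptions for illustration, not necessarily the actual metrics_index schema:

// Sketch only: writing an index entry with a TTL so Cassandra expires it on
// its own even if the aggregation run that would normally purge it never happens.
public class MetricsIndexTtl {

    static final String INSERT_WITH_TTL =
        "INSERT INTO metrics_index (bucket, time, schedule_id) " +
        "VALUES (?, ?, ?) " +
        "USING TTL 604800"; // e.g. expire after 7 days regardless of aggregation
}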
I guess the first question is, do we need to worry about missed/failed aggregations?
- John
10 years, 1 month
Optimization of descendent resource queries
by Elias Ross
https://bugzilla.redhat.com/show_bug.cgi?id=1025918
With a large enough system (500,000+ resources), the queries for finding the descendant resources of a resource can be very slow. This is because of the various joins that take place. Of course, things work well with a small number of resources, but not with tens of thousands.
I'm optimizing the Oracle and Postgres cases using a recursive
sub-query. (See bug for details.)
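For reference, this is roughly the shape of the Postgres and Oracle variants; the table and column names below (rhq_resource, parent_resource_id) are assumptions for illustration - the actual queries are attached to the bug:

// Sketch of descendant queries that fetch a resource and all of its
// descendants in a single round trip. Names are illustrative; see the BZ
// for the real patch.
public class DescendantQueries {

    // Postgres (8.4+) recursive common table expression
    static final String POSTGRES =
        "WITH RECURSIVE descendants (id) AS ( " +
        "    SELECT id FROM rhq_resource WHERE id = :resourceId " +
        "  UNION ALL " +
        "    SELECT r.id FROM rhq_resource r " +
        "    JOIN descendants d ON r.parent_resource_id = d.id " +
        ") " +
        "SELECT id FROM descendants";

    // Oracle hierarchical query
    static final String ORACLE =
        "SELECT id FROM rhq_resource " +
        "START WITH id = :resourceId " +
        "CONNECT BY PRIOR id = parent_resource_id";
}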
From my preliminary findings, uninventorying a resource with a few children goes from about 5 seconds down to a few hundred milliseconds. When you have to uninventory a few thousand resources, this makes a big difference in usability.
There are a couple of other queries I was looking at:
public Resource getPlaformOfResource(Subject subject, int resourceId)
Is this really the same as the following method (with authorization checks)? The assumption is that the root resource is always a platform resource.
getRootResourceForResource(resourceId);
The other one that could be fixed, but probably does not need optimization, is:
getResourceDescendantsByTypeAndName
There are a couple of ways to fix this. One is simply doing a graph traversal in memory, so you don't have to run a bunch of queries. It is more memory/network intensive, but easy on the database.
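A rough sketch of what that in-memory traversal could look like, assuming the parent-to-children map has been loaded up front in a single bulk query (none of this is existing RHQ code):

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.Map;

// Illustrative sketch: walk a parent -> children map in memory instead of
// issuing one database query per level of the resource tree.
public class DescendantWalk {

    // childrenByParent: resource id -> ids of its direct children.
    // The returned list includes the root itself.
    static List<Integer> selfAndDescendants(int rootId, Map<Integer, List<Integer>> childrenByParent) {
        List<Integer> result = new ArrayList<Integer>();
        Deque<Integer> toVisit = new ArrayDeque<Integer>();
        toVisit.push(rootId);
        while (!toVisit.isEmpty()) {
            int current = toVisit.pop();
            result.add(current);
            List<Integer> children = childrenByParent.get(current);
            if (children != null) {
                for (int child : children) {
                    toVisit.push(child);
                }
            }
        }
        return result;
    }
}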
Any thoughts on this?
10 years, 1 month
Making sense of many merged branches
by Lukas Krejci
Hi all,
with the move to GitHub, I'm hoping we are going to see an increase in the use of code reviews and hopefully even external code contributions via GitHub's pull requests.
Because PRs are merged, not rebased, we are going to see an increase in parallel development with a relatively large number of parallel branches.
(It is of course possible to rebase a pull request, but then one loses information about exactly which commits were committed to mainline. You do know that at the time you rebase, but not 3 months later: there is no record of it in the git history, and one has to resort to matching commit messages and the actual contents of the commits.)
When, for example, a PR is rebased into master and the contributor then updates their PR branch, it would be very hard to rebase such a PR branch again. There would be no record in the history that git could use to correctly rebase the PR branch again. One would have to resort to manually cherry-picking the commits that weren't in mainline yet (which would potentially be hard to figure out, too, because their hashes in mainline would differ from those in the PR branch, so one would have to search by commit message, etc.).
Our policy in the past was to merge any long(er)-term feature branches but to try to avoid merges when doing more short-term work. We feared incomprehensible log listings and cluttering the history with merge commits.
I think that neither of those two reasons is severe enough to forgo the advantages of retaining the full history of parallel distributed development, especially since the "complex" history and the merge commits can be mitigated by configuring git log and/or gitk correctly.
So without further ado, let's see how to make sense of a complex commit history.
1) Merge commits
git log --no-merges
gitk --no-merges
(in gitk you can also create persistent views that you can set up to your
liking)
I also recommend using these with git log:
--oneline (prints each commit on a single line)
--decorate (prints the branch names along with the commit message; this is what gitk does by default)
2) Complex history with many parallel streams
One important takeaway is that when you look at multiple branches, you usually aren't interested in the individual commits on those branches; rather, you want to know the relationship between them, i.e. when the branches diverged. So you don't actually have to see the umpteen individual commits in between the "branching points" - this is what --simplify-by-decoration does: it only considers the HEADs (tips) of branches and the branching points.
a) seeing what unmerged branches there are in the repo:
git log --all --graph --oneline --decorate --simplify-by-decoration
gitk --all --simplify-by-decoration
(in here, all the leaf commits are branches not merged into any other)
b) list only commits *present* on a feature-branch:
git log origin/master..feature-branch
gitk origin/master..feature-branch
c) list only commits *made* on a feature-branch to which master is regularly
merged:
git log --first-parent master..feature-branch
gitk --first-parent master..feature-branch
For more advanced "archaeology" in git, check out git log -L, git blame, etc.
http://jfire.io/blog/2012/03/07/code-archaeology-with-git/
I am still learning and wanted to share what I found so far about this brand
new world, so please share whatever you find worth sharing, too.
Hope this helps,
Lukas
10 years, 1 month
Github Migration - Phase 3
by Stefan Negrea
Hello Everybody,
Phase 3 of the github migration will start in just a few hours.
Here is the plan:
1) Mazz will remove all write access to the fedorahosted repository early in the evening (US time)
2) I will perform the migration on Friday, trying to minimize the time during which no repository is open for commits. The plan is to do this before the start of the European work day.
3) I will send an email when the migration completes with instructions on how to update local repositories.
The old fedorahosted repository will be kept with read-only access until a later time.
Please let me know if you have any questions or need help with the migration.
Thank you,
Stefan Negrea
Software Engineer
10 years, 1 month
MeasurementUnits (EPOCH_MILLISECONDS, EPOCH_SECONDS)
by Jiri Kremser
Hi,
in the plugin descriptor, there can be a metric definition with unit type "epoch_milliseconds" or "epoch_seconds" (rhq-configuration.xsd allows it). What kind of metric could be represented by epoch_milliseconds? Shouldn't exact moments in time (which is what epoch_milliseconds values represent) rather be handled by traits?
I am asking because the values of this type are not formatted (https://bugzilla.redhat.com/show_bug.cgi?id=857144).
I think these two unit types should be removed from MeasurementUnits and the XML schema; however, there might be some plugins out there using them, so what about deprecation? Are there any edge cases where these unit types do make sense?
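Just to illustrate what formatting would mean here - the raw value of such a metric is unreadable until it is rendered as a point in time (plain JDK formatting, nothing RHQ-specific):

import java.text.SimpleDateFormat;
import java.util.Date;

// An epoch_milliseconds metric value only makes sense to a user once it is
// rendered as a date, which is what the BZ above is about.
public class EpochFormat {
    public static void main(String[] args) {
        long value = 1380000000000L; // a raw epoch_milliseconds value, shown unformatted today
        System.out.println(value);
        System.out.println(new SimpleDateFormat("yyyy-MM-dd HH:mm:ss z").format(new Date(value)));
    }
}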
JK
10 years, 1 month