preparing for master merge
by John Sanda
We are looking to merge the feature/cassandra-backend branch into master ASAP. One of the things that needs to happen prior to the merge is getting all of the core master jobs on Jenkins passing, as well as the rhq-cassandra-backend job. Can someone who is not working in the cassandra-backend branch volunteer to make sure those jobs pass? The folks working in the cassandra-backend branch will obviously take care of the rhq-cassandra-backend job.
Thanks
- John
Queries with limits and JOIN FETCH
by Lukas Krejci
Hi all,
I've been studying the impact of our use of JOIN FETCH in queries to which we
simultaneously apply limits (captured by
https://bugzilla.redhat.com/show_bug.cgi?id=620603).
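To make the pattern concrete, here is a minimal illustration of the problematic shape of query (the entity and field names are assumptions for illustration, not actual RHQ code). When limits are combined with a collection JOIN FETCH, Hibernate cannot apply firstResult/maxResults in the generated SQL, so it loads the entire result set and pages it in memory (Hibernate 4 warns about this with "HHH000104: firstResult/maxResults specified with collection fetch; applying in memory!"):

import java.util.List;
import javax.persistence.EntityManager;

// Illustrative only; Resource/alertDefinitions are assumed names.
List<Resource> findPage(EntityManager em) {
    return em
        .createQuery("SELECT r FROM Resource r JOIN FETCH r.alertDefinitions",
            Resource.class)
        .setFirstResult(0)  // these limits cannot be pushed into SQL together
        .setMaxResults(10)  // with the collection fetch; paging happens in memory
        .getResultList();
}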
There are two areas where such queries get executed:
1) Any remote API method with a PageControl can potentially cause such a query
to get executed (if the query run by the API method includes a JOIN FETCH -
this is something I don't have a clear idea about yet - there are 122 places
in the codebase that apply ordering and paging using the PageControl and 70
occurrences of "JOIN FETCH" in our domain classes, but obviously not all of
these will result in a possible JOIN FETCH with a limit).
2) Any criteria query for criteria types that support fetching additional
relationships.
The impact of the first type is quite hard to determine, because the uses of
the code that applies limits to queries are spread all over the codebase.
The second type is very easy to analyze, because it is concentrated in two
classes: CriteriaQueryGenerator and CriteriaQueryRunner.
To try to determine the impact of the first type, I implemented a custom
QueryGenerator for Hibernate that enhances the default one to log more
detailed data about the query than just the fact that it is filtered in memory
(which is all the default impl in Hibernate does). Namely, I logged the JPQL,
the SQL, and the stack trace showing where the DB query is generated from.
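(The branch uses the custom QueryGenerator described above; as a rough, assumption-laden illustration of the same idea - not the branch code - a Hibernate 4 Interceptor can capture each generated SQL statement together with the stack trace of the code that triggered it, though it sees only the SQL, not the JPQL:)

import org.hibernate.EmptyInterceptor;

// Sketch only: dumps every prepared SQL statement plus its origin.
public class SqlTracingInterceptor extends EmptyInterceptor {
    @Override
    public String onPrepareStatement(String sql) {
        // the stack trace shows where the DB query was triggered from
        new Exception("origin of SQL: " + sql).printStackTrace();
        return sql; // pass the SQL through unchanged
    }
}

// Registered in persistence.xml with (class name is hypothetical):
// <property name="hibernate.ejb.interceptor"
//           value="org.example.SqlTracingInterceptor"/>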
With this tool, I went ahead and generated a large(ish) inventory of 30 agents
with some 20000 resources in total. 30 of those resources had alert
definitions on them, with several thousand fired alerts for each resource.
While this dataset is a bit artificial, I think it can be used to simulate the
problems of some larger-scale RHQ inventories, at least when it comes to query
performance.
By "casually clicking" in the UI, trying to do basic tasks like paging through
alert history, firing operations, etc, I've found no evidence of occurrences
of the first type of potential limited-JOIN-FETCH queries (the reason for that
is that the application of limits is co-located with the application of
ordering to the queries and it seems that ordering is why that method is
called in the majority of cases). On the other hand, even while browsing the
UI there were a number criteria queries invoked that featured a limited-JOIN-
FETCH query. (For example querying for the 5 most recent alerts on a resource
summary page took approx. 2.5s).
This brought me to the conclusion that the criteria queries are the major
contributor to our problem.
This is quite a fortunate coincidence, because fixing the issue for criteria
queries alone is much simpler than trying to fix it globally.
The fix is now in a bug branch
(https://git.fedorahosted.org/cgit/rhq/rhq.git/log/?h=bug/620603) and I would
very much like you to review it before it is merged into master.
The gist of the fix is the following:
1) Replacing the JOIN FETCH with manual initialization of the lazily fetched
fields.
2) Optimizing the loading of such lazy fields and lazy collections by
configuring the "hibernate.default_batch_fetch_size" property in
persistence.xml to load the data in batches (if this setting is set to a
number N, Hibernate will load the data for a lazy field of N "owning" entities
at once instead of 1 by 1; e.g. setting this number to 2 halves the number of
DB roundtrips required to load the "children" of N "parent" entities, compared
to the trivial case where accessing the "children" of a single "parent" entity
requires one DB call).
To test the effects of that change, I created a very simple benchmark that
basically compared the times required to load collections with different
limits on the server, with and without the patch.
The basis for the benchmark was a ResourceCriteria that tried to fetch all the
resources together with their alert definitions, e.g.:
var crit = new ResourceCriteria();
crit.fetchAlertDefinitions(true);
crit.setPaging(0, 50); // page number and page size, varied per benchmark run
var results = ResourceManager.findResourcesByCriteria(crit);
Without further ado, let me present the results (based on running each test 10
times):
Paging            1) Current (no fix)   2) Fix, batch size 16   3) Fix, batch size 32
No paging                     47417.7                 52745.2                 53401.5
Paging(0, 10)                   26225                    83.2                      76
Paging(0, 50)                 25219.8                   192.4                   166.5
Paging(0, 200)                24192.1                     545                   503.8
Paging(0, 500)                24419.2                    1201                  1206.4
Paging(0, 2000)               27813.2                  4451.7                    4459
The results show some interesting "features" of the old implementation as well
as some interesting facts about the fix:
1) The results of the old implementation are consistent with the behavior of
JOIN FETCH with limits as described by the Hibernate documentation. The
collection is loaded into memory as a whole and only filtered in memory after
the fact. The difference between the "No paging" result and the rest can most
probably be attributed to the time required to serialize the huge result set
in the "No paging" case, as opposed to the rest of the cases.
2) With the fix, the times scale much more intuitively with increasing page
size. The speedup over the current codebase is quite impressive. The slowdown
in the no-paging scenario is actually not that bad, considering that the fix
performs many more DB roundtrips than the current codebase.
3) The improvement of batch size 32 over batch size 16 is within the error
margin.
Lukas
Some proposed changes to Manual Import functionality
by spinder
Hi All,
After discussions with Jay, I wanted to describe some changes that I think we should make in support of 'Manual Import' functionality in the plugin container and potentially on the server side.
The issue:
With BZ 883825 we stumbled into odd behavior while trying to test 'Manual Import' of Tomcat instances. What wasn't immediately apparent was that those same Tomcat instances had already been silently indexed by RHQ in the 'Discovery Queue'. So while attempting to 'Manual Import' these new instances with a new plugin configuration, the plugin container would take the information from the 'new' import, detect that we already had that resource in inventory (the auto-discovered version), and use the invalid configuration from that version. The result was that a user submitting a Manual Import with valid credentials was getting rejected, because RHQ was silently swapping in the wrong config with no credentials.
While the situation just described is ugly, the wider issue is that it occurred because we were never clear about what should happen when one tries to 'Manual Import' a new resource whose resource key matches an existing resource. Such a behavior, if allowed, would essentially amount to a Resource configuration/status update, which was not the original point.
To summarize, RHQ/JON does not do enough to detect and deter cases where 'Manual Import' is being used to inadvertently update the state of a Resource already indexed by RHQ. Current suggestions are to:
i) Update the RHQ code to better detect and prevent such behavior, with appropriate error messages
- modify InventoryManager to error out when 'Manual Import' is called and an existing RHQ resource is already indexed (see the sketch after this list). The new logic should handle Resources in all InventoryStatus.* states better, since changes to existing Resources should only be handled via our existing Resource update mechanism. This means better agent-side and server-side log messages when this situation is attempted in the future. I'm also looking into throwing better UI errors after 'Manual Import' attempts. Right now we only really support Plugin Configuration exceptions (bad creds) or Plugin Container exceptions, and this situation doesn't fit either category well.
ii) Update release/docs/support information to better describe this situation and the workaround
- this just means documenting this behavior and how to avoid getting into this state. An administrator attempting to inventory resources could get into this state, and it's confusing to debug/detect precisely when auto-discovery is doing its job well.
iii) Clarify the javadoc in the remote API documentation so that plugin developers know not to attempt to handle such unintentional updates.
- Trying to support 'Manual Import' without filtering out existing Resources that match the same identity is a subtle point that we haven't gotten right in the past. The DiscoveryBossRemote public API javadoc implies that we will handle cases where 'Manual Import' specifies a plugin config that maps to an existing resource. We need to update this API description to be clearer about which situations should not be handled by 'Manual Import'.
This last point is tricky because there's no simple way to tell plugin developers how to detect whether a plugin configuration will map to an existing Resource. There is perhaps a CLI query that they can put together, but it's not a great solution. Logging error messages to the server and agent logs also isn't a great solution here from a CLI user's standpoint.
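For illustration, the guard proposed in (i) could look something like the following hypothetical sketch (the method and field names are made up for this example and are not the actual RHQ InventoryManager API):

// Hypothetical sketch; names are illustrative, not real RHQ APIs.
Resource existing = findResourceByKey(parentResourceId, resourceKey, resourceTypeId);
if (existing != null) {
    throw new IllegalStateException("A resource with key '" + resourceKey
        + "' is already in inventory with status " + existing.getInventoryStatus()
        + "; changes to existing resources must go through the regular"
        + " resource update mechanism, not 'Manual Import'.");
}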
I think we should actively discourage this usage of 'Manual Import' that can overlap with Resource update. Are there any other opinions here?
-Simeon
RHQ 4.7 released
by Heiko W.Rupp
Hello,
I am happy to announce on behalf of the RHQ team the release of RHQ 4.7 with
* awesome new charts
* internally based on JBoss EAP 6.1 alpha 1
Check the release notes for more info
https://docs.jboss.org/author/display/RHQ/Release+Notes+4.7.0
Heiko
--
Registered address: Red Hat GmbH, Technopark II, Haus C,
Werner-von-Siemens-Ring 14, D-85630 Grasbrunn
Commercial register: Amtsgericht München HRB 153243
Managing directors: Mark Hegarty, Charlie Peters, Michael Cunningham, Charles Cachera