Keycloak integration
by Juraci Paixão Kröhling
All,
I finally got some time to work a bit more on the Keycloak integration.
== TL;DR:
I'd like to get a review/comments/suggestions for the following:
Branch:
https://github.com/jpkrohling/rhq-metrics/tree/JPK-KCAuthentication-Draft
Comparison:
https://github.com/jpkrohling/rhq-metrics/compare/rhq-project:master...jp...
== Long version
A few words about Keycloak:
- Keycloak has an authentication server, and can be deployed as a WAR
file.
- Keycloak also provides adapters that make it easy for applications to
authenticate against an auth server. Basically, an application (a WAR,
like rhq-metrics) just declares its auth method as "KEYCLOAK", and the
adapter takes care of loading the configuration from keycloak.json and
intercepting all requests to authenticate/authorize them, exposing
the principal as a regular JAAS subject. For the metrics-console,
there's a JavaScript adapter.
- To make it easier to get an auth server with adapters, they also
provide an "appliance" version, which includes all the bits. This is
currently based on WildFly 8.1.0.Final and will use 8.2.0.Final
for the next release.
This effectively means that the Keycloak Auth Server can run on one
node and rhq-metrics on another. But I assume that for development,
the appliance is the easiest solution.
Keycloak also has a notion of "realms", which we use on a
per-tenant basis: each tenant is a realm in Keycloak. In the
integration code, I'm using two realms to demonstrate the
multi-tenancy capability: "acme-roadrunner-affairs" and
"acme-other-affairs". Each represents a department inside Acme, Inc.
We can define realms as JSON files and import them during the first
boot, which is convenient for a "getting started" scenario. The JSON
file can also be imported via the Auth Server Web UI, if needed.
Inside a realm, we define applications, roles and users. By
applications, I mean "metrics-console" and "rhq-metrics", for
instance. For roles, we currently have "admin", "user" and "agent",
which are the ones I imagined as the first roles to add. For users, we
have only one predefined user: "agent" (which has the "agent" role). Each
additional user self-registers during the first login.
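As a side note on what the adapter gives us in application code: once
Keycloak has handled the login, code in rhq-metrics can read the
authenticated user and check these roles through the standard
Servlet/JAAS APIs. A minimal sketch (the servlet itself is hypothetical,
not code from the branch):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet: the Keycloak adapter populates the request's
// security context, so the standard Servlet API calls just work.
public class WhoAmIServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Non-null because the adapter only lets authenticated requests through
        String user = req.getUserPrincipal().getName();
        // "agent" is one of the roles defined in the demo realms
        boolean isAgent = req.isUserInRole("agent");
        resp.getWriter().printf("user=%s, agent=%b%n", user, isAgent);
    }
}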
All of the above means that we now have two options:
- use start.sh to generate random keys and certificates, so that we
don't ship "default" ones (shipping default keys is like shipping
default passwords, and I think we all know how bad that is)
- use start.sh only for minimal stuff (copying things around and
starting Keycloak). This implies that we'd have default
passwords/secrets/keys.
I've tried to keep start.sh as simple as possible, and most of what it
does is replace tokens and copy things around, but I know it wouldn't
be easy to maintain for someone looking at it for the first time.
Pretty much nothing else is intrusive, but I'd appreciate feedback
there as well.
And a final note: on the metrics-console, the JavaScript code is
currently plain JavaScript. I intend to rewrite it in TypeScript, as
that seems to be the chosen language. I'd need some time to study
TypeScript, though :-)
So, all that said, I'd like to share the following changes for
review/discussion:
Branch:
https://github.com/jpkrohling/rhq-metrics/tree/JPK-KCAuthentication-Draft
Comparison:
https://github.com/jpkrohling/rhq-metrics/compare/rhq-project:master...jp...
If you'd like to run this on your local machine, you'll need to build
Keycloak from master yourself, as I'm using a feature that was added
this week.
- Juca.
[rhq-metrics] Possible implementation idea for pluggable aggregators
by Heiko W.Rupp
Hey,
so I have been thinking about how users could write their own aggregators and aggregation functions and deploy them in a pluggable way.
As we are considering messaging for rhq.next anyway [1], I thought this could be done by forwarding data and/or requests to a message queue or topic and having listeners react to those messages.
Being old-school, I decided to use message-driven beans (MDBs), especially as Mazz has already created the necessary bits for running an ActiveMQ broker as a subsystem in WildFly, as well as a ResourceAdapter [2].
I've implemented two kinds of things so far (don't worry, it's all in a branch in my own repo :-):
1) Forward all incoming data to a topic and have MDBs work on them [3]. This could be used in alerting, where
the alert engine just picks up those messages and works on them.
A sample MDB looks like this [4]:
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "metrics")
})
@SuppressWarnings("unused")
public class MaxMdb extends AbstractMetricDrivenMDB {
    @Override
    void workOn(List<RawNumericMetric> metrics) {
        // do work here
    }
}
I guess the activation config could even go into the abstract superclass.
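For illustration, here is a rough sketch of what such an abstract
superclass might look like; the wire format (an ObjectMessage carrying
the serialized list) is purely my assumption, and the real class in [3]
may differ:

import java.util.List;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.ObjectMessage;

// Hypothetical sketch of the shared base class (import of RawNumericMetric
// omitted). Subclasses only implement workOn(), as MaxMdb does above.
public abstract class AbstractMetricDrivenMDB implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            workOn(decode(message));
        } catch (JMSException e) {
            throw new RuntimeException("Could not decode metrics message", e);
        }
    }

    // Assumes the publisher sends the metrics as a serialized list in an
    // ObjectMessage; the actual format used in [3] may well be different.
    @SuppressWarnings("unchecked")
    private List<RawNumericMetric> decode(Message message) throws JMSException {
        return (List<RawNumericMetric>) ((ObjectMessage) message).getObject();
    }

    abstract void workOn(List<RawNumericMetric> metrics);
}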
2) Ad-hoc aggregation workers
These are MDBs that are triggered by this code block in the MetricHandler [5]:
@GET
@Path("/metrics/{agg}/{id}")
public void getAggregate(@Suspended final AsyncResponse asyncResponse,
                         @PathParam("agg") String name,
                         @PathParam("id") String id) throws Exception {
    BasicMessage msg = new BasicMessage(id);
    Map<String, String> headers = new HashMap<>();
    headers.put("function", name);
    aggregationProcessor.sendAndListen(msg,
        new BasicMessageListener<BasicMessage>() {
            @Override
            protected void onBasicMessage(BasicMessage basicMessage) {
                // work on the result, e.g. asyncResponse.resume(basicMessage);
            }
        });
}
Basically, the name of the aggregation function is passed as a path parameter and
added as a header field on the message.
The MDB code would then look like this [6]:
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "aggregationTask"),
    @ActivationConfigProperty(propertyName = "messageSelector", propertyValue = "function = 'invert'")
})
@SuppressWarnings("unused")
public class InvertAdHocAggregator extends AbstractAggregationWorkerMDB {
    @Override
    BasicMessage work(BasicMessage msg) {
        // compute and return the result here
        return msg; // placeholder
    }
}
The most interesting part here is
@ActivationConfigProperty(propertyName = "messageSelector", propertyValue = "function = 'invert'")
where the propertyValue defines a selector that determines which messages can reach this MDB.
So with the above MetricHandler code,
GET /metrics/bla/123 would not trigger this MDB, but
GET /metrics/invert/123 would.
I am not saying we should adopt this approach for user-definable aggregations; its attractions, though, are:
- each MDB can be deployed into the server as its own jar file
- processing can easily be distributed onto many compute nodes because of the message bus.
[1] https://developer.jboss.org/en/rhq/blog/2014/08/14/thoughts-on-rhqnext-se...
[2] http://management-platform.blogspot.de/2014/11/messaging-infrastructure-u...
[3] https://github.com/pilhuhn/rhq-metrics/blob/msg-integration/rest-servlet/...
[4] https://github.com/pilhuhn/rhq-metrics/blob/msg-integration/clients/metri...
[5] https://github.com/pilhuhn/rhq-metrics/blob/msg-integration/rest-servlet/...
[6] https://github.com/pilhuhn/rhq-metrics/blob/msg-integration/clients/metri...
[rhq-metrics] REST Api Versioning
by mike thompson
As we get closer to having more solid URIs for the REST API, it is time to discuss versioning for non-breaking changes to the API. There are at least three ways to version a REST API:
1) HTTP header: a header property "x-rhq-metrics-version": "1.0"
2) URI: /rhq-metrics/v1.0/
3) Query param: /rhq-metrics/?version=1.0
Approach #1 is the most difficult for clients to use (setting a header via curl means a lot of extra keystrokes), but it technically fits the HTTP usage model best (that is exactly what headers are for). However, instead of requiring the version in the header each time, we could default to the current version when the header is absent. That way the common case is easy to use, while past versions remain reachable via the header; and since the current version will be used most of the time, the header would rarely be needed at all.
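As a sketch of how this header-plus-default could look (names are made up; this is not code from rhq-metrics), a JAX-RS request filter could resolve the version once, up front:

import java.io.IOException;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.ext.Provider;

// Hypothetical filter: resolves the requested API version and defaults
// to the current one when the header is absent.
@Provider
public class ApiVersionFilter implements ContainerRequestFilter {

    private static final String VERSION_HEADER = "x-rhq-metrics-version";
    private static final String CURRENT_VERSION = "1.0";

    @Override
    public void filter(ContainerRequestContext ctx) throws IOException {
        String version = ctx.getHeaderString(VERSION_HEADER);
        if (version == null) {
            version = CURRENT_VERSION;
        }
        // Resource methods (or a router) can pick the code path based on this.
        ctx.setProperty("api.version", version);
    }
}

A client pinning an old version would then just add the header, e.g. curl -H "x-rhq-metrics-version: 1.0" ...; everyone else sends nothing extra.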
Approach #2 seems the most straightforward (and popular) because the version is easily seen in the URI. However, it has the disadvantage that URIs change over time; because of this, I would suggest we not use this approach.
Approach #3, the query param, is the least appealing from a pure REST architecture perspective, as query params are arguments to the individual method (by which point the request has probably already been routed to a version of the code). This means we would need a global interceptor (or something similar) that first checks the params just to know which codebase version to use. Also, what if a method actually wants to use a 'version' parameter for its own functionality? It can't.
Given that most of the time we will use the current version, and only occasionally want to request a past version, approach #1 is my pick.
WDYT?
Resource Type versioning
by Libor Zoubek
Hello,
I'd like to ask for advice/recommendations. I am trying to fix
https://bugzilla.redhat.com/show_bug.cgi?id=1173479 and so far I have
come up with three solutions, and I don't like any of them (what a bad
day today).
In that bug, we introduced two new pluginConfiguration properties
(which are required and have default values), but we forgot to add code
to the resourceUpgrade facet implementation to detect those two props
on existing resources. Now, after upgrades to 4.13, it's too late to
write the resourceUpgrade code.
The code would have to either:
1) try to detect those two properties on every agent start, because it
is not able to distinguish between older and newer resources. Resources
imported before the upgrade have default values, and resources imported
after the upgrade have correct values. The plugin code would literally
have to detect both values at every agent start.
or
2) add another boolean property, foo, that denotes whether a resource
was discovered before or after the upgrade (those discovered before get
upgraded: we detect our two properties and set foo to true (= upgraded)).
or
3) introduce a resource type version: extend the plugin descriptor with
an optional numeric attribute, typeVersion. This typeVersion would let
plugins' resourceUpgrade code know whether to run or not. typeVersion
would also have to be stored on the resource, and the upgrade code
would have to bump it to the latest and greatest once the resource was
upgraded (maybe this could go into the agent code).
I don't like any of the above solutions:
1) potential slowdown of agent start (especially if it has to probe
resources that are intentionally down)
2) a stupid extra property
3) maybe an overhead to enrich all resources / resource types just
because of this kind of bug. But this one is my candidate; a rough
sketch follows.
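To make option 3 more concrete, here is how upgrade code might use the
proposed typeVersion; everything below is hypothetical, since the
attribute does not exist yet:

// Hypothetical sketch of option 3. The plugin descriptor would declare
// e.g. typeVersion="2", and the value stored with a resource tells the
// upgrade code whether it still has work to do.
public class TypeVersionUpgradeHelper {

    // Would come from the descriptor's proposed typeVersion attribute.
    private static final int TYPE_VERSION = 2;

    // storedTypeVersion is the version persisted with the resource;
    // resources imported before the change would report 0 (or none).
    public boolean needsUpgrade(int storedTypeVersion) {
        return storedTypeVersion < TYPE_VERSION;
    }

    public int upgrade(int storedTypeVersion) {
        if (storedTypeVersion < 2) {
            // Detect and fill in the two properties from the bug fix,
            // instead of probing every resource on every agent start.
            detectNewProperties();
        }
        // The agent (or the plugin) would then persist the new version.
        return TYPE_VERSION;
    }

    private void detectNewProperties() {
        // plugin-specific detection logic elided
    }
}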
I'd be thankful for any comments.
--
Libor Zoubek
Alerts don't fire when REST is used to post data collections composed of multiple metric types to RHQ - UPDATED
by Van Dillon
Hi,
I originally posted this question on 11/26 and have not received a
response. I assume my post was overlooked because of its closeness to
the Thanksgiving holiday and the fact that you were busy with RHQ 4.13
and RHQ.next.
I also have some additional information: I've confirmed that the
problem occurs on RHQ 4.13.
If you'd prefer not to handle this problem here, I'd be happy to submit
it as a bug report.
Thanks,
Van Dillon
>>
>>
This is a follow-up to a question I posted on 9/12 about using a
plugin that depends on the No-op plugin, along with REST, to push
metrics to RHQ. The answer to that question was that resources have to
be created as children of an agent-backed platform for alerts to work.
I'm now running into a different problem with alerts that involves
using REST to post data collections to RHQ. In summary, alerts don't
fire when they should when the data collection is composed of data of
multiple metric types. I've tried both of the following methods on RHQ
4.10.0 and RHQ 4.12.0:
POST /metric/data/raw
POST /metric/data/raw/{resourceId}
Both of these methods almost immediately call the same code in
MetricHandlerBean on the server side, so I'll just use the test results
from the schedule-id-based method in my examples.
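For reference, my posts look roughly like the sketch below; the payload
field names, schedule ids, and base URL are my assumptions for
illustration, not a verbatim capture of what the API expects:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Illustrative only: one post carrying two metric types (two schedule ids).
public class PostRawMetrics {
    public static void main(String[] args) throws Exception {
        long now = System.currentTimeMillis();
        // Assumed payload shape; 10001/10002 are placeholder schedule ids.
        String body = "[{\"scheduleId\": 10001, \"timestamp\": " + now + ", \"value\": 0.87},"
                + " {\"scheduleId\": 10002, \"timestamp\": " + now + ", \"value\": 1180.0}]";

        // Base URL assumed; path as given above.
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:7080/metric/data/raw").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}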
Let's say I have the following metric types and alert definitions:
Metric: CPU Util (percent) Alert: CPU Util > 50%
Metric: Disk Read Bytes Alert: none
Here are the results from a test series of randomly generated data posted
at one minute intervals:
cpu util:0.49, disk read bytes:700.0
cpu util:0.87, disk read bytes:1180.0
cpu util:0.72, disk read bytes:1210.0
cpu util:0.38, disk read bytes:969.0
cpu util:0.10, disk read bytes:806.0
+cpu util:0.52, disk read bytes:456.0
cpu util:0.09, disk read bytes:696.0
cpu util:0.75, disk read bytes:950.0
+cpu util:0.74, disk read bytes:297.0
cpu util:0.13, disk read bytes:555.0
The '+' indicates data points that triggered alerts. For the data
points 0.87, 0.72, and 0.75, no alert was triggered.
Another interesting thing happens if you add an alert for the "Disk
Read Bytes" metric: "Disk Read Bytes > 500". In this case I posted a
test series of identical data points (all exceeding the alert
thresholds for both metrics) at one-minute intervals:
cpu util:0.65, +disk read bytes:700.0
cpu util:0.65, +disk read bytes:700.0
cpu util:0.65, +disk read bytes:700.0
cpu util:0.65, +disk read bytes:700.0
+cpu util:0.65, disk read bytes:700.0
cpu util:0.65, +disk read bytes:700.0
+cpu util:0.65, disk read bytes:700.0
cpu util:0.65, +disk read bytes:700.0
cpu util:0.65, +disk read bytes:700.0
+cpu util:0.65, disk read bytes:700.0
Again, the '+' indicates the data points that triggered alerts. As you
can see, each post causes exactly one alert to trigger. I wanted to see
if this pattern holds up when more metric types are added, and indeed
it does. I added another metric called "Disk Read Ops" to the test
series, with an alert defined as "Disk Read Ops > 1000", and posted the
following data at one-minute intervals:
cpu util:0.75, disk read bytes:800.0, +disk read ops:1200.0
cpu util:0.75, disk read bytes:800.0, +disk read ops:1200.0
cpu util:0.75, disk read bytes:800.0, +disk read ops:1200.0
+cpu util:0.75, disk read bytes:800.0, disk read ops:1200.0
cpu util:0.75, disk read bytes:800.0, +disk read ops:1200.0
cpu util:0.75, +disk read bytes:800.0, disk read ops:1200.0
+cpu util:0.75, disk read bytes:800.0, disk read ops:1200.0
cpu util:0.75, disk read bytes:800.0, +disk read ops:1200.0
cpu util:0.75, disk read bytes:800.0, +disk read ops:1200.0
cpu util:0.75, +disk read bytes:800.0, disk read ops:1200.0
The only workaround I've been able to find is to send only a single
metric type per post when using the REST methods for raw data
collections. When I do this, alerts always trigger when they should.
<<
<<
[rhq.next] Netflix Atlas
by Thomas Segismont
Atlas captures operational intelligence. Whereas business intelligence
is data gathered for the purpose of analyzing trends over time,
operational intelligence provides a picture of what is currently
happening within a system.
https://github.com/Netflix/atlas/wiki
[rhq.metrics] schema changes
by John Sanda
On Friday night I pushed some schema changes to master to make data retention configurable. You will need to drop any existing keyspaces so that the changes get applied.
Thanks,
- John
Nice talk about microservices
by Heiko W.Rupp
Hey,
this is a pretty nice talk by an engineer at Netflix about
microservices, covering some of the challenges and solutions:
https://www.youtube.com/watch?v=CriDUYtfrjs
Fwd: [wildfly-dev] Management console web application
by Heiko W.Rupp
Interesting concept with the force-directed graph.
> Begin forwarded message:
>
> Date: 9 December 2014 12:42:50 CET
> From: Dev Ops <devopsmoreorless@gmail.com>
> To: Harald Pehl <hpehl@redhat.com>
> Cc: WildFly Dev List <wildfly-dev@lists.jboss.org>
> Subject: Re: [wildfly-dev] Management console web application
>
> Hi,
> thanks for pointing me to the github repo.
>
> I need to do a couple of things:
> provide a graphical and complete view of all server groups, along with the hosts, something like the following: http://bl.ocks.org/mbostock/1062288
> provide a button to download the deployed application;
> Next would be extending point 2 by providing different colors for different metric values.
> For example, with the heap memory metric going to saturation, the balloon would be colored almost red (#cc0000); every host could have its own metric (heap memory, CPU usage, connection pool statistics, etc.).
> Alerts and notifications are not needed, as there is already other software accomplishing these tasks (Nagios, JON, etc.).
>
> Regards,
> DevOps guy
>
>
> On Tue, Dec 9, 2014 at 11:13 AM, Harald Pehl <hpehl@redhat.com> wrote:
> The source code for the management console lives in its own repository at https://github.com/hal/core
>
> I'm curious, what kind of customization do you have in mind?
>
> .: Harald
>
>> On 09.12.2014 at 11:06, Dev Ops <devopsmoreorless@gmail.com> wrote:
>>
>> Hi all,
>> where do I find the source of the web application which provides the Management console?
>>
>> I need to add something to it, but I don't know where to get the source.
>>
>>
>> TIA,
>> DevOps guy
> ---
> Harald Pehl
> JBoss by Red Hat
> http://hpehl.info