On 11/10/11 09:31 -0700, Steven Dake wrote:
> On 10/10/2011 12:51 PM, Jason Guiditta wrote:
>> Greetings all,
>>
>> I am looking today for feedback on a proposed event subsystem to be
>> used across Aeolus (eventually). This would include not only
>> Conductor, but Factory, Iwhd, Audrey, Oz, Pacemaker-Cloud, etc. IOW
>> our entire suite may eventually use this in some capacity, so input
>> from all perspectives is highly desired here (needed, even). We have
>> a feature request from more than one of our project sponsors to be
>> able to capture 'events' in some kind of standardized format, to
>> allow a client system to report on or monitor certain kinds of
>> events within aeolus (this could range from 'deployable started' to
>> 'build initiated' to 'replication complete' to an app stack trace).
>>
>> The basic idea here is to:
>> 1. Agree on an overall way to capture/log these various types of
>> events.
>> 2. Assemble an extensible list of 'things' we may want to log, as well
>> as descriptions related to these things.
>> 3. Converge on a format for the above list.
>>
>> Based on conversations and documentation from one of our sponsors, I
>> have created some wiki pages that attempt to take a first whack at
>> this feature. I take neither credit nor blame for the overall content
>> here; my approach was to take the documented requirement and try to
>> figure out how to make an initial pass at it for real world vetting
>> (https://www.aeolusproject.org/redmine/projects/aeolus/wiki/Feature_1_0_Sy...).
>>
>> The draft proposal (which is entirely open for debate and change in
>> direction) is on the following pages:
>>
>> https://www.aeolusproject.org/redmine/projects/aeolus-umbrella/wiki/Event...
>> https://www.aeolusproject.org/redmine/projects/aeolus-umbrella/wiki/Event...
>> https://www.aeolusproject.org/redmine/projects/aeolus-umbrella/wiki/Event...
>>
>> An important note that I would especially appreciate feedback and
>> suggestions on is the part of the proposal suggesting syslog as the
>> initial/default protocol to support with this. I have no experience
>> with syslog, but understand it is widely used by sys admins, and that
>> a large number of potential client systems know how to read/use it and
>> monitor such a system. The obvious benefit here is that a lot of very
>> powerful tools would be able to use our events for reporting,
>> monitoring/alerts, etc. I do not know which of the many
>> implementations we are using with the fedora/red hat environments, let
>> alone other setups, so information on this piece would be very
>> helpful.
>>
>> On the other hand, there have been concerns raised already about the
>> performance effect this may have on some or all parts of the aeolus
>> ecosystem. For example, from the ruby side, someone pointed out this
>> article - http://vitobotta.com/syslog-woes/. I am reading through
>> this now, and what I have read thus far makes me think we need to be
>> careful that we know what we are doing, more than that we need to
>> avoid syslog. However, if there are people who know about this topic
>> (whether ruby, python, c/c++, or any other language we may be using),
>> I would especially like to hear back from you about this. I am
>> attempting to build up a wiki page with information on libraries,
>> usage, etc, that I hope will eventually be a usable source for many
>> pieces of our project, but I will need help/input from others to put
>> it together. Right now, it is just a very rough list of notes from
>> irc, and a few links. If you have any information, please either add
>> to that page, or reply here and I will add it. The page is located
>> here:
>>
>> https://www.aeolusproject.org/redmine/projects/aeolus/wiki/Syslog_Libraries/
>>
>> Thanks (and hoping for no flames),
>>
>> -j
>
> Jason,
>
> I have read through the entire thread as well as design docs. It isn't
> clear to me, as pointed out by others in the threads, if the goal is a
> general event system or diagnostics capability. I believe both are
> necessary, and diagnostics could build on a general event system.
>
Yes, the main purpose of this is a general event system; diagnostics
would be a subset of that. Apparently not enough of what I thought I
understood carried through in those wiki pages (or they were just
plain too long and rambly), so apologies to all for that. I will
attempt to briefly describe in different words what I think we are
going for (in this first round at least; this can certainly grow and
change as people start to use this system as a whole in greater
numbers).

We expect enterprise users to have one of a number of commercial
systems (splunk is the only example I recall being given, but I know
there are others) that are designed for monitoring. This may be for
diagnostics, or for other types of events in the target system
(meaning us). The initial pass is meant to focus on 'things that users
did/requested' types of events, versus 'we had a bad sql statement
that caused a query to fail' kinds of diagnostics. These monitoring
systems _already_ know how to read syslog, so the thought was that
this would be a good place to start for target output. All the
suggestions of using some kind of bus, storing in an audit-type db,
etc, are valid, but they are longer term, and each would be one of
potentially many output types.

My suggestion for this first pass (while we discuss other things we
may want to capture in more detail) is to add a couple of hooks into
conductor that collect the needed information for the 2 events we are
currently targeting. This will have an initial output going to syslog
(barring major issues), with the potential to expand to whatever we
think makes sense longer term.

For these enterprise users/apps, if we can output to syslog in a
simple way (even a small number of events), that will allow us to be
'officially supported' by some of these monitoring tools, right out of
the box, versus trying to build our own whole monitoring system, which
is a much larger task.
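
To make that concrete, here is a rough sketch of what such a hook
might look like from ruby, using the stdlib Syslog bindings. The
module name, event names, and fields below are made up by me for
illustration, nothing settled:

require 'syslog'

# Hypothetical helper for emitting aeolus events to syslog. The event
# names and key=value fields are placeholders, not a settled format.
module AeolusEvents
  FACILITY = Syslog::LOG_LOCAL0  # assumption; would be configurable

  def self.log_event(name, attrs = {})
    # Flatten attributes into key=value pairs so downstream tools
    # (splunk and friends) can parse them easily.
    fields = attrs.map { |k, v| "#{k}=#{v}" }.join(' ')
    Syslog.open('aeolus-conductor', Syslog::LOG_PID, FACILITY) unless Syslog.opened?
    Syslog.log(Syslog::LOG_NOTICE, "event=%s %s", name.to_s, fields)
  end
end

# Example hook call, e.g. from whatever code path starts a deployable:
AeolusEvents.log_event(:deployable_started,
                       :user => 'jsmith', :deployable => 'webapp')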

Really, what I probably should have done is to propose the 'feature
1.0' page as a trial implementation, instead of including that as part
of the full discussion wrt overall direction on this topic. I think
they are, in fact, distinct enough to warrant separate conversations,
though they clearly are very much related. I am not sure if I have
clarified things at all for anyone or muddied the waters further, so I
will wait for further replies to see.

> To the diagnostics point, the open source cluster community spent a
> serious amount of time solving how to provide great diagnostics for
> enterprise environments. I'd be happy to share our experiences with
> that effort.
>
Absolutely, I think that information would be of great use to the
overall project, however we may ultimately implement all of this.

Thanks for all of the initial feedback from everyone,

-j

The model in upstream cluster was one of many subsystems. Each
subsystem could have one or more of the following target types:

  syslog
  file
  stderr
  memory

There are log levels, matching the standard syslog levels.

The user would request the storing of an event such as:

  log (DEBUG, "message", args)

Each subsystem is configured for each of its targets. As an example:

  subsystemA -> NOTICE+ syslog, file /var/log/subsystema
  subsystemB -> NOTICE+ syslog, stderr, file /var/log/subsystemb
  subsystemC -> DEBUG+ file /var/log/subsystemc
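
As a toy illustration of that routing model (in ruby, with names I
made up; the real cluster implementation is C), per-target severity
thresholds might look like:

require 'syslog'

# Toy sketch of per-subsystem, per-target log routing (illustration
# only, not the actual cluster API). Severities follow syslog
# numbering, where a lower number means more severe.
SEVERITY = { :debug => 7, :info => 6, :notice => 5, :warning => 4, :err => 3 }

class SubsystemLog
  # targets is an array of [threshold, sink] pairs
  def initialize(name, targets)
    @name, @targets = name, targets
  end

  def log(level, msg)
    @targets.each do |threshold, sink|
      sink.call("#{@name}: #{msg}") if SEVERITY[level] <= SEVERITY[threshold]
    end
  end
end

# subsystemA -> NOTICE+ syslog, file /var/log/subsystema
Syslog.open('cluster', Syslog::LOG_PID, Syslog::LOG_DAEMON)
to_syslog = lambda { |m| Syslog.log(Syslog::LOG_NOTICE, '%s', m) }
to_file   = lambda { |m| File.open('/var/log/subsystema', 'a') { |f| f.puts(m) } }
subsystem_a = SubsystemLog.new('subsystemA',
                               [[:notice, to_syslog], [:notice, to_file]])

subsystem_a.log(:notice, 'membership changed')  # hits both targets
subsystem_a.log(:debug, 'filtered out at NOTICE+')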

Each subsystem could hit any of the 4 targets; the data would be
correlated before committing to a storage system, and _every_ event
was stored to memory.

The purpose of the event logging to memory was to capture high frequency
events without high overhead. We wanted to be able to capture those
events in the case of a failure (such as internal program runtime
information) but didn't want those going to a file or syslog. As an
example, storing every entry and exit of a function is very expensive
when using a syslog target but pretty mild with a well optimized memory
target.

On any unexpected fail stop (crash), the event log is persisted.
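
A minimal sketch of that memory-target idea (again my own ruby
illustration; the real thing is done in C) could look like:

require 'time'

# Toy in-memory "flight recorder" (illustration only): high frequency
# events go to a fixed-size ring buffer and are only written out on a
# fail stop, so the hot path never touches a file or syslog.
class FlightRecorder
  def initialize(capacity = 10_000)
    @capacity = capacity
    @events = []
  end

  def record(msg)
    @events << "#{Time.now.utc.iso8601(6)} #{msg}"
    @events.shift if @events.size > @capacity  # drop the oldest entry
  end

  # Persist the buffer only when something has gone wrong.
  def dump(path = '/var/log/blackbox.dump')
    File.open(path, 'a') { |f| @events.each { |e| f.puts(e) } }
  end
end

RECORDER = FlightRecorder.new
at_exit { RECORDER.dump if $! }  # $! is set when exiting on an error

def traced_function
  RECORDER.record('enter traced_function')  # cheap: memory only
  # ... real work ...
  RECORDER.record('exit traced_function')
end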

Our overhead for our highest frequency output subsystem was about 5%
of system runtime, measured via oprofile. In trade, we know exactly
what the software was doing before it failed. We call this feature by
the original name of "black box flight recorder".

The general idea is that low frequency _events_ that the user cares
about go to slow targets like syslog or file, while high frequency
diagnostics that only developers care about go to high speed targets
(memory) and are only persisted on a fail stop.

The syslog model is really strong, but the API is terrible, mainly
because it blocks and has very high overhead for high frequency
output. Diagnostics and events are separate problems because of this
reality.

Regards
-steve