An analysis of logstash I came across from a different effort...
Logstash [1] is an open source project to collect and process logs, where a log is defined
as a timestamped blob of data. Logstash consists of three different parts: inputs, filters
and outputs. Inputs collect information from different sources; examples are amqp, zeromq,
syslog, files, etc. The inputs are responsible for getting data from the source and doing
some initial structuring. The filters do the processing and transformation of the data. The
most powerful filter is grok. Grok does smart regex matching: it lets you define regex
patterns, give them names, and then combine named patterns into more complex patterns (see
the pattern sketch below). As a result, the configuration of the regex rules becomes much
more manageable. Grok ships with many predefined patterns that can be reused for log
parsing. Filters can be combined and stacked. Outputs define where the result needs to go:
it can be written to a file, sent as an email or a message, stored in a database, or
visualized by a graphing tool like Graphite. There are dozens of outputs.
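To illustrate how the named grok patterns compose: a custom pattern file might look roughly
like the sketch below. The names WORKER and JOBLINE are made up for this example; the base
patterns (INT, TIMESTAMP_ISO8601, WORD, GREEDYDATA) ship with grok.

  # hypothetical custom pattern file, reusing grok's bundled base patterns
  WORKER  worker-%{INT:worker_id}
  JOBLINE %{TIMESTAMP_ISO8601:timestamp} %{WORKER} job=%{WORD:job} %{GREEDYDATA:message}

A line such as "2013-04-02T10:15:00 worker-7 job=backup finished in 12s" would then be
split into timestamp, worker_id, job and message fields.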
Logstash is configured via a config file that resembles a Puppet manifest. It is pretty
simple to write and understand. One of the design goals of Logstash is to make getting up
to speed extremely simple, and they have been pretty successful with that.
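As a rough sketch of what such a config looks like (option names follow the 1.1.x docs and
may differ in other versions; the path is just a placeholder):

  input {
    file {
      type => "syslog"
      path => "/var/log/messages"
    }
  }
  filter {
    grok {
      # parse the raw line using the bundled SYSLOGLINE pattern
      type    => "syslog"
      pattern => "%{SYSLOGLINE}"
    }
  }
  output {
    # print structured events to the console for a quick test
    stdout { debug => true }
  }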
Logstash is written in Ruby (and runs on JRuby). To avoid issues with dependencies, the
whole stack is delivered as a single monolithic jar file.
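Running it is correspondingly simple; per the 1.1.x docs the invocation is roughly the
following (the jar file name depends on the version you download):

  java -jar logstash-1.1.9-monolithic.jar agent -f logstash.conf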
Logstash can be used both for local processing of the log data and for remote, consolidated
processing. The current Logstash documentation suggests the following architecture:
The nodes where the logs are collected run a local Logstash instance that gathers the data
and ships it over the network. In the past they recommended AMQP for this, but due to the
perceived complexity of setting it up they now recommend Redis [2]. Redis is a key-value
store that also provides a publish-subscribe interface; this combination makes it a good
place to consolidate the information from different sources. The central instance of
Logstash then gets a consolidated feed from Redis and does the processing. One of the
built-in outputs is called "elasticsearch" [3]. Elasticsearch is a search engine that takes
in JSON and indexes the data automatically. It is built on Lucene and its querying
mechanism.
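A rough sketch of that split, with option names taken from the 1.1.x docs (the broker host
name and the Redis key below are placeholders): each node runs a shipper that pushes events
into Redis, and the central instance pulls them out, filters them and indexes them into
elasticsearch.

  # shipper config, on every node
  input {
    file { type => "syslog"  path => "/var/log/messages" }
  }
  output {
    redis { host => "broker.example.com"  data_type => "list"  key => "logstash" }
  }

  # central indexer config
  input {
    redis { host => "broker.example.com"  data_type => "list"  key => "logstash"  type => "syslog" }
  }
  filter {
    grok { type => "syslog"  pattern => "%{SYSLOGLINE}" }
  }
  output {
    elasticsearch { host => "localhost" }
  }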
Because the shippers, the Redis broker and the central indexer are decoupled, this setup
lets Logstash scale horizontally: more shippers or central indexer instances can be added
as the log volume grows.
[1] http://logstash.net/docs/1.1.9/
[2] http://redis.io/
[3] http://www.elasticsearch.org/
----- Original Message -----
> https://speakerdeck.com/obfuscurity/the-state-of-open-source-monitoring
This mentions "logstash" -
http://logstash.net/
Just a very brief read of their home page, and it makes me wonder if
we should integrate this as our log/event subsystem?