Disclaimer: I don't know everything about JON; I'm a newbie here and want to learn more, so please be kind regarding any JON details where I may be misinformed or have incorrect assumptions. That said, I've built some pretty amazing systems, and when they succeeded it was because of some smart approaches and simplifying assumptions I made (alluded to below, though not called out specifically). I hope to share those experiences here so that I can help make JON scale to new levels.

With that out of the way, let me continue with my thoughts...

Requirements, Alan? It would be nice if we started with the primary user stories, release themes, hard requirements, and non-requirements. Given these, it would be fairly easy to weigh options. For example, if the user stories were the following, I might be able to glean which approaches to take:

"As an operations engineer I want software that will help me manage software and hardware based resources;

* So that I can reactively address current operational issues I want software that can render a semi-continuous stream of operational metrics in real-time."
* So that I can proactively perform capacity planning I want software that can perform time-series forecasting analysis and provide meaningful recommendations."

(I have a lot to learn about JON, but it would seem to me these are two of its fundamental user stories.)

Granting the above, there are perhaps some fundamental behaviours that need to be weighed and considered:

* high-bandwidth ordered insertions
* high-bandwidth (though lower than insertions) ordered retrievals
* high-bandwidth (though lower than insertions) analysis of potentially causally related events; this may possibly touch all (or a lot of) state
* lower-bandwidth, but potentially high-latency, asynchronous alerts
* high-bandwidth periodic data summarisation and scrubbing (a rough sketch follows this list)
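
To make the summarisation/scrubbing behaviour concrete, here is a minimal Java sketch (class and method names are mine, not anything from the RHQ/JON code base): roll raw samples up into one min/max/avg aggregate per hour, after which the raw points could be scrubbed.

    import java.util.DoubleSummaryStatistics;
    import java.util.Map;
    import java.util.TreeMap;

    class HourlyRollup {
        static final long HOUR_MS = 60L * 60L * 1000L;

        // rawSamples: timestamp in ms -> metric value
        // returns: hour bucket -> min/max/avg/count summary for that hour
        static Map<Long, DoubleSummaryStatistics> rollUp(Map<Long, Double> rawSamples) {
            Map<Long, DoubleSummaryStatistics> buckets = new TreeMap<>();
            for (Map.Entry<Long, Double> sample : rawSamples.entrySet()) {
                long hourBucket = (sample.getKey() / HOUR_MS) * HOUR_MS;
                buckets.computeIfAbsent(hourBucket, k -> new DoubleSummaryStatistics())
                       .accept(sample.getValue());
            }
            return buckets;
        }
    }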

These are assumptions on my part, but some non-requirements may be:

* alerts do not need guaranteed delivery
* strong consistency is probably unnecessary

Accepting the fundamental behaviours above, I find it questionable whether any brand of RDBMS could scale unbounded; historically there has been, and continues to be, a great deal of research into the following topics precisely because people have hit the scaling limits of RDBMS technologies and other traditional techniques. Hot topics include:

* query parallelism (distributed sql, map-reduce)
* append-only databases and lock-free data structures and algorithms (CouchDB et al, HDF5, RRD)
* memory-resident and similar systems (volatile, non-volatile; SSD, MMDB, RAMDISK)
* fail-fast parallel systems languages (Erlang)
* immutable data w/ parallel languages (Scala, Lua, Erlang, Haskell, Clojure, Lisp)

In defence of the above, there is growing consensus in the software community that to achieve internet scale today and tomorrow, you can no longer scale simply by throwing faster hardware at a problem, because single-processor speeds have hit their physical limits. To scale tomorrow we need to leverage parallelism en masse and improve individual pipeline performance by ridding ourselves of unnecessary complexity.

I need to carefully address a word chosen above, "traditional". I like traditional; for example, Ronald Fagin's 1979 paper on extendible hashing was pivotal. I love traditional only when carefully considering applicability. I don't like traditional if it keeps me stuck and inflexible. I don't want folks to think I am some sort of RDBMS basher like many in the NoSQL community; there is a time and a place, but I don't think the RDBMS is terribly efficient at handling time-series data. We [Object Design] saw this years ago, in the late '90s, at Thompson Financial; Oracle fell flat on its face while lesser traditional technologies barrelled by it.

That said, I want to continue...

My longer-term scalability thoughts can be summarised in the following points:

1. We need to give equal consideration to storage and distribution technologies that don't compromise locality of reference, or that at least make every attempt possible to avoid compromising it.
  1.a. Data should ideally be stored contiguously, or nearly contiguously, but certainly in a sequential manner.

2. We need to consider technologies that optimize for mostly sequential retrieval and insertion.
  2.a. if you properly optimize for locality of reference you will likely optimize for retrieval at the same time
  2.b. e.g. B+trees, shadow-paging algorithms, append-only storage (a rough sketch of an append-only layout follows this point)
  2.c. sequential retrieval of how many related values? does this imply column-oriented databases would be optimal?
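
To illustrate 2.b, here is a minimal sketch of the kind of append-only, sequential layout I have in mind (the names are hypothetical, not existing RHQ/JON code): fixed-width samples are only ever appended to the tail of a segment file in timestamp order, so both insertion and later range scans stay sequential on disk.

    import java.io.BufferedOutputStream;
    import java.io.DataOutputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    class MetricSegmentWriter implements AutoCloseable {
        private final DataOutputStream out;

        MetricSegmentWriter(String path) throws IOException {
            // Open in append mode: writes only ever go to the end of the file.
            out = new DataOutputStream(
                    new BufferedOutputStream(new FileOutputStream(path, true)));
        }

        // Samples arrive roughly in time order, so appending preserves
        // locality of reference for later sequential range retrievals.
        void append(int scheduleId, long timestampMs, double value) throws IOException {
            out.writeInt(scheduleId);
            out.writeLong(timestampMs);
            out.writeDouble(value);
        }

        @Override
        public void close() throws IOException {
            out.close();
        }
    }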

3. We don't have many (any) real user requirements for SQL per se. But Oracle is a "feel good" for the corporate executives who sign the checks (if and only if they have to separately install a database technology).

4. We need to support an unlimited (or virtually so) set of data feeds.

5. We need to batch operations as much as possible to amortize the relative cost of high-latency operations (a quick sketch follows this point).
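
A quick sketch of what I mean by batching (again with invented names; the flush target stands in for whatever expensive operation we want to amortize, e.g. a JDBC batch insert or a network round trip): items are buffered and handed off in one call once the batch is full.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Consumer;

    class BatchingCollector<T> {
        private final int batchSize;
        private final Consumer<List<T>> flushTarget; // the high-latency operation being amortized
        private final List<T> buffer = new ArrayList<>();

        BatchingCollector(int batchSize, Consumer<List<T>> flushTarget) {
            this.batchSize = batchSize;
            this.flushTarget = flushTarget;
        }

        synchronized void add(T item) {
            buffer.add(item);
            if (buffer.size() >= batchSize) {
                flush();
            }
        }

        // Should also be called on a timer so a slow trickle still gets delivered.
        synchronized void flush() {
            if (buffer.isEmpty()) {
                return;
            }
            flushTarget.accept(new ArrayList<>(buffer));
            buffer.clear();
        }
    }

The same idea applies at every hop: agent to server, server to storage, storage to disk.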

6. Keep it simple, stupid; it bears stating _again_. The fastest, most scalable, most enduring systems out there boil down to some pretty nifty simplifying approaches and assumptions. Look for these, and constantly look to throw off anything that hinders and really does not matter.
  6.a. I see John's statement in a related thread as related to this point: a fundamental simplicity was achieved that later allowed him to adapt the product easily to new situations. But that is good only so long as the new situation actually comes along, and that, I think, was Alan's point.

7. Who says we have to provide a database of choice to our customers? Why not just embed one and hide it away? Fewer options for us to maintain, less complexity, fewer bugs, less documentation...

I could probably think of more, but I want to go spend time with the family; a movie is on!

Cheers,

Bob



On 08/10/2011 04:17 PM, Charles Crouch wrote:
A paragraph from an unrelated email this morning helped solidify my thinking around where we could potentially end up going wrt performance and scalability of RHQ...

"MongoDB, Membase, Memcache allow data flexibility and cloud scale. These are not old-sk00l Oracle databases that get slow with lots of data. These scale horizontally and are enabled by the scaling of a PaaS. We do of course have MySQL for the bits of data that aren't going to go through the roof. "

If users want to manage 1m metrics per minute, or 1m events per minute, or 1m alerts per minute, they are going to need N machines across which to scale the load. But at the same time there are probably going to be areas of the data model that will just never need to scale like that, e.g. plugins and potentially the entire inventory (the requirements for reading/writing 1m resources could be very different to 30m *new* metrics every 12 hours), and for that data something like a Postgres instance may be sufficient. As long as we can also scale down the whole architecture to a single box for those people with smaller environments and still have a reliable and performant system.

BTW I'm not talking about making RHQ into a SaaS here or anything PaaS related right now. Just how to build a system that users could install on their own hardware that would scale in a close to linear fashion. e.g. If people want to manage an environment twice their current size, they call up Dell and order N more machines, set them up and install RHQ.

One further point: even if N can get quite large, I think users would still see that as a reasonable trade-off for near-linear scalability (assuming that's achievable). If people have a really big environment, they are more likely to invest in a larger monitoring/management infrastructure.

More thoughts please...

Thanks
Charles
_______________________________________________
rhq-devel mailing list
rhq-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/rhq-devel