Excerpts from Amit Saha's message of 2015-06-02 15:05 +10:00:
Most recently this bug made me think about whether there is a way to catch
this kind of problem during testing rather than in production. I think what
we really lack is data volume: the amount of data our application has to
query in production is orders of magnitude bigger than during our
integration tests, or even in our development environments.
Maybe we should look at closing that gap as part of our test suite?
To summarize our discussion from last week...
Doing performance testing with a pass/fail result, the way we do for our
regression tests right now, is not easy. We can't just load a production
dump at the start of our existing test suite, because those tests are
designed to find functional regressions, not to measure performance.
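To make that concrete: a pass/fail performance test would have to look
something like the sketch below, with an arbitrary time budget baked in.
(This is just an illustration; query_jobs() and the 2-second budget are
made up, not anything in our code base.)

    import time
    import unittest

    def query_jobs():
        """Stand-in for a real application query; in practice this
        would hit a database loaded from a production dump."""
        time.sleep(0.1)

    class JobListingPerformanceTest(unittest.TestCase):

        def test_job_listing_stays_within_budget(self):
            start = time.time()
            query_jobs()
            elapsed = time.time() - start
            # Fail the test if the query blows its time budget.
            self.assertLess(elapsed, 2.0,
                    'job listing took %.1fs, budget is 2.0s' % elapsed)

    if __name__ == '__main__':
        unittest.main()

The trouble is that the threshold is a guess, and the measured time
depends on the hardware and the data set, so a test like this ends up
either too lax to catch anything or flaky enough to be ignored.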
Even if we did have reliable, fully automated performance tests, they
would be too expensive to run for every patch. Just loading a production
dump takes 8 or more hours. Even running them once per release would
substantially increase the cost of doing a release, when we want to do
the exact opposite. And even then, the best they could do is find
performance *regressions*; they wouldn't help us find performance
problems that we weren't already aware of.
Ultimately the most useful performance testing needs to be exploratory.
A human interacts with the application in a production-like environment
and looks out for actions which "feel" too slow.
Since we don't have anyone to do this kind of dedicated exploratory
performance testing, the next best thing we can do is to ensure we all
have a recent production dump loaded in our development environments and
keep an eye out for poorly performing parts of code while we are working
on regular patches.
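If we wanted a little tooling to help with that, a helper along these
lines could flag slow spots while we work (warn_if_slow and its
one-second threshold are hypothetical, a sketch rather than anything we
ship):

    import time
    import logging
    from contextlib import contextmanager

    log = logging.getLogger(__name__)

    @contextmanager
    def warn_if_slow(description, threshold=1.0):
        """Log a warning when the wrapped block exceeds the threshold,
        so slow code paths stand out while developing against a
        production dump."""
        start = time.time()
        try:
            yield
        finally:
            elapsed = time.time() - start
            if elapsed > threshold:
                log.warning('%s took %.1fs', description, elapsed)

Wrapped around a suspect code path, say:

    with warn_if_slow('rendering the job page'):
        render_job_page()

it costs nothing to leave in place and only makes noise when something
is genuinely slow.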
Dan Callaghan <dcallagh(a)redhat.com>
Senior Software Engineer, Products & Technologies Operations
Red Hat, Inc.