Introduction
by Nils Philippsen
Hi there,
"I'm the new one", as they say.
I've been messing with RHL, RHEL and Fedora since RHL 4.0, nowadays as
a developer, formerly as a consultant, support person and sysadmin
(paid gigs, that is; unpaid, it's still a mix of all of these). I'm
experienced in software and database design and have used, and still
use, C, Python, Perl and SQL for that, among others. I've also done
web programming with PHP (which I'd rather forget) and nowadays
TurboGears and Co.
My motivation for showing up here is that until now I've only caused
you people work, and I thought that I could help out with things for a
change; I'm thinking about the web, hosted and devel/tools (what's the
difference?) FIGs. I can't say how much time I can spare for Fedora
Infrastructure work, and I'll probably only occasionally be able to
join the meeting (which is a tad late in my timezone), but I should
still be able to work on the odd ticket or two.
My Fedora moniker as well as IRC nickname on Freenode is "nphilipp".
Nils
--
Nils Philippsen
Red Hat
nils(a)redhat.com
PGP fingerprint: C4A8 9474 5C4C ADE3 2B8F 656D 47D8 9B65 6951 3011

"Those who would give up Essential Liberty to purchase a little
Temporary Safety, deserve neither Liberty nor Safety."
-- Benjamin Franklin, 1759
Re: Request for test data based off of obfuscated live data
by John (J5) Palmieri
----- "Toshio Kuratomi" <a.badger(a)gmail.com> wrote:
> John Palmieri wrote:
> > ----- "Toshio Kuratomi" <a.badger(a)gmail.com> wrote:
>
> > A note on the code drop: by requiring the author to modify a spec
> > file, if needed, in order to deploy their changes into their
> > environment (revision numbers would be automated), patches would
> > include spec file changes instead of the maintainer having to sync
> > them by hand. This would also make sure the build files are kept up
> > to date, as the author would have to make their changes work in an
> > RPM environment just to test them, as opposed to just installing
> > from their source tree, which often leads to annoying bugs (like
> > missing files in a distributed tarball). Also, by making it easy to
> > generate a patch and submit it to trac, we will get more
> > consistently formatted patches (such as using the VCS's patch
> > format) and most likely more people getting involved as the
> > overhead shrinks a bit (how many people want to go to individual
> > trac instances to file a patch?).
> >
> I'm not sure that this is a logical outcome of having an
> infrastructure-apps image. It seems more like guidelines that would
> need to be established per-project. For instance, there is absolutely
> nothing stopping someone from installing the packagedb from the rpm,
> checking out the development source to their home directory, and then
> changing the config file to run that instead.
>
> -Toshio
I'm talking more about newcomers who don't have a process of their
own. If it's all set up with scripts and tutorials right off the
livecd (it could be a locally running url), submitting well-formatted
patches becomes trivial for new developers. The key is to make the
documentation easy to find and the process somewhat easier, or at
least more convenient than having to figure out the steps to get the
code, change the configs and then restart the servers. I don't want to
put process ahead of getting code contributions; it would just be a
more integrated workflow where a developer wouldn't have to learn each
step separately (well, except for the actual development and the
various vcs workflows). Anyway, just an idea.

I plan on writing a script for pulling down each piece of
infrastructure code and another script for diffing, packaging and
deploying the service in the test instance. If others find them useful
I will add other workflow scripts.
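Something like this minimal sketch is what I have in mind for the
checkout script; the project names are real, but the VCS commands and
repository URLs are placeholders, not the actual locations of the
trees:

#!/usr/bin/env python
# Sketch of a checkout helper: clone each infrastructure project into
# a local working directory so all the pieces live side by side.
# The VCS commands and repository URLs are placeholders.
import os
import subprocess

REPOS = {
    "fas":   ["git", "clone", "git://example.org/fas.git"],
    "pkgdb": ["bzr", "branch", "bzr://example.org/pkgdb"],
    "bodhi": ["hg", "clone", "http://example.org/bodhi"],
}

def checkout_all(workdir="infra-src"):
    if not os.path.isdir(workdir):
        os.makedirs(workdir)
    for name, cmd in REPOS.items():
        dest = os.path.join(workdir, name)
        if os.path.exists(dest):
            continue  # already checked out; a real script would update
        subprocess.check_call(cmd + [dest])

if __name__ == "__main__":
    checkout_all()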
--
John (J5) Palmieri
Software Engineer
Red Hat, Inc.
Re: Request for test data based off of obfuscated live data
by John (J5) Palmieri
----- "Toshio Kuratomi" <a.badger(a)gmail.com> wrote:
> Mike McGrath wrote:
> > We're actually in a pretty unique situation in that most of our
> > data is public anyway; replicating pkgdb and bodhi data, for
> > example, should be fairly easy. Replicating the fas stuff should be
> > easy too.
> >
> > We're going to need to replicate not only the data but access to
> > the data, and this, to me at least, sounds like another development
> > environment that is more mature than the pt setup but still not as
> > strict as the staging environment.
> >
> Yep, that seems to be where the need fits in.
>
> > What do others think on this? I like the low overhead of the pt
> > servers since people are kind of on their own in getting stuff done
> > and it doesn't cause extra work to the sysadmin-web guys. But there
> > are drawbacks to it.
> >
> I'm not sure what's best. There are a lot of problems with doing this
> in a shared development environment. Even if we're controlling the
> access to the data, we'd still be more open with it here than in
> production or staging. For instance, people who are not primary fas
> authors or system admins would have access to make modifications to
> fas. So I think we'd still end up wanting to modify the data before
> it hits this environment. We'd also have to devote resources to
> it.... another db server, a host to run koji-web, hub, builder, etc.
> We'd have to update them. We'd have to work out conflicts between
> different developers, for instance if we work on CSRF fixes in this
> environment and it makes developing other apps like myfedora just
> flat out fail for a while.
>
> If we can munge the data enough to be comfortable releasing it to the
> public, it seems like that would cost us less man hours. However, it
> isn't entirely free. We'd still have to make new dumps of data,
> modify it for changes in the data model, etc. Then the developer
> would become responsible for downloading the sanitised data and
> running it on their network. Which is good because it isn't us, but
> bad because it's not trivial to set all this up.
I would be willing to write scripts and a kickstart file to make this
trivial: getting a qemu image or test machine up and running in a
couple of hours (mostly waiting for downloads and installs to happen).
What I was thinking of is an environment that sets up a stable Fedora
Infrastructure stack, complete with puppet scripts to configure the
services to work with one another, and a set of scripts for pulling
fresh data, modifying common pieces of the various dbs (like changing
dates to stay current, or setting up one of the users as your test
user), and pulling down code from the various source trees for hacking
on particular pieces of the infrastructure while integrating them into
the environment.
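To give an idea of the "changing dates to stay current" script, here
is a minimal sketch; the table and column names are made-up
placeholders, not the real schemas:

#!/usr/bin/env python
# Sketch: shift all timestamps in a sanitised test database forward so
# the sample data always looks recent. The table/column names below
# are hypothetical placeholders, not the real bodhi/pkgdb schemas.
import psycopg2

DATE_COLUMNS = [
    ("builds", "creation_time"),
    ("updates", "date_submitted"),
]

def refresh_dates(dsn="dbname=testdata"):
    conn = psycopg2.connect(dsn)
    cur = conn.cursor()
    for table, column in DATE_COLUMNS:
        # Move every row forward by the age of the newest record, so
        # the most recent sample row lands at "now".
        cur.execute("UPDATE %s SET %s = %s + (now() - "
                    "(SELECT max(%s) FROM %s))"
                    % (table, column, column, column, table))
    conn.commit()
    conn.close()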
A note on the code drop: by requiring the author to modify a spec
file, if needed, in order to deploy their changes into their
environment (revision numbers would be automated), patches would
include spec file changes instead of the maintainer having to sync
them by hand. This would also make sure the build files are kept up to
date, as the author would have to make their changes work in an RPM
environment just to test them, as opposed to just installing from
their source tree, which often leads to annoying bugs (like missing
files in a distributed tarball). Also, by making it easy to generate a
patch and submit it to trac, we will get more consistently formatted
patches (such as using the VCS's patch format) and most likely more
people getting involved as the overhead shrinks a bit (how many people
want to go to individual trac instances to file a patch?).
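The automated revision numbering could be as simple as this sketch,
which bumps the Release tag in a spec file before each test deployment
(purely illustrative, not an existing tool):

#!/usr/bin/env python
# Sketch: bump the Release tag in a spec file so every test deployment
# gets a unique revision. Purely illustrative, not an existing tool.
import re
import sys

def bump_release(specfile):
    spec = open(specfile).read()

    def bump(match):
        # e.g. "Release: 3%{?dist}" becomes "Release: 4%{?dist}"
        return "Release: %d%s" % (int(match.group(1)) + 1,
                                  match.group(2))

    spec = re.sub(r"Release:\s*(\d+)(\S*)", bump, spec, count=1)
    open(specfile, "w").write(spec)

if __name__ == "__main__":
    bump_release(sys.argv[1])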
--
John (J5) Palmieri
Software Engineer
Red Hat, Inc.
Re: Request for test data based off of obfuscated live data
by John (J5) Palmieri
----- "Toshio Kuratomi" <a.badger(a)gmail.com> wrote:
<snip>
> >
> Getting koji data munged and transferred may be a problem as it is
> just so darn big. If we don't have to make changes to the data in
> koji, just get it distributed, then we could give access to a
> backup... but that's still a lot of information to transfer.
We would only need a portion of the data. Ideally everything since the
last supported version of each distribution (or the one after, so we
get obsolete data to test against), but in reality the last month of
activity should be suitable.
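For koji that would be a query along these lines; the column names
follow koji's schema as I understand it, so treat them as assumptions:

#!/usr/bin/env python
# Sketch: pull only the last month of build activity from a copy of
# the koji database. Assumes koji's "build" table has a creation_time
# column; verify against the real schema before relying on this.
import psycopg2

conn = psycopg2.connect("dbname=koji")
cur = conn.cursor()
cur.execute("SELECT id, pkg_id, version, release, creation_time "
            "FROM build "
            "WHERE creation_time > now() - interval '1 month'")
recent_builds = cur.fetchall()
conn.close()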
> pkgdb, fas, and bodhi are relatively small.
>
> fas is where we'd have our major security problems. We can't give the
> information out unmunged. I've munged it before, though, so it's
> doable. How strict we need to be is an issue, though. If we remove
> all the identifying information in the people table except for the
> userid, is that sufficient? *Note: We probably also need to munge
> data in the configs table.
As long as we randomly generate data for that (the username at least).
Note that UIDs are easily mapped back to usernames, so you might want
to randomize those too. Also, I believe packagedb and bodhi use
usernames as the key instead of UIDs, so those would have to match
accounts in the munged FAS db. I would suggest generating a list of
names from a dictionary and using that list to randomize names in the
other services. Of course the names need to correspond to group
permissions, so some logic would be needed to make sure the records
associated with a given name are valid. However, having the ability to
recreate the associated user names may not be an issue, since all of
that data is public. More importantly, we need to make sure we aren't
giving out addresses, phone numbers, password hashes and other such
keys.
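A minimal sketch of the username munging I have in mind: build one
stable mapping from real names to dictionary words and reuse it across
all the dumps, so the cross-service keys stay consistent. Applying the
mapping to the actual dumps is left out, and /usr/share/dict/words is
assumed to exist:

#!/usr/bin/env python
# Sketch: build one stable real-name -> fake-name mapping and reuse it
# for every service's dump, so fas, pkgdb and bodhi all end up with
# the same (fake) usernames and their relationships stay intact.
# Assumes /usr/share/dict/words exists.
import random

def build_mapping(real_usernames, wordlist="/usr/share/dict/words"):
    words = [w.strip().lower() for w in open(wordlist)
             if w.strip().isalpha()]
    fakes = random.sample(words, len(real_usernames))
    return dict(zip(sorted(real_usernames), fakes))

# The same mapping must then be applied to every table that keys on
# usernames, in all three databases.
mapping = build_mapping(["jsmith", "adoe"])  # example input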
> pkgdb and bodhi don't have information that is privacy policy
> sensitive. (Which doesn't mean that some users won't like it... just
> that I think we're covered.)
Mike's suggestion of running it by legal sounds like the best route.
--
John (J5) Palmieri
Software Engineer
Red Hat, Inc.
Request for test data based off of obfuscated live data
by John (J5) Palmieri
Hey guys,
On IRC the other day there was a discussion in which I requested the
ability to use live data, stripped of all personally identifying
information, for creating a test bed for development of MyFedora. I
was asked to write up the reasons for needing this data, as it was
hard to explain in detail on IRC.
Current Development
First, let's go over my current development process. Right now I work
on live data. Since most of my code involves reading data, this isn't
an issue, except perhaps for putting load on the servers when testing.
It becomes less ideal as I need to test code that modifies data, such
as pushing a build. Even more daunting is when I need to add
functionality to one of the other apps: creating data that somewhat
reflects the real world is time consuming and often a blocker that
makes me move on to something else.
Why I need the data
So why is it important to have real-world data, or at least a
semblance of it? Working on something that will consolidate a lot of
the data into one interface, I hit the vast majority of our
infrastructure while treating it as one entity. If each piece of
infrastructure lived in isolation it wouldn't be as big an issue, but
as it stands the data has keys which link each record in one piece of
infrastructure to a record in another. For instance, FAS usernames
link to builds in Koji, whose build numbers link to releases in Bodhi.
I need data with those links intact so I can follow the workflow from
one tool to another, test access rights and simulate the progression
of data through the pieces of infrastructure, all without worrying
about stomping on the data, because I can quickly restore it to its
initial state. Also, I can't hit every edge case; I need to
concentrate on how the data most commonly flows, and having something
that resembles what we see on the production servers is key there.
What I am asking for
As stated above, I would like a data set representing the data one
would see in our infrastructure. Ideally this would mean a secure
process that dumps data from koji, bodhi, fas and pkgdb while
obfuscating all personally identifying data. This could include
switching package owners and UIDs at random so the data can't be
traced back (though in reality one could gather this data slowly by
querying each of the infrastructure pieces). I only need a relatively
small sampling, say a month's worth of data and a semi-random drawing
of the most active contributors and their packages. I can update dates
to keep the data "current" for testing purposes. Every once in a while
I would need a fresh sampling to make sure the code doesn't just work
with my sample set.
Why pure random data isn't sufficient
Random data does not produce the relationships needed to work with the
entire Fedora infrastructure, and even if it did, the data would not
cover real-world scenarios and most of the relationships would likely
be invalid (like a build tagged for F-8 released in F-9). Also, things
like koji tags and group information absolutely need to conform to the
structure we have set up. For instance, I key off of the string
"updates-candidate" to determine whether I should show a button to
push the build to bodhi. The button also relies on FAS telling bodhi
that the currently logged-in user is in the correct group to push. If
it is not an updates candidate or the user is not in the correct
group, the button does not show.
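The logic is roughly this; a self-contained sketch of the check, not
the actual MyFedora code. The "updates-candidate" string is the real
key, while the function, its inputs and the group name are
illustrative:

def show_push_button(build_tags, user_groups, push_group="releng"):
    # Sketch of the button logic described above: only offer the
    # "push to bodhi" button when the build carries an
    # updates-candidate tag and the logged-in user is in the right
    # FAS group. The push_group name is a placeholder.
    is_candidate = any("updates-candidate" in tag for tag in build_tags)
    return is_candidate and push_group in user_groups

# e.g. show_push_button(["dist-f9-updates-candidate"], ["releng"])
# returns True; an untagged build or a user outside the group gets no
# button.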
What I would do with this data
I would be able to accelerate development of the more interesting bits
of MyFedora while also being able to experiment and quickly produce
patches for various bits of infrastructure. For instance, FAS already
has all the API I need to edit my profile, except that it is not
exposed outside of FAS for lack of a simple @allow_json decorator, so
I had to drop that feature until after the development freeze, when a
new FAS with the patch is put into production. Even then, modifying
data on a production server, even if it is my own profile, is not an
ideal way to test. If I had a data set I could set up my own test
environment, apply the patch and test before we deploy. I could then
go and patch other parts of the infrastructure to, say, speed up a
query, add queries I needed and generally improve the base
infrastructure as I develop MyFedora. The patches would then be sent
to trac and accepted or rejected in the usual manner.
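The kind of change I mean for the @allow_json example above is tiny;
in a TurboGears 1 controller it comes down to roughly this (the
controller, method and template name are made up for illustration; the
allow_json flag is the point):

from turbogears import controllers, expose

class User(controllers.Controller):
    # Hypothetical controller method, for illustration only: with
    # allow_json=True a client can request the same data with
    # ?tg_format=json instead of only getting the HTML template.
    @expose(template="fas.templates.user.view", allow_json=True)
    def view(self, username):
        return dict(username=username)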
Others could also more easily get into hacking on infrastructure bits,
as they would have a place to start instead of a daunting blank slate.
If I can get the data, I am more than happy to write scripts and
kickstart files to easily set up and tear down a Fedora Infrastructure
test and development instance.
Whatever solution the infrastructure team thinks is good for what I
need will be workable. Above is what I think I need and an explanation
of why it is needed. Hopefully there is a solution we can agree on so
we can move forward fairly quickly. Thanks for your time.
--
John (J5) Palmieri
Software Engineer
Red Hat, Inc.
outage at ibiblio
by Mike McGrath
Hey guys, we're having connection issues to ibiblio again; I've taken
their IP out of our proxy DNS setup so users don't hit it. They still
will with torrent. The issue seems to be less a total outage and more
packet loss; I saw about 52% loss earlier, so torrent users should
still be able to connect, just slower than they'd expect for now.
Keeping an eye on things.
-Mike
Memory increase for proxy1
by Mike McGrath
I'd like to double the memory in proxy1. Seth pointed out to me a week
or so ago that it's built 64-bit, unlike the rest of our proxies. This
means, generally, that it consumes twice the amount of memory. The way
we have httpd tuned, I'm worried it'll start to swap on release day.
After the release I'll rebuild this host.
2 +1's?
-Mike
Re: sspp (server-status php parser)
by Mike Putnam
Serghey's script (http://sspp.googlecode.com/) requires that
mod_status be enabled (it ships disabled in Apache/2.0.52, at
least)...
#<Location /server-status>
# SetHandler server-status
# Order deny,allow
# Deny from all
# Allow from .example.com
#</Location>
...and...
#ExtendedStatus On
(also delivered disabled in Apache/2.0.52)
...as well as write access to the /tmp file mentioned in the code in order to
work.
Once these prerequisites are satisfied, it's a handy little script.
Thanks, Serghey!
-Mike Putnam
http://wisconsinlinux.org/