Hello all,
first, a quick intro of myself: I'm Martin Pitt (nicknamed "pitti" on IRC and IRL) and joined Red Hat's Cockpit team yesterday. Until then I've been a Debian developer for about 14 years and an Ubuntu developer for about 12½. I've touched a lot of things over the years, but most recently I've mostly been involved in plumbing (systemd, networking, udisks and the like) and Ubuntu's CI.
While learning about cockpit and how to test it, I put together a small script [1] that creates a Fedora 25-based Cockpit development VM out of thin air (using mkosi). This contains a running cockpit (as it comes with F25) as well as all build and test dependencies. This helped me personally to figure out some issues with setting up the tests (like [2]), gives me a reproducible dev environment without cluttering my host system with lots of build/test depends, and lets me use QEMU's snapshots to reset to a clean state. Stef mentioned that this might also be useful for improving our isolation in GitHub's integration tests.
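At its core the script is a single mkosi invocation; stripped down it looks roughly like this (sketch only -- the exact options and package list are in [1], and mkosi's option names may have shifted since):

| # rough sketch, not the actual script from [1]
| sudo mkosi --distribution fedora --release 25 --format raw_gpt --bootable \
|     --package cockpit --package qemu-kvm --password test \
|     --output /srv/vm/cockpit.img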
What does it look like? You call it with the output VM path and cockpit's git checkout directory as arguments; it will do some grinding and eventually give you some info on how to use it:
| $ ~/cockpit-dev-vm.sh /srv/vm/cockpit.img ~/upstream/cockpit
| [...]
| Run the VM (possibly appending "-snapshot"):
|
| qemu-system-x86_64 -enable-kvm -cpu host -nographic -m 6144 -bios /usr/share/edk2/ovmf/OVMF_CODE.fd \
|     -virtfs local,id=src,path=.,security_model=none,mount_tag=src,readonly \
|     -net nic,model=virtio -net user,hostfwd=tcp::22000-:22,hostfwd=tcp::9099-:9090 /srv/vm/cockpit.img
|
| Cockpit: https://localhost:9099
| SSH: ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o CheckHostIP=no -p 22000 test@localhost
|     (password "test")
|
| Read-only view of . is at /src
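With "-snapshot", QEMU writes all changes to a temporary overlay and the image stays pristine, which is the cheap way to reset to a clean state:

| qemu-system-x86_64 -enable-kvm -cpu host -nographic -m 6144 -bios /usr/share/edk2/ovmf/OVMF_CODE.fd \
|     -virtfs local,id=src,path=.,security_model=none,mount_tag=src,readonly \
|     -net nic,model=virtio -net user,hostfwd=tcp::22000-:22,hostfwd=tcp::9099-:9090 \
|     -snapshot /srv/vm/cockpit.img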
After booting the VM and ssh'ing in, you can copy the read-only view of the outside cockpit tree to a writable place:
| cp -a /src cockpit
| cd cockpit
... and run an integration test:
| $ sudo test/vm-prep
| $ test/verify/testsuite-prepare
| # needing the following is a bug [2]; a PR is pending
| $ npm install phantomjs-prebuilt
|
| $ test/verify/check-login
Binding in the outside checkout dir is useful so that the VM doesn't have to re-download the large test VM images, and you don't need to spend VM disk space on them. I'm not too happy about the read-only /src yet; I'll see if it's feasible to automatically set up an overlayfs for ~test/cockpit instead.
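An untested sketch of how that could look inside the VM (upper and work dirs just need to live on the same writable file system):

| # overlay a writable layer on top of the read-only /src
| mkdir -p /tmp/src-upper /tmp/src-work ~test/cockpit
| sudo mount -t overlay overlay \
|     -o lowerdir=/src,upperdir=/tmp/src-upper,workdir=/tmp/src-work ~test/cockpit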
Note that the VM gets 6 GiB of RAM, as some of its inner VMs are quite large, so you need sufficient RAM on your host. Also note that this requires nested KVM, i.e. booting the host with "kvm-intel.nested=1" on the kernel command line -- /etc/modprobe.d/kvm-intel.conf apparently intends to supply that option, but at least on F25 it doesn't work.
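One way to check and enable this on a Fedora host:

| # prints Y (or 1) if nested KVM is already enabled
| cat /sys/module/kvm_intel/parameters/nested
| # otherwise, add the option for all installed kernels, then reboot
| sudo grubby --update-kernel=ALL --args="kvm-intel.nested=1"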
Maybe this is useful for someone/something else, please let me know if you have ideas for improvements.
Thanks,
Martin
[1] http://www.piware.de/tools/cockpit-dev-vm
[2] https://github.com/cockpit-project/cockpit/issues/5676
On 04.01.2017 16:45, Martin Pitt wrote:
> Hello all,
> first, a quick intro of myself: I'm Martin Pitt (nicknamed "pitti" on IRC and IRL) and joined Red Hat's Cockpit team yesterday. Until then I've been a Debian developer for about 14 years and an Ubuntu developer for about 12½. I've touched a lot of things over the years, but most recently I've mostly been involved in plumbing (systemd, networking, udisks and the like) and Ubuntu's CI.
Awesome. Welcome :)
Cockpit has had to be involved in so many of the same things...
> While learning about cockpit and how to test it, I put together a small script [1] that creates a Fedora 25-based Cockpit development VM out of thin air (using mkosi). This contains a running cockpit (as it comes with F25) as well as all build and test dependencies. This helped me personally to figure out some issues with setting up the tests (like [2]), gives me a reproducible dev environment without cluttering my host system with lots of build/test depends, and lets me use QEMU's snapshots to reset to a clean state. Stef mentioned that this might also be useful for improving our isolation in GitHub's integration tests.
> What does it look like? You call it with the output VM path and cockpit's git checkout directory as arguments; it will do some grinding and eventually give you some info on how to use it:
This is pretty cool. And the experience has gotten you familiar with all sorts of details already. My question is whether you've tried 'vagrant up' in a cockpit git checkout ... and whether that solves the same issue ... or is hopelessly broken :D
Stef
Hello Stef,
Stef Walter [2017-01-04 16:56 +0100]:
> This is pretty cool. And the experience has gotten you familiar with all sorts of details already.
Right, that's mostly why I went the manual route, for the learning exercise. TBH I'm also a bit of a control freak and don't like libvirt or vagrant much: they sloppily copy huge images around in triplicate; it's not obvious which things are running (e.g. "virsh list" doesn't show vagrant VMs) or how to clean up behind them; they try to be too magic for my taste (and then e.g. vagrant's promised folder sync doesn't work with the libvirt/qemu provider); and they rely on some third-party images.
> My question is whether you've tried 'vagrant up' in a cockpit git checkout ... and whether that solves the same issue ... or is hopelessly broken :D
I've tried vagrant by itself and quickly ran into some issues, so TBH I haven't tried it with cockpit yet -- but doing that is still on my list. It would certainly be a more "standard" way to create dev VMs, so I want to give it a proper shot.
The currently specified RAM size can't possibly have worked any time recently for running the tests (they need about 6 GiB of RAM; 1 GiB falls over really fast), the box lacks all of the build/test deps, and it uses F24 -- so I guess this has only been used to run cockpit itself, not any of its tests?
Thanks,
Martin
Martin Pitt <martin@piware.de> writes:
> Stef Walter [2017-01-04 16:56 +0100]:
>> My question is whether you've tried 'vagrant up' in a cockpit git checkout ... and whether that solves the same issue ... or is hopelessly broken :D
> [...]
> The currently specified RAM size can't possibly have worked any time recently for running the tests (they need about 6 GiB of RAM; 1 GiB falls over really fast), the box lacks all of the build/test deps, and it uses F24 -- so I guess this has only been used to run cockpit itself, not any of its tests?
Yes, correct. It's intended for people who want to hack on the HTML/CSS/JavaScript parts only.
Just to share, my setup is like this:
- I have a number of development VMs for various OSes that I create manually and ad-hoc with virt-manager. I don't have to recreate them often, if ever. They usually stick around until I notice that I don't use them anymore and then they get deleted. They typically have 8 GiB disk and 2 GiB memory.
- I mount my $HOME into those VMs via NFS and then ssh into them to run "make" etc. Actually, I just run "f25 make ..." from inside Emacs, where the "f25" script does the necessary ssh and changes to the right directory, so from Emacs this feels just like local compilation. (A sketch of this is at the end of this list.)
- NFS is configured as "async"; this is crucial for performance.
- Obviously, all the build deps are installed in the dev VMs, but not on my laptop.
- I run the unit tests in those development VMs.
- I have the ~/.local/share/cockpit -> .../cockpit/dist symlink in those VMs, and my edit-compile-run cycle is:
  - edit
  - run "make" (two keystrokes in Emacs), wait a few seconds
  - [ correct errors and warnings ]
  - reload browser
For some time I was running "webpack --watch", which would save the "make" step, but then Emacs doesn't see the errors and warnings; it was also a bit flaky, and two keystrokes are not something I need to optimize away. :)
"make install" isn't much slower than "make", so I could do that and avoid the symlink, but that would enable the caching features of Cockpit and instead of just reloading the browser, I would have to logout and in again for every change.
- I usually only load the iframe that I am working on into the browser. This makes reloading a bit faster.
- When working on Storaged, NetworkManager, etc., I also do that in the dev VMs, by installing them from source and later reinstalling the stock version. Works well enough.
- I manually add disks and network interfaces to the dev VMs to play with the storage and networking components of Cockpit.
- I run the integration tests on my bare metal laptop. They have few dependencies and do all their destructive work inside their own VMs, so there doesn't seem to be an advantage to running them inside yet another VM. I occasionally use virt-manager to access the console of a test machine, for debugging tests that break the network, for example.
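Roughly how the NFS and "f25" pieces above fit together -- the export path and VM host name here are just placeholders, not my actual setup:

| # /etc/exports on the laptop; "async" is the crucial bit
| /home/me  192.168.122.0/24(rw,async,no_root_squash)

... and the wrapper, e.g. as ~/bin/f25:

| #!/bin/sh
| # ssh into the dev VM and run the given command there, in the directory
| # matching $PWD (naive quoting, but good enough for "f25 make")
| exec ssh f25-dev "cd $PWD && $*"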
Hello Marius,
thanks for sharing your setup!
Marius Vollmer [2017-01-05 10:44 +0200]:
> - I run the integration tests on my bare metal laptop. They have few dependencies and do all their destructive work inside their own VMs, so there doesn't seem to be an advantage to running them inside yet another VM.
I did this (aside from the learning experience and practicing how to reproducibly set up Cockpit testing) because Stef mentioned that we don't currently run the integration tests on GitHub PRs due to security issues -- they would run arbitrary code from the PR on real iron, in particular during the test setup -- and containing the entire process in a VM might be a solution to that.
For local development it's indeed usually too much overhead, and it's more convenient to run them on bare metal than to muck around with lots of ssh port forwardings.
Martin
On 05.01.2017 09:59, Martin Pitt wrote:
> Hello Marius,
> thanks for sharing your setup!
> Marius Vollmer [2017-01-05 10:44 +0200]:
>> - I run the integration tests on my bare metal laptop. They have few dependencies and do all their destructive work inside their own VMs, so there doesn't seem to be an advantage to running them inside yet another VM.
> I did this (aside from the learning experience and practicing how to reproducibly set up Cockpit testing) because Stef mentioned that we don't currently run the integration tests on GitHub PRs due to security issues -- they would run arbitrary code from the PR on real iron, in particular during the test setup -- and containing the entire process in a VM might be a solution to that.
To be clear, we don't run the integration tests for unreviewed pull requests from folks not on the whitelist. Someone is expected to review the pull request and mark it for testing.
Regular contributors to Cockpit go on a whitelist, and their pull requests are tested without first waiting for a review. You've already done two pull requests in two days ... I imagine you'll get on the whitelist soon :)
The verify machines running the integration tests against GitHub always spawn new VMs to do the (often destructive) testing. The tests are staged in a container (which launches the VMs) ... and sometimes that container is itself running in a VM (such as an OpenStack VM).
Here's the container:
https://github.com/cockpit-project/cockpituous/tree/master/verify
Stef
On 01/04/2017 07:45 AM, Martin Pitt wrote:
> Hello all,
> first, a quick intro of myself: I'm Martin Pitt (nicknamed "pitti" on IRC and IRL) and joined Red Hat's Cockpit team yesterday. Until then I've been a Debian developer for about 14 years and an Ubuntu developer for about 12½. I've touched a lot of things over the years, but most recently I've mostly been involved in plumbing (systemd, networking, udisks and the like) and Ubuntu's CI.
Hey, Martin!
Glad to see that even though both of us have changed projects, I'll still see you around!