========================
Retrace server
========================

The retrace server provides a coredump analysis and backtrace generation
service over a network. From the user's perspective, a client sends a coredump
together with some required information to the server and gets back a
backtrace generation task ID. The client then asks the server regularly for
the task status, and when the task is done (the backtrace has been created),
the client downloads the backtrace. If the backtrace generation task fails,
the user gets a log instead.

Sending a coredump to a server might not be acceptable for some users for
security reasons, so we should consider supporting minidumps
(google-breakpad, see
http://code.google.com/p/google-breakpad/wiki/GettingStartedWithBreakpad).

The retrace server supports multiple operating systems (Fedora N (=13),
Fedora N-1 (=12), Fedora Rawhide, Fedora Rawhide Branched (=14)) and multiple
architectures within a single installation.

The retrace server consists of the following parts:
1. abrt-retrace-server: a CGI script handling the communication
2. abrt-retrace-worker: a program doing the real coredump processing
3. server package repository

--------------------------
1) abrt-retrace-server
--------------------------

1.1) It is a CGI script written in Python. Python is well suited for CGI
scripting and HTTP protocol handling, and it is already used in ABRT. CGI
(Common Gateway Interface) is used because scripts using it are easy to
develop and test, and it is widely supported.

1.2) A web server (httpd) must be configured to run the CGI script when
https://someserver/... is accessed.

1.3) Only secure (https) communication is allowed, because coredumps and
backtraces that have not yet been reviewed by the user are private data.

1.4) The server requires a GnuPG-generated key pair, which is unique for
every server. The server's private key is used to generate a password for
every task, and the public key is later used to verify that the client asking
for the backtrace is the same one that uploaded the coredump. The password is
generated by signing the task id with the private key.

1.5) A client creates a new task by sending an HTTP request to the
https://someserver/create URL and providing a .tar.xz archive as the request
content. The archive contains the crash data files. The crash data files are
the same as in the local /var/spool/abrt/ccpp-time-pid/ directory, so the
client only has to pack and upload them. xz is used because it has a good
compression ratio, which is important when compressing large coredumps, and
it provides reasonable compression/decompression speed and memory consumption
(see the measurements below). Compression level 2 is used to compress the
data.

1.5.1) The HTTP request must use the POST method. It must contain proper
"Content-Length" and "Content-Type" (application/x-xz) fields. If the method
is wrong, the server returns the "405 Method Not Allowed" HTTP error code. If
the "Content-Length" field is missing, the server returns the "411 Length
Required" HTTP error code. If an incorrect content type is used, the server
returns the "415 Unsupported Media Type" HTTP error code.

1.5.2) If there is less than 20 GB of free disk space in
/var/spool/abrt/retrace, the server returns the "507 Insufficient Storage"
HTTP error code. The server returns the same HTTP error code if uncompressing
the received .xz file would cause the free disk space to drop below 20 GB. If
the uncompressed data from the received .xz file would take more than 600 MB
of disk space, the server returns the "413 Request Entity Too Large" HTTP
error code. The limit is quite high (600 MB), because a coredump occupies
that disk space on the server only temporarily, until the backtrace is
generated; after that the coredump is deleted by abrt-retrace-worker and the
space is released. The uncompressed data size can be obtained by calling
xz --list file.xz. The --list option has been implemented only recently, so
it might be necessary to implement a fallback that gets the uncompressed data
size by extracting the archive to stdout and counting the extracted bytes,
and to call this fallback when --list does not work on the server.
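To illustrate items 1.5.1 and 1.5.2, here is a minimal sketch of the request
validation written as a Python CGI script. It is only an assumption about how
the checks could look; the helper names (reply, free_space_gb) and the exact
error handling are illustrative, not the final abrt-retrace-server code:

    import os
    import sys

    SPOOL_DIR = "/var/spool/abrt/retrace"
    REQUIRED_FREE_GB = 20

    def reply(status, body=""):
        # A CGI script reports the HTTP status via the "Status" header.
        sys.stdout.write("Status: %s\r\n" % status)
        sys.stdout.write("Content-Type: text/plain\r\n\r\n")
        sys.stdout.write(body)
        sys.exit(0)

    def free_space_gb(path):
        st = os.statvfs(path)
        return st.f_frsize * st.f_bavail / (1024.0 ** 3)

    # 1.5.1) method and header checks
    if os.environ.get("REQUEST_METHOD") != "POST":
        reply("405 Method Not Allowed")
    if "CONTENT_LENGTH" not in os.environ:
        reply("411 Length Required")
    if os.environ.get("CONTENT_TYPE") != "application/x-xz":
        reply("415 Unsupported Media Type")

    # 1.5.2) free disk space check; the uncompressed-size check (xz --list,
    # or counting bytes extracted to stdout) would run after the upload is
    # stored
    if free_space_gb(SPOOL_DIR) < REQUIRED_FREE_GB:
        reply("507 Insufficient Storage")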
1.5.3) The size limit for the received compressed .xz files is configured in
the HTTP daemon which executes the abrt-retrace-server CGI script, so the
limit is not checked inside the script. This assumption must be documented in
the abrt-retrace-server manual page. The recommended value for the maximum
HTTP request size is 30 MB.

1.5.4) If the upload succeeds, the server creates a new directory
/var/spool/abrt/retrace/<id> and extracts the received archive into it. Then
it checks that the directory contains all the required files, and sends the
HTTP response. Then it spawns a subprocess running abrt-retrace-worker on
that directory. To support multiple architectures, the retrace server either
needs access to other machines (with architectures different from the retrace
server's architecture), or needs to install a gdb package compiled separately
for every target architecture (this is the technically and economically
better solution, already used by the avr-gdb package; we would need to add
i686-gdb and/or x86_64-gdb packages to Fedora). In the case of multiple
server machines, the retrace server's /var/spool/abrt/retrace/ directory and
the local yum repositories are shared with those machines via NFS. If the
received request requires an architecture different from the retrace server's
architecture, the retrace server should ssh to a machine providing that
architecture and run abrt-retrace-worker from there. All data and results are
shared via NFS, so just running the worker is enough. The
architecture-to-machine table for the retrace server would be loaded from the
/etc/abrt/retrace-server.conf configuration file.

1.5.5) The following files from the local crash directory are required to be
present in the archive: coredump, architecture, release, packages (this one
does not exist yet, see below for more info). If one or more of these files
are not present in the archive, the server returns the "403 Forbidden" HTTP
error code.

1.5.6) If the upload succeeds, the server response has the "201 Created" HTTP
code. The response includes a field "X-Task-Id" containing a new
server-unique numerical task id, and a field "X-Task-Password" containing the
password required to access the result. It also includes a field
"X-Task-Est-Time" containing the number of seconds the server estimates it
will take to generate the backtrace.

The "X-Task-Password" is a signature generated by GnuPG signing the task id
and the /var/spool/abrt/retrace/<id> directory creation time with the
server's private key. If the signature were generated only from the task id,
all future tasks with the same id would have the same password. If it were
generated from the directory creation time only, there would be a chance that
two tasks share the same password. The combination of both is unique, and the
signature cannot be re-used by an attacker.
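The password scheme in 1.4 and 1.5.6 could be built on GnuPG detached
signatures. The sketch below is only an assumption about the implementation:
the server key selection (e.g. --homedir or --local-user) is omitted, and the
encoding needed to fit the multi-line ASCII-armored signature into a single
HTTP header field is left out.

    import subprocess
    import tempfile

    def sign_task(task_id, dir_ctime):
        # The password is a detached signature of "<task id> <directory
        # creation time>", made with the server's private key.
        data = ("%s %s" % (task_id, int(dir_ctime))).encode()
        proc = subprocess.Popen(
            ["gpg", "--batch", "--armor", "--detach-sign", "--output", "-"],
            stdin=subprocess.PIPE, stdout=subprocess.PIPE)
        signature, _ = proc.communicate(data)
        return signature

    def verify_task(task_id, dir_ctime, signature):
        # gpg --verify exits with 0 only when the signature matches the data
        # and was made by a key present in the server's keyring.
        data = ("%s %s" % (task_id, int(dir_ctime))).encode()
        with tempfile.NamedTemporaryFile() as sig, \
             tempfile.NamedTemporaryFile() as doc:
            sig.write(signature)
            sig.flush()
            doc.write(data)
            doc.flush()
            ret = subprocess.call(["gpg", "--batch", "--verify",
                                   sig.name, doc.name])
        return ret == 0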
The algorithm for the "X-Task-Est-Time" estimation should take the coredump
size, the package name, and the current server load into account. The server
should store statistics about how long it takes to generate a backtrace for a
certain package. When the server knows that crashes from the openoffice
package take 5 minutes to process on average, it returns the value 300 in the
field. The client then does not waste time asking about the task every 20
seconds; its first status request comes after 300 seconds. The estimation
algorithm can be very simple at first, but it should be included, as it
nicely decreases the number of requests.

1.6) A client might request a task status by sending a HTTP GET request to
the https://someserver/<id> URL, where <id> is the numerical task id returned
in the "X-Task-Id" field by https://someserver/create.

1.6.1) The client request must contain the "X-Task-Password" field, and its
content must be verified using the server's GnuPG public key. If the password
is not valid, the server returns the "403 Forbidden" HTTP error code.

1.6.2) If the <id> is not in a valid format, or the task does not exist, the
server returns the "404 Not Found" HTTP error code.

1.6.3) If the task exists, the server returns the "200 OK" HTTP code and
includes a field "X-Task-Status" containing one of the following values:
"FINISHED_SUCCESS", "FINISHED_FAILURE", "PENDING".

The field contains "FINISHED_SUCCESS" if the file
/var/spool/abrt/retrace/<id>/backtrace exists. The client might get the
backtrace from the https://someserver/<id>/backtrace URL. The log might be
obtained from the https://someserver/<id>/log URL, and it might contain
warnings about some missing debuginfos etc.

The field contains "FINISHED_FAILURE" if the file
/var/spool/abrt/retrace/<id>/backtrace does not exist and the file
/var/spool/abrt/retrace/<id>/retrace-log exists. The log file with error
messages might be obtained from the https://someserver/<id>/log URL.

The field contains "PENDING" if neither file exists. The client might ask
again after 10 seconds or later.

1.7) A client might request a backtrace by sending a HTTP GET request to the
https://someserver/<id>/backtrace URL, where <id> is the numerical task id
returned in the "X-Task-Id" field by https://someserver/create.

1.7.1) The client request must contain the "X-Task-Password" field, and its
content must be verified using the server's GnuPG public key. If the password
is not valid, the server returns the "403 Forbidden" HTTP error code.

1.7.2) If the <id> is not in a valid format, or the task does not exist, the
server returns the "404 Not Found" HTTP error code.

1.7.3) If the file /var/spool/abrt/retrace/<id>/backtrace does not exist, the
server returns the "404 Not Found" HTTP error code. Otherwise it returns the
file contents, and the "Content-Type" field contains "text/plain".

1.8) A client might request a task log by sending a HTTP GET request to the
https://someserver/<id>/log URL, where <id> is the numerical task id returned
in the "X-Task-Id" field by https://someserver/create.

1.8.1) The client request must contain the "X-Task-Password" field, and its
content must be verified using the server's GnuPG public key. If the password
is not valid, the server returns the "403 Forbidden" HTTP error code.

1.8.2) If the <id> is not in a valid format, or the task does not exist, the
server returns the "404 Not Found" HTTP error code.

1.8.3) If the file /var/spool/abrt/retrace/<id>/retrace-log does not exist,
the server returns the "404 Not Found" HTTP error code. Otherwise it returns
the file contents, and the "Content-Type" field contains "text/plain".
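Items 1.5.6 and 1.6-1.8 describe the whole client-side flow; a minimal
polling sketch follows. It assumes the create step has already returned the
task id, password, and time estimate, and it leaves out TLS certificate
handling and error recovery:

    import time
    import urllib.request

    SERVER = "https://someserver"

    def get(url, password):
        req = urllib.request.Request(url,
                                     headers={"X-Task-Password": password})
        return urllib.request.urlopen(req)

    def wait_for_backtrace(task_id, password, est_time):
        # Send the first status request only after the server's estimate
        # (X-Task-Est-Time), then retry every 10 seconds (1.6.3).
        time.sleep(est_time)
        while True:
            resp = get("%s/%s" % (SERVER, task_id), password)
            status = resp.headers.get("X-Task-Status")
            if status == "FINISHED_SUCCESS":
                return get("%s/%s/backtrace" % (SERVER, task_id),
                           password).read()
            if status == "FINISHED_FAILURE":
                raise RuntimeError(get("%s/%s/log" % (SERVER, task_id),
                                       password).read())
            time.sleep(10)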
1.9) Tasks that were created more than 5 days ago must be deleted, because
tasks occupy disk space (the coredumps are deleted after the retrace, and
only backtraces and configuration remain). A shell script
"abrt-retrace-cleanold", which checks the creation time and deletes the old
directories in /var/spool/abrt/retrace, must be written. It is expected that
the server administrator sets up cron to call the script once a day. This
assumption must be mentioned in the abrt-retrace-cleanold manual page.

1.10) The maximum number of simultaneous connections is configured in the
HTTP daemon which executes the CGI script, so this limit is not checked by
the script. This must be documented in the server manual page. The
recommended value for the maximum number of connections is 20 (to be tested).
The archive extraction, chroot preparation, and gdb analysis are mostly
limited by the hard drive size and speed.

------------------------------
2) abrt-retrace-worker
------------------------------

2.1) The worker gets a /var/spool/abrt/retrace/<id> directory as its input.
It reads from the directory which packages (and which versions) it needs
(this step needs a new "packages" file to be provided by ABRT). Then it
prepares a "root" subdirectory with the packages, their debuginfo, and gdb
installed. Then it moves the coredump there, changes root (using chroot) to
the subdirectory, and runs gdb there to generate the backtrace. Once the
backtrace is generated, it is copied to the /var/spool/abrt/retrace/<id>
directory as the "backtrace" file, and a log documenting the process (yum
installation of packages etc.) is copied to the "retrace-log" file. Then the
chroot subdirectory is removed.

The packages and gdb must be selected depending on the architecture where the
system crashed. That architecture might be different from the server's
architecture. A special version of gdb needs to be installed to analyze a
coredump from an architecture other than the server's.

2.2) We need to create a proper chroot-ready environment for each supported
operating system, which may be different from the retrace server's operating
system. We will use the "mock" library to do that
(https://fedorahosted.org/mock/). The /usr/bin/mock binary is not useful for
us, but the underlying Python library is. So an ABRT-specific interface to
the mock library must be written (/usr/bin/abrt-retrace-mock), supporting
only the operations we need: package extraction/installation, and a gdb run
resulting in a backtrace file and/or a log file.

2.3) We can save some time and disk space by extracting only the binaries and
dynamic libraries from the packages. We can save even more time and disk
space by extracting only the libraries and binaries really used by the
coredump. Packages should not be "installed", they should only be
"extracted".

2.4) We need to support every Fedora release with all packages that ever made
it to the updates and updates-testing repositories. In order to provide all
those packages, a local yum repository is maintained for every supported
operating system. The debuginfos will be provided by a debuginfo server in
the future (this will save the server disk space). However, we should support
the usage of local debuginfo first, and add the debuginfofs support later.
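Putting 2.1 and 2.2 together, the worker could be driven by a small script
along the following lines. The abrt-retrace-mock command only exists as a
plan in 2.2, so the subcommands and options used here are purely hypothetical
placeholders:

    import os
    import shutil
    import subprocess

    def retrace(task_dir):
        # task_dir is /var/spool/abrt/retrace/<id>
        chroot = os.path.join(task_dir, "root")
        log = open(os.path.join(task_dir, "retrace-log"), "w")

        # Hypothetical interface to the planned abrt-retrace-mock wrapper:
        # prepare the chroot with gdb and the packages listed in "packages"
        # for the crash's release and architecture.
        subprocess.call(["abrt-retrace-mock", "prepare", chroot,
                         "--packages", os.path.join(task_dir, "packages"),
                         "--release", os.path.join(task_dir, "release"),
                         "--arch", os.path.join(task_dir, "architecture")],
                        stdout=log, stderr=log)

        # Move the coredump into the chroot and run gdb there (2.1).
        shutil.move(os.path.join(task_dir, "coredump"), chroot)
        subprocess.call(["abrt-retrace-mock", "gdb", chroot,
                         "--output", os.path.join(task_dir, "backtrace")],
                        stdout=log, stderr=log)

        # Only "backtrace" and "retrace-log" remain; the chroot (including
        # the coredump) is removed to release the disk space.
        shutil.rmtree(chroot)
        log.close()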
------------------------------
3) server package repository
------------------------------

A repository with Fedora packages must be maintained locally on the server to
provide good performance and to provide older packages that have already been
removed from the official repositories. We need a package downloader, which
scans the Fedora servers for new packages and downloads them so that they are
immediately available.

3.1) rsync cannot be used with the Fedora repositories, because older
versions of packages are regularly deleted from the updates and
updates-testing repositories. We must support older versions of packages,
because that is one of the two major pain points the retrace server is
supposed to solve (the other one is the slowness of debuginfo download and
the debuginfo disk space requirements).

3.2) The reposync tool from yum looks useful and will likely be used. ABRT
should contain a script "abrt-reposync", which calls reposync with the proper
arguments to download packages from the Fedora repositories, but does not
delete older versions of the packages. The retrace server administrator is
expected to call this script from cron every 6 hours or so. This expectation
must be documented in the abrt-reposync manual page. When abrt-reposync is
used to sync with the Rawhide repository, unneeded packages (those for which
a newer version exists) must be removed after residing for one week next to
the newer version in the same repository.

3.3) On a typical retrace server installation, the /etc/yum.repos.d directory
should contain configuration files for all supported operating systems
(Fedora versions) and all architectures. We should ship the yum configuration
files together with the retrace server.

3.4) The packages should be downloaded to a local repository in
/var/spool/abrt/repo/{fedora12,fedora12-debuginfo,...}.

3.5) We should consider the possibility of removing all unneeded content from
the packages in the retrace package repository. We need just the binaries and
dynamic libraries, and that is a tiny part of the package contents. However,
we need a working gdb in the chroot. Can that be provided by the server via
hardlinks, setting a proper PATH in the chroot, etc.?

----------------------------
4) traffic/load estimation
----------------------------

4.1) 2500 bugs are reported from ABRT every month. Approximately 7.3% of them
are Python exceptions, which do not need a retrace server. That means that
about 2315 bugs per month need the retrace server, which is 77 bugs per day,
or 3.2 bugs every hour on average (see the calculation below). Occasional
spikes might be much higher (imagine a user who decides to report all his 8
crashes from the last month).

4.2) We should probably not try to predict whether the monthly bug count will
go up or down. New, untested versions of software are added to Fedora, but on
the other hand most software matures and becomes less crashy. So let's assume
that the bug count stays approximately the same.
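A quick back-of-the-envelope check of the numbers in 4.1 (the 30-day month is
an assumption used only for this rough estimate):

    monthly_bugs = 2500
    python_share = 0.073   # Python exceptions do not need the retrace server

    retrace_bugs = monthly_bugs * (1 - python_share)  # ~2317, text uses 2315
    per_day = retrace_bugs / 30.0                     # ~77
    per_hour = per_day / 24                           # ~3.2

    print(retrace_bugs, per_day, per_hour)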
4.3) Test crashes (they show that we should probably use `xz -2` to compress
coredumps):

- firefox with 7 tabs with random pages opened
  - coredump size: 172 MB
  - xz:
    - compression level 6 - default:
      - compression time on my machine: 32.5 sec
      - compressed coredump: 5.4 MB
      - decompression time: 2.7 sec
    - compression level 3:
      - compression time on my machine: 23.4 sec
      - compressed coredump: 5.6 MB
      - decompression time: 1.6 sec
    - compression level 2:
      - compression time on my machine: 6.8 sec
      - compressed coredump: 6.1 MB
      - decompression time: 3.7 sec
    - compression level 1:
      - compression time on my machine: 5.1 sec
      - compressed coredump: 6.4 MB
      - decompression time: 2.4 sec
  - gzip:
    - compression level 9 - highest:
      - compression time on my machine: 7.6 sec
      - compressed coredump: 7.9 MB
      - decompression time: 1.5 sec
    - compression level 6 - default:
      - compression time on my machine: 2.6 sec
      - compressed coredump: 8 MB
      - decompression time: 2.3 sec
    - compression level 3:
      - compression time on my machine: 1.7 sec
      - compressed coredump: 8.9 MB
      - decompression time: 1.7 sec

- thunderbird with thousands of emails opened
  - coredump size: 218 MB
  - xz:
    - compression level 6 - default:
      - compression time on my machine: 60 sec
      - compressed coredump size: 12 MB
      - decompression time: 3.6 sec
    - compression level 3:
      - compression time on my machine: 42 sec
      - compressed coredump size: 13 MB
      - decompression time: 3.0 sec
    - compression level 2:
      - compression time on my machine: 10 sec
      - compressed coredump size: 14 MB
      - decompression time: 3.0 sec
    - compression level 1:
      - compression time on my machine: 8.3 sec
      - compressed coredump size: 15 MB
      - decompression time: 3.2 sec
  - gzip:
    - compression level 9 - highest:
      - compression time on my machine: 14.9 sec
      - compressed coredump size: 18 MB
      - decompression time: 2.4 sec
    - compression level 6 - default:
      - compression time on my machine: 4.4 sec
      - compressed coredump size: 18 MB
      - decompression time: 2.2 sec
    - compression level 3:
      - compression time on my machine: 2.7 sec
      - compressed coredump size: 20 MB
      - decompression time: 3 sec

- evince with 2 pdfs (1 and 42 pages) opened
  - coredump size: 73 MB
  - xz:
    - compression level 2:
      - compression time on my machine: 2.9 sec
      - compressed coredump size: 3.6 MB
      - decompression time: 0.7 sec
    - compression level 1:
      - compression time on my machine: 2.5 sec
      - compressed coredump size: 3.9 MB
      - decompression time: 0.7 sec

- OpenOffice.org Impress with a 25-page presentation
  - coredump size: 116 MB
  - xz:
    - compression level 2:
      - compression time on my machine: 7.1 sec
      - compressed coredump size: 12 MB
      - decompression time: 2.3 sec

4.4) So let's imagine there are some users who want to report their crashes
at approximately the same time. Here is what the retrace server must handle:
- 2 OpenOffice crashes
- 2 evince crashes
- 2 thunderbird crashes
- 2 firefox crashes

We will use the xz archiver with compression level 2 on ABRT's side to
compress the coredumps. So the users spend 53.6 seconds in total packaging
the coredumps. The packaged coredumps take 71.4 MB, and the retrace server
must receive that data. The server unpacks the coredumps (perhaps at the same
time), so they need 1158 MB of disk space on the server. The decompression
will take 19.4 seconds. Several gigabytes will be needed to install all the
required packages and debuginfos for every chroot (8 chroots, 2 GB each =
16 GB, but this seems like an extreme, maximal case). Some space will be
saved by using a debuginfofs.
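The totals in 4.4 follow directly from the xz level 2 measurements in 4.3; a
small arithmetic check:

    # per application at xz -2:
    # (compression sec, compressed MB, coredump MB, decompression sec)
    crashes = {
        "firefox":     (6.8,  6.1, 172, 3.7),
        "thunderbird": (10.0, 14.0, 218, 3.0),
        "evince":      (2.9,  3.6,  73, 0.7),
        "openoffice":  (7.1, 12.0, 116, 2.3),
    }
    count = 2  # two crashes of each application

    pack_time   = count * sum(c[0] for c in crashes.values())  # 53.6 sec
    upload_mb   = count * sum(c[1] for c in crashes.values())  # 71.4 MB
    disk_mb     = count * sum(c[2] for c in crashes.values())  # 1158 MB
    unpack_time = count * sum(c[3] for c in crashes.values())  # 19.4 sec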
Note that most applications are not as heavyweight as OpenOffice and Firefox.