= Work done last couple weeks:
* Time spent on elfutils: ~50%, had some RHEL & Fedora-related work to do.
* Done rough testing, where I just wanted to sweep over all files, filtering out messages with several broad regexes. Doing more elaborate testing now. The rejection rule is now composed of a package regex, a message regex, and an optional test scriptlet. A message is filtered out if the package regex matches the package name and the message regex matches the message. If present, the scriptlet is then run to validate the assumptions behind the rejection (e.g. check that the section the message refers to is [^W]AX). A sketch of the rule shape is below.
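Roughly, a rejection rule has this shape (an illustrative C++ sketch only; the names are made up, not the actual harness code):

    #include <functional>
    #include <regex>
    #include <string>

    // Hypothetical shape of one rejection rule (sketch, not harness code).
    struct rejection_rule
    {
      std::regex package_re;   // matched against the package name
      std::regex message_re;   // matched against the dwarflint message
      // Optional scriptlet that validates the assumptions behind the
      // rejection; an empty function means "no extra check".  It gets the
      // name of the file the message was reported against.
      std::function<bool (std::string const &)> scriptlet;
    };

    // A message is filtered out iff both regexes match and the scriptlet,
    // if present, confirms the assumption.
    bool
    rejected (rejection_rule const &rule, std::string const &package,
              std::string const &message, std::string const &file)
    {
      return std::regex_search (package, rule.package_re)
        && std::regex_search (message, rule.message_re)
        && (!rule.scriptlet || rule.scriptlet (file));
    }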
= Work scheduled next:
* Expected time spent on elfutils: 80%.
* Continue plowing through the results.
* Roland, what's the next big item on our list?
PM
The real big item at this point is to actually get started on the writer. It is my task this week to write up/flesh out the basic design of writer components so the real work can start.
In the meantime, I think the new thing for you to work on this week is the "smart reader" hooks (now DwarfTasks 4.2). That does not have other prerequisites from me holding it up. It should naturally mix in well with the tail end of the dwarflint reloc work, as that is used in its testing plan.
Do the new libdw C hooks on a separate branch for now. (It would be nice to make it a fork from master, but if it's easier to hack on it using a fork from dwarf, that is fine too.) We'll review those hook details carefully before merging C changes. When they are in shape, my plan is to merge into master the use of the hooks in the libdw C code, with just stub definitions of the hooks. That will keep it easy to maintain use of the hooks in the C code on master and e.g. be sure to use them if we merge the CFI support to master, though they will be inline stubs that compile away. The real definitions of the hooks will come later on the dwarf branch only.
The reloc-aware libdw is not the first priority, but it is something that is already well-specified and straightforward to work on today.
My sense of the priority order of our tasks at large granularity is this:
1. basic writer data structure building (designed for the eventual fancy plans)
2. basic writing of almost-correct format, no relocs, no refs
3. reference tracker (component to solve reference equality and ref writer) -> dwarfcmp meaningful for refs from non-identical-offsets files
4. write correct refs (automatically includes in-CU dup elim)
5. reloc-savvy interfaces in C++
6. reloc generation
The reader reloc hooks (DwarfTasks 4.2.a) task is buried down in #5. But it doesn't have any other blocking prerequisites. Work on #3 and #5 can go in parallel with #1.
My intent this week is to flesh all these out into finer-grained tasks. Next week I hope you can get started on whichever of the highest priority tasks appeals to you most (and I'll work on another one of them).
Thanks, Roland
Roland McGrath wrote:
In the meantime, I think the new thing for you to work on this week is the "smart reader" hooks (now DwarfTasks 4.2). That does not have other prerequisites from me holding it up. It should naturally mix in well with the tail end of the dwarflint reloc work, as that is used in its testing plan.
Do the new libdw C hooks on a separate branch for now. (It would be nice to make it a fork from master, but if it's easier to hack on it using a fork from dwarf, that is fine too.) We'll review those hook details carefully before merging C changes. When they are in shape, my plan is to merge into master the use of the hooks in the libdw C code, with just stub definitions of the hooks. That will keep it easy to maintain use of the hooks in the C code on master and e.g. be sure to use them if we merge the CFI support to master, though they will be inline stubs that compile away. The real definitions of the hooks will come later on the dwarf branch only.
I'm on it. Will commit something later today, so that you can comment on it if you want.
But browsing through the source I realized another thing: block forms can contain relocatable data. What about these? Shall libdw handle them too? I suspect it has to; the client doesn't know about the reloc hooks...
PM
Petr Machata wrote:
Will commit something later today
Done that now. It's on pmachata/reader_hooks branch, forked off master.
These "hooks" are currently simple global functions. When you say "hook", I get the idea of a client- or generally external-party-supplied callback that that external party uses to fine-tune aspects of behaviour of the library in question. However that doesn't mix with your plan not to introduce API changes. I guess you will be able to comment on how much and if at all my hooks fit into your plans.
PM
But browsing through the source I realized another thing: block forms can contain relocatable data. What about these? Shall libdw handle them too? I suspect it has to; the client doesn't know about the reloc hooks...
There are two kinds of block forms: DWARF expressions and constant blocks.
For DWARF expressions, just handle them in the expression decoder. (This includes .debug_loc blocks as well as DW_FORM_block proper.) They are only possible/kosher in operands for certain ops, I think only in DW_OP_addr and DW_OP_call_ref. Those cases in the decoder can use the normal hooks.
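For illustration only, the relocatable case in the decoder could end up looking something like this; __libdw_relocate_address is a made-up name standing in for whatever hook we actually settle on:

    #include <stddef.h>
    #include <dwarf.h>
    #include <libdw.h>

    /* Hypothetical hook; on master a stub definition would simply return
       VALUE unchanged.  */
    extern Dwarf_Addr __libdw_relocate_address (Dwarf *dbg, int sec_index,
                                                size_t offset, Dwarf_Addr value);

    /* Sketch: only the ops whose operands are relocatable need to go through
       the hook; the rest of the decoder is untouched.  */
    static Dwarf_Addr
    relocate_operand (Dwarf *dbg, int sec_index, size_t offset,
                      unsigned int op, Dwarf_Addr raw_value)
    {
      switch (op)
        {
        case DW_OP_addr:        /* operand is a relocatable address */
          return __libdw_relocate_address (dbg, sec_index, offset, raw_value);
        /* DW_OP_call_ref's operand (a .debug_info offset) would go through
           an analogous hook for offsets.  */
        default:
          return raw_value;     /* nothing relocatable here */
        }
    }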
For constant blocks, don't worry about it for the moment. We can see if any relocs in constant blocks actually come up in the test data. I suspect that none will.
In the reloc-savvy interfaces, these will be the one special case, i.e. not just a symbolic address nor a DWARF-internal offset value. The constant_block () accessors/constructors will have a special form that explicitly gives the details of the embedded relocs. Until we work that out in its final form, I think we can just punt on this case.
Done that now. It's on pmachata/reader_hooks branch, forked off master.
Great! I'll try to review it soon.
These "hooks" are currently simple global functions. When you say "hook", I get the idea of a client- or generally external-party-supplied callback that that external party uses to fine-tune aspects of behaviour of the library in question.
No, that's not what I meant to imply. All I mean is to have all the libdw internals go through these few internal functions that we can change later without touching all over the decoder sources again. For the "stub hooks" I had in mind just writing some inlines in libdwP.h so they compile away. (And that much we can merge into master as soon as we've settled on the signatures.) In the final implementation, these will be __libdw_* internal_function globals defined in libdw/relocate.c or suchlike.
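Concretely, something along these lines in libdwP.h would do for the stub form (a sketch only; the name and signature are placeholders, not something we've settled on):

    /* Placeholder name/signature.  On master this inline is the only
       definition, so every use in the decoders compiles away to nothing.
       On the dwarf branch the same name would instead be an
       internal_function global defined in libdw/relocate.c or suchlike.  */
    static inline Dwarf_Addr
    __libdw_relocate_address (Dwarf *dbg __attribute__ ((unused)),
                              int sec_index __attribute__ ((unused)),
                              size_t offset __attribute__ ((unused)),
                              Dwarf_Addr value)
    {
      return value;	/* No relocation handling on master.  */
    }

The decoder call sites stay identical whichever definition is in effect; that's the whole point of funneling them through these few functions.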
Thanks, Roland
Roland McGrath wrote:
My sense of the priority order of our tasks at large granularity is this:
- basic writer data structure building (designed for the eventual fancy plans)
What are these "fancy plans"? DWARF compression (as in zlib) was mentioned a couple times, so that might be one (although that would need reader support, too, wouldn't it?), anything else? I expect that the "semantic compression" will be built on top of this, as an application rather than intrinsic feature of the library.
Also, at what level do you want to write the writer? C? There is a "C++ interface for writer" item on your list of tasks, but it's not clear whether C++ is the place where the writer will be implemented, or rather a place where it will be wrapped.
I'm thinking that .debug_abbrev is one approach to compression that isn't currently held back by the absence of the reference equality component. To do it, one needs to be able to write/modify .debug_info, and that in turn requires writing/modifying .debug_loc, .debug_pub*, and .debug_aranges. A hack that recomputes .debug_aranges and .debug_pub* and writes them to disk would be sufficient for starters, and I don't think I've seen any DW_OP_call_ref in any of our binaries at all. So we could in fact have something in hand soonish...
PM
- basic writer data structure building (designed for the eventual fancy plans)
DwarfOutput is a sketchy new wiki page about the writer work. I didn't post it because I didn't get it more together. But I should have been posting more partial incoherencies earlier rather than waiting.
What are these "fancy plans"?
There it was an oblique reference to the "combined debug archive" idea. I wrote some wiki stuff about an earlier version of that idea, but my current thinking I haven't really written down in detail (or hashed out). The core issue of "eventual fancy" is to combine multiple .debug objects together into an ar archive or something similar (maybe its own format akin to locale-archive), where we write the DWARF in the constituent files using low-level format extensions to permit sharing of data across files inside the archive.
For today, the only thing to consider about that is that it motivates the separation of dwarf_output_collector from dwarf_output. If we do the combined-archive output mode, there will still be one dwarf_output object per logical .debug file, but all will be built using a single dwarf_output_collector object. For this reason, some essential writer work resides in the collector. That will include the core means of identifying duplicates, and format output stuff like abbrev generation.
DWARF compression (as in zlib) was mentioned a couple times,
We have never mentioned the zlib stuff, though binutils does now support it. The term "DWARF compression" is often used and is very ambiguous. The plain section data-compression stuff does not serve the interests that motivate our DWARF size-reduction work (download size of aggregate packages that already use data compression). It is trivial to support, but possibly not even desirable (CPU/memory cost vs direct shareable mmap from files).
I expect that the "semantic compression" will be built on top of this, as an application rather than an intrinsic feature of the library.
No, the DWARF-level size reduction will be in the writer. Part of that is optimal choice of "invisible" format details like abbrevs and form selection. But if you look at any large object, all those other sections are dwarfed in size (no pun intended) by .debug_info. What we expect to be the most effective "semantic compression" is consolidation of duplicate identical DIE subtrees. The writer will (optionally) do this automagically.
Also, at what level do you want to write the writer? C? There is a "C++ interface for writer" item on your list of tasks, but it's not clear whether C++ is the place where the writer will be implemented, or rather a place where it will be wrapped.
The writer will be in pure C++, no real programmatic C interface to it. That is the reason for the whole focus on the C++ layer for the reader. (Eventually some high-level C wrappers + shared argp stuff for "just do it" transformation uses.)
I'm thinking that .debug_abbrev is one approach to compression that isn't currently held back by the absence of the reference equality component.
Most of the work really is not held back by that issue (and it also should not be so huge of an issue). We can do a lot of the work in parallel.
The "peephole" optimization of .debug_abbrev is IIRC what nickc started on (in C) on his branch some time ago. That alone really does not have enough payoff to warrant spending time on it--.debug_abbrev size is not really the problem. The whole-writer approach intrinsically includes doing optimal abbrev generation (and aranges et al), though obviously it is a holistic approach that takes a long time to get from zero to useful.
So, let's get into it. I haven't written the plan down at all coherently, and have clearly failed utterly heretofore even to communicate its outlines to you. I started writing a little bit of code, and we can start talking about that. It's very unfinished, and not even its structure is entirely figured out; it's on git branch roland/dwarf-collector. (I don't intend to make that a real branch, just parked temp commits before I have anything worth committing for real.)
As dwarfcmp.cc test_writer code constructs dwarf_edit from dwarf, so it will construct dwarf_output from dwarf (parameterized by a collector object). The dwarf_output is usable for read like a dwarf or dwarf_edit object, but immutable like a dwarf object (and unlike a dwarf_edit). The construction of the dwarf_output will collect in the collector everything needed to write the output, and consolidate all duplication on the way.
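In other words, very roughly (the constructor shapes here are guesses, since figuring them out is exactly the unfinished part):

    // Illustrative sketch only; the real c++/dwarf_output interfaces do not
    // exist yet, and dwarf_output/dwarf_output_collector declarations are
    // still to be written.
    #include "c++/dwarf"

    void
    sketch (elfutils::dwarf const &in_a, elfutils::dwarf const &in_b)
    {
      // One collector, shared across any number of outputs.  For the
      // combined-archive mode there would be one dwarf_output per logical
      // .debug file, all built with this single collector.
      elfutils::dwarf_output_collector col;

      // Construction walks the input's logical view and consolidates
      // duplication into the collector on the way.
      elfutils::dwarf_output out_a (in_a, col);
      elfutils::dwarf_output out_b (in_b, col);

      // out_a/out_b read like a dwarf or dwarf_edit, but are immutable.
    }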
The part I've started tackling but haven't come close to finishing is the collector data structures to hold and de-duplicate all the same data that a dwarf/dwarf_edit holds today. I think I have some of the containers kind of sane, but I need to figure out how to organize the constructor code.
Once a construct-only dwarf_output works like a dwarf/dwarf_edit (e.g. dwarfcmp -T test), we get to the first real writer step. That is abbrev generation, which really pulls in form selection too.
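To be concrete about what abbrev generation involves (an illustrative sketch, not writer code): collect the distinct (tag, has-children, attribute/form list) shapes actually used in the output and assign each one a code; form selection is what decides the forms before a shape is keyed.

    #include <map>
    #include <utility>
    #include <vector>

    // Sketch of the core of abbrev generation.  A "shape" is what one
    // .debug_abbrev entry describes.
    struct die_shape
    {
      unsigned int tag;
      bool has_children;
      // (attribute, form) pairs in output order; form selection has already
      // chosen the forms by the time a shape is keyed.
      std::vector<std::pair<unsigned int, unsigned int> > attrs;

      bool operator< (die_shape const &other) const
      {
        if (tag != other.tag)
          return tag < other.tag;
        if (has_children != other.has_children)
          return has_children < other.has_children;
        return attrs < other.attrs;
      }
    };

    struct abbrev_table
    {
      std::map<die_shape, unsigned int> codes;

      // Abbrev code for SHAPE, allocating the next code (starting at 1) the
      // first time this shape is seen.
      unsigned int
      code_for (die_shape const &shape)
      {
        std::pair<std::map<die_shape, unsigned int>::iterator, bool> ins
          = codes.insert (std::make_pair (shape,
                                          (unsigned int) codes.size () + 1));
        return ins.first->second;
      }
    };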
Thanks, Roland
So, to elaborate a bit on "semantic compression", the key means there is DW_TAG_imported_unit.
The long comment at the top of c++/dwarf talks about the "logical" vs "raw" view of the DIE tree. The trivial parts of that are hiding DW_AT_sibling and DW_TAG_partial_unit from view. The significant part is that the logical view expands any DW_TAG_imported_unit children in the raw DIE tree so that the logical view never includes a DW_TAG_imported_unit DIE. Instead, where the raw view sees a DW_TAG_imported_unit, the logical view sees one or more different DIEs there (the children of the other CU referenced by DW_TAG_imported_unit's DW_AT_import).
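In sketch form, the expansion amounts to this (illustrative only; the die type and its members, including imported_unit (), are stand-ins and not the actual c++/dwarf templates):

    #include <dwarf.h>

    // Walk PARENT's children as the logical view presents them: a
    // DW_TAG_imported_unit child is never shown; the children of the unit
    // its DW_AT_import refers to are spliced in instead.
    template <typename raw_die, typename visitor>
    void
    visit_logical_children (raw_die const &parent, visitor &visit)
    {
      typedef typename raw_die::children_type::const_iterator iterator;
      for (iterator it = parent.children ().begin ();
           it != parent.children ().end (); ++it)
        if (it->tag () == DW_TAG_imported_unit)
          visit_logical_children (it->imported_unit (), visit);
        else
          visit (*it);
    }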
The output-side C++ interfaces have no "raw view" at all. Instead, the only presentation in dwarf_edit and dwarf_output maps to the "logical view" from the dwarf reader class. The automatic de-duplication code in the writer will work by identifying identical DIE subtrees used more than once, and replacing them with DW_TAG_imported_unit pointers when composing the output CUs.
This is why dwarfcmp compares the logical view. The de-duplicating transformation using DW_TAG_imported_unit is a no-op in the abstract, and should not change anything about the logical view.
The very first crack at writing will not generate DW_TAG_imported_unit. But the basic design of the collector is all organized around consolidating duplication on the way in so we have only one copy stored in the collector. For the "dumb" writer mode, we will then produce identical duplicates from the shared data structures. But the stage will be set for "smart" mode.
Thanks, Roland
I stashed another unfinished tidbit on roland/dwarf-refcmp (not intended for a real commit). This is the start of the approach for reference tracking.
The beginning piece that's started there is to use a different control/dispatch flow for the whole-file comparison. Instead of methods (operator==) on debug_info_entry et al, the recursion is done via methods on the dwarf_comparator object. This gives us a place to hang the ref tracking data structures outside the dwarf object itself, where the deep-in-recursion calls can refer to them and the tree-walking steps can populate them. This sort of structure for comparison is probably also a better way to implement dwarfcmp.cc's describe_mismatch.
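Schematically, the control-flow change looks like this (a sketch of the idea only, not the code parked on roland/dwarf-refcmp; the die types and their members are stand-ins):

    #include <map>

    // The recursion lives in methods of the comparator object, so the
    // reference-tracking state has a natural home outside the dwarf objects.
    template <typename die1, typename die2>
    class ref_tracking_comparator
    {
      // Populated by the tree walk, consulted by deep-in-recursion
      // reference comparisons (e.g. when comparing DW_AT_type values).
      std::map<typename die1::offset_type, typename die2::offset_type> matched;

    public:
      bool
      equals (die1 const &a, die2 const &b)
      {
        if (a.tag () != b.tag () || !attributes_equal (a, b))
          return false;
        matched[a.offset ()] = b.offset ();

        // Recurse via the comparator, not via operator== on the DIE types.
        typename die1::children_type::const_iterator i = a.children ().begin ();
        typename die2::children_type::const_iterator j = b.children ().begin ();
        for (; i != a.children ().end () && j != b.children ().end (); ++i, ++j)
          if (!equals (*i, *j))
            return false;
        return i == a.children ().end () && j == b.children ().end ();
      }

    private:
      // Attribute comparison elided; it is where the MATCHED map gets used
      // to resolve reference equality.
      bool attributes_equal (die1 const &, die2 const &) { return true; }
    };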
I think this is the way to work out the ref tracker first. Then we'll use it in dwarf_edit/dwarf_output construction, with a similar top-down control flow.
Thanks, Roland