Hi Petr,
On Tue, 2014-12-02 at 21:42 +0100, Petr Machata wrote:
> We now require callers to pass DWARF_GETMACROS_START to start the
> iteration.  0 is still accepted, but signals to libdw that the
> iteration request comes from an old-style caller, and that opcode 0xff
> should be rejected when iterating .debug_macro, to avoid confusion.
> [...]
> +/* Token layout:
> +
> +   - The sign bit is used for distinguishing between .debug_macinfo
> +     iteration (when unset) and .debug_macro iteration (when set,
> +     i.e. negative values).  The mask is DWARF_GETMACROS_DMACRO.
> +
> +   - The next highest bit is used for distinguishing between callers
> +     that know that opcode 0xff may have one of two incompatible
> +     meanings.  The mask that we use for selecting this bit is
> +     DWARF_GETMACROS_START.
> +
> +   Besides that, a token value of 0 signals the end of iteration, and
> +   -1 is reserved for signaling errors.
> +
> +   That means that on a 32-bit machine, 30 bits are available for the
> +   offset, and therefore the maximum macro unit size is 1GB.  Also,
> +   because -1 is reserved, it is impossible to represent the maximum
> +   offset of a .debug_macro unit to new-style callers (which in
> +   practice decreases the permissible macro unit size by one more
> +   byte).  */
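If I read the layout right, it amounts to something like the sketch
below (the mask values here are made up for illustration; the real
DWARF_GETMACROS_DMACRO and DWARF_GETMACROS_START would come from your
libdw.h changes):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-ins for the masks the patch defines.  */
#define GETMACROS_DMACRO PTRDIFF_MIN                    /* sign bit */
#define GETMACROS_START (PTRDIFF_MAX - PTRDIFF_MAX / 2) /* next highest bit */

static bool
token_is_dmacro (ptrdiff_t token)
{
  /* Negative tokens mean .debug_macro iteration.  */
  return token < 0;
}

static bool
token_is_new_style (ptrdiff_t token)
{
  /* A caller that passed DWARF_GETMACROS_START knows that opcode
     0xff can be ambiguous.  */
  return (token & GETMACROS_START) != 0;
}

static ptrdiff_t
token_offset (ptrdiff_t token)
{
  /* The remaining bits (30 of them on a 32-bit machine) carry the
     offset, hence the 1GB macro unit limit mentioned above.  */
  return token & ~(GETMACROS_DMACRO | GETMACROS_START);
}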
I am wondering whether we really need to track both issues in the
token/offset. It looks like we only need to track old- versus
new-style callers (whether opcode 0xff is allowed). Whether we are
using macinfo or macro seems to be discoverable from the call made.
If the user calls dwarf_getmacros, they provide the CU DIE. From that
we can determine whether we are using macinfo or macro by looking for
a DW_AT_macro_info or DW_AT_GNU_macros attribute.
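Something like this sketch (untested, just using the existing
dwarf_hasattr interface):

#include <dwarf.h>
#include <elfutils/libdw.h>
#include <stdbool.h>

static bool
cu_uses_debug_macro (Dwarf_Die *cudie)
{
  /* A CU that refers to .debug_macro carries DW_AT_GNU_macros; a CU
     that refers to .debug_macinfo carries DW_AT_macro_info instead.  */
  return dwarf_hasattr (cudie, DW_AT_GNU_macros) != 0;
}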
If the user calls dwarf_getmacros_off, then they are a new-style
caller by default, because that function didn't exist previously. And
it only works for .debug_macro, because it is meant for the
transparent_include-style inclusions, which don't exist with
.debug_macinfo.
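So a new-style caller could simply iterate like this (a sketch,
assuming the dwarf_getmacros_off signature from your patch):

#include <elfutils/libdw.h>
#include <stddef.h>

static int
mac_cb (Dwarf_Macro *macro, void *arg)
{
  (void) macro;
  (void) arg;
  /* ... handle one macro entry ...  */
  return DWARF_CB_OK;
}

static int
iterate_macro_unit (Dwarf *dbg, Dwarf_Off macoff)
{
  ptrdiff_t tok = 0;  /* no special start token needed */
  while ((tok = dwarf_getmacros_off (dbg, macoff, mac_cb, NULL, tok)) != 0)
    if (tok == -1)
      return -1;  /* error, see dwarf_errmsg */
  return 0;
}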
Sorry to bring this up after you have written all this code. But if we
can use a simpler token encoding, then I think we should.
Thanks,
Mark