[Crash-catcher] small bug
by Denys Vlasenko
Nikola,
void CDebugDump::Create(const std::string& pDir, int64_t uid)
{
    ...
    SaveText(FILENAME_UID, ssprintf("%li", uid));
    SaveKernelArchitectureRelease();
    SaveTime();
}
int64_t is not the same as long. Printing it with %li is not correct.
One solution is to cast: (long)uid
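A standalone sketch of both options (note that the plain (long) cast still
truncates on targets where long is 32-bit; the PRId64 macro from <cinttypes>
is the fully portable fix):

#include <cinttypes>  // PRId64 (also pulls in int64_t)
#include <cstdio>

int main()
{
    int64_t uid = 500;

    // Broken on typical 32-bit targets, where int64_t is long long,
    // not long, so "%li" reads the argument wrongly:
    //   std::printf("%li\n", uid);

    // Fix 1: cast so the argument matches the format specifier
    // (still truncates values outside long's range where long is 32-bit).
    std::printf("%li\n", (long)uid);

    // Fix 2: the portable format macro for int64_t.
    std::printf("%" PRId64 "\n", uid);

    return 0;
}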
--
vda
[Crash-catcher] todo
by Jiri Moskovcak
Just a reminder of important things we should take care of asap:
* switch to better config files:
- we don't write the configuration at all, so enabling/disabling plugins
doesn't survive a daemon restart
- the information about whether a plugin is enabled should live in the
plugin's config file
- the behaviour should be similar to yum's (see the sketch after this list)
* make kerneloopses report themselves automatically
- maybe just use the abrtd cron function and run the kerneloops reporter
once in a while?
* improve the dupe checker
* fix the python hook to make it quiet if it fails
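For the yum-like behaviour, something like this per-plugin file could work
(the path and key names are just a sketch, not an existing abrt layout; yum
keeps its equivalents in /etc/yum/pluginconf.d/*.conf):

# /etc/abrt/pluginconf.d/Kerneloops.conf (hypothetical path)
[main]
enabled = 1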
[Crash-catcher] java reports by crash catcher
by Mark Wielaard
Hi,
It would be nice if crash-catcher could be taught about the hs_err_pid*.log
files that a crashed java process creates. That file contains much more
information relevant to the crash than the gdb backtrace that is
currently collected. If it could check whether there is an hs_err_pid###.log
file (where ### is the process id of the java process that crashed) and
attach that to the bug report it files, that would be appreciated.
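Something like this rough sketch could work (the helper name, the cwd
argument, and the /tmp fallback are my assumptions, not abrt code):

#include <cstdio>     // snprintf
#include <string>
#include <unistd.h>   // access

// Sketch only: given the pid of the crashed java process and its
// working directory, look for the HotSpot error log. HotSpot writes
// hs_err_pid<pid>.log into the process's cwd by default, falling back
// to /tmp; -XX:ErrorFile can move it elsewhere, which this ignores.
static std::string find_hs_err_log(long pid, const std::string& cwd)
{
    char name[64];
    snprintf(name, sizeof(name), "hs_err_pid%ld.log", pid);

    std::string candidates[2];
    candidates[0] = cwd + "/" + name;
    candidates[1] = std::string("/tmp/") + name;

    for (int i = 0; i < 2; ++i)
        if (access(candidates[i].c_str(), R_OK) == 0)
            return candidates[i];
    return std::string();  // not found
}

If the file is found, it could be saved into the dump directory alongside
the backtrace.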
Thanks,
Mark
[Crash-catcher] eu-unstrip -n fails to process a coredump
by Denys Vlasenko
Hi,
I have a coredump which cannot be processed by eu-unstrip.
It is from a crash of firefox's nspluginwrapper. The crashed binary is
/usr/lib/nspluginwrapper/npviewer.bin, and
"ldd /usr/lib/nspluginwrapper/npviewer.bin" shows that
it is apparently a normal dynamically linked program.
Other people who are working with me on abrt
also say that some firefox crashes can't be processed.
It's likely they refer to this (or a similar) problem.
[CC-ing abrt ml]
The coredump is about 200 MB (20 MB bzipped); I can send it
on request.
eu-unstrip just says this and exits with exit code 1:
# eu-unstrip -n --core=coredump.big
eu-unstrip: coredump.big: Callback returned failure
I built unstrip from current git and it does the same.
With some instrumentation, I see that execution deviates from
the "normal" flow (the one I see with good coredumps) in
libdwfl/link_map.c, report_r_debug(), here:
  ...
  GElf_Addr next = addrs[0];
  Dwfl_Module **lastmodp = &dwfl->modulelist;
  int result = 0;
  while (next != 0)
    {
      ...
      if (name != NULL && name[0] == '\0')
        name = NULL;

      /* If content-sniffing already reported a module covering
         the same area, find that existing module to adjust.
         The l_ld address is the only one we know for sure
         to be within the module's own segments (its .dynamic).  */
      Dwfl_Module *mod;
      int segndx = INTUSE (dwfl_addrsegment) (dwfl, l_ld, &mod);
      if (unlikely (segndx < 0))
        {
          fprintf (stderr, "%s.%d: %s() we return -1 (segndx:%d < 0)\n",
                   __FILE__, __LINE__, "report_r_debug", segndx);
          return release_buffer (-1);
        }
      ...
...
The fprintf shown above triggers.
Full instrumented output is:
core-file.c.459: dwfl_core_file_report() dwfl_link_map_report ...
link_map.c.873: dwfl_link_map_report() report_r_debug(integrated_memory_callback) ...
link_map.c.210: integrated_memory_callback() dwfl_addrsegment(vaddr:7fa09549c276) returns mod->name:'ld-linux-x86-64.so.2' mod->main.name:'(null)'
link_map.c.216: integrated_memory_callback() dwfl_module_address_section ...
derelocate.c.391: dwfl_module_address_section() check_module ...
derelocate.c.291: check_module() dwfl_module_getsymtab ...
dwfl_module_getdwarf.c.752: dwfl_module_getsymtab() find_symtab ...
dwfl_module_getdwarf.c.507: find_symtab() __libdwfl_getelf ...
dwfl_module_getdwarf.c.134: __libdwfl_getelf() find_elf ...
dwfl_module_getdwarf.c.138: __libdwfl_getelf() open_elf ...
dwfl_module_getdwarf.c.66: open_elf() file->name:'(null)'
dwfl_module_getdwarf.c.71: open_elf() returns CBFAIL: fd < 0
dwfl_module_getdwarf.c.140: __libdwfl_getelf() open_elf returned
dwfl_module_getdwarf.c.760: dwfl_module_getsymtab() we return -1
derelocate.c.298: check_module() dwfl_module_getsymtab returned error 16, we return true
derelocate.c.393: dwfl_module_address_section() we return NULL (check_module != 0)
link_map.c.228: integrated_memory_callback() we got scn == NULL and return false
link_map.c.210: integrated_memory_callback() dwfl_addrsegment(vaddr:7fa09549c276) returns mod->name:'ld-linux-x86-64.so.2' mod->main.name:'(null)'
link_map.c.216: integrated_memory_callback() dwfl_module_address_section ...
derelocate.c.391: dwfl_module_address_section() check_module ...
derelocate.c.291: check_module() dwfl_module_getsymtab ...
dwfl_module_getdwarf.c.752: dwfl_module_getsymtab() find_symtab ...
dwfl_module_getdwarf.c.760: dwfl_module_getsymtab() we return -1
derelocate.c.298: check_module() dwfl_module_getsymtab returned error 16, we return true
derelocate.c.393: dwfl_module_address_section() we return NULL (check_module != 0)
link_map.c.228: integrated_memory_callback() we got scn == NULL and return false
link_map.c.389: report_r_debug() we return -1 (segndx:-1 < 0)
unstrip: coredump.big: Callback returned failure
and it differs from unstrip runs on a "good" coredump only in the last two
lines; there, the output continues like this instead:
derelocate.c.393: dwfl_module_address_section() we return NULL (check_module != 0)
link_map.c.228: integrated_memory_callback() we got scn == NULL and return false
link_map.c.453: report_r_debug() we return result:6
dwfl_module_getelf.c.58: dwfl_module_getelf() __libdwfl_getelf ...
dwfl_module_getdwarf.c.134: __libdwfl_getelf() find_elf ...
dwfl_module_getdwarf.c.138: __libdwfl_getelf() open_elf ...
dwfl_module_getdwarf.c.66: open_elf() file->name:'(null)'
dwfl_module_getdwarf.c.71: open_elf() returns CBFAIL: fd < 0
dwfl_module_getdwarf.c.140: __libdwfl_getelf() open_elf returned
dwfl_module_getdwarf.c.675: find_dw() __libdwfl_getelf ...
...and here we get the first line of the output:
0x400000+0x209000 23c77451cf6adff77fc1f5ee2a01d75de6511dda@0x40024c - - [exe]
...
Instrumentation is attached.
So, is it a bug?
--
vda
[Crash-catcher] test builds
by Denys Vlasenko
Hi,
I decided to put my debugging scripts into git, in the scripts/
directory.
This will scale better than exchanging them with each of you
one at a time.
They are all meant to be run from the top of a scratch copy of the tree.
Here they are:
scripts/dbg_mkrpm -
Builds new rpms from source and leaves them at the top of the tree.
scripts/dbg_rpmbuildlocal -
Helper for the above. I use it for many projects,
and on my box it sits in ~/bin.
scripts/dbg_rpminst -
Uninstalls all installed packages with the word "abrt" in their name,
then installs every *.rpm it sees in the current directory.
Used to install the new rpms produced by a dbg_mkrpm run.
Very dumb: it uninstalls/reinstalls in a loop until that
succeeds - "poor man's dependency tracking".
If you know how to fix that, please let me know.
scripts/dbg_unpkrpm -
Unpacks every *.rpm it sees in the current directory into UNPACKED/*.
Useful if you want to peek into an rpm's contents.
These scripts are crude in places; feel free to adapt them to your needs.
I know that Jiri installs his test builds into his home directory;
IOW, he builds the source with configure --prefix=SOMETHING.
I was also doing something like this, but stopped.
The rationale: built that way, my test builds were different from
what package users will have. Therefore I'd see some bugs
they will never see, and vice versa - I would fail to catch
some bugs which happen only with the standard build, so those
bugs would go full circle: the package is released, users hit the bugs,
users report them, and I spend lots of time reproducing their bugs
(since they don't happen to me in a non-standard build).
In short:
Currently, we need to reduce the number of problems
with the standard build.
We can debug problems with custom builds later.
--
vda