2016-01-12 1:50 GMT+02:00 Gerald Henriksen <ghenriks(a)gmail.com>:
> On Mon, 11 Jan 2016 18:13:53 -0000, you wrote:
>> Properly packaging Big Data software is something upstream developers
>> should care more about, as not doing so costs a lot in maintenance. And
>> I am sure that upstream (most, like Databricks, Cloudera, or even Data
>> Artisans, are commercial companies, which care about development costs)
>> would rather develop new features than maintain patches on old versions
>> of their bundled dependencies.
> The problem is that there is also a development cost to upgrading to
> newer libraries, both in the work to update the code base and then the
> testing that follows.
> Despite efforts by Fedora packagers (as well as others, on occasion, on
> the Hadoop mailing lists) the upstream developers have no desire to
> spend the time and effort to update libraries until they are forced to.
> For that matter, the only fully approved version of Java for Hadoop is
> the official Oracle Java 7, which is now 9 months beyond any publicly
> available updates for security issues. They do not support OpenJDK or
> Java 8.
> In short, as previously discussed, Hadoop is like so much of the
> software developed in the last decade, where the need to constantly
> update with the language and/or libraries is viewed as a bug, and
> alternative methods have been developed to allow the project to exist
> in its own little bubble - in the case of Hadoop it's Maven, but even
> the recently open sourced Swift is developing its own package system,
> just like Python, Perl, Go, etc.
> As long as they can stay in that bubble there is little to no
> development cost to not updating the code base.
> As long as that continues, Hadoop is impossible to package properly in
> rpm form for Fedora.
> [obviously one can wrap up the jar files into an rpm, as some
> commercial Hadoop companies do. But they could never be in Fedora
> proper, and if you are going to go through the hassle of wrapping a jar
> into an rpm, you may as well move with the present and do a Docker
> image instead.]
Ok, thanks. Everything you say is clear and makes sense, unfortunately...
So, it is a little despairing: we should wait until the big data
development ecosystem implodes under its own complexity, and then hope
that the big data developer community will be wise enough to pick up the
best practices which have allowed the Linux distributions to succeed so
far.
>> By the way, that practice in Apache Spark causes headaches for a lot of
>> users (search, for instance, for "Spark NoSuchFieldError"), even
>> experienced ones, as runtime errors pop up out of nowhere, and debugging
>> them is quite difficult. The dependency graph of a typical Spark-based
>> application lists dozens of libraries with not always compatible
>> versions, most of them duplicated, and part of them bundled and patched.
> I have no experience with Spark, so I cannot comment on your example
> other than to say that fixing that sort of issue relies on buy-in from
> upstream to keep updating the code base, and upstream may not be
> willing. The secondary issue is that even if the Spark (or any other
> Hadoop related project) developers want to do things correctly, they
> are still stuck with the decisions made by Hadoop.
Yes, you are right.
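To give an idea of the debugging involved: when one of those
NoSuchFieldError (or NoSuchMethodError) crashes shows up, the usual first
step is to find out which jar the offending class was actually loaded
from, since two incompatible versions of the same library typically sit
on the classpath. A minimal sketch of that check (the Guava class below
is just an example, not a reference to any specific Spark failure):

// Minimal diagnostic sketch: print which jar a class was loaded from.
// The default class name is only an example; pass the class reported
// by the NoSuchFieldError on the command line instead.
import java.security.CodeSource;

public class WhichJar {
    public static void main(String[] args) throws ClassNotFoundException {
        String name = args.length > 0
                ? args[0]
                : "com.google.common.base.Stopwatch"; // example: Guava
        Class<?> cls = Class.forName(name);
        // getCodeSource() may return null for bootstrap classpath classes
        CodeSource src = cls.getProtectionDomain().getCodeSource();
        System.out.println(name + " was loaded from "
                + (src == null ? "<bootstrap>" : src.getLocation()));
    }
}

Run it once inside the application's own environment (for Spark, e.g.
from a job submitted with spark-submit) and once against the plain
dependency jars; when the two locations differ, that usually points at
the duplicated or bundled copy.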
>> Fixing that kind of headache should not be the work of a user. To my
>> belief, that is exactly what Linux distributions are made for: shipping
>> an ecosystem of components for which the versions are known to work
>> well together.
> 1) the distributions simply don't have the available manpower to do it.
> At this point there are so many different projects with different
> library needs that it would take a lot of programmers/packagers years
> just to get everybody up to date, and with too many upstreams
> unwilling to update their codebases with the changes it would remain
> an ongoing problem.
The Linux distribution packagers could all unite. But given the above
(and below), it is worthless until the upstream developer community
drastically changes its perception of the value of packaging.
> 2) the users don't care - as posted by one of the Red Hat(?) members
> of the list, the users don't trust a properly packaged Hadoop.
> Part of the problem is that Java is too much of a nightmare to package:
> it results in far too many rpm files, so to a user it looks too
> complicated compared to downloading 1 jar file.
> But in the end it doesn't matter, as no user support means it is
> difficult to justify the hours of work required.
Ok. I am of course biased as a user (and was inclined to believe that
more users would be like me, wanting clean big data packages within a
standard Linux distribution) :)
Thanks again for the clarification.
So, back to the initial question: what do we do? Should we drop Hadoop
from Fedora?
I would favor continuing to try to keep a working version of Hadoop, as
up-to-date as possible, on Fedora (and hopefully on CentOS). It would be
a shame to throw away the man-years of investment/work done by the
Fedora packagers.
[By the way, thanks to gil, Hadoop builds again on Rawhide. (I still have
to apply his patch, though, so that it finds its way into the Git
repository.)]