On Wed, Nov 6, 2013 at 6:58 PM, Ivan Afonichev <ivan.afonichev(a)gmail.com> wrote:
> So what is the decision of the community?
Hi,
I've taken a look at the hadoop spec, and built the httpfs
sub-package. It is packaged as a classic all-in-one-dir "catalina
base". I believe this goes against the guidelines [1], which state
that "Fedora packages must follow the FHS".
> Is it good to have some /usr/share/*/bin/*.sh files?
> They are not needed for the tomcat package itself, but some
> not-so-systemd'ed stuff like hadoop's httpfs may be happy to use them.
As I understand it, we cannot follow hadoop upstream's packaging, just
like the tomcat package doesn't follow its upstream's. Also, a
"standard" java WAR (unpacked here) carries its JARs in its
WEB-INF/lib directory, which seems to go against the java packaging
guidelines [2] as well: "All architecture-independent JAR files MUST
go into %{_javadir} or [...] %{_javadir}-*".
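
The usual way out is to install the JARs under %{_javadir} and symlink
them back into WEB-INF/lib, for instance with build-jar-repository. A
rough sketch for the spec's %install, with hypothetical paths and jar
names:

    # drop the bundled jars, then symlink the packaged ones back in
    rm -f %{buildroot}%{_datadir}/hadoop/httpfs/webapps/webhdfs/WEB-INF/lib/*.jar
    build-jar-repository -s \
        %{buildroot}%{_datadir}/hadoop/httpfs/webapps/webhdfs/WEB-INF/lib \
        commons-daemon json-simple
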
> Should we package the original upstream catalina.sh, or should we
> create some "service tomcat $@" emulation of it?
There actually is a systemd service [3] in the tomcat package, and
hadoop has a similar one [4]. The difference is that tomcat doesn't
stick to upstream's scripts, because the guidelines don't allow them
to work the way they do.
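
For reference, [3] boils down to something like this (quoted from
memory, details may differ):

    [Unit]
    Description=Apache Tomcat Web Application Container
    After=syslog.target network.target

    [Service]
    Type=simple
    EnvironmentFile=-/etc/tomcat/tomcat.conf
    ExecStart=/usr/sbin/tomcat start
    ExecStop=/usr/sbin/tomcat stop
    User=tomcat
    Group=tomcat

    [Install]
    WantedBy=multi-user.target
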
Dridi
[1] https://fedoraproject.org/wiki/Packaging:Guidelines#Filesystem_Layout
[2] https://fedoraproject.org/wiki/Packaging:Java#Installation_directory
[3] http://pkgs.fedoraproject.org/cgit/tomcat.git/tree/tomcat-7.0.service?h=f19
[4] http://pkgs.fedoraproject.org/cgit/hadoop.git/tree/hadoop-httpfs.service
2013/10/31 Robert Rati <rrati(a)redhat.com>
>
> On 10/25/2013 06:32 PM, Dridi Boukelmoune wrote:
>>
>> On Wed, Oct 23, 2013 at 7:03 PM, Robert Rati <rrati(a)redhat.com> wrote:
>>>
>>> I should mention that I'd actually tested the functionality and done
>>> all the work needed for it to make it into hadoop. The
>>
>>
>> What have you tested exactly? Have you manually added catalina.sh & co.
>> somewhere on your system and tested your package?
>
>
> I've just tested the setup/deployment for my use case. I tested with the
> shell scripts from the version of tomcat downloaded by the hadoop build,
> and the Fedora tomcat rpm bits. Once I set up the dir structure properly
> and set up the hadoop httpfs config, I was able to start the service
> through systemd and access it as expected.
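>
> Concretely, that last step was something like this (14000 being
> httpfs's default port; the user name is just my test setup):
>
>     systemctl start hadoop-httpfs.service
>     curl "http://localhost:14000/webhdfs/v1/?op=GETHOMEDIRECTORY&user.name=hdfs"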
>
>
>>> missing piece is the tomcat shell scripts. If those are packaged then I
>>> just need to do a little work and I can include the functionality in
>>> hadoop.
>>
>>
>> How little? Have you tried to replace catalina.sh in httpfs.sh [1]?
>>
>> -exec ${HTTPFS_CATALINA_HOME}/bin/catalina.sh "$@"
>> +exec /usr/sbin/tomcat "$@"
>>
>> And maybe set a proper TOMCAT_CFG environment variable pointing to the
>> config for this very package.
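>>
>> Something along these lines at the top of httpfs.sh, with a
>> hypothetical config path:
>>
>>     export TOMCAT_CFG=/etc/hadoop-httpfs/tomcat.conf
>>     exec /usr/sbin/tomcat "$@"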
>
>
> I'd really prefer not to have to come up with some non-upstreamable
> implementation here. What's the harm in packaging the shell scripts and
> allowing the projects to run tomcat webapps as they wish?
>
>
>
>> Where can I find a spec with your current work (and a testing
>> procedure) to understand the problem better? Btw, I'm not the tomcat
>> maintainer, just a regular tomcat user, so this is just my opinion. As
>> I said earlier, the catalina.sh file is part of the upstream
>> all-in-one-dir bundles. I don't think catalina.sh is in Fedora's
>> tomcat package; it's also not in Debian's [2] tomcat6 and tomcat7
>> packages.
>
>
> Hadoop has been packaged for F20. You can find it in koji. To build the
> bits I'm talking about, you'll need to build the httpfs component
> (currently disabled in the spec, but I think it should build/install
> cleanly, although I haven't tested that since 2.0.5-8 or so). You'll
> also need to skip the patch that prevents tomcat from being downloaded
> by the maven build process. That patch will eventually be needed for
> koji builds once the tomcat shell scripts get packaged.
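>
> Roughly, assuming the spec gates httpfs behind a conditional (details
> from memory, adjust as needed):
>
>     fedpkg clone -a hadoop && cd hadoop
>     # re-enable the httpfs sub-package in hadoop.spec and drop the
>     # patch that blocks maven from downloading tomcat, then:
>     fedpkg local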
>
>
> Rob
>