On Thu, Jan 15, 2015 at 01:57:59PM +0000, P J P wrote:
Hello all,
Please see:
-> https://fedoraproject.org/wiki/Changes/Default_Local_DNS_Resolver
-> https://pjps.wordpress.com/2014/05/02/local-dns-resolver-in-fedora/
This is an upcoming F22 feature; it proposes to install a local DNSSEC-validating
DNS resolver running at 127.0.0.1:53 on Fedora systems. This feature is already
available in F21. One can easily run the local DNSSEC-enabled resolver by:
$ sudo yum install dnssec-trigger
$ sudo systemctl enable dnssec-triggerd.service
$ # disable and stop any existing DNS service, e.g., dnsmasq
$ sudo systemctl start dnssec-triggerd.service
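To check that the local resolver is answering and validating (a quick sanity
test, assuming the dig tool from bind-utils is installed), query it directly
and look for the "ad" (authenticated data) flag on a signed zone:
$ dig @127.0.0.1 fedoraproject.org A +dnssec
$ # a validated reply carries the "ad" flag in the header, e.g.:
$ # ;; flags: qr rd ra ad; QUERY: 1, ANSWER: ...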
It works for most use cases, though Docker (or container) applications seem to
face problems accessing the host's DNS resolver at 127.0.0.1:53. I'm no expert
on Docker (or container) applications, so I was wondering if someone could help
in testing Docker (or container) applications with the local DNSSEC-validating
resolver on F21.
Any results from this exercise would be immensely helpful in fixing bugs and
sorting out edge cases, thus making the solution robust and ready for the F22
release. I'm willing to help in any way I can. As always, your comments and
suggestions are most welcome!
NB this won't just be a Docker problem. It has the potential to affect any
container technology. E.g. a simple libvirt LXC container can be set up sharing
the filesystem namespace but with a separate network namespace, and so it will
no longer have access to the same 127.0.0.1:53 binding. And libvirt-sandbox can
set up KVM guests where the guest runs off a readonly passthrough of the host's
/, but obviously the KVM guest will not see any host network interface.
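The loopback isolation is easy to demonstrate without any container runtime
(a minimal sketch using iproute2; the namespace name is arbitrary). A fresh
network namespace gets its own loopback device, so nothing answers on
127.0.0.1:53 there:
$ sudo ip netns add demo
$ sudo ip netns exec demo ip link set lo up
$ sudo ip netns exec demo dig @127.0.0.1 fedoraproject.org
$ # fails (connection refused): this loopback has no resolver bound to it
$ sudo ip netns delete demo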
I think that probably the solution will have to involve any such apps doing
a bind mount on /etc/resolv.conf to replace the file with alternative content.
Historically they tried to avoid that, because doing a bind mount onto
/etc/resolv.conf would prevent the host OS from unlinking/replacing that
file. With this dnssec change though, /etc/resolv.conf will never get its
content changed once the OS is booted, so this bind mount problem would
no longer be an issue.
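As a rough sketch of that approach (the file path and nameserver address here
are purely illustrative), the container setup code would write an alternative
resolv.conf and bind mount it over the host copy inside the container's
private mount namespace:
$ echo "nameserver 192.168.122.1" > /tmp/demo-resolv.conf
$ sudo unshare --mount sh -c '
>   mount --make-rprivate /                              # keep mounts local
>   mount --bind /tmp/demo-resolv.conf /etc/resolv.conf  # shadow the host copy
>   cat /etc/resolv.conf'
The host's /etc/resolv.conf is untouched; only processes inside that mount
namespace see the alternative content.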
An alternative would be for the container technology to set up some kind
of port forward, so that 127.0.0.1:53 inside the container gets magically
forwarded to 127.0.0.1:53 outside the container.
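One way to sketch such a forward (a toy relay using socat, not production
grade; the socket path is illustrative, and UDP message boundaries only
survive simple single-packet exchanges) is to bridge the two loopbacks over
a UNIX socket, which crosses network namespaces as long as the filesystem
is shared:
$ # on the host: relay a filesystem UNIX socket to the real resolver
$ sudo socat UNIX-LISTEN:/run/dns-relay.sock,fork UDP:127.0.0.1:53 &
$ # inside the container: accept DNS on loopback, pass it to the host relay
$ sudo socat UDP-LISTEN:53,bind=127.0.0.1,fork UNIX-CONNECT:/run/dns-relay.sock &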
A slightly off-the-wall idea would be for resolv.conf to not mention
127.0.0.1:53 at all, and for the resolver to listen instead for DNS requests
on a UNIX socket in the filesystem, e.g. have resolv.conf refer to
"/var/run/resolv.sock". A container starting a new network namespace but
sharing the mount namespace could still access that UNIX socket. If it had
a private mount namespace, it could bind mount the UNIX socket in. The big
problem is that while you could probably update glibc easily enough, there
are probably a fair few apps that directly parse /etc/resolv.conf and would
almost certainly get confused by a UNIX domain socket. So this idea would
probably not fly.
Unless perhaps glibc could just try /var/run/resolv.sock unconditionally, and
only if that's missing fall back to /etc/resolv.conf, which would still
contain 127.0.0.1:53. That way most apps would just work with the UNIX socket,
and resolv.conf would still offer an IP-based socket for those which need it.
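For illustration (the socket path comes from the idea above, the container
rootfs path is hypothetical, and no resolver listens on such a socket today),
giving a container with a private mount namespace access to the socket would
just be another bind mount, since bind mounts work on socket files too:
$ # on the host, before starting the container:
$ sudo touch /srv/ctr/rootfs/var/run/resolv.sock
$ sudo mount --bind /var/run/resolv.sock /srv/ctr/rootfs/var/run/resolv.sock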
All the solutions need some work - I can't see any obvious way of making
the container problem you describe "just work" without the developers
of such software doing some of it.
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|