On Tue, 15 Apr 2014, William Brown wrote:
How do you setup DNS over TLS?
Unbound already has this capability built in. It is activated via unbound-control (currently by dnssec-triggerd, in the future by NM) using the keywords tcp-upstream or ssl-upstream.
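To give a rough idea, the switch can be done at runtime with unbound-control; a minimal sketch (the forwarder address is only a placeholder, and which option dnssec-triggerd actually sets depends on what the probe finds):

    # tell unbound to reach its upstream forwarders over TLS (or plain TCP)
    unbound-control set_option ssl-upstream: yes
    # or: unbound-control set_option tcp-upstream: yes

    # point the "." forwarder at the chosen upstream (192.0.2.1 is an example)
    unbound-control forward 192.0.2.1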
I meant for say bind, but okay.
bind does not support this.
For a detailed list you will have to check the source code. But it includes things like DNSSEC records, proper wildcard NSEC(3) records, CNAME support, EDNS0 support, packet sizes, etc. These are the known bugs in older versions of common DNS software - cases the IETF has actually experienced in the wild.
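To give a feel for the kind of probes involved (the real checks live in the dnssec-trigger source; the resolver address below is just an example), you can do checks in the same spirit by hand with dig:

    # does the resolver pass through DNSSEC records and EDNS0 with a large buffer?
    dig @192.0.2.1 +dnssec +bufsize=4096 fedoraproject.org DNSKEY

    # does it return proper NSEC/NSEC3 denial-of-existence for a nonexistent name?
    dig @192.0.2.1 +dnssec doesnotexist.fedoraproject.org A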
I.e., if I have an out-of-the-box bind9 setup with a few zones, or even 100s of zones, these cases should never be triggered. I would hate to see the "dodgy DNS" check give a false positive on networks that are actually sane ... Such checks need to be conservative in their triggers IMO.
Correct. It only happens for bind4/bind8 or broken old bind9s, djbdns/dnscache, but mostly because of 5 year old dnsmasq versions embedded in platforms as a "dns proxy".
Even if we ignore the TTL mangling, the first issue of incorrect cached zone data moving between networks is a real-world issue IMO. As previously mentioned: split-view business networks. I believe you have said this is solved by flushing the "." forwarder between networks that are "secure".
Correct. If an ISP starts modifying DNS content, it is simply an attack. You have no trust relationship with them.
The reason I ask that these be documented is so that when other network admins (like myself) come along, you have already had the argument and provided the justification and detailed explanations of these "edge cases".
Understood.
"suboptimal route". Your workaround will actually be detrimental to the user experience.
Note, I'm trying to optimise that path too, see: http://tools.ietf.org/html/draft-ietf-dnsop-edns-chain-query-00
These two statements really seem to contradict. On the one hand you say that when moving between secure networks, the "." forwarder gets flushed. But then you say the whole point is that it isn't flushed!
The number of flushes should be limited as much as possible. It is only to accommodate certain networks that we flush the cache. Our preference is to never flush. But we accept that sometimes it cannot be avoided in order to support certain types of DNS deployments.
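For reference, the flush itself is a single unbound-control operation that drops everything at and below a name; a sketch of what flushing the "." cache looks like:

    # drop cached data for the root and everything under it
    unbound-control flush_zone .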
On my 3G tether, and at work, both would be secure wifi, so according to this both flush (which, really, I like :) ). But according to what you are saying they shouldn't do that, yet they do?
That is the price we have to pay to support some kinds of setups. We can also add an option that tells us not to flush certain (secure) networks because we know there is no special casing there. Those are tunings we can do later.
Really, it seems like the only time the cache *won't* flush is when I move from a secure wifi to an insecure wifi. What happens when I move from the insecure wifi back? I would like to argue that, given not all domains have DNSSEC yet, you can't "trust" the records from the insecure wifi, so at the least, when the insecure wifi interface goes down, you should flush the non-DNSSEC cached records.
Whether the network is "secure" or "insecure" only has an effect on the forwarder state, and thus potentially on certain domains handled by that forwarder. DNSSEC validation is not skipped in those cases, so data can still be trusted. Non-DNSSEC domains are always vulnerable to a MITM. Since they can just sign their domains, I personally don't feel we need to go out of our way to accommodate those insecure setups. If people feel differently, again, we could tune this and add a toggle.
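As a quick sanity check, assuming the validating unbound is listening on 127.0.0.1, you can see whether an answer was actually validated by looking for the "ad" flag:

    dig @127.0.0.1 +dnssec fedoraproject.org A
    # a validated answer shows "flags: qr rd ra ad" in the header;
    # unsigned domains come back without "ad" and remain spoofable on path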
Collecting all this, it seems to mean (current functional state):
Secure to secure network -> Flush "." cache.
Secure to insecure network -> Keep cache.
Insecure to insecure network -> Keep cache.
Insecure to secure network -> Keep cache.
I think in a perfect world, assuming that insecure networks are insecure, shouldn't it be:
Secure to secure network -> Flush "." cache.
Secure to insecure network -> Keep cache.
Insecure to insecure network -> Keep DNSSEC cache only.
Insecure to secure network -> Keep DNSSEC cache only.
I'll think about these a little more. Note that "keep DNSSEC cache only" is currently not an option implemented by unbound.
The only records you can really guarantee as being the same on all network views are ones signed by DNSSEC.
Not really; you can have differently signed zones for the same name for the internal and external view. Hopefully with at least the same DNSKEY, but even that could be different. It would require manual configuration of files in /etc/unbound/*.d/ though.
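A rough sketch of what that manual configuration could look like (file name, domain, forwarder address and DS record are all made up for illustration):

    # /etc/unbound/conf.d/corp.conf  (hypothetical example)
    server:
        # trust anchor for the internally signed view of corp.example
        trust-anchor: "corp.example. DS 12345 8 2 0000000000000000000000000000000000000000000000000000000000000000"
    forward-zone:
        name: "corp.example."
        forward-addr: 10.0.0.53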
Paul