On Mon, 14 Apr 2014, William Brown wrote:
What is a "captivity-sign" as you so put it?
The check for a clean port 80. It fetches the URL specified in dnssec-triggerd.conf's url: option (default http://fedoraproject.org/static/hotspot.txt).
If it returns a redirect, or a page that does not contain the exact text "OK", it knows a hotspot has intercepted the page and will prompt the user to log in to the hotspot. If the user agrees, resolv.conf is filled in with the DHCP-obtained values and it fires off xdg-open on the page http://hotspot-nocache.fedoraproject.org/ which is a special DNS entry with TTL=0 so it can never be cached (so we will go through the DNS lies that are told about the name).
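Roughly, the logic of that check looks like this (only a sketch of the behaviour, not dnssec-trigger's actual code):

  page=$(curl -s http://fedoraproject.org/static/hotspot.txt)
  if [ "$page" != "OK" ]; then
      # redirected or rewritten page: assume a captive portal
      # (the real tool first asks the user and fills in resolv.conf from DHCP)
      xdg-open http://hotspot-nocache.fedoraproject.org/
  fi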
When port 80 becomes clean, it is assumed you have "logged on" and it then runs various DNS/DNSSEC tests against TLD servers for known features and bugs in old DNS software. This determines whether DNS is still being messed with. If the forwarder shows broken behaviour, an attempt is made to bypass it as I described before.
sudo unbound-control forward_add starfish 10.1.2.3
sudo unbound-control flush starfish
sudo unbound-control flush_requestlist
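(Here "starfish" stands in for the domain handed out by DHCP and 10.1.2.3 for its resolver; flush drops cached records for that name and flush_requestlist abandons queries that are currently being worked on.)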
When you leave the network, forward_remove is called.
sudo unbound-control forward_remove starfish
sudo unbound-control flush starfish
sudo unbound-control flush_requestlist
Okay, so let's expand this to my workplace, which runs a university network. We have thousands of students connected. Now, we have many zones on our network: services.university.edu, university.edu, medicalcenter.org, ersearch.com, etc.
We can't possibly put all of these into our "domain-name" DHCP option. IIRC it's a single-value attribute anyway.
As we indicated, for "trusted" networks (LAN, secure wifi) a domain of "." will be used, which means "forward everything". This does NOT mean we stop being a recursor. We still recurse because we need to perform DNSSEC validation. We just use the available DNS cache of the local network - which also gets us your internal-only domains.
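On such a network that boils down to something like the following; the address is purely illustrative, standing in for whatever resolver DHCP handed out:

  sudo unbound-control forward 192.0.2.1    # forward "." (everything) to the local resolver
  sudo unbound-control flush_zone .         # drop whatever was cached under the previous view
  sudo unbound-control flush_requestlist    # abandon queries already in flight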
Sure, let's agree we "need" DNSSEC, and it follows that we need a cache.
Set cache times to be deliberately low so that silly network admins don't break things (even 300).
I still don't see a need for artificially lowered cache times.
Don't try to bypass the local network DNS: there are more network configurations in the world than you or I can contemplate, and bypassing this *will* break things for people.
The publisher determines the TTL, not the consumer. And if we add a forward for "." meaning all domains, then we also run a cache flush for "." meaning all domains. So I don't think TTL matters in this case at all.
If you want to cache, then you can't assume that what I cache on network A will be valid on network B. Consider the home user with the dodgy ISP that sets all TTLs to, say, 30 days. Do you want that user to take that cached entry to a working network and be using that cache for 30 days? (Or whatever unbound sets its TTL max to.)
Yes. The problem here is the dodgy ISP. If they are dodgy enough, unbound will bypass them anyway. If we need to add an NM option for "don't use this dodgy ISP's DNS servers" we can also add that.
But you can't really tell what's a dodgy DNS and what's not.
Yes we can. There is both dnssec-trigger and some other software that runs various tests for this.
There are plenty of good ISPs with well-configured DNS systems that you *should* use as a forwarder. Again, you can't determine what zones exist in this DNS server so that you can use it "just for those" and bypass it for all else.
See earlier discussion. If a wire or secure wifi, and working ISP DNS, we will use it. And flush it.
Consider also that some ISPs force all port 53 traffic to their own DNS servers. How does unbound know when the ISP is forcing this?
unbound does not really care about transparent proxies on port 53, as long as they don't break DNS (and DNSSEC). If they redirect port 53 to some broken DNS server, unbound will try to work around it. If port 53 is broken it will attempt DNS over TCP port 80 to various fedoraproject DNS servers, or DNS over TLS on port 443.
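You can poke at the plain-TCP fallback yourself with dig; the address below is a placeholder, not one of the actual fallback servers (those live in dnssec-trigger's configuration):

  dig +tcp -p 80 @192.0.2.1 . SOA +dnssec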
Essentially, what I'm hearing at the moment is that the proposal isn't just a caching DNS server: it's a DNS server that will be:
- DNSSEC-validating
- Caching
- Always attempting to bypass my local DNS forwarder.
I hope I clarified it now that your third bullet point is not the case.
Sure it helps. But this is DNSSEC helping, not the cache.
I've said everything about caching already. I understand you deem it evil and I explained why I believe you are wrong. We disagree.
If the DNS cache were the only reason Windows machines need a reboot, I'm sure Microsoft would have fixed that by now. Let's remain honest here and say there are 1001 reasons why Windows users reboot their machines. DNS might be one of them, but it has no relationship to the discussion we are having right now.
That's deflecting the point.
No, bringing up Windows which has nothing to do with anything we are talking about here was deflecting the point.
I'm glad that the NM integration is being considered; that will help. I might not be afraid to touch a CLI, but I do think of users who use the GUI only.
This is why we did not want to force everyone on dnssec-triggerd. We know that solution is not good enough for non-devs.
DNS as a caching system has worked because the caches on networks don't move: they have one view of the world and they stay put. If you have a laptop or other system that moves around, and you take that view of the DNS world with you, things will become screwy and might break in subtle ways that ordinary users can't explain.
This is a reality already. Every time your phone switches from 3G/LTE to wifi. When I walk across the street, that happens many times. I'm pretty sure my phone won't be flushing its cache all the time.
In summary, all I ask is that:
- If a forwarder exists on the network, unbound uses it for all queries.
Yes, but not for open wifi. Only for physical wire and secured wifi.
- If that forwarder returns an invalidly signed DNSSEC zone then you
bypass it for only that zone. (I.e. the zone is being tampered with.)
That's not how things work. The DNS server is either capable or not capable of doing DNSSEC. That is not a "per zone" thing. If it fails to return RRSIG signature records for the root zone, there is nothing you can do but forget about that server. (technically speaking, I do what you say, if you consider "." to be "only that zone")
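That root-zone check is easy to reproduce by hand against any forwarder (address illustrative):

  dig +dnssec @192.0.2.1 . SOA
  # a usable forwarder returns an RRSIG alongside the SOA; if it does not,
  # a validating resolver has to give up on it and recurse (or fall back) on its own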
- Unbound flushes its cache between interface state changes, because
you are moving between networks with different DNS views of the world.
I am not convinced that is required. It does a lot of damage too.
- That you keep the DNS cache time short, to help avoid issues with DNS
admins who forcefully increase TTLs. Consider google, with a TTL of 300. Perhaps even set each cached record to have a cache time of its TTL or 3600, whichever is lower.
No. As I stated repeatedly, we are NOT in the business of modifying DNS records. If people publish long TTLs, we will honor those TTLs. Doing otherwise is similar to launching an attack on the nameservers of those domains, which might not be able to handle such short TTLs. Imagine if I run a domain using a nameserver on my DSL line, with a TTL of 7200. The name gets known, and everyone starts hammering it because middle boxes cut the TTL to 300. That's irresponsible.
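(For completeness: unbound already ships a per-machine knob for exactly what is being asked, cache-max-ttl in the server: section of unbound.conf; the point above is only that clamping TTLs must not be the distribution default.)

  server:
      cache-max-ttl: 3600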
I'm trying to think about the "user experience" of Fedora here rather than a technically perfect world. These suggestions would eliminate all the concerns I have with this system and would hopefully make the default experience better. :)
I think we are fairly close to agreement on what's needed. Thank you for discussing this with us. It is clear now that we must flush the entire cache when we use a forwarder for more than one domain (e.g. not the VPN cases) when using authenticated networks. That is something I had not considered before.
Paul