Hackathon @ Africa Internet Summit 2019
By Willem Toorop
The main objectives of the NLnet Labs foundation are the development of Open Source Software and Open Standards; this combination creates synergy, as implementing new standards yields operational and implementation experience with them. The hackathons preceding IETF meetings fit these objectives perfectly, and we have participated in them actively right from the start.
The ISOC African regional bureau has organized hackathons at the last three editions of the Africa Internet Summit, in the same spirit as those of the IETF (hacking to support Open Standards development). These hackathons also serve the additional purpose of involving the Africa region more closely in the work done at the IETF.
I personally (Willem Toorop) love participating in those IETF hackathons. I really enjoy the combination of collaboration and the no-nonsense, get-your-hands-dirty ambiance found there. I also love to tell, teach and preach about my passions (DNS, end-entity privacy and security, etc.), so I was thrilled to be given the opportunity by ISOC Africa to lead one of the hackathon tracks during the Africa Internet Summit (AIS) this year.
While preparing for the hackathon, I found out that two other Dutchies (from the RIPE NCC) were going to the AIS too. I managed to convince Jasper den Hertog to co-lead the hackathon with me, and Lia Hestina also provided invaluable support!
The hackathon took place on 19 and 20 June during the Africa Internet Summit 2019 in Kampala, Uganda. In total there were 87 participants in five hackathon tracks. Our track, “Measuring DNS and DoH” had 13 participants.
Measuring DNS and DoH
DNS-over-HTTPS (DoH), and more so the Trusted Recursive Resolvers (TRRs), are at the heart of the current debate in the IETF about privacy versus the move of core internet services (like DNS) to the cloud. What would it mean for the Africa region if Mozilla Firefox were to start bypassing the locally configured resolver and use its built-in Trusted Recursive Resolver by default (as announced)? Would it impact performance? Would it be beneficial to provide a local DoH service? What is needed for that? The “Measuring DNS and DoH” track addressed those questions. Two local teams and one remote team were formed.
Team “Shadow Hunters”
Team “Shadow Hunters” employed RIPE Atlas to schedule DNS measurements from probes in the Africa region to the cloud resolvers 1.1.1.1, 8.8.8.8 and 9.9.9.9 (‘the quads’) and compared that to DNS measurements to the locally configured resolvers. Measures were taken to make sure that the query was cached, so that the response time would be equal to the round-trip time. Measurements were performed over UDP, TCP and TLS.
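The team scheduled these measurements from RIPE Atlas probes, but the underlying method (warm the resolver’s cache with a first query, then time a second, identical one) is easy to try from a single vantage point. The sketch below is only an illustration of that method, not the team’s measurement code; it assumes the dnspython library is available, the query name is arbitrary, and for TCP and TLS the timed query also includes connection setup.

    import time
    import dns.message
    import dns.query

    # 'The quads'; add your locally configured resolver here to compare.
    RESOLVERS = ["1.1.1.1", "8.8.8.8", "9.9.9.9"]

    def cached_response_time(query_func, server):
        query = dns.message.make_query("nlnetlabs.nl", "A")
        query_func(query, server, timeout=5)          # first query warms the cache
        start = time.perf_counter()
        query_func(query, server, timeout=5)          # cached answer: elapsed time is roughly the RTT
        return (time.perf_counter() - start) * 1000   # milliseconds

    for server in RESOLVERS:
        for proto, query_func in (("UDP", dns.query.udp),
                                  ("TCP", dns.query.tcp),
                                  ("TLS", dns.query.tls)):   # DNS-over-TLS on port 853
            print(f"{server} {proto}: {cached_response_time(query_func, server):.1f} ms")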
During the course of the hackathon, a --tls option was added to the RIPE Atlas command line tools (Magellan), to enable it to schedule DNS-over-TLS measurements. This addition has been contributed to the tools as a GitHub pull request.
The results were quite interesting. Of the cloud DNS resolvers, Quad9 returned responses the quickest in Africa, but local resolvers were quicker still.
Over UDP, at the median, local resolvers provided responses roughly 27 times sooner than the fastest cloud resolver: a 2 ms RTT for the local resolver versus 55 ms for the fastest cloud resolver (9.9.9.9). Over TCP, local resolvers provided responses at least 6 times quicker: 14 ms versus 94 ms (again Quad9).
For the TLS measurements, the results are not sound with respect to the local resolvers, because only very few resolvers other than 1.1.1.1, 8.8.8.8 and 9.9.9.9 supported DNS-over-TLS. Judging from the RTTs, however, those DoT resolvers also appear to be remote rather than local.
Team “Just DoH it”
One of the issues with the TRRs is that the TRR picked (by the browser) might not be the party to which the user entrusts her DNS queries. The user should really have the largest possible choice, and perhaps the network should provide an (authenticated and private) default. A local DoH resolver would definitely provide much better response times, as the Shadow Hunters have shown.
The “Just DoH it” team worked on this by setting up their own DoH resolver on a local Virtual Machine provided by ISOC during the hackathon. Setting up a DoH server is a very new, cutting-edge operational affair, and certainly not something that is readily available in off-the-shelf software. The “Just DoH it” team looked into different ways to do this and gave feedback (and corrections) on the online resources.
They came up with two working setups, both using Unbound as the core DNS resolver. The first setup used nginx as the front-end webserver and glue code written in the Go language from a somewhat mysterious GitHub repo; the team also installed a DoT server with the same setup. The second setup used the DoH feature of the commons.host content delivery network server, with the DoH parts modified to run standalone. It is based on node.js (a JavaScript runtime) as the front-end webserver and the already mentioned Unbound resolver as the back-end. Since this setup speaks HTTP/2 directly (node.js’s HTTP/2 support is built on the nghttp2 library), no proxying through nginx is necessary.

Commons.host is pretty interesting in itself: it is an attempt to run a content delivery network as a community project, and it also includes tools to measure performance and compare different DoH providers, as well as a tool (called Dohnut) to use DoH as the primary DNS mechanism on your computer.
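To give an idea of what the first setup looks like, here is a rough configuration sketch. It is not the team’s exact configuration: the host name, ports and certificate paths are assumptions, and the Go glue code is assumed to listen on 127.0.0.1:8053 and to forward queries to Unbound on 127.0.0.1:53.

    # nginx front-end: terminate TLS and HTTP/2, hand /dns-query to the DoH proxy
    server {
        listen 443 ssl http2;
        server_name doh.example.net;                      # hypothetical host name

        ssl_certificate     /etc/ssl/doh.example.net/fullchain.pem;
        ssl_certificate_key /etc/ssl/doh.example.net/privkey.pem;

        location /dns-query {
            proxy_pass http://127.0.0.1:8053/dns-query;   # the Go glue code
            proxy_set_header Host $host;
        }
    }

    # unbound.conf: the resolver behind the proxy
    server:
        interface: 127.0.0.1
        access-control: 127.0.0.0/8 allow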
During the team’s presentation, a live demo was given in which the audience was invited to configure their Firefox browsers to use the team’s DoH server. A traffic log was displayed to show the audience’s queries arriving at the DoH server.
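For reference, pointing Firefox at such a server comes down to two preferences in about:config; the URI below is a placeholder for the team’s own server, not its real address.

    network.trr.mode  3      # 2 = try DoH first, fall back to plain DNS; 3 = DoH only
    network.trr.uri   https://doh.example.net/dns-query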
Team “How do you DoH”
We also had remote participants joining the hackathon track. Amreesh Phokeer and Malick provided good feedback on the other teams’ results and progress, and also worked on a measurement project of their own: comparing RIPE Atlas results with those of SpeedChecker.
This hack is still a work in progress on GitHub.
Summarizing
Our “Measuring DNS and DoH” hackathon track was a huge success. We managed to produce valuable feedback on a currently hot topic in the IETF. Furthermore, I met a brilliant, skilled, creative, intelligent and enthusiastic group of people, whom I hope to see and cooperate with at many more events to come. Thanks to all of you for this great experience!