Archive for the ‘Forum Papers and Articles’ Category

But That’s Impossible

Friday, May 17th, 2013

For some time now at APNIC Labs we’ve been running an experiment intended to measure the state of IPv6 capability across the Internet. To do this we use a number of experimental data sources. A number of web site administrators have kindly put our Javascript test code on their web pages, so that visitors to those pages contribute to the measurement exercise. (If you operate a web service you can help out too – the details of how to add these tests to your web page are available from APNIC Labs.) In addition to Javascript on web pages, we also use active code embedded in an online advertisement to perform the same basic IPv6 capability measurement on clients who are delivered an impression of the advertisement. Across these two experimental approaches we perform a basic IPv6 capability test on between 800,000 and 1,000,000 clients each day. An experiment on this scale is bound to produce some anomalous behaviours, but in this case there are a couple of outcomes that, as far as I can tell, should simply be impossible!
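The server-side classification step at the heart of such an experiment can be sketched roughly as follows. The test-object labels and categories here are illustrative assumptions, not the actual APNIC Labs schema: each client is asked to fetch an IPv4-only object, an IPv6-only object and a dual-stack object, and the set of successful fetches, recovered from the server logs, reveals the client's capability.

```python
def classify(fetched):
    """Classify a client from the set of test objects it managed to fetch."""
    has_v4 = "v4-only" in fetched
    has_v6 = "v6-only" in fetched
    if has_v4 and has_v6:
        return "dual-stack"
    if has_v6:
        return "ipv6-only"
    if has_v4:
        return "ipv4-only"
    return "no-result"

# Three hypothetical clients, each represented by its set of fetched objects
clients = [{"v4-only", "dual"}, {"v4-only", "v6-only", "dual"}, {"v6-only"}]
counts = {}
for fetched in clients:
    label = classify(fetched)
    counts[label] = counts.get(label, 0) + 1
```

Aggregating such labels across 800,000 to 1,000,000 clients a day yields the daily capability figures the experiment reports on.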


Measuring DNSSEC Performance

Tuesday, April 30th, 2013

There are a number of reasons that both domain name administrators and vendors of client DNS software cite for not incorporating DNSSEC signing into their offerings. The added complexity of the name administration process when signatures are added to the mix, the challenges of maintaining current root trust keys, and the adverse consequences of DNSSEC signature validation failure have all been mentioned as reasons to hesitate. We have also heard concerns over the increased overhead of using DNSSEC. These concerns come from zone administrators, authoritative name server operators and suppliers of DNS resolver systems, and all point to a worry about the imposition of further overheads in the process of DNS name resolution when the name being resolved is DNSSEC-signed. While the issues of complexity are challenging to quantify, we were interested in the issues of performance. What are the performance costs of adding DNSSEC signatures to a domain? Can we measure them?
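One readily visible component of that overhead sits on the wire: a DNSSEC-aware query carries an EDNS0 OPT record with the DO ("DNSSEC OK") flag set, and the signed responses it elicits are considerably larger. Here is a minimal sketch of the query side, hand-assembling the DNS wire format; the query ID and UDP payload size are arbitrary illustrative values.

```python
import struct

def encode_name(name):
    """Encode a domain name as length-prefixed DNS labels."""
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode()
    return out + b"\x00"

def make_query(name, want_dnssec=False):
    """Build a minimal DNS A query, optionally with an EDNS0 OPT record."""
    arcount = 1 if want_dnssec else 0
    # Header: ID, flags (RD set), QDCOUNT, ANCOUNT, NSCOUNT, ARCOUNT
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, arcount)
    # Question: name, QTYPE=A (1), QCLASS=IN (1)
    msg = header + encode_name(name) + struct.pack("!HH", 1, 1)
    if want_dnssec:
        # OPT pseudo-RR: root name, TYPE=41, CLASS=UDP payload size,
        # TTL field carries the DO flag in its top bit, RDLENGTH=0
        msg += b"\x00" + struct.pack("!HHIH", 41, 4096, 0x00008000, 0)
    return msg

plain = make_query("example.com")
dnssec = make_query("example.com", want_dnssec=True)
```

The 11 extra bytes in the query are trivial; the real costs lie in the much larger, signature-laden responses and in the additional queries and cryptographic work that validation entails, which is what a performance measurement needs to quantify.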


DNSSEC and Google’s Public DNS Service

Tuesday, April 9th, 2013

The Domain Name System, or DNS, is a critical, yet somewhat invisible, component of the Internet. The world of the Internet is a world of symbols and words. We invoke applications to interact with services such as Google, Facebook and Twitter, and the interaction is phrased in human-readable symbols. But the interaction with the network itself is conducted entirely in a binary format. So our symbolic view of a service has to be translated into a binary protocol address. This mapping from symbols to protocol addresses is one of the critical functions of the DNS. We rely not only on the continued presence of the DNS, but on its correct operation as well. Entering a service’s name in a browser does not necessarily guarantee that your interaction will be with your intended service. One of the more insidious attack vectors on the Internet is to deliberately corrupt the operation of the DNS, and thereby dupe the user’s application into opening a session with the wrong destination. The most robust response we’ve managed to devise to mitigate this longstanding vulnerability in the DNS has been to add secure cryptographic signatures into the DNS, using a technology called DNSSEC. But are we using DNSSEC in today’s Internet?
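That symbolic-to-binary translation is the everyday job of the resolver. A minimal sketch, using the name localhost so the example works without external DNS; a real service name would be resolved the same way, through the DNS:

```python
import socket

# Translate a symbolic name into the protocol addresses behind it
addrs = {info[4][0]
         for info in socket.getaddrinfo("localhost", 80,
                                        proto=socket.IPPROTO_TCP)}
print(addrs)
```

The application simply trusts whatever answer comes back; nothing in this lookup proves the answer has not been tampered with, and that gap is precisely what DNSSEC is designed to close.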


Literally IPv6

Friday, March 1st, 2013

As many who have worked with computer software would attest, software bugs come in many strange forms. This month I’d like to relate a recent experience I’ve had with one such bug that pulls together aspects of IPv6 standard specifications and interoperability.
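As one small, concrete corner of that specification-and-interoperability terrain (not necessarily the bug in question): an IPv6 address literal used in a URL must be wrapped in square brackets, per RFC 3986, so that the colons within the address are not mistaken for the port separator.

```python
from urllib.parse import urlsplit

# A bracketed IPv6 literal with an explicit port
u = urlsplit("http://[2001:db8::1]:8080/path")
print(u.hostname, u.port)  # the hostname comes back without the brackets
```

Software that mishandles these bracketed literals, or the zone-identifier and textual-representation corner cases around them, is a rich source of exactly the kind of strange bug the article describes.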


Addressing 2012 – Another One Bites the Dust!

Friday, January 4th, 2013

Time for another annual roundup from the world of IP addresses. What happened in 2012 and what is likely to happen in 2013? This is an update to the reports prepared at the same time in previous years, so let’s see what has changed in the past 12 months in addressing the Internet, and look at how IP address allocation information can inform us of the changing nature of the network itself.


Superstorm Sandy and the Global Internet

Friday, November 30th, 2012

The Internet has managed to collect its fair share of mythology, and one of the more persistent myths is that, from its genesis in a cold war US think tank in the 1960s, the Internet was designed with a remarkable ability to “route around damage.” Whether the story of this cold war think tank is true or not, the adoption of a stateless forwarding architecture, coupled with a dynamic routing system, does allow the network to “self-heal” under certain circumstances. Can we see this self-healing in today’s network? How true is this reputation of network robustness in the face of all kinds of adversity? While the Internet is almost everywhere these days, there are still a small number of locations that host a remarkable concentration of Internet connectivity. One of these critical points of global connectivity is New York, an area that hosts a significant number of submarine cable landing points. It is the major termination point of the North Atlantic submarine cable system, and hence a major connection point linking the trunk cable transit systems of Europe, America and Asia.
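That “self-heal” behaviour is easy to see in miniature: given a view of the topology, losing a link simply changes the outcome of the shortest-path computation, and traffic follows the new answer. A toy sketch over an entirely hypothetical four-node topology:

```python
from collections import deque

def shortest_path(links, src, dst):
    """Breadth-first search over an undirected link set; returns a node
    path from src to dst, or None if the two are disconnected."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    parent, queue = {src: None}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in adj.get(node, ()):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None

# Hypothetical topology: two independent transatlantic paths
links = {("London", "NewYork"), ("NewYork", "Ashburn"),
         ("London", "Paris"), ("Paris", "Ashburn")}
before = shortest_path(links, "London", "Ashburn")
# "Storm damage": the London-NewYork link goes down
after = shortest_path(links - {("London", "NewYork")}, "London", "Ashburn")
```

Real routing protocols reconverge on the equivalent of `after` automatically; the interesting question the article pursues is how well that worked when the damage was concentrated at a landing point as central as New York.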


Counting IPv6 in the DNS

Saturday, October 27th, 2012

At the recent ARIN XXX meeting in October 2012 I listened to a debate on a policy proposal concerning the reservation of a pool of IPv4 addresses for critical infrastructure. The term “critical infrastructure” is intended to cover a variety of applications, including use by public Internet Exchanges and authoritative nameservers for various top level domains. As far as I can tell, the assumptions behind this policy proposal include the assumption that a top level domain’s authoritative nameservers will need to use IPv4 for the foreseeable future, so that an explicit reserved pool of IPv4 addresses needs to be maintained for their use. But is this really the case? If you set up an authoritative DNS nameserver for a domain name where all the nameservers were reachable only over IPv6, then what is the visibility of this nameserver? What proportion of the Internet’s user base could still access the name if access to the authoritative nameservers was restricted to IPv6 alone?
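The classification being asked about can be sketched simply: given the address records of a zone's nameservers, over which protocol is the zone reachable at all? The names and addresses below are illustrative documentation values, not real delegations.

```python
import ipaddress

def reachability(nameservers):
    """Classify a delegation from its nameservers' address records.

    nameservers: dict mapping nameserver name -> list of address strings.
    """
    families = set()
    for addrs in nameservers.values():
        for a in addrs:
            families.add(ipaddress.ip_address(a).version)
    if families == {4, 6}:
        return "dual-stack"
    if families == {6}:
        return "ipv6-only"
    if families == {4}:
        return "ipv4-only"
    return "unreachable"

# A hypothetical zone whose nameservers have only AAAA records
zone = {"ns1.example.net": ["2001:db8::53"],
        "ns2.example.net": ["2001:db8:1::53"]}
```

An "ipv6-only" zone is then visible only to those users whose resolvers can reach the nameservers over IPv6, which is exactly the proportion the measurement sets out to find.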


Counting DNSSEC

Sunday, September 23rd, 2012

At the Nordunet 2012 conference in September, a presentation included the assertion that “more than 80% of domains could use DNSSEC if they so chose.” This is an interesting claim that speaks to a very rapid rise in the deployment of DNSSEC in recent years, and it raises many questions about the overall status of DNSSEC deployment in today’s Internet. While the effort to secure the operation of the DNS dates back more than 10 years (see earlier articles on DNSSEC in August, September and October 2006, and an update in June 2010), the recent impetus for DNSSEC adoption came from the acknowledgement of vulnerabilities in the DNS following the widespread publication of a viable form of attack on DNS resolvers (the “Kaminsky DNS attack”, reported in 2008), and from the DNSSEC signing of the DNS root zone, which commenced on 15 July 2010. The question now is: how is all this playing out in the world of the DNS? How many DNS zones are DNSSEC-signed? To what extent are Internet users able to trust in the integrity of DNS name resolution? How many Internet users use DNS resolvers that perform DNSSEC validation?
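One way to count validating users (this is my reading of the general style of such measurements, not a description of any particular experiment) is to hand each client two test names, one validly signed and one deliberately badly signed. A client behind a validating resolver retrieves the first but not the second, because validation of the bad signature fails and the resolver returns SERVFAIL. The counting then reduces to:

```python
def validation_rate(results):
    """Estimate the fraction of measured clients behind validating
    resolvers, from per-client fetch outcomes of a well-signed ("good")
    and a deliberately badly-signed ("bad") test object."""
    validating = sum(1 for r in results if r["good"] and not r["bad"])
    measured = sum(1 for r in results if r["good"] or r["bad"])
    return validating / measured if measured else 0.0

# Four hypothetical clients: two behind validating resolvers, two not
clients = [
    {"good": True, "bad": True},    # non-validating: fetched both
    {"good": True, "bad": False},   # validating: bad name failed
    {"good": True, "bad": True},
    {"good": True, "bad": False},
]
```

Counts like these, gathered at scale, are what turn the deployment question into a number rather than an assertion.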


Leaping Seconds

Thursday, July 19th, 2012

The tabloid press is never at a loss for a good headline, but this one in particular caught my eye: “Global Chaos as moment in time kills the Interwebs”. I’m pretty sure that “global chaos” is somewhat over the top, but there was a problem on the 1st of July this year, and yes, it impacted the Internet in various ways, as well as many other enterprises that rely on IT systems. And yes, the problem had a lot to do with time and how we measure it.
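At the heart of it is the fact that POSIX time simply has no representation for 23:59:60 UTC, the leap second inserted at the end of 30 June 2012, so systems must repeat or skew a second instead. One mitigation that has emerged is to “smear” the extra second gradually across a window around the event rather than inserting it all at once. A sketch of the smear arithmetic, with a 24-hour window chosen purely for illustration:

```python
def smeared_offset(seconds_into_window, window=86400.0):
    """Fraction of the leap second already absorbed at a given point in
    the smear window: the clock is slewed linearly so that by the end of
    the window the full extra second has been accounted for."""
    return min(max(seconds_into_window / window, 0.0), 1.0)

# Halfway through the window the smeared clock runs half a second
# behind an unsmeared UTC clock
half = smeared_offset(43200.0)
```

The trade-off is that a smeared clock is never exactly UTC during the window, but no application ever sees a repeated or impossible second.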


Bemused Eyeballs: Tailoring Dual Stack Applications for a CGN Environment

Monday, May 14th, 2012

How do you create a really robust service on the Internet? How can we maximise speed, responsiveness and resiliency? How can we set up an application service environment in today’s network that can still deliver service quality and performance, even in the most adverse of conditions? And how can we engineer applications that will operate robustly in the face of the anticipated widespread deployment of Carrier Grade NATs (CGNs), as the Internet lumbers into a rather painful phase of a depleted IPv4 free pool and continuing growth pressures? Yes, IPv6 is the answer, but between here and there lie a few challenges. One of these is the way applications behave in a dual stack environment, and I’d like to look at this in a little more detail here.
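A large part of the answer for dual-stack applications became the approach known as Happy Eyeballs (RFC 6555): rather than trying IPv6 to exhaustion before falling back to IPv4, the client races staggered connection attempts across both address families. A sketch of the candidate-ordering step; the interleaving policy here is one plausible choice, not the RFC's exact algorithm:

```python
import socket

def interleave(candidates):
    """Order (family, address) candidates so IPv6 is tried first and the
    two families then alternate, preserving order within each family."""
    v6 = [c for c in candidates if c[0] == socket.AF_INET6]
    v4 = [c for c in candidates if c[0] == socket.AF_INET]
    out = []
    for pair in zip(v6, v4):
        out.extend(pair)
    # Append whatever is left of the longer list
    longer = v6 if len(v6) > len(v4) else v4
    out.extend(longer[len(out) // 2:])
    return out
```

The ordered attempts are then launched a few hundred milliseconds apart, so a broken path in one family costs the user a barely perceptible delay instead of a long connection timeout.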