The Internet: Looking Forward


by Vint Cerf, Google

As I write, it is 2013 and 40 years have passed since the first drafts of the Internet design were written. The first published paper appeared in 1974[1] and the first implementations began in 1975. Much has happened since that time, but this essay is not focused on the past but, rather, on the future. Although the past is plainly prologue, our ability to see ahead is hampered by the unpredictable and the unknown unknowns that cloud and bedevil our vision. The exercise is nonetheless worth the effort, if only to imagine what might be possible.

Current trends reveal some directions. Mobile devices are accelerating access and applications. The economics of mobile devices have increased the footprint of affordable access to the Internet and the World Wide Web. Mobile infrastructure continues to expand on all inhabited continents. Speeds and functions are increasing as faster processors, more memory, and improved display technologies enhance the functions of these platforms. Cameras, microphones, speakers, sensors, multiple radios, touch-sensitive displays, and location and motion detection continue to evolve and open up new application possibilities. Standards and open source software facilitate widespread interoperability and adoption of applications. What is perhaps most significant is that these smart devices derive much of their power from access to and use of the extraordinary computing and memory capacity of the Internet. The Internet, cloud computing, and mobile devices have become hypergolic in their capacity to ignite new businesses and create new economic opportunities.

In the near term, the Internet is evolving. The Domain Name System (DNS) is expanding dramatically at the top level. Domain names can be written in non-Latin characters. The Internet address space is being expanded through the introduction of the IPv6 packet format, although the implementation rate among Internet Service Providers (ISPs) continues to be unsatisfactorily slow. This latter phenomenon may change as the so-called Internet of Things[2] emerges from its long incubation. Sensor networks, Internet-enabled appliances, and increasing application of artificial intelligence will transform the Internet landscape in ways that seem impossible to imagine. The introduction of IPv6 and the exhaustion of the older IPv4 address space have generated demand for so-called Network Address Translation (NAT)[3] systems. Geoff Huston has written and lectured extensively on this topic[4] and on the potential futures involving their use. In some ways, these systems simultaneously interfere with the motivation to implement IPv6 and act as a bridge that allows both network address formats to be used concurrently.
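
As a concrete (and purely illustrative) sketch of what operating across both address formats looks like from an application's point of view, consider the following Python fragment. It simply asks the resolver for every address family and tries each candidate in turn; the host name in the usage note is hypothetical, and the code is not drawn from any particular implementation.

```python
# A minimal sketch of dual-stack connection handling during the IPv4/IPv6
# transition: ask the resolver for all address families and try each
# candidate in turn, so the same code works over native IPv6, native IPv4,
# or IPv4 behind a NAT.
import socket

def connect_dual_stack(host: str, port: int) -> socket.socket:
    """Try every address returned for `host`, IPv6 and IPv4 alike."""
    last_error = None
    for family, socktype, proto, _name, addr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.connect(addr)
            return sock                      # first address that works wins
        except OSError as err:
            last_error = err                 # remember the failure, try the next
    raise last_error or OSError("no addresses found for " + host)

# Example usage (host name is illustrative):
# conn = connect_dual_stack("example.com", 80)
```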

Ironically, although most edge devices on the Internet today are probably IPv6-capable, as are the routers, firewalls, DNS servers, and other application servers, this advanced version of the Internet Protocol may not have been “turned on” by the ISP community. This situation is changing, but more slowly than many of us would like.

As the applications on the Internet continue to make demands on its capacity to transport data and to deliver low-latency services, conventional Internet technologies are challenged and new ideas are finding purchase in the infrastructure. The OpenFlow[5, 6] concept has emerged as a fresh look at packet switching in which control flow is segregated from data flow and routing is not confined to the use of address bits in packet headers for the formation and use of forwarding tables. Originally implemented with a central routing scheme to improve efficient use of network resources, the system has the flexibility to be made more distributed. It remains to be seen whether OpenFlow networks can be interconnected by using an extended form of the Border Gateway Protocol (BGP) so as to achieve end-to-end performance comparable to what has already been achieved in single networks.
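A toy sketch may make the separation of control and data flow concrete. The Python fragment below models a match/action flow table: a controller installs rules, and the forwarding element merely looks up the best match for each packet. It illustrates the idea only; it is not the OpenFlow protocol or API, and the field names and actions are hypothetical.

```python
# A toy illustration (not the OpenFlow API) of control/data separation:
# a controller installs match/action rules, and the data plane only looks
# up the highest-priority rule whose fields all match. Forwarding decisions
# need not be derived from destination-address bits alone.

class FlowTable:
    def __init__(self):
        self.rules = []          # list of (match_dict, action, priority)

    def install(self, match, action, priority=0):
        """Control plane: a (possibly central) controller adds a rule."""
        self.rules.append((match, action, priority))
        self.rules.sort(key=lambda rule: -rule[2])   # highest priority first

    def forward(self, packet):
        """Data plane: apply the first rule whose fields all match."""
        for match, action, _priority in self.rules:
            if all(packet.get(field) == value for field, value in match.items()):
                return action
        return "send-to-controller"                  # table miss: ask the controller

# Hypothetical usage: match on a TCP port as well as a destination address.
table = FlowTable()
table.install({"ip_dst": "192.0.2.1", "tcp_dst": 80}, "output:port3", priority=10)
table.install({"ip_dst": "192.0.2.1"}, "output:port1")
print(table.forward({"ip_dst": "192.0.2.1", "tcp_dst": 80}))   # prints output:port3
```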

Business models for Internet service play an important role here because end-to-end differential classes of service have not been realized, generally, for the current Internet implementations. Inter-ISP or edge-to-core commercial models also have not generally been perfected to achieve multiple classes of service. These aspirations remain for the Internet of the present day. Although it might be argued that increasing capacity in the core and at the edge of the Internet eliminates the need for differential service, it is fair to say that some applications definitely need lower delay, others need high capacity, and some need both (for example, for interactive video). Whether these requirements can be met simply through higher speeds or whether differential services must be realized at the edges and the core of the network is the source of substantial debate in the community. Vigorous experimentation and research continue to explore these topics.
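For readers who want a concrete picture of what marking a class of service at the edge can mean in practice, the sketch below sets a DiffServ code point on an outbound socket. It assumes a Linux-style socket API and an illustrative destination address; whether any network along the path honors the marking end to end is precisely the open question described above.

```python
# A minimal sketch of one edge mechanism behind "classes of service":
# marking outbound packets with a DiffServ code point (DSCP). IP_TOS is
# available on Linux/Unix sockets; honoring the mark is up to the networks.
import socket

EF_DSCP = 46                       # Expedited Forwarding, for low-delay traffic
TOS_VALUE = EF_DSCP << 2           # DSCP occupies the upper 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Packets sent on this socket now carry the EF marking (address is illustrative):
# sock.sendto(b"low-latency payload", ("192.0.2.1", 5004))
```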

Ubiquitous Computing

Mark Weiser[7] coined the term and concept of Ubiquitous Computing. He meant several things by this term, but among them was the notion that computers would eventually fade into the environment, becoming ever-present, performing useful functions, and operating for our convenience. Many devices would host computing capacity but would not be viewed as “computers” or even “computing platforms.” Entertainment devices; cooking appliances; automobiles; medical, environmental, and security monitoring systems; our clothing; and our homes and offices would house many computing engines of various sizes and capacities. Many, if not all, would be interconnected in communication webs, responding to requirements and policies set by users or by their authorized representatives.

To this idyllic characterization he added an implied set of challenges: the configuration of hundreds of thousands of appliances and platforms, and the preservation of privacy, safety, access control, information confidentiality, stability, resilience, and a host of other properties.

Even modest thought produces an awareness of the need for strong authentication to assure that only the appropriate devices and authorized parties are interacting, issuing instructions, taking data, etc. It is clear that multifactor authentication and some form of public key cryptography could play an important role in assuring limitations on the use and operation of these systems. Privacy of the information generated by these systems can be understood to be necessary to protect users from potential harm.
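As a rough illustration of how public key cryptography might limit who can issue instructions to such devices, the following sketch shows a challenge/response exchange: a controller challenges a device, the device signs the challenge, and the controller verifies the signature against a previously registered public key. It assumes the third-party Python `cryptography` package, and the device and controller roles and names are hypothetical, not drawn from any particular system.

```python
# A minimal sketch of challenge/response device authentication with
# public-key signatures, one way to limit who may issue instructions.
# Requires the third-party `cryptography` package; roles are illustrative.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Device side: a key pair generated once; the public key is registered with
# the controller out of band (for example, at installation time).
device_private_key = Ed25519PrivateKey.generate()
registered_public_key = device_private_key.public_key()

# Controller side: issue a fresh random challenge for each command session.
challenge = os.urandom(32)

# Device side: prove possession of the private key by signing the challenge.
signature = device_private_key.sign(challenge)

# Controller side: accept the device only if the signature verifies.
try:
    registered_public_key.verify(signature, challenge)
    print("device authenticated; command accepted")
except InvalidSignature:
    print("authentication failed; command rejected")
```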

The scale of such systems can easily reach tens to hundreds of billions of devices. Managing complex interactions at such magnitudes will require powerful hierarchical and abstracting mechanisms. When it is also understood that our mobile society will lead to a constant background churn of combinations of devices forming subsets in homes, offices, automobiles, and on our persons, the challenge becomes all the more daunting. (By this I do not mean the use of mobile smartphones but rather a society that is geographically mobile and that moves some but not all its possessions from place to place, mixing them with new ones.) Self-organizing mechanisms, hierarchically structured systems, and systems that allow remote management and reporting will play a role in managing the rapidly proliferating network we call the Internet.

For further insight into this evolution, we should consider the position-location capability of the Global Positioning System (GPS)[8]. Even small, low-powered devices (for example, mobile devices) have the ability to locate themselves if they have access to the proper satellite transmissions. Adding to this capability is geo-location using mobile cell towers and even known public Wi-Fi locations. In addition, we are starting to see appliances such as Google Glass[9] enter the environment. These appliances are portable, wearable computers that hear what we hear and see what we see and can respond to spoken commands and gestures. The Google self-driving cars[10] offer yet another glimpse into the future of computing, communication, and artificial intelligence in which computers become our partners in a common sensory environment—one that is not limited to the normal human senses. All of these systems have the potential to draw upon networked information and computing power that rivals anything available in history. The systems are potentially self-learning and thus capable of improvement over time. Moreover, because these devices may be able to communicate among themselves, they may be able to cooperate on a scale never before possible.

Even now we can see the outlines of a potential future in which virtually all knowledge can be found for the asking; in which the applications of the Internet continue to evolve; in which devices and appliances of all kinds respond and adapt to our needs, communicate with each other, learn from each other, and become part of an integrated and global environment.

Indeed, our day-to-day environment is very likely to be filled with information and data gathered from many sources and subject to deep analysis benefitting individuals, businesses, families, and governments at all levels. Public health and safety are sure to be influenced and affected by these trends.

Education

It is often noted that a teacher from the mid-19th century would not feel out of place in the classroom of the 21st, except, perhaps, for subject matter. There is every indication that this situation may be about to change. In 2011, two of my colleagues from Google, Peter Norvig and Sebastian Thrun, decided to use the Internet to teach an online class in artificial intelligence under the auspices of Stanford University. They expected about 500 students, but 160,000 people signed up for the course! There ensued a scramble to write or revise software to cope with the unexpectedly large scale of the online class. This phenomenon has been a long time in coming. Today we call such classes “MOOCs” (Massive Open Online Courses). Of the 160,000 who signed up, something like 23,000 actually completed the class. How many professors of computer science can say they have successfully taught 23,000 students?

The economics of this form of classroom are also very intriguing. Imagine a class of 100,000 students, each paying $10 per class. Even one class would produce $1,000,000 in revenue. I cannot think of any university that regularly has million-dollar classes! There are costs, but they are borne in part by students (for example, Internet access and the equipment with which to reach the Internet) and in part by the university (Internet access, multicast or similar capability, and the salaries of professors and teaching assistants). In some cases, the professors prepare online lectures that students can watch as many times as they want to—whenever they want to because the lectures can be streamed. The professors then hold classroom hours that are devoted to solving problems, in an inversion of the more typical classroom usage. Obviously this idea could expand to include nonlocal teaching assistants. Indeed, earlier experiments with videotaped lectures and remote teaching assistants were carried out with some success at Stanford University when I served on the faculty in the early 1970s.
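The arithmetic can be made explicit. In the short sketch below, the enrollment and fee come from the example above, while the cost figures are entirely hypothetical placeholders included only to show the structure of the calculation.

```python
# The revenue arithmetic from the example above; the cost figures are
# hypothetical placeholders, not estimates from the article.
students = 100_000
fee_per_student = 10                        # dollars per class
revenue = students * fee_per_student
print(f"revenue: ${revenue:,}")             # revenue: $1,000,000

hypothetical_costs = {
    "professors_and_teaching_assistants": 300_000,   # assumed figure
    "platform_and_bandwidth": 150_000,               # assumed figure
}
margin = revenue - sum(hypothetical_costs.values())
print(f"margin under these assumptions: ${margin:,}")
```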

What is potentially different about MOOCs is scale. Interaction and examinations are feasible in this online environment, although the form of exams is somewhat limited by the capabilities of the online platform used. Start-ups are experimenting with and pursuing these ideas (refer to www.udacity.com and www.coursera.org).

People who are currently employed also can take these courses to improve their skills, learn new ones, and position themselves for new careers or career paths. From young students to retired workers, such courses offer opportunities for personal expansion, and they provide a much larger customer base than is usually associated with a 2- or 4-year university or college program. These classes can be seen as a reinvention of the university, the short course, the certificate program, and other forms of educational practice. It is my sense that this state of affairs has the potential to change the face of education at all levels and provide new options for those who want or need to learn new things.

The Information Universe

It is becoming common to speak of “big data” and “cloud computing” as indicators of a paradigm shift in our view of information. This view is not unwarranted. We have the ability to absorb, process, and analyze quantities of data beyond anything remotely possible in the past. The functional possibilities are almost impossible to fully fathom. For example, our ability to translate text and spoken language is unprecedented. With combinations of statistical methods, hierarchical hidden Markov models, formal grammars, and Bayesian techniques, the fidelity of translation between some language pairs approaches native language speaker quality. It is readily predictable that during the next decade, real-time, spoken language translation will be a reality.
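For the statistically minded reader, the Bayesian formulation that underlies much of this work can be stated compactly. This is the classical “noisy channel” decision rule of statistical translation in general, not a description of any particular system's internals.

```latex
% The classical noisy-channel decision rule of statistical translation:
% choose the target sentence e that maximizes the posterior probability
% given the source sentence f.
\[
  \hat{e} \;=\; \arg\max_{e} \, P(e \mid f)
        \;=\; \arg\max_{e} \, P(f \mid e)\, P(e)
\]
% P(e) is a language model over target sentences; P(f | e) is a translation
% model; Bayes' rule lets the constant denominator P(f) drop out of the argmax.
```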

One of my favorite scenarios: A blind German speaker and a deaf American Sign Language (ASL) signer meet, each wearing Google Glass. The deaf signer’s microphone picks up the German speaker’s words, translates them into English, and displays them as captions for the deaf participant. The blind man’s Glass video camera sees the deaf signer’s signs, translates the signs from ASL to English and then to German, and then speaks them through the bone conduction speaker of the Google Glass. We can do all of this now except for the correct interpretation of ASL. This challenge is not a trivial one, but it might be possible in the next 10 to 15 years.

The World Wide Web continues to grow in size and diversity. In addition, large databases of information are being accumulated, especially from scientific disciplines such as physics, astronomy, and biology. Telescopes (ground and space-based), particle colliders such as the Large Hadron Collider[11], and DNA sequencers are producing petabytes and more—in some cases on a daily basis!

We seem to be entering a time when much of the information produced by human endeavor will be accessible to everyone on the planet. Google’s mission, “To organize the world’s information and make it universally accessible and useful,” might be nearly fulfilled in the decades ahead. Some tough problems lie ahead, however. One I call “bit rot.”

By using this term, I do not mean the degradation of digital recordings on various media, although this is a very real problem. The more typical problem is that the readers of the media fall into disuse and disrepair. One has only to think about 8-inch disks for the early Wang word processor, or 3.5-inch floppy disks or their 5.25-inch predecessors. Now we have CDs, DVDs, and Blu-ray discs, but some computer makers—Apple for example—have ceased to build in readers for these media.

Another, trickier problem is that much of the digital information produced requires software to correctly interpret the digital bits. If the software is not available to interpret the bits, the bits might as well be rotten or unreadable. Software applications run over operating systems that, themselves, run on computer hardware. If the applications do not work on new versions of the operating systems, or the applications are upgraded but are not backward-compatible with earlier file and storage formats, or the maker of the application software goes out of business and the source code is lost, then the ability to interpret the files created by this software may be lost. Even when open source software is used, it is not clear it will be maintained in operating condition for thousands of years. We already see backward-compatibility failures in proprietary software emerging after only years or decades.

Getting access to source code for preservation may involve revising notions of copyright or patent to allow archivists to save and make usable older application software. We can imagine that “cloud computing” might allow us to emulate hardware, run older operating systems, and thus support older applications, but there is also the problem of basic input/output and the ability to emulate earlier media, even if the physical media or their readers are no longer available. This challenge is a huge but important one.

Archiving of important physical data has to be accompanied by archiving of metadata describing the conditions of collection, calibration of instruments, formatting of the data, and other hints at how to interpret it. All of this work is extra, but it is necessary to make information longevity a reality.
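As a sketch of what such a metadata record might contain, consider the example below. The field names and values are hypothetical illustrations of “conditions of collection, calibration, and format,” not a standard archival schema.

```python
# A sketch of an archival metadata record of the kind argued for above.
# Field names and values are hypothetical examples, not a standard schema.
import json

metadata_record = {
    "dataset": "sky-survey-2013-04",                 # illustrative identifier
    "collection": {
        "instrument": "optical telescope, site A",
        "start_utc": "2013-04-01T02:00:00Z",
        "conditions": "clear sky, seeing 0.8 arcsec",
    },
    "calibration": {
        "reference_frames": ["dark", "flat-field"],
        "calibration_date_utc": "2013-03-30",
    },
    "format": {
        "encoding": "FITS",                          # how to interpret the bits
        "software_needed": "FITS reader, v3.x or compatible",
        "byte_order": "big-endian",
    },
}

# Stored alongside the data itself so that future readers can interpret it.
print(json.dumps(metadata_record, indent=2))
```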

The Dark Side

To the generally optimistic and positive picture of Internet service must be added a realistic view of its darker side. The online environment and the devices we use to exercise it are filled with software. It is an unfortunate fact that programmers have not succeeded in discovering how to write software of any complexity that is free of mistakes and vulnerabilities.

Despite the truly remarkable and positive benefits already delivered to us through the Internet, we must cope with the fact that the Internet is not always a safe place.

The software upon which we rely in our access devices, in the application servers, and in the devices that realize the Internet itself (routers, firewalls, gateways, switches, etc.) is a major vulnerability, given the apparently inescapable presence of bugs.

Not everyone with access to the Internet has other users’ best interests at heart. Some see the increasing dependence of our societies on the Internet as an opportunity for exploitation and harm. Some are motivated by a desire to benefit themselves at the expense of others, some by a desire to hurt others, some by nationalistic sentiments, some by international politics. That Shakespeare’s plays are still popular after 400 years suggests that human frailties have not changed in the past four centuries! The weaknesses and vulnerabilities of the Internet software environment are exploited regularly. What might the future hold in terms of making the Internet a safer and more secure place in which to operate?

It is clear that simple usernames and passwords are inadequate to the task of protecting against unauthorized access and that multifactor and perhaps also biometric means are going to be needed to accomplish the desired effect. We may anticipate that such features might become a part of reaching adulthood or perhaps a rite of passage at an earlier age. Purely software attempts to cope with confidentiality, privacy, access control, and the like will give way to hardware-reinforced security. Digitally signed Basic Input/Output System (BIOS), for example, is already a feature of some new chipsets. Some form of trusted computing platform will be needed as the future unfolds and as online and offline hazards proliferate.

Governments are, in principle, founded on a kind of social contract. Citizens give up some freedoms in exchange for safety from harm. Not all regimes have their citizens’ best interests at heart, of course. There are authoritarian regimes whose primary interest is staying in power. Setting these examples aside, however, it is becoming clear that the hazards of using computers and being online have come to the attention of democratic as well as authoritarian regimes. There is tension between law enforcement (and even the determination of what the law should be) and the desire of citizens for privacy and freedom of action. Balancing these tensions is a nontrivial exercise. The private sector is pressed into becoming an enforcer of the law when this role is not necessarily an appropriate one. The private sector is also coerced into breaching privacy in the name of the law.

“Internet Governance” is a broad term that is frequently interpreted in various ways depending on the interest of the party desiring to define it for particular purposes. In a general sense, Internet Governance has to do with the policies, procedures, and conventions adopted domestically and internationally for the use of the Internet. It has not only to do with the technical ways in which the Internet is operated, implemented, and evolved but also with the ways in which it is used or abused.

In some cases it has to do with the content of the Internet and the applications to which the Internet is put. It is evident that abuse is undertaken through the Internet. Fraud, stalking, misinformation, incitement, theft, operational interference, and a host of other abuses have been identified. Efforts to defend against them are often stymied by lack of jurisdiction, particularly in cases where international borders are involved. Ultimately, we will have to reach some conclusions domestically and internationally as to which behaviors will be tolerated and which will not, and what the consequences of abusive behavior will be. We will continue to debate these problems well into the future.

Our societies have evolved various mechanisms for protecting citizens. One of these mechanisms is the Fire Department. Sometimes volunteer, this institution is intended to put out building or forest fires to minimize risks to the population. We do not have a similar institution for dealing with various forms of “cyberfires” in which our machines are under attack or are otherwise malfunctioning, risking others by propagation of viruses, worms, and Trojan horses or participation in botnet denial-of-service or other forms of attacks. Although some of these matters may deserve national-level responses, many are really local problems that would benefit from a “Cyber Fire Department” that individuals and businesses could call upon for assistance. When the cyber fire is put out, the question of cause and origin could be investigated as is done with real fires. If deliberately set, the problem would become one of law enforcement.

Intellectual property is a concept that has evolved over time but is often protected by copyright or patent practices that may be internationally adopted and accepted. These notions, especially copyright, had their origins in the physical reproduction of content in the form of books, films, photographs, CDs, and other physical things containing content. As the digital and online environment penetrates more deeply into all societies, these concepts become more and more difficult to enforce. Reproduction and distribution of digital content get easier and less expensive every day. It may be that new models of compensation and access control will be needed in the decades ahead.

Conclusion

If there can be any conclusion to these ramblings, it must be that the world that lies ahead will be immersed in information that admits of extremely deep analysis and management. Artificial intelligence methods will permeate the environment, aiding us with smart digital assistants that empower our thought and our ability to absorb, understand, and gain insight from massive amounts of information.

It will be a world that is also at risk for lack of security, safety, and privacy—a world in which demands will be made of us to think more deeply about what we see, hear, and learn. While we have new tools with which to think, it will be demanded of us that we use them to distinguish sound information from unsound, propaganda from truth, and wisdom from folly.

References

[1] Vinton G. Cerf and Robert E. Kahn, “A Protocol for Packet Network Intercommunication,” IEEE Transactions on Communications, Vol. COM-22, No. 5, May 1974.
[2] David Lake, Ammar Rayes, and Monique Morrow, “The Internet of Things,” The Internet Protocol Journal, Volume 15, No. 3, September 2012.
[3] Geoff Huston, “Anatomy: A Look inside Network Address Translators,” The Internet Protocol Journal, Volume 7, No. 3, September 2004.
[4] Geoff Huston and Mark Kosters, “The Role of Carrier Grade NATs in the Near-Term Internet,” TIP 2013 Conference, http://events.internet2.edu/2013/tip/agenda.cfm?go=session&id=10002780
[5] http://www.openflow.org/
[6] William Stallings, “Software-Defined Networks and OpenFlow,” The Internet Protocol Journal, Volume 16, No. 1, March 2013.
[7] http://en.wikipedia.org/wiki/Mark_Weiser
[8] http://en.wikipedia.org/wiki/Global_Positioning_System
[9] http://www.google.com/glass/start/
[10] http://en.wikipedia.org/wiki/Google_driverless_car
[11] home.web.cern.ch
[12] Vint Cerf, “Looking Toward the Future,” The Internet Protocol Journal, Volume 10, No. 4, December 2007.
[13] Vint Cerf, “A Decade of Internet Evolution,” The Internet Protocol Journal, Volume 11, No. 2, June 2008.
[14] Geoff Huston, “A Decade in the Life of the Internet,” The Internet Protocol Journal, Volume 11, No. 2, June 2008.

VINTON G. CERF is vice president and chief Internet evangelist for Google. Cerf has held positions at MCI, the Corporation for National Research Initiatives, Stanford University, UCLA, and IBM. He served as chairman of the board of the Internet Corporation for Assigned Names and Numbers (ICANN) and was founding president of the Internet Society. Cerf was appointed to the U.S. National Science Board in 2013. Widely known as one of the “Fathers of the Internet,” he received the U.S. National Medal of Technology in 1997, the Marconi Fellowship in 1998, and the ACM Alan M. Turing Award in 2004. In November 2005, he was awarded the Presidential Medal of Freedom, in April 2008 the Japan Prize, and in March 2013 the Queen Elizabeth II Prize for Engineering. He is a Fellow of the IEEE, ACM, and AAAS, the American Academy of Arts and Sciences, the American Philosophical Society, the Computer History Museum, and the National Academy of Engineering. Cerf holds a Bachelor of Science degree in Mathematics from Stanford University and Master of Science and Ph.D. degrees in Computer Science from UCLA, and he holds 21 honorary degrees from universities around the world.

E-mail: vint@google.com
