The Protocol Wars were a long-running debate in computer science that occurred from the 1970s to the 1990s, when engineers, organizations and nations became polarized over the issue of which communication protocol would result in the best and most robust networks. This culminated in the Internet–OSI Standards War in the 1980s and early 1990s, which the Internet protocol suite (TCP/IP) ultimately "won" by the mid-1990s, when it became dominant through the rapid adoption of the Internet.
In the late 1960s and early 1970s, the pioneers of packet switching technology built computer networks providing data communication, that is the ability to transfer data between points or nodes. As more of these networks emerged in the mid to late 1970s, the debate about communication protocols became a "battle for access standards". An international collaboration between several national postal, telegraph and telephone (PTT) providers and commercial operators led to the X.25 standard in 1976, which was adopted on public data networks providing global coverage. Separately, proprietary data communication protocols emerged, most notably IBM's Systems Network Architecture in 1974 and Digital Equipment Corporation's DECnet in 1975.
The United States Department of Defense (DoD) developed TCP/IP during the 1970s in collaboration with universities and researchers in the US, UK and France. IPv4 was released in 1981 and was made the standard for all DoD computer networking. By 1984, the international Open Systems Interconnection (OSI) reference model, which was not compatible with TCP/IP, had been agreed upon. Many European governments (particularly France, West Germany and the UK) and the United States Department of Commerce mandated compliance with the OSI model, while the US Department of Defense planned to transition from TCP/IP to OSI.
Meanwhile, the development of a complete Internet protocol suite by 1989, together with partnerships with the telecommunication and computer industry to incorporate TCP/IP software into various operating systems, laid the foundation for the widespread adoption of TCP/IP as a comprehensive protocol suite. While OSI developed its networking standards in the late 1980s, TCP/IP came into widespread use on multi-vendor networks for internetworking and as the core component of the emerging Internet.
Computer science was an emerging discipline in the late 1950s that began to consider time-sharing between computer users and, later, the possibility of achieving this over wide area networks. In the early 1960s, J. C. R. Licklider proposed the idea of a universal computer network while working at Bolt Beranek & Newman (BBN) and, later, leading the Information Processing Techniques Office (IPTO) at the Advanced Research Projects Agency (ARPA, later, DARPA) of the US Department of Defense (DoD). Independently, Paul Baran at RAND in the US and Donald Davies at the National Physical Laboratory (NPL) in the UK invented new approaches to the design of computer networks.[3][4]
Baran published a series of papers between 1960 and 1964 about dividing information into "message blocks" and dynamically routing them over distributed networks.[5][6][7] Davies conceived of and named the concept of packet switching using high-speed interface computers for data communication in 1965–1966.[8][9] He proposed a national commercial data network in the UK, and designed the local-area NPL network to demonstrate and research his ideas. The first use of the term protocol in a modern data-communication context occurs in an April 1967 memorandum A Protocol for Use in the NPL Data Communications Network written by two members of Davies' team, Roger Scantlebury and Keith Bartlett.[10][11][12]
Licklider, Baran and Davies all found it hard to convince incumbent telephone companies of the merits of their ideas. AT&T held a monopoly on communications infrastructure in the United States, as did the General Post Office (GPO) in the United Kingdom, which was the national postal, telegraph and telephone service (PTT). They both believed speech traffic would continue to dominate and continued to invest in traditional telegraphic techniques.[13][14][15][16][17] Telephone companies were operating on the basis of circuit switching, alternatives to which are message switching or packet switching.[18][19]
Bob Taylor became the director of the IPTO in 1966 and set out to achieve Licklider's vision to enable resource sharing between remote computers.[20] Taylor hired Larry Roberts to manage the programme.[21] Roberts brought Leonard Kleinrock into the project; Kleinrock had applied mathematical methods to study communication networks in his doctoral thesis.[22] At the October 1967 Symposium on Operating Systems Principles, Roberts presented the early "ARPA Net" proposal, based on Wesley Clark's idea for a message switching network using Interface Message Processors (IMPs).[23] Roger Scantlebury presented Davies' work on a digital communication network and referenced the work of Paul Baran.[24] At this seminal meeting, the NPL paper articulated how the data communications for such a resource-sharing network could be implemented.[25][26][27]
Larry Roberts incorporated Davies' and Baran's ideas on packet switching into the proposal for the ARPANET.[28][29] The network was built by BBN. Designed principally by Bob Kahn,[30][31] it departed from the NPL's connectionless network model in an attempt to avoid the problem of network congestion.[32] The service offered to hosts by the network was connection oriented. It enforced flow control and error control (although this was not end-to-end).[33][34][35] With the constraint that, for each connection, only one message may be in transit in the network, the sequential order of messages is preserved end-to-end.[33] This made the ARPANET what would come to be called a virtual circuit network.[2]
Packet switching can be based on either a connectionless or connection-oriented mode, which are different approaches to data communications. A connectionless datagram service transports each data packet between two hosts independently of any other packet. Its service is best effort, meaning out-of-order packet delivery and data losses are possible. With a virtual circuit service, data can be exchanged between two host applications only after a virtual circuit has been established between them in the network. After that, flow control is imposed on sources, as much as needed by destinations and intermediate network nodes. Data are delivered to destinations in their original sequential order.[37][38]
Both concepts have advantages and disadvantages depending on their application domain. Where a best effort service is acceptable, an important advantage of datagrams is that a subnetwork may be kept very simple. A corresponding drawback is that, under heavy traffic, no subnetwork is inherently protected against congestion collapse. In addition, for users of the best effort service, use of network resources does not enforce any definition of "fairness", that is, of relative delay among user classes.[39][40][41]
Datagram services include, in every packet, the information needed to look up the next link in the network. In these systems, routers examine each arriving packet, consult their routing tables, and decide where to forward it. This approach has the advantage that there is no inherent overhead in setting up a circuit, meaning that a single packet can be transmitted as efficiently as a long stream. Generally, this also makes routing around problems simpler, as only a single routing table needs to be updated, not the information for every virtual circuit. It requires less memory too, as only one route needs to be stored for any destination, not one per virtual circuit. On the downside, every datagram must be examined individually, which makes forwarding (theoretically) slower.[38]
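A minimal Python sketch of this per-packet forwarding model (the table entries and names are illustrative assumptions, not taken from any historical system):

    # Illustrative datagram forwarding: no per-connection state.
    routing_table = {
        "host-a": "link-1",   # destination -> outgoing link
        "host-b": "link-2",
    }

    def forward_datagram(packet):
        # Every arriving packet carries its full destination address
        # and is looked up independently of any other packet.
        next_link = routing_table.get(packet["dst"])
        return next_link      # None models a best-effort drop

    # A lone packet costs the same to forward as one from a long stream.
    print(forward_datagram({"dst": "host-b", "payload": b"hello"}))  # link-2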
On the ARPANET, the starting point in 1969 for connecting a host computer (i.e., a user) to an IMP (i.e., a packet switch) was the 1822 protocol, which was written by Bob Kahn.[30][42] Steve Crocker, a graduate student at the University of California, Los Angeles (UCLA), formed a Network Working Group (NWG) that year. He said "While much of the development proceeded according to a grand plan, the design of the protocols and the creation of the RFCs was largely accidental."[nb 1] Under the auspices of Leonard Kleinrock at UCLA,[43] Crocker led other graduate students, including Jon Postel, in designing a host-host protocol known as the Network Control Program (NCP).[44][nb 2] They planned to use separate protocols, Telnet and the File Transfer Protocol (FTP), to run functions across the ARPANET.[nb 3][45][46] After approval by Barry Wessler at ARPA,[47] who had ordered certain more exotic elements to be dropped,[48] the NCP was finalized and deployed in December 1970 by the NWG. NCP codified the ARPANET network interface, making it easier to establish, and enabling more sites to join the network.[49][50]
Roger Scantlebury was seconded from the NPL to the British Post Office Telecommunications division (BPO-T) in 1969. There, engineers developed a packet-switching protocol from basic principles for an Experimental Packet Switched Service (EPSS) based on a virtual call capability. However, the protocols were complex and limited; Davies described them as "esoteric".[51][52]
Rémi Després started work in 1971, at the CNET (the research center of the French PTT), on the development of an experimental packet switching network, later known as RCP. Its purpose was to put into operation a prototype packet switching service to be offered on a future public data network.[53][54] Després simplified and improved on the virtual call approach, introducing the concept of "graceful saturated operation" in 1972.[55] He coined the term "virtual circuit" and validated the concepts on the RCP network.[56] Once a virtual circuit has been set up, data packets do not have to contain any routing information, which can simplify the packet structure and improve channel efficiency. The routers are also faster, as the route setup is only done once; from then on, packets are simply forwarded down the existing link. One downside is that the equipment has to be more complex, as the routing information has to be stored for the length of the connection. Another disadvantage is that the virtual connection may take some time to set up end-to-end, and for small messages this time may be significant.[37][38][57]
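For contrast with the datagram sketch above, a minimal virtual-circuit sketch in Python (the circuit identifiers and link names are again illustrative assumptions):

    # Illustrative virtual-circuit forwarding: routing resolved once at setup.
    circuit_table = {}       # circuit id -> outgoing link (per-connection state)
    next_circuit_id = 0

    def setup_circuit(outgoing_link):
        # One-time setup cost; state is held for the life of the connection.
        global next_circuit_id
        next_circuit_id += 1
        circuit_table[next_circuit_id] = outgoing_link
        return next_circuit_id

    def forward_on_circuit(packet):
        # No routing lookup: packets carry only a short circuit identifier.
        return circuit_table[packet["vc"]]

    vc = setup_circuit("link-2")
    print(forward_on_circuit({"vc": vc, "payload": b"hello"}))  # link-2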
Davies had conceived and described datagram networks, done simulation work on them, and built a single packet switch with local lines.[27][58] Louis Pouzin thought it looked technically feasible to employ a simpler approach to wide-area networking than that of the ARPANET.[58] In 1972, Pouzin launched the CYCLADES project, with cooperation provided by the French PTT, including free lines and modems.[59] He began to research what would later be called internetworking;[60][59] at the time, he coined the term "catenet" for concatenated network.[61] The name "datagram" was coined by Halvor Bothner-By.[62] Hubert Zimmermann was one of Pouzin's principal researchers and the team included Michel Elie, Gérard Le Lann, and others.[nb 5] While building the network, they were advised by BBN as consultants.[60][63] Pouzin's team was the first to tackle the highly complex problem of providing user applications with a reliable virtual circuit while using a best-effort service.[64] The network used unreliable, standard-sized datagrams in the packet-switched network and virtual circuits for the transport layer.[60][65] First demonstrated in 1973, it pioneered the use of the datagram model, functional layering, and the end-to-end principle.[66] Le Lann proposed the sliding window scheme for achieving reliable error and flow control on end-to-end connections.[67][68][69] However, the sliding window scheme was never implemented on the CYCLADES network, and CYCLADES was never interconnected with other networks (except for limited demonstrations using traditional telegraphic techniques).[70][71]
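The sender side of a sliding window can be sketched in a few lines of Python (a simplified illustration; the window size, cumulative acknowledgements and names are assumptions, not Le Lann's original specification):

    # Simplified sliding-window sender: flow control via a bounded window.
    WINDOW = 4       # maximum packets in flight
    base = 0         # oldest unacknowledged sequence number
    next_seq = 0     # next sequence number to send

    def can_send():
        return next_seq < base + WINDOW   # window not yet full

    def send(data, transmit):
        global next_seq
        assert can_send()
        transmit(next_seq, data)          # each packet carries its sequence number
        next_seq += 1

    def on_ack(ack):
        global base
        base = max(base, ack + 1)         # cumulative ack slides the window forward
        # Error control: packets still unacknowledged after a timeout
        # would be retransmitted (omitted here).

    # Example: send until the window fills, then wait for acknowledgements.
    while can_send():
        send(b"data", lambda seq, d: print("tx", seq))   # tx 0 .. tx 3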
Louis Pouzin's ideas to facilitate large-scale internetworking caught the attention of ARPA researchers through the International Network Working Group (INWG), an informal group established by Steve Crocker, Pouzin, Davies, and Peter Kirstein in June 1972 in Paris, a few months before the International Conference on Computer Communication (ICCC) in Washington demonstrated the ARPANET.[58][72] At the ICCC, Pouzin first presented his ideas on internetworking, and Vint Cerf was approved as INWG's Chair on Steve Crocker's recommendation. INWG grew to include other American researchers, members of the French CYCLADES and RCP projects, and the British teams working on the NPL network, EPSS and the proposed European Informatics Network (EIN), a datagram network.[70][73] As it had with Baran in the mid-1960s, AT&T declined when Roberts approached it about taking over the ARPANET to offer a public packet-switched service.[74][75]
Bob Kahn joined the IPTO in late 1972. Although initially expecting to work in another field, he began work on satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. In Spring 1973, Vint Cerf moved to Stanford University. With funding from DARPA, he began collaborating with Kahn on a new protocol to replace NCP and enable internetworking. Cerf built a research team at Stanford studying the use of fragmentable datagrams. Gérard Le Lann joined the team in 1973–74 and Cerf incorporated his sliding window scheme into the research work.[63]
Also in the United States, Bob Metcalfe and others at Xerox PARC outlined the idea of Ethernet and the PARC Universal Packet (PUP) for internetworking.[76][77] INWG met at Stanford in June 1973.[78] Zimmermann and Metcalfe dominated the discussions.[63][79] Notes from the meetings were recorded by Cerf and Alex McKenzie, from BBN, and published as numbered INWG Notes (some of which were also RFCs). Building on this, Kahn and Cerf presented a paper at a networking conference at the University of Sussex in England in September 1973.[70] Their ideas were refined further in long discussions with Davies, Scantlebury, Pouzin and Zimmermann.[80] Most of the work was done by Kahn and Cerf working as a duo.[78]
Peter Kirstein put internetworking into practice at University College London (UCL) in June 1973, connecting the ARPANET to British academic networks, the first international heterogeneous computer network. By 1975, there were 40 British academic and research groups using the link.[81]
The seminal paper, A Protocol for Packet Network Intercommunication, published by Cerf and Kahn in 1974 addressed the fundamental challenges involved in interworking across datagram networks with different characteristics, including routing in interconnected networks, and packet fragmentation and reassembly.[82][83] The paper drew upon and extended their prior research, developed in collaboration and competition with other American, British and French researchers.[84][85][70] DARPA sponsored work to formulate the first version of the Transmission Control Program (TCP) later that year.[86] At Stanford, its specification, RFC 675, was written in December by Cerf with Yogen Dalal and Carl Sunshine as a monolithic (single layer) design.[70] The following year, testing began through concurrent implementations at Stanford, BBN and University College London,[87] but it was not installed on the ARPANET at this time.
A protocol for internetworking was also being pursued by INWG.[88][89] There were two competing proposals, one based on the early Transmission Control Program proposed by Cerf and Kahn (using fragmentable datagrams), and the other based on the CYCLADES transport protocol proposed by Pouzin, Zimmermann and Elie (using standard-sized datagrams).[70][90] A compromise was agreed and Cerf, McKenzie, Scantlebury and Zimmermann authored an "international" end-to-end protocol.[91][92] It was presented to the CCITT by Derek Barber in 1975 but was adopted neither by the CCITT nor by the ARPANET.[73][63][nb 6]
The fourth biennial Data Communications Symposium later that year included presentations from Davies, Pouzin, Derek Barber, and Ira Cotten about the current state of packet-switched networking.[nb 7] The conference was covered by Computerworld magazine, which ran a story on the "battle for access standards" between datagrams and virtual circuits, as well as a piece reporting that the "lack of standard access interfaces for emerging public packet-switched communication networks is creating 'some kind of monster' for users". At the conference, Pouzin said pressure from European PTTs forced the Canadian DATAPAC network to change from a datagram to a virtual circuit approach,[36] although historians attribute this to IBM's rejection of their request for modification to their proprietary protocol.[93] Pouzin was outspoken in his advocacy for datagrams and attacks on virtual circuits and monopolies. He spoke about the "political significance of the [datagram versus virtual circuit] controversy," which he saw as "initial ambushes in a power struggle between carriers and the computer industry. Everyone knows in the end, it means IBM vs. Telecommunications, through mercenaries."[63]
After Larry Roberts and Barry Wessler left ARPA in 1973 to found Telenet, a commercial packet-switched network in the US, they joined the international effort to standardize a protocol for packet switching based on virtual circuits shortly before it was finalized.[94] With contributions from the French, British, and Japanese PTTs, particularly the work of Rémi Després on RCP and TRANSPAC, along with concepts from DATAPAC in Canada, and Telenet in the US, the X.25 standard was agreed by the CCITT in 1976.[nb 8][62][95] X.25 virtual circuits were easily marketed because they permit simple host protocol support.[96] They also satisfy the INWG expectation of 1972 that each subnetwork can exercise its own protection against congestion (a feature missing with datagrams).[97][98]
Larry Roberts adopted X.25 on Telenet and found that "datagram packets are now more expensive than VC packets" in 1978.[75] Vint Cerf said Roberts turned down his suggestion to use TCP when he built Telenet, saying that people would only buy virtual circuits and he could not sell datagrams.[58][88] Roberts predicted that "As part of the continuing evolution of packet switching, controversial issues are sure to arise."[75] Pouzin remarked that "the PTT's are just trying to drum up more business for themselves by forcing you to take more service than you need."[99]
Internetworking protocols were still in their infancy.[100] Various groups, including ARPA researchers, the CYCLADES team, and others participating in INWG, were researching the issues involved, including the use of gateways to connect between two networks.[73][101] At the National Physical Laboratory in the UK, Davies' team studied the "basic dilemma" involved in interconnecting networks: a common host protocol requires restructuring existing networks that use different protocols. To explore this dilemma, the NPL network connected with the EIN by translating between two different host protocols, that is, using a gateway. Concurrently, the NPL connection to the EPSS used a common host protocol in both networks. NPL research confirmed establishing a common host protocol would be more reliable and efficient.[60]
The CYCLADES project, however, was shut down in the late 1970s for budgetary, political and industrial reasons and Pouzin was "banished from the field he had inspired and helped to create".[63]
The design of the Transmission Control Program incorporated both connection-oriented links and datagram services between hosts. A DARPA internetworking experiment in July 1977 linking the ARPANET, SATNET and PRNET demonstrated its viability.[102][103] Subsequently, DARPA and collaborating researchers at Stanford, UCL and BBN, among others, began work on the Internet, publishing a series of Internet Experiment Notes.[104][105] Bob Kahn's efforts led to the absorption of MIT's proposal by Dave Clark and Dave Reed for a Data Stream Protocol (DSP) into version 3 of TCP in January 1978, written by Cerf, now at DARPA, and Jon Postel at the Information Sciences Institute of the University of Southern California (USC).[106][107] Following discussions with Yogen Dalal and Bob Metcalfe at Xerox PARC,[108][109] in version 4 of TCP, first drafted in September 1978, Postel split the Transmission Control Program into two distinct protocols, the Transmission Control Protocol (TCP) as a reliable connection-oriented service and the Internet Protocol (IP) as a connectionless service.[110][111] For applications that did not want the services of TCP, an alternative called the User Datagram Protocol (UDP) was added in order to provide direct access to the basic service of IP.[112] Referred to as TCP/IP from December 1978,[113] version 4 was made standard for all military computer networking in March 1982.[114][115] It was installed on SATNET and adopted by NORSAR/NDRE in March and by Peter Kirstein's group at UCL in November.[45] On January 1, 1983, known as "flag day", TCP/IP was installed on the ARPANET.[115][116] This resulted in a networking model that became known as the DoD internet architecture model (DoD model for short) or DARPA model.[86][117][118] Leonard Kleinrock's theoretical work on the performance of the ARPANET, published in the mid-1970s, was referenced in the development of the protocol.[119][120][121]
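The TCP/UDP split remains visible in modern socket APIs. A small sketch using Python's standard library (the address and port are placeholders):

    import socket

    # Reliable, connection-oriented byte stream: TCP.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # tcp.connect(("example.org", 80))  # would perform connection setup,
    #                                   # then guarantee ordering and retransmission

    # Connectionless datagrams: UDP gives direct access to IP's
    # best-effort service, with no setup and no delivery guarantee.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"ping", ("127.0.0.1", 9999))

    tcp.close()
    udp.close()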
The Coloured Book protocols, developed by British Post Office Telecommunications and the academic community at UK universities, gained some acceptance internationally as the first complete X.25 standard. First defined in 1975, they gave the UK "several years lead over other countries" but were intended as "interim standards" until international agreement was reached.[122][123][124][125] The X.25 standard gained political support in European countries and from the European Economic Community (EEC). The EIN, which was based on datagrams, was replaced with Euronet, which used X.25.[126][127] Peter Kirstein wrote that European networks tended to be short-term projects with smaller numbers of computers and users. As a result, the European networking activities did not lead to any strong standards except X.25,[nb 9] which became the main European data protocol for fifteen to twenty years. Kirstein said his group at University College London was widely involved, partly because they were one of the groups with the most expertise, and partly to try to ensure that the British activities, such as the JANET NRS, did not diverge too far from the US.[81] The construction of public data networks based on the X.25 protocol suite continued through the 1980s; international examples included the International Packet Switched Service (IPSS) and the SITA network.[95][128] Complemented by the X.75 standard, which enabled internetworking across national PTT networks in Europe and commercial networks in North America, this led to a global infrastructure for commercial data transport.[129][130][131]
Computer manufacturers developed proprietary protocol suites such as IBM's Systems Network Architecture (SNA), Digital Equipment Corporation's (DEC's) DECnet, Xerox's Xerox Network Systems (XNS, based on PUP) and Burroughs' BNA.[nb 10] By the end of the 1970s, IBM's networking activities were, by some measures, two orders of magnitude larger in scale than the ARPANET.[132] During the late 1970s and most of the 1980s, there remained a lack of open networking options. Therefore, proprietary standards, particularly SNA and DECnet, as well as some variants of XNS (e.g., Novell NetWare and Banyan VINES), were commonly used on private networks, becoming somewhat "de facto" industry standards.[123][133] Ethernet, promoted by DEC, Intel, and Xerox, outcompeted MAP/TOP, promoted by General Motors and Boeing.[134] DEC was an exception among the computer manufacturers in supporting the peer-to-peer approach.[135]
In the US, the National Science Foundation (NSF), NASA, and the United States Department of Energy (DoE) all built networks variously based on the DoD model, DECnet, and IP over X.25.
The early research and development of standards for data networks and protocols culminated in the Internet–OSI Standards War in the 1980s and early 1990s. Engineers, organizations and nations became polarized over the issue of which standard would result in the best and most robust computer networks.[137][138] Both standards are open and non-proprietary, yet mutually incompatible,[139] although "openness" may have worked against OSI while being successfully employed by Internet advocates.[140][141][142][136][143]
Researchers in the UK and elsewhere identified the need for defining higher-level protocols.[144] The UK National Computing Centre publication 'Why Distributed Computing', which was based on extensive research into future potential configurations for computer systems,[145] resulted in the UK presenting the case for an international standards committee to cover this area at the ISO meeting in Sydney in March 1977.[146][141]
Hubert Zimmermann and Charles Bachman, as committee chairman, played key roles in the development of the Open Systems Interconnection (OSI) reference model. They considered it too early to define a set of binding standards while technology was still developing, since irreversible commitment to a particular standard might prove sub-optimal or constraining in the long run.[147] Although the effort was dominated by computer manufacturers,[135] they had to contend with many competing priorities and interests. The rate of technological change made it necessary to define a model that new systems could converge to, rather than standardizing procedures after the fact; the reverse of the traditional approach to developing standards.[148] Although not a standard itself, the model was an architectural framework that could accommodate existing and future standards.[149]
Beginning in 1978, international work led to a draft proposal in 1980.[150] In developing the proposal, there were clashes of opinions between computer manufacturers and PTTs, and of both against IBM.[73][151] The final OSI model was published in 1984 by the International Organization for Standardization (ISO) in alliance with the International Telecommunication Union Telecommunication Standardization Sector (ITU-T), which was dominated by the PTTs.[141][152]
The most fundamental idea of the OSI model was that of a "layered" architecture. The layering concept was simple in principle but very complex in practice. The OSI model redefined how engineers thought about network architectures.[147]
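The layering idea can be pictured as successive encapsulation, with each layer wrapping the data of the layer above in its own header. A schematic Python sketch (the header names are invented for illustration and match no standard's actual formats):

    # Schematic encapsulation: each lower layer wraps the layer above.
    def encapsulate(payload, headers):
        # 'headers' is listed top layer first; iterating in order means
        # each successive header wraps everything produced so far.
        for header in headers:
            payload = header + b"|" + payload
        return payload

    frame = encapsulate(b"GET /", [b"transport-hdr", b"network-hdr", b"link-hdr"])
    print(frame)  # b'link-hdr|network-hdr|transport-hdr|GET /'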
The DoD model and other existing protocols, such as X.25 and SNA, all quickly adopted a layered approach in the late 1970s.[147][153] Although the OSI model shifted power away from the PTTs and IBM towards smaller manufacturers and users,[147] the "strategic battle" remained the competition between the ITU's X.25 and proprietary standards, particularly SNA.[154] Neither was fully OSI compliant. Proprietary protocols were based on closed standards and struggled to adopt layering, while X.25 was limited in terms of speed and the higher-level functionality that would become important for applications.[57] As early as 1982, RFC 874 criticised "zealous" advocates of the OSI reference model, as well as the functionality of the X.25 protocol and its use as an "'end-to-end' protocol in the sense of a Transport or Host-to-Host protocol".
Vint Cerf formed the Internet Configuration Control Board (ICCB) in 1979 to oversee the network's architectural evolution and field technical questions.[155] However, DARPA was still in control and, outside the nascent Internet community, TCP/IP was not even a candidate for universal adoption.[156][157][154][158] The implementation in 1985 of the Domain Name System proposed by Paul Mockapetris at USC, which enabled network growth by facilitating cross-network access,[159] and the development of TCP congestion control by Van Jacobson in 1986–88 led to a complete protocol suite, as outlined in RFC 1122 and RFC 1123 in 1989. This laid the foundation for the growth of TCP/IP as a comprehensive protocol suite, which became known as the Internet protocol suite.[160]
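The core idea behind Jacobson's congestion control is often summarized as additive increase, multiplicative decrease (AIMD). A deliberately simplified Python sketch of that idea (constants and names are assumptions; slow start, timeouts and modern variants are omitted):

    # Simplified AIMD congestion-avoidance sketch.
    cwnd = 1.0   # congestion window, in segments

    def on_ack():
        global cwnd
        cwnd += 1.0 / cwnd   # additive increase: ~ +1 segment per round trip

    def on_loss():
        global cwnd
        cwnd = max(1.0, cwnd / 2)   # multiplicative decrease on congestion

    for _ in range(3):
        on_ack()
    print(round(cwnd, 2))   # window has grown gradually
    on_loss()
    print(round(cwnd, 2))   # and halves when loss signals congestion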
DARPA studied and implemented gateways,[101][57] which helped to neutralize X.25 as a rival networking paradigm. The computer science historian Janet Abbate explained: "by running TCP/IP over X.25, [D]ARPA reduced the role of X.25 to providing a data conduit, while TCP took over responsibility for end-to-end control. X.25, which had been intended to provide a complete networking service, would now be merely a subsidiary component of [D]ARPA's own networking scheme. The OSI model reinforced this reinterpretation of X.25's role. Once the concept of a hierarchy of protocols had been accepted, and once TCP, IP, and X.25 had been assigned to different layers in this hierarchy, it became easier to think of them as complementary parts of a single system, and more difficult to view X.25 and the Internet protocols as distinct and competing systems."[161]
The DoD reduced research funding for networks,[135] responsibilities for governance shifted to the National Science Foundation and the ARPANET was shut down in 1990.[162][146][163]
Historian Andrew L. Russell wrote that Internet engineers such as Danny Cohen and Jon Postel were accustomed to continual experimentation in a fluid organizational setting through which they developed TCP/IP. They viewed OSI committees as overly bureaucratic and out of touch with existing networks and computers. This alienated the Internet community from the OSI model. A dispute broke out within the Internet community after the Internet Architecture Board (IAB) proposed replacing the Internet Protocol in the Internet with the OSI Connectionless Network Protocol (CLNP). In response, Vint Cerf performed a striptease in a three-piece suit while presenting to the 1992 Internet Engineering Task Force (IETF) meeting, revealing a T-shirt emblazoned with "IP on Everything". According to Cerf, his intention was to reiterate that a goal of the IAB was to run IP on every underlying transmission medium.[164] At the same meeting, David Clark summarized the IETF approach with the famous saying "We reject: kings, presidents, and voting. We believe in: rough consensus and running code."[164] The Internet Society (ISOC) was chartered that year.[165]
Cerf later said the social culture (group dynamics) that first evolved during the work on the ARPANET was as important as the technical developments in enabling the governance of the Internet to adapt to the scale and challenges involved as it grew.[142][155]
François Flückiger wrote that "firms that win the Internet market, like Cisco, are small. Simply, they possess the Internet culture, are interested in it and, notably, participate in IETF."[136][166]
Furthermore, the Internet community was opposed to a homogeneous approach to networking, such as one based on a proprietary standard such as SNA. They advocated for a pluralistic model of internetworking where many different network architectures could be joined into a network of networks.[167]
Russell notes that Cohen, Postel and others were frustrated with technical aspects of OSI.[164] The model defined seven layers of computer communications, from physical media in layer 1 to applications in layer 7, which was more layers than the network engineering community had anticipated. In 1987, Steve Crocker said that although they envisaged a hierarchy of protocols in the early 1970s, "If we had only consulted the ancient mystics, we would have seen immediately that seven layers were required."[46] Some sources, however, say this was an acknowledgement that the four layers of the Internet protocol suite were inadequate.[168]
Strict layering in OSI was viewed by Internet advocates as inefficient because it did not allow trade-offs ("layer violations") to improve performance. The OSI model allowed what some saw as too many transport protocols (five, compared with two for TCP/IP). Furthermore, OSI allowed for both the datagram and the virtual circuit approach at the network layer, which are non-interoperable options.[137][135]
By the early 1980s, the conference circuit became more acrimonious. Carl Sunshine summarized in 1989: "In hindsight, much of the networking debate has resulted from differences in how to prioritize the basic network design goals such as accountability, reliability, robustness, autonomy, efficiency, and cost effectiveness. Higher priority on robustness and autonomy led to the DoD Internet design, while the PDNs have emphasized accountability and controllability."[135]
Richard des Jardins, an early contributor to the OSI reference model, captured the intensity of the rivalry in a 1992 article by saying "Let's continue to get the people of good will from both communities to work together to find the best solutions, whether they are two-letter words or three-letter words, and let's just line up the bigots against a wall and shoot them."[164]
In 1996, RFC 1958 described the "Architectural Principles of the Internet" by saying "in very general terms, the community believes that the goal is connectivity, the tool is the Internet Protocol, and the intelligence is end to end rather than hidden in the network."
Beginning in the early 1980s, DARPA pursued commercial partnerships with the telecommunication and computer industry which enabled the adoption of TCP/IP.[106] In Europe, CERN purchased UNIX machines with TCP/IP for their intranet between 1984 and 1988.[13][169] Nonetheless, Paul Bryant, the UK representative on the European Academic and Research Network (EARN) Board of Directors,[170] said "By the time JNT [the UK academic network JANET] came along [in 1984] we could demonstrate X25… and we firmly believed that BT [British Telecom] would provide us with the network infrastructure and we could do away with leased lines and experimental work. If we had gone with DARPA then we would not have expected to be able to use a public service. In retrospect the flaws in that argument are clear but not at the time. Although we were fairly proud of what we were doing, I don't think it was national pride or anti USA that drove us, it was a belief that we were doing the right thing. It was the latter that translated to religious dogma."[88] JANET was a free X.25-based network intended for production academic use rather than research; experimentation and other protocols were forbidden.[171]
The DARPA Internet was still a research project that did not allow commercial traffic or for-profit services. The NSFNET initiated operations in 1986 using TCP/IP but, two years later, the US Department of Commerce mandated compliance with the OSI model and the Department of Defense planned to transition away from TCP/IP to OSI.[172] Carl Sunshine wrote in 1989 that "by the mid-1980s ... serious performance problems were emerging [with TCP/IP], and it was beginning to look like the critics of "stateless" datagram networking might have been right on some points".[135]
The major European countries and the EEC endorsed OSI.[nb 11] They founded RARE and associated national network operators (such as DFN, SURFnet, SWITCH) to promote OSI protocols, and restricted funding for non-OSI compliant protocols.[nb 12] However, by 1988, the Internet community had defined the Simple Network Management Protocol (SNMP) to enable management of network devices (such as routers) on multi-vendor networks, and the Interop '88 trade show showcased new products for implementing networks based on TCP/IP.[173][112] The same year, EUnet, the European UNIX Network, announced its conversion to Internet technology.[136] By 1989, the OSI advocate Brian Carpenter made a speech at a technical conference entitled "Is OSI Too Late?" which received a standing ovation.[141][174][175] OSI was formally defined, but vendor products from computer manufacturers and network services from PTTs were still to be developed.[135][176][177] TCP/IP, by comparison, was not an official standard (it was defined in unofficial RFCs), but UNIX workstations with both Ethernet and TCP/IP included had been available since 1983 and now served as a de facto interoperability standard.[137][143] Carl Sunshine noted that "research is underway on how to optimize TCP/IP performance over variable delay and/or very-high-speed networks".[135] However, Bob Metcalfe said "it has not been worth the ten years wait to get from TCP to TP4, but OSI is now inevitable" and Sunshine expected "OSI architecture and protocols ... will dominate in the future."[135] The following year, in 1990, Cerf said: "You can't pick up a trade press article anymore without discovering that somebody is doing something with TCP/IP, almost in spite of the fact that there has been this major effort to develop international standards through the international standards organization, the OSI protocol, which eventually will get there. It's just that they are taking a lot of time."[178]
By the beginning of the 1990s, some smaller European countries had adopted TCP/IP.[nb 13] In February 1990, RARE stated "without putting into question its OSI policy, [RARE] recognizes the TCP/IP family of protocols as an open multivendor suite, well adapted to scientific and technical applications." In the same month, CERN established a transatlantic TCP/IP link with Cornell University in the United States.[136][179] Conversely, starting in August 1990, the NSFNET backbone supported the OSI CLNP in addition to TCP/IP. CLNP was demonstrated in production on NSFNET in April 1991, and OSI demonstrations, including interconnections between US and European sites, were planned at the Interop '91 conference in October that year.[180]
At the Rutherford Appleton Laboratory (RAL) in the United Kingdom in January 1991, DECnet represented 75% of traffic, attributed to Ethernet between VAXs. IP was the second most popular set of protocols with 20% of traffic, attributed to UNIX machines for which "IP is the natural choice". Paul Bryant, Head of Communications and Small Systems at RAL, wrote "Experience has shown that IP systems are very easy to mount and use, in contrast to such systems as SNA and to a lesser extent X.25 and Coloured Books where the systems are rather more complex." The author continued "The principal network within the USA for academic traffic is now based on IP. IP has recently become popular within Europe for inter-site traffic and there are moves to try and coordinate this activity. With the emergence of such a large combined USA/Europe network there are great attractions for UK users to have good access to it. This can be achieved by gatewaying Coloured Book protocols to IP or by allowing IP to penetrate the UK. Gateways are well known to be a cause of loss of quality and frustration. Allowing IP to penetrate may well upset the networking strategy of the UK."[124] Similar views were shared by others at the time, including Louis Pouzin.[141] At CERN, Flückiger reflected "The technology is simple, efficient, is integrated into UNIX-type operating systems and costs nothing for the users' computers. The first companies that commercialize routers, such as Cisco, seem healthy and supply good products. Above all, the technology used for local campus networks and research centres can also be used to interconnect remote centers in a simple way."[136]
Beginning in March 1991, the JANET IP Service (JIPS) was set up as a pilot project to host IP traffic on the existing network.[181] Within eight months, the IP traffic had exceeded the levels of X.25 traffic, and the IP support became official in November. Also in 1991, Dai Davies introduced Internet technology over X.25 into the pan-European NREN, EuropaNet, although he experienced personal opposition to this approach.[182][183] EARN and RARE adopted IP around the same time,[nb 14] and the European Internet backbone EBONE became operational in 1992.[136] OSI usage on the NSFNET remained low when compared to TCP/IP. In the UK, the JANET community talked about a transition to OSI protocols, which was to begin with moving to X.400 mail as the first step, but this never happened. The X.25 service was closed in August 1997.[184][185]
Mail was commonly delivered via Unix to Unix Copy Program (UUCP) in the 1980s, which was well suited for handling message transfers between machines that were intermittently connected. The Government Open Systems Interconnection Profile (GOSIP), developed in the late 1980s and early 1990s, would have led to X.400 adoption. Proprietary commercial systems offered an alternative. In practice, use of the Internet suite of email protocols (SMTP, POP and IMAP) grew rapidly.[186]
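The Internet mail protocols that prevailed remain directly scriptable today. For instance, a message can be handed to a mail relay over SMTP using Python's standard library (the relay host and addresses below are placeholders):

    import smtplib
    from email.message import EmailMessage

    # Placeholder addresses and relay; adjust for a real deployment.
    msg = EmailMessage()
    msg["From"] = "alice@example.org"
    msg["To"] = "bob@example.org"
    msg["Subject"] = "Hello over SMTP"
    msg.set_content("Delivered by the Internet mail protocols, not X.400.")

    with smtplib.SMTP("localhost") as smtp:   # hands the message to a local SMTP relay
        smtp.send_message(msg)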
The invention of the World Wide Web in 1989 by Tim Berners-Lee at CERN, as an application on the Internet,[187] brought many social and commercial uses to what was previously a network of networks for academic and research institutions.[188][189] The Web began to enter everyday use in 1993–4.[190] The US National Institute of Standards and Technology proposed in 1994 that GOSIP should incorporate TCP/IP and drop the requirement for compliance with OSI,[172] which was adopted into Federal Information Processing Standards the following year.[nb 15][191] NSFNET had altered its policies to allow commercial traffic in 1991,[192] and was shut down in 1995, removing the last restrictions on the use of the Internet to carry commercial traffic.[193] Subsequently, the Internet backbone was provided by commercial Internet service providers and Internet connectivity became ubiquitous.[194][195]
As the Internet evolved and expanded exponentially, an enhanced protocol was developed, IPv6, to address IPv4 address exhaustion.[196][nb 16] In the 21st century, the Internet of things is leading to the connection of new types of devices to the Internet, bringing reality to Cerf's vision of "IP on Everything".[198] Nonetheless, shortcomings exist with today's Internet; for example, insufficient support for multihoming.[199][200] Alternatives have been proposed, such as Recursive Network Architecture,[201] and Recursive InterNetwork Architecture.[202]
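The scale of the change from IPv4 to IPv6 can be checked with Python's standard ipaddress module: the address space grows from 2^32 to 2^128 addresses.

    import ipaddress

    # Compare the sizes of the two address spaces.
    print(ipaddress.ip_network("0.0.0.0/0").num_addresses)  # 4294967296 (2**32)
    print(ipaddress.ip_network("::/0").num_addresses)       # 2**128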
The seven-layer OSI model is still used as a reference for teaching and documentation;[203] however, the OSI protocols conceived for the model did not gain popularity. Some engineers argue the OSI reference model is still relevant to cloud computing.[204] Others say the original OSI model does not fit today's networking protocols and have suggested instead a simplified approach.[205]
Other standards such as X.25 and SNA remain niche players.[206]
Katie Hafner and Matthew Lyon published one of the earliest in-depth and comprehensive histories of the ARPANET and how it led to the Internet. Where Wizards Stay Up Late: The Origins of the Internet (1996) explores the "human dimension" of the development of the ARPANET covering the "theorists, computer programmers, electronic engineers, and computer gurus who had the foresight and determination to pursue their ideas and affect the future of technology and society".[207][208]
Roy Rosenzweig suggested in 1998 that no single account of the history of the Internet is sufficient, and that a more adequate history will need to be written drawing on aspects of many books.[45][209]
Janet Abbate's 1999 book Inventing the Internet was widely reviewed as an important work on the history of computing and networking, particularly in highlighting the role of social dynamics and of non-American participation in early networking development.[210][211] The book was also praised for its use of archival resources to tell the history.[212] She has since written about the need for historians to be aware of the perspectives they take in writing about the history of the Internet and explored the implications of defining the Internet in terms of "technology, use and local experience" rather than through the lens of the spread of technologies from the United States.[213][214]
In his many publications on the "histories of networking", Andrew L. Russell argues scholars could and should look differently at the history of the Internet. His work shifts scholarly and popular understanding about the origins of the Internet and contemporary work in Europe that both competed and cooperated with the push for TCP/IP.[215][216][217] James Pelkey conducted interviews with Internet pioneers in the late 1980s and completed his book with Andrew Russell in 2022.[3]
Martin Campbell-Kelly and Valérie Schafer have focused on British and French contributions as well as global and international considerations in the development of packet switching, internetworking and the Internet.[218][132][63][214]