
History of Computer Networks: From the Telegraph to the Internet



Based on over 80 interviews with entrepreneurs, engineers, executives, and government regulators, this website chronicles the stories of early startups in the fields of data communications, local area networking, and internetworking. A collection of first-person accounts, market data, and historical narrative, The History of Computer Communications is an excellent source of information for students and professors of computer science, business, and history, as well as anyone interested in the compelling stories of the entrepreneurs who laid the foundations for our globally connected world.




History of Computer Networks




This history follows the origins of computer networks as the world moved from a telecommunications system based on analogue circuit connections to a digital, globally distributed network of networks, and asks how that transformation came about.


In contrast to the de-regulation that gave birth to the data communications market, the local area networking market evolved out of the technical innovation of many key engineers, such as Gordon Bell of DEC; Ethernet inventor and 3Com founder Robert Metcalfe; token ring innovator Dave Farber; and startup founders including Ralph Ungermann, Charlie Bass, Judith Estrin, Bill Carrico, and many others. The testimonies of these pioneers illustrate the challenges of bringing new technologies to market before a large market for them existed, and the tenacity they needed to realize their visions for radically changing the future of computing.


The history culminates with two important 1988 trade shows where vendors and government-sponsored agencies touted the future of internetworking products. At the time, the internetworking market was only a fraction of what it would become, but the origin stories of startups like Retix, Wellfleet, and Cisco are early examples of the high-stakes model of venture-backed successes and failures in what would become a global tech industry.


The Internet started in the 1960s as a way for government researchers to share information. Computers in the 1960s were large and immobile; to make use of information stored in any one computer, one had to either travel to the site of the computer or have magnetic computer tapes sent through the conventional postal system.


Another catalyst in the formation of the Internet was the heating up of the Cold War. The Soviet Union's launch of the Sputnik satellite spurred the U.S. Defense Department to consider ways information could still be disseminated even after a nuclear attack. This eventually led to the formation of the ARPANET (Advanced Research Projects Agency Network), the network that ultimately evolved into what we now know as the Internet. ARPANET was a great success but membership was limited to certain academic and research organizations who had contracts with the Defense Department. In response to this, other networks were created to provide information sharing.


January 1, 1983 is considered the official birthday of the Internet. Prior to this, the various computer networks did not have a standard way to communicate with each other. A new communications protocol called Transmission Control Protocol/Internet Protocol (TCP/IP) was established, which allowed different kinds of computers on different networks to "talk" to each other. ARPANET and the Defense Data Network officially switched to the TCP/IP standard on January 1, 1983, hence the birth of the Internet. All networks could now be connected by a universal language.
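To make that idea concrete, here is a minimal sketch of two programs "talking" over TCP/IP using Python's standard socket module; the address, port, and messages are illustrative assumptions, not details from the historical record.

```python
import socket

# Minimal TCP/IP "conversation": any two machines that speak TCP/IP can
# exchange bytes this way, regardless of their hardware or operating system.
# HOST and PORT are illustrative placeholders.
HOST, PORT = "127.0.0.1", 9000

def run_server():
    """Accept one connection, read a message, and send a reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"hello, " + data)

def run_client():
    """Connect to the server, send a message, and print the reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"world")
        print(cli.recv(1024).decode())  # prints "hello, world"
```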


The UNIVAC I (the name stood for Universal Automatic Computer), delivered to the Census Bureau in 1951, weighed some 16,000 pounds, used 5,000 vacuum tubes, and could perform about 1,000 calculations per second. It was the first American commercial computer, as well as the first computer designed for business use. (Business computers like the UNIVAC processed data more slowly than the IAS-type machines, but were designed for fast input and output.) The first few sales were to government agencies, the A.C. Nielsen Company, and the Prudential Insurance Company. The first UNIVAC for business applications was installed at the General Electric Appliance Division, to do payroll, in 1954. By 1957 Remington Rand (which had purchased the Eckert-Mauchly Computer Corporation in 1950) had sold forty-six machines.


ARPANET expanded to connect the US Department of Defense with universities across the country that were carrying out defense-related research, covering most of the major American universities. The concept of networking got a boost when University College London (UK) and NORSAR (Norway) connected to the ARPANET, and a network of networks was formed.


The term Internet was coined by Vinton Cerf, Yogen Dalal and Carl Sunshine of Stanford University to describe this network of networks. Together they also developed protocols to facilitate information exchange over the Internet. Transmission Control Protocol (TCP) still forms the backbone of networking.


With the commercialization of the Internet, more and more networks were developed in different parts of the world, each often using its own protocols, which prevented the different networks from connecting together seamlessly. Starting in 1989, Tim Berners-Lee led a group of computer scientists at CERN, Switzerland, in creating the World Wide Web (WWW), which made the information scattered across these varied networks accessible in a uniform way.


By 1973 it was clear to the networking vanguard that another protocol layer needed to be inserted into the protocol hierarchy to accommodate the interconnection of diverse types of individual networks.


Computer vision needs lots of data. It runs analyses of that data over and over until it discerns distinctions and ultimately recognizes images. For example, to train a computer to recognize automobile tires, it needs to be fed vast quantities of tire images and tire-related items so it can learn the differences and recognize a tire, especially one with no defects.
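As a toy illustration of that feed-it-lots-of-labeled-examples idea (not from the original article), the sketch below trains a simple scikit-learn classifier on random stand-in data; the image sizes, labels, and counts are all arbitrary assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-in data: 200 tiny grayscale "photos", half labeled as tires.
# Real systems learn from millions of labeled images; these shapes are arbitrary.
rng = np.random.default_rng(0)
images = rng.random((200, 16 * 16))      # each row is a flattened 16x16 image
labels = rng.integers(0, 2, size=200)    # 1 = tire, 0 = not a tire

# fit() makes repeated passes over the data, adjusting weights until the
# model separates the two classes as well as it can.
clf = LogisticRegression(max_iter=1000).fit(images, labels)
print(clf.predict(images[:5]))           # predicted labels for the first five images
```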


Much like a human making out an image at a distance, a CNN first discerns hard edges and simple shapes, then fills in information as it runs iterations of its predictions. A CNN is used to understand single images. A recurrent neural network (RNN) is used in a similar way for video applications to help computers understand how pictures in a series of frames are related to one another.
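As a rough sketch of what such a network looks like in code (an illustration, not part of the original history), here is a tiny convolutional network written with PyTorch; the 32x32 RGB input, layer sizes, and ten output classes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Toy CNN: convolutional layers pick out edges and simple shapes,
    then a linear layer turns those features into a class prediction.
    All sizes (32x32 RGB input, 10 classes) are illustrative assumptions."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level edges
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),   # simple shapes
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One forward pass on a random batch of four 32x32 RGB "images".
logits = TinyCNN()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```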


Around the late 1950s, the first computer image-scanning technology was developed, enabling computers to digitize and acquire images. Another milestone was reached in 1963, when computers were able to transform two-dimensional images into three-dimensional forms. In the 1960s, AI emerged as an academic field of study, which also marked the beginning of the AI quest to solve the human vision problem.


1974 saw the introduction of optical character recognition (OCR) technology, which could recognize text printed in any font or typeface. (3) Similarly, intelligent character recognition (ICR) could decipher handwritten text using neural networks. (4) Since then, OCR and ICR have found their way into document and invoice processing, vehicle plate recognition, mobile payments, machine translation, and other common applications.
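For a sense of what OCR looks like in practice today, here is a minimal sketch assuming the open-source Tesseract engine and its Python wrapper are installed; the file name is a made-up placeholder.

```python
from PIL import Image
import pytesseract  # requires the Tesseract OCR engine to be installed separately

# Extract the printed text from a scanned page.
# "invoice.png" is a hypothetical file name used purely for illustration.
page = Image.open("invoice.png")
text = pytesseract.image_to_string(page)
print(text)
```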


In 1982, neuroscientist David Marr established that vision works hierarchically and introduced algorithms for machines to detect edges, corners, curves and similar basic shapes. Concurrently, computer scientist Kunihiko Fukushima developed a network of cells that could recognize patterns. The network, called the Neocognitron, included convolutional layers in a neural network.
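In that spirit, a minimal edge-detection sketch using NumPy and SciPy is shown below; the random input array stands in for a real grayscale image and is purely an assumption for illustration.

```python
import numpy as np
from scipy import ndimage

# Edge detection in the spirit of Marr's early stages of vision:
# combine horizontal and vertical Sobel gradients into an edge-strength map.
image = np.random.rand(64, 64)      # placeholder for a real 2-D grayscale image

gx = ndimage.sobel(image, axis=0)   # gradient along rows
gy = ndimage.sobel(image, axis=1)   # gradient along columns
edges = np.hypot(gx, gy)            # gradient magnitude at each pixel
edges /= edges.max()                # normalize to [0, 1]
```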


Originally, "hacker" did not carry the negative connotations now associated with the term. In the late 1950s and early 1960s, computers were much different than the desktop or laptop systems most people are familiar with. In those days, most companies and universities used mainframe computers: giant, slow-moving hunks of metal locked away in temperature-controlled glass cages. It cost thousands of dollars to maintain and operate those machines, and programmers had to fight for access time.


Because of the time and money involved, computer programmers began looking for ways to get the most out of the machines. The best and brightest of those programmers created what they called "hacks" - shortcuts that would modify and improve the performance of a computer's operating system or applications and allow more tasks to be completed in a shorter time.


Not until the early 1980s did the word "hacker" earn disdain, when people like Kevin Mitnick, Kevin Poulsen and Vladimir Levin (more on them later) began using computers and the internet for their own questionable gains. Still, for all the negative things hackers have done, I believe they provide a necessary (and even valuable) service, which I'll elaborate on after a brief timeline of some of the high points (or low points, depending on how you look at it) in the history of computer hacking.


1983: The movie "War Games," starring Matthew Broderick, is released in theaters. Broderick plays a teenage hacker who taps into a Pentagon supercomputer nicknamed "WOPR" and nearly starts World War III. (WOPR is a spoof of NORAD's old central computer processing system, which had the acronym "BURGR.")


In one of the first high-profile cases against computer hackers, the FBI arrests six teenagers from Milwaukee known as the "414s," named after the city's area code. They are accused of breaking into more than 60 computer networks, including those of Memorial Sloan-Kettering Cancer Center and Los Alamos National Laboratory. One hacker gets immunity for his testimony; the others are given probation.


1984: Eric Corley begins publishing an underground magazine called 2600: The Hacker Quarterly, which quickly becomes a clearinghouse for telephone and computer hacking. The following year, a pair of journalists from St. Louis begin publishing Phrack, an electronic magazine that provides hacking information.

