A (very) Short History of Computing

The first computer was conceived as a machine of cogs and gears (Figure 1.1) but only became practical in the 1950s and 60s with the invention of semiconductors. In the 1970s, a hardware company called IBM (footnote 1) emerged as the computing leader. In the 1980s, however, software became increasingly important, and by the 1990s a software company called Microsoft (footnote 2) had taken the computing lead by giving ordinary people tools like word processing. During the 1990s computing became more personal, until the World Wide Web turned Internet URLs into web site names that people could read (footnote 3). A company called Google (footnote 4) then offered the ultimate personal service, free access to the vast public library we call the Internet, and soon everyone’s gateway to the web was the new computing leader. In the 2000s computing evolved yet again, to become a social medium as well as a personal tool. So now Facebook challenges Google, as Google challenged Microsoft, and as Microsoft challenged IBM.

Yet to design a computer system one must define it, so what is computing? This deceptively simple question requires many answers, because computing has re-invented itself every decade or so (Figure 1.2). What began as hardware became about software, then about users, and is now about online communities. This chapter analyzes the evolution of computing as it impacts computing design.

  • Charles Babbage (1791–1871) designed the first automatic computing engine. He invented computers but failed to build them. The first complete Babbage Engine was completed in London in 2002, 153 years after it was designed. Difference Engine No. 2, built faithfully to the original drawings, consists of 8,000 parts, weighs five tons, and measures 11 feet long. Shown above is Serial Number 2, located in Silicon Valley at the Computer History Museum in Mountain View, California.
  • The evolution of computing is approached here using Bertalanffy’s general systems theory (Bertalanffy, 1968). This theory is based on the observation of discipline isomorphisms, whereby different specialist fields discover the same abstract equation or law in different contexts, e.g. a social agreement measure that matches a biological diversity measure (Whitworth, 2006). Bertalanffy proposed a “science of sciences”, namely the study of systems in general, since sociologists study social systems, psychologists cognitive systems, computer scientists information systems, and engineers hardware systems. The isomorphisms of science are then general system rules that apply across disciplines.

    Applying general systems theory to the evolution of computing gives the computing levels shown in Figure 1.3, where a computing system can be studied as a mechanical system, a software system, a human system or a social system, by engineers, computer scientists, psychologists and sociologists respectively. Computing began at the mechanical level, added an information level (software), then a human level and finally a community level; it is an example of general system evolution.

    The Evolution of Computers: 1st, 2nd, 3rd, 4th Generation, and More to Come



    Computers in the form of personal desktop computers, laptops and tablets have become such an important part of everyday living that it can be difficult to remember a time when they did not exist. In reality, computers as they are known and used today are still relatively new. Although computing devices have technically been in use since the abacus, roughly 5,000 years ago, it is modern computers that have had the greatest and most profound effect on society. The first full-sized digital computer in history, the Mark I, was developed in 1944; it was used only for calculations and weighed five tons. Despite its size and limited ability, it was the first of many that would set off generations of computer development and growth.



  • First Generation Computers

    First generation computers bore little resemblance to computers of today, either in appearance or performance. The first generation spanned roughly 1940 to 1956, and its machines were extremely large. The inner workings of the computers at that time were unsophisticated. These early machines used magnetic drums for memory and vacuum tubes that worked as switches and amplifiers. The vacuum tubes were mainly responsible for the machines’ large size and the massive amounts of heat they released; these computers produced so much heat that they regularly overheated despite large cooling units. First generation computers were also programmed in a very basic language referred to as machine language.



  • Second Generation Computers

    The second generation of computers (from 1956 to 1963) replaced vacuum tubes with transistors, which allowed the machines to use less electricity and generate less heat. Second generation computers were also significantly faster and smaller than their predecessors. Transistor computers also introduced magnetic core memory, which they used alongside magnetic storage.



  • Third Generation Computers

    From 1964 to 1971, computers went through a significant change in terms of speed, courtesy of integrated circuits. Integrated circuits, or semiconductor chips, packed large numbers of miniature transistors onto silicon chips. This not only increased the speed of computers but also made them smaller, more powerful, and less expensive. In addition, instead of the punch cards and printouts of previous systems, keyboards and monitors now allowed people to interact with computing machines.



  • Fourth Generation Computers

    The changes with the greatest impact occurred in the years from 1971 to 2010. During this time technology developed to a point where manufacturers could place millions of transistors on a single circuit chip, an approach called monolithic integrated circuit technology. It also heralded the Intel 4004, which in 1971 became the first commercially available microprocessor, an invention that led to the dawn of the personal computer industry. By the mid-1970s, personal computers such as the Altair 8800 became available to the public as kits requiring assembly. By the late 1970s and early 1980s, assembled personal computers for home use, such as the Commodore PET, the Apple II and the first IBM PC, were making their way onto the market. Personal computers and their ability to form networks would eventually lead to the Internet in the early 1990s. The fourth generation of computers also saw the creation of even smaller computers, including laptops and hand-held devices, and the invention of the graphical user interface, or GUI. Computer memory and storage also went through major improvements, with increases in capacity and speed.



  • The Fifth Generation of Computers

    In the future, computer users can expect even faster and more advanced computer technology. Fifth generation computing has yet to be truly defined, as there are numerous paths that technology is taking toward the future of computer development; for instance, research is ongoing in nanotechnology, artificial intelligence, and quantum computing.


