Thursday, March 11, 2010

Networks

Users who need to share data have two options: they can either share one computer, possibly with several terminals, or connect their computers to the same network.

In the seventies, users had limited access to applications through passive terminals connected to a mainframe or minicomputer, where both data and processing were centralized.

In the early eighties came the vision, promoted by Microsoft and Intel, that users needed terminals with their own processing power and personal storage, i.e. personal computers, especially for office automation, including word processing and spreadsheets. Despite the reluctance of IT departments, this vision really took off with the demand for laptops and the ability to work anywhere, anytime.

In the late eighties, it became pretty obvious that these personal computers, minicomputers and mainframes needed to be interconnected in order to share information. Local area networks (LAN) developed in companies and, after a few years, a single networking technology prevailed: the TCP/IP protocol over Ethernet connections on Unshielded Twisted Pair (UTP) wires. Basically, UTP refers to the cabling system, Ethernet refers to the hardware that provides reliable connections between machines, and TCP/IP refers to the protocol that ensures the flow of data once connections are established.
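As a rough illustration of this layering, the short Python sketch below exchanges a message between a client and a server over a TCP connection on the same machine. The address, port number and echo behaviour are invented for the example; the point is that the application only deals with TCP sockets, while Ethernet and the cabling underneath remain invisible to it.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 9000   # hypothetical address and port for this example
ready = threading.Event()


def run_server():
    # Server side: listen on a TCP socket and echo back whatever it receives.
    # It never needs to know whether the underlying link is Ethernet over UTP,
    # Wi-Fi, or the loopback interface used here.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                          # the server is now accepting connections
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)           # TCP delivers the bytes reliably and in order
            conn.sendall(b"echo: " + data)


def run_client():
    ready.wait()                             # wait until the server socket is listening
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))            # the TCP connection is established here
        cli.sendall(b"hello over TCP/IP")
        print(cli.recv(1024).decode())       # prints: echo: hello over TCP/IP


if __name__ == "__main__":
    t = threading.Thread(target=run_server)
    t.start()
    run_client()
    t.join()
```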

In the early nineties, the trend continued with the development of wide area networks (WAN) as large companies with scattered offices needed to interconnect their distant local area networks. Once again, a single technology survived and all local area networks are now connected to the Internet, which also uses the TCP/IP protocol to build a network of networks.

The location of processing power and data has always fuelled a debate between IT departments, in favour of centralization and control, and users demanding a richer toolset:

  • In the mainframe architectures of the seventies, data is stored and programs execute centrally on the mainframe;
  • In the client-server architectures of the late eighties and early nineties, data remains on the mainframe, also called the “server”, but programs reside and execute on the user’s networked computer, also called the “client”. In order to bring the data closer to the users, large mainframes have been split into multiple smaller servers; this is known as “downsizing”.
  • In recent web architectures, shared data still remains centralized on a server, and programs also reside on the server but are downloaded and executed by a browser on the client whenever needed (a minimal sketch of this model follows this list). Accordingly, centralized architectures and larger servers have regained interest. As larger computers generate economies of scale, software has developed that allows multiple virtual servers called “virtual machines” to run concurrently on the same computer. Each virtual machine uses its own slice of CPU time and its own dedicated memory space.
  • Cloud computing is an on-demand model for consuming IT resources remotely, based on the Internet, web architectures and virtual machines.
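To make the web architecture above more concrete, here is a minimal Python sketch; the page content, the port number and the handler class are all invented for the illustration. The server keeps both the data and the program (the page and its embedded script), while the browser downloads the script and executes it on the client.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical page: the HTML and its embedded script are stored on the server,
# but the script runs in the user's browser once the page has been downloaded.
PAGE = b"""<!DOCTYPE html>
<html>
  <body>
    <p id="out">Loading...</p>
    <script>
      /* This part executes on the client, inside the browser. */
      document.getElementById("out").textContent =
        "Rendered on the client at " + new Date().toLocaleTimeString();
    </script>
  </body>
</html>"""


class PageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The server's only job here is to ship the page (data and program) to the client.
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(PAGE)


if __name__ == "__main__":
    # Browse to http://localhost:8080/ (the port is chosen arbitrarily for the example).
    HTTPServer(("", 8080), PageHandler).serve_forever()
```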

The general trend that emerges is the ability to consume applications, data, storage space, processor time, and more generally any type of IT resource, regardless of its technical implementation and geographic location.

It is worth mentioning that Cisco has established itself as the leading manufacturer of the networking equipment, such as switches and routers, on which these local and wide area networks are built.
