Thursday, March 11, 2010
Networks
Users who need to share data have two options: they can share one computer, possibly with several terminals, or they can connect their computers to the same network.
In the seventies, users had limited access to applications through passive terminals connected to a mainframe or minicomputer, where both data and processing logic were centralized.
In the early eighties came the vision, promoted by Microsoft and Intel, that users needed terminals with their own processing power and personal storage, i.e. personal computers, especially for office automation, including word processing and spreadsheets. Despite the reluctance of IT departments, this vision really soared with the demand for laptops and the ability to work anywhere, anytime.
In the late eighties, it became pretty obvious that these personal computers, minicomputers and mainframes needed to be interconnected in order to share information. Local area networks (LAN) developed in companies, and after a few years a single networking stack survived: the TCP/IP protocol over Ethernet connections on Unshielded Twisted Pair (UTP) wires. Basically, UTP refers to the cabling system, Ethernet refers to the hardware that ensures reliable connections, and TCP/IP refers to the protocol that ensures the flow of data once connections are established, as the short sketch below illustrates.
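To make this layering concrete, here is a minimal Python sketch (purely illustrative, not production code; the server name is just a placeholder) that opens a TCP connection and exchanges a few bytes. Notice that the program only deals with TCP/IP: whether the packets travel over Ethernet on UTP wires, over Wi-Fi or over anything else is invisible to the application.

    # Minimal sketch: an application opens a TCP/IP connection without ever
    # touching the Ethernet hardware or the UTP cabling underneath.
    import socket

    with socket.create_connection(("example.com", 80), timeout=5) as conn:
        # Send a trivial HTTP request; TCP/IP ensures the bytes arrive
        # complete and in order, whatever physical medium carries them.
        conn.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        reply = conn.recv(1024)  # read the first kilobyte of the response
        print(reply.decode("ascii", errors="replace"))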
In the early nineties, the trend continued with the development of wide area networks (WAN) as large companies with scattered offices needed to interconnect their distant local area networks. Once again, a single technology survived and all local area networks are now connected to the Internet, which also uses the TCP/IP protocol to build a network of networks.
The location of processing power and data has always fuelled a debate between IT departments, in favour of centralization and control, and users, demanding a richer toolset:
- In the mainframe architectures of the seventies, data is stored and programs execute centrally on the mainframe;
- In the client-server architectures of the late eighties and early nineties, data remains on the mainframe, also called “server”, but programs reside and execute on the user’s networked computer, also called “client”. In order to bring the data closer to the users, large mainframes have been split into multiple smaller servers. This is known as “downsizing”.
- In recent web architectures, shared data still remains centralized on a server, programs reside on the server, but are downloaded and executed by a browser on the client whenever needed. Accordingly, centralized architectures and larger servers have regained interest. As larger computers generate economies of scale, software has been developed that allows multiple virtual servers, called “virtual machines”, to run concurrently on the same computer. Each virtual machine uses its own slice of CPU time and its own dedicated memory space.
- Cloud computing is an on-demand model for consuming IT resources remotely, based on the Internet, web architectures and virtual machines.
The general trend that emerges is the ability to consume applications, data, storage space, processor time, and more generally any type of IT resource, regardless of its technical implementation and geographic location.
NB: the leading manufacturer of network equipment, such as routers and switches, is Cisco.
Wednesday, March 10, 2010
Other hardware devices
Computers come in many forms and generally the larger the computer, the more powerful it is. We used to have:
- micro-computers or personal computers which sit on a desk;
- mini-computers which are the size of a filing cabinet;
- mainframes which are the size of a room.
Nowadays, there are many more form factors available and you can get computers of almost any size, shape and even colour: desktops, towers, all-in-ones, laptops, netbooks, rack-mounted, blades, tablets and more.
Besides computers, information systems include many other types of hardware devices.
Some hardware devices are accessory to a computer and provide an interface with the user. They are called peripheral devices:
- Monitors output a video signal that humans can see;
- Keyboards and mice record user inputs;
- Printers deliver print-outs;
- Scanners and webcams capture images.
Other accessory hardware devices provide an interface with other systems:
- Network interface cards (NIC) connect computers to networks;
- Specific I/O interfaces might be used to pilot robots or gather measurements from equipment.
Then there are all kinds of devices which are architected like computers, with a CPU, RAM, persistent storage and a bus, but which are optimized, generally from a user-friendliness perspective, to accomplish a very specific set of tasks extremely well:
- Smart phones fit in a pocket and provide phone calls, personal information management (PIM), web browsing and email;
- Game consoles and media centres connect to televisions and provide outstanding graphics and sound for a great entertainment experience;
- Portable media players like iPods and Walkmans are dedicated to providing a similar but portable entertainment experience;
- GPS navigation devices store maps and connect to satellites to help you find your way by car or by foot, wherever you are and wherever you want to go;
- E-readers store and display books in digital format, so you can read them comfortably.
This list is certainly not exhaustive, as you can expect more and more devices optimized for specific tasks in specific environments in the years to come, although in the long term it is the author’s firm belief that, as smart phones become more powerful, they will displace all kinds of pocket devices. Note that smart phones have already displaced personal digital assistants (PDA) and pocket PCs.
Computers and components
A computer is a machine that has the ability to execute a set of instructions, called a program, repeatedly and extremely rapidly. In this respect, a computer without programs is completely dull.
To make this work, a computer includes several key components:
- A central processing unit (CPU), which people often describe as the brain of a computer, although the only ability of a CPU is to make calculations on series of binary digits (0 or 1). An 8-bit CPU works on series of 8 bits called bytes; a 64-bit CPU works on series of 64 bits. The higher the frequency of a CPU, the more instructions this CPU can process in a given period of time.
- The persistent memory of a computer is essentially the hard drive, although there are many other means to persist data, including optical drives, flash drives and solid state drives. Considering the large amounts of data we need to store, persistence relies on inexpensive but slow means.
- To compensate for the slow persistent memory and avoid bottlenecks as the CPU requires more data to process instructions, computer architects have added expensive but fast random access memory (RAM). Basically, the fast RAM acts as a buffer between the fast CPU and the slow hard drive. The first time the CPU requires a piece of data, it obviously has to be read from the slow hard drive, but from then on it is available in fast RAM until space needs to be freed for new data. The more often the CPU can find data in RAM, the better the overall computer performance (a toy illustration of this buffering principle follows this list).
- The eyes, ears, mouth and limbs through which a computer interacts with its environment are the input/output (I/O) devices. These I/O devices include monitors, keyboards, mice, printers, scanners, network interfaces and more.
- The nervous system which connects all the components of a computer together and ensures the flow of data between them is called the bus. The bus is implemented in the motherboard, which provides connectors to plug in the CPU, the RAM and the other components.
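The buffering role of RAM can be illustrated with a toy program. The Python sketch below is purely illustrative (every name in it is invented for the example): a small, fast dictionary stands in for RAM, a large, slow store stands in for the hard drive, and each piece of data is kept in the fast cache after the first slow read.

    # Toy illustration of the RAM-as-buffer principle; all names are made up.
    import time

    DISK = {f"block{i}": f"data{i}" for i in range(1000)}  # pretend hard drive
    cache = {}                                             # pretend RAM

    def read(block):
        if block in cache:              # fast path: data already in "RAM"
            return cache[block]
        time.sleep(0.01)                # simulate a slow disk access
        cache[block] = DISK[block]      # keep a copy in "RAM" for next time
        return cache[block]

    read("block42")   # first read: slow, fetched from the pretend disk
    read("block42")   # second read: fast, served from the pretend RAM

Real computers apply the same principle at several levels (CPU caches in front of RAM, RAM in front of the disk), which is why adding memory often does more for performance than a faster processor.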
There is a misconception that the more powerful the CPU, the better the computer, and that if your computer is not fast enough, you need a better CPU. Most of the time, from a pure hardware perspective, adding RAM is the key to improving computer performance.
Moore’s law states that the processing power of a CPU doubles every two years.
Kryder’s law states that the density of hard drives doubles annually.
Nielsen’s law states that Internet bandwidth grows by 50% annually.
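To see how quickly these rates compound, consider the back-of-the-envelope calculation below, a simple illustration applying the growth rates quoted above over a ten-year horizon.

    # Back-of-the-envelope compounding of the three laws over ten years.
    years = 10
    cpu_power = 2 ** (years / 2)   # Moore: doubles every two years -> x32
    disk_density = 2 ** years      # Kryder: doubles every year     -> x1024
    bandwidth = 1.5 ** years       # Nielsen: +50% per year         -> ~x57

    print(f"Over {years} years: CPU x{cpu_power:.0f}, "
          f"disk x{disk_density:.0f}, bandwidth x{bandwidth:.1f}")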
Accordingly, computers make tremendous progress every year, and you are often better off buying a cheaper computer with the intention of replacing it in a couple of years, rather than buying a more expensive one in the hope of keeping it for the longer term.
NB: leading computer manufacturers include HP, Dell, Acer, Lenovo and Apple.
Tuesday, March 09, 2010
25 rules: table of contents
- IT is nothing more than a tool
- Turn IT into a competitive advantage
- Keep it stupid and simple
- If it is not broken, do not fix it
- Have well-defined requirements
- Plan for double-time and budget
- Do not reinvent the wheel
- Do not develop custom or plan to do it twice
- Avoid being an early adopter
- Beware the salesman
- RFx your purchases
- Free does not mean good
- Try before you buy
- Buy from well-established brands, but do not buy a Rolls-Royce
- Good enough is good enough, aka the Pareto rule
- Use an ROI model with KPIs
- Hire the right people and make them do the right things
- Break down your projects into smaller projects and proceed iteratively
- Manage change
- Leverage your power users
- Outsourcing versus insourcing
- Best of breeds versus integrated
- Manage the magic triangle with a focus on risks
- Even if you do everything by the book, it always goes wrong
- When it goes wrong, prioritize
25 definitions: table of contents
- Computers and components
- Other hardware devices
- Networks
- Operating systems
- Software
- User interface
- Office automation
- Databases
- ERP (back-office)
- CRM (front-office)
- SRM - SCM
- Document and content management
- Other applications
- Internet
- Web and email
- Rich internet applications
- Hosted versus on site
- Security
- Viruses
- Backup/restore
- Programming
- Project management
- Maintenance and support
- Training
- People and roles
Friday, August 24, 2007
Introduction
IT explained to C-level executives
Welcome to "25 definitions and 25 rules to demystify information technologies".
People, including executives, always complain about the complexity of information technologies (IT), but a large part of that complexity lies in the vocabulary. Vocabulary creates a gap between the people who master it and juggle with acronyms and those who have not taken the time to learn it. On the one hand, some technophiles take pleasure in drowning others in technical trivia to make themselves look knowledgeable and indispensable. On the other hand, every profession has its own vocabulary, and you need a set of common terms to communicate effectively. Accounting terms like COGS, DSO, EBIT and PE ratio will probably sound like Greek to any IT engineer, but not to a CEO or a CFO. So you need to learn some IT vocabulary if you want to play an active role in IT. We have provided here 25 definition topics which concisely present the minimum knowledge a senior executive should have.
Understanding the vocabulary is not enough to make informed decisions. You need best practices. A best practice is a technique or methodology that, through experience and research, has proven to reliably lead to a desired result. Keeping records up to date at all times is a best practice lauded by most accountants. You need such best practices to serve as guidance as you participate in IT projects. We have included here 25 golden rules of IT which should help you tell the difference between good and bad IT engineers, and also between good and bad projects.
The goal of this collection of definitions and rules is to bring non-IT senior executives to a sufficient level of IT knowledge to bridge the gap with IT specialists and help them communicate more effectively. These definitions and rules are concise because senior executives do not have time.
This is provided as a blog to be interactive, so please post your comments, suggestions and your own experiences.