THAMIRA INFORMATION TECHNOLOGY EDUCATION AND TRAINING INSTITUTE
1 OFFICE OFFLINE
COMPUTER A Computer is a programmable machine that receives input, stores and manipulates
data, and provides output in a useful format. Although mechanical examples of
computers have existed through much of recorded human history, the first electronic
computers were developed in the mid-20th century (1940–1945). These were the size of a large room, consuming as much power as several hundred modern personal
computers (PCs). Modern computers based on integrated circuits are millions to
billions of times more capable than the early machines, and occupy a fraction of the
space. Simple computers are small enough to fit into small pocket devices, and can
be powered by a small battery. Personal computers in their various forms are icons of
the Information Age and are what most people think of as "computers". However, the
embedded computers found in many devices from MP3 players to fighter aircraft and
from toys to industrial robots are the most numerous.
The ability to store and execute lists of instructions called programs makes
computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a
certain minimum capability is, in principle, capable of performing the same tasks that
any other computer can perform. Therefore computers ranging from a netbook to a
supercomputer are all able to perform the same computational tasks, given enough
time and storage capacity.
PERSONAL COMPUTER A Personal Computer (PC) is any general-purpose computer whose size, capabilities,
and original sales price make it useful for individuals, and which is intended to be
operated directly by an end user, with no intervening computer operator. This is in
contrast to the batch processing or time-sharing models which allowed large
expensive mainframe systems to be used by many people, usually at the same time,
or large data processing systems which required a full-time staff to operate
efficiently.
A Personal Computer may be a desktop computer, a laptop, tablet PC or a handheld
PC (also called palmtop). The most common microprocessors in personal computers
are x86-compatible CPUs. Software applications for personal computers include
word processing, spreadsheets, databases, Web browsers and e-mail clients, games,
and myriad personal productivity and special-purpose software. Modern personal
computers often have high-speed or dial-up connections to the Internet, allowing
access to the World Wide Web and a wide range of other resources.
A PC may be used at home, or may be found in an office. Personal computers can be
connected to a local area network (LAN) either by a cable or wirelessly.
While early PC owners usually had to write their own programs to do anything useful
with the machines, today's users have access to a wide range of commercial and non-commercial software which is provided in ready-to-run form. Since the 1980s,
Microsoft and Intel have dominated much of the personal computer market with the
Wintel platform.
EMBEDDED SYSTEM An Embedded System is a computer system designed to perform one or a few
dedicated functions often with real-time computing constraints. It is embedded as
part of a complete device often including hardware and mechanical parts. By
contrast, a general-purpose computer, such as a personal computer (PC), is designed
to be flexible and to meet a wide range of end-user needs. Embedded systems control
many devices in common use today.
Embedded systems are controlled by one or more main processing cores, typically either a microcontroller or a digital signal processor (DSP). The key
characteristic, however, is being dedicated to a particular task, which may
require very powerful processors. For example, air traffic control systems may
usefully be viewed as embedded, even though they involve mainframe computers
and dedicated regional and national networks between airports and radar sites. Each
Radar probably includes one or more embedded systems of its own. Embedded
systems range from portable devices such as digital watches and MP3 players, to
large stationary installations like traffic lights, factory controllers, or the systems
controlling nuclear power plants.
SUPER COMPUTER A Super Computer is a computer that is at the frontline of current processing
capacity, particularly speed of calculation. Supercomputers were introduced in the
1960s and were designed primarily by Seymour Cray at Control Data Corporation
(CDC), which led the market into the 1970s until Cray left to form his own company,
Cray Research. He then took over the supercomputer market with his new designs,
holding the top spot in supercomputing for five years (1985–1990).
In the 1980s a large number of smaller competitors entered the market, in parallel to
the creation of the minicomputer market a decade earlier, but many of these
disappeared in the mid-1990s "supercomputer market crash".
Today, supercomputers are typically one-of-a-kind custom designs produced by
"traditional" companies such as Cray, IBM and Hewlett-Packard, who had purchased
many of the 1980s companies to gain their experience. As of July 2009, the Cray
Jaguar is the fastest supercomputer in the world.
MAINFRAME COMPUTER Mainframes are powerful computers used mainly by large organizations for critical
applications, typically bulk data processing such as census, industry and consumer
statistics, enterprise resource planning, and financial transaction processing.
The term originally referred to the large cabinets that housed the central processing
unit and main memory of early computers. Later the term was used to distinguish
high-end commercial machines from less powerful units.
Most large-scale computer system architectures were firmly established in the 1960s
and most large computers were based on architecture established during that era up
until the advent of Web servers in the 1990s. Interestingly, the first Web server
running anywhere outside Switzerland ran on an IBM mainframe at Stanford
University as early as 1990.
There were several minicomputer operating systems and architectures that arose in
the 1970s and 1980s, but minicomputers are generally not considered mainframes.
HISTORY OF COMPUTER In 1837, Charles Babbage was the first to conceptualize and design a fully
programmable mechanical computer, his analytical engine. Limited finances and
Babbage's inability to resist tinkering with the design meant that the device was
never completed.
Alan Turing is widely regarded to be the father of modern computer science. In 1936
Turing provided an influential formalization of the concept of the algorithm and
computation with the Turing machine.
George Stibitz is internationally recognized as a father of the modern digital
computer. While working at Bell Labs in November 1937, Stibitz invented and built
a relay-based calculator he dubbed the "Model K" (for "kitchen table", on which he
had assembled it), which was the first to use binary circuits to perform an arithmetic
operation.
STORED PROGRAM ARCHITECTURE The defining feature of modern computers which distinguishes them from all other
machines is that they can be programmed. That is to say that a list of instructions (the
program) can be given to the computer and it will store them and carry them out at
some time in the future.
PROGRAMS A 1970s punched card containing one line from a FORTRAN program. The card reads: "Z(1) = Y + W(1)" and is labeled "PROJ039" for identification purposes.
In practical terms, a computer program may run from just a few instructions to many
millions of instructions, as in a program for a word processor or a web browser. A
typical modern computer can execute billions of instructions per second (gigahertz or
GHz) and rarely makes a mistake over many years of operation. Large computer
programs consisting of several million instructions may take teams of programmers years to write and, due to the complexity of the task, almost certainly contain errors.
Errors in computer programs are called "bugs". Bugs may be benign and not affect
the usefulness of the program, or have only subtle effects. But in some cases they
may cause the program to "hang" (become unresponsive to input such as mouse clicks or keystrokes) or to completely fail or "crash". Otherwise benign bugs may
sometimes be harnessed for malicious intent by an unscrupulous user writing an
"exploit": code designed to take advantage of a bug and disrupt a program's proper execution. Bugs are usually not the fault of the computer. Since computers merely
execute the instructions they are given, bugs are nearly always the result of
programmer error or an oversight made in the program's design.
Example
A traffic light showing red
Suppose a computer is being employed to operate a traffic light at an intersection
between two streets. The computer has the following five basic instructions.
1. ON(Streetname, Color) Turns the light on Streetname with a specified Color on.
2. OFF(Streetname, Color) Turns the light on Streetname with a specified Color off.
3. WAIT(Seconds) Waits a specified number of seconds.
4. START Starts the program.
5. REPEAT Tells the computer to repeat a specified part of the program in a loop.
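A minimal sketch of how these five instructions might compose into a program, written here in Python for illustration; the street names, colors, and 60-second timings are assumptions, and WAIT is recorded in the log rather than actually slept so the run finishes instantly:

```python
def run_traffic_light(cycles):
    """Simulate the five-instruction traffic light program. The street
    names ("Broadway", "Main") and timings are illustrative assumptions."""
    log = []
    def ON(street, color):  log.append(f"ON {street} {color}")
    def OFF(street, color): log.append(f"OFF {street} {color}")
    def WAIT(seconds):      log.append(f"WAIT {seconds}")  # recorded, not slept

    # START
    for _ in range(cycles):                       # REPEAT this block in a loop
        ON("Broadway", "GREEN"); ON("Main", "RED")
        WAIT(60)                                  # let Broadway traffic flow
        OFF("Broadway", "GREEN"); ON("Broadway", "RED")
        OFF("Main", "RED"); ON("Main", "GREEN")
        WAIT(60)                                  # let Main traffic flow
        OFF("Main", "GREEN"); OFF("Broadway", "RED")
    return log
```

Each pass through the loop produces ten log entries, and the REPEAT instruction simply runs the same sequence again.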
FUNCTION A general purpose computer has four main components: the arithmetic logic unit
(ALU), the control unit, the memory, and the input and output devices (collectively
termed I/O). These parts are interconnected by buses, often made of groups of wires.
Inside each of these parts are thousands to trillions of small electrical circuits which
can be turned off or on by means of an electronic switch. Each circuit represents a bit
(binary digit) of information so that when the circuit is on it represents a "1", and
when off it represents a "0" (in positive logic representation). The circuits are
arranged in logic gates so that one or more of the circuits may control the state of one
or more of the other circuits.
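The bit-and-gate idea can be illustrated with a classic half adder, sketched here in Python; the gate functions stand in for physical circuits and the example is an illustration, not from the original text:

```python
# One circuit holds one bit: 1 when on, 0 when off (positive logic).
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def half_adder(a, b):
    """Add two bits using only gates: an XOR built from AND/OR/NOT
    gives the sum bit, and a single AND gate gives the carry bit."""
    sum_bit = AND(OR(a, b), NOT(AND(a, b)))  # (a OR b) AND NOT(a AND b) == a XOR b
    carry = AND(a, b)
    return sum_bit, carry
```

Chaining such adders bit by bit is how the arrangement of gates described above performs arithmetic on whole binary numbers.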
Arithmetic/logic unit (ALU)
An Arithmetic Logic Unit (ALU) is a digital circuit that performs arithmetic and
logical operations. The ALU is a fundamental building block of the central
processing unit (CPU) of a computer, and even the simplest microprocessors contain
one for purposes such as maintaining timers. The processors found inside modern
CPUs and graphics processing units (GPUs) accommodate very powerful and very
complex ALUs; a single component may contain a number of ALUs.
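A toy sketch of the dispatch an ALU performs, in Python; the opcode names here are illustrative and do not belong to any real instruction set:

```python
def alu(op, a, b):
    """Toy ALU: the opcode plays the role of the CPU's control lines,
    selecting which arithmetic or logical operation the circuit performs."""
    operations = {
        "ADD": lambda: a + b,        # arithmetic operations
        "SUB": lambda: a - b,
        "AND": lambda: a & b,        # bitwise logical operations
        "OR":  lambda: a | b,
        "LT":  lambda: int(a < b),   # comparison, also a logical operation
    }
    return operations[op]()
```

A hardware ALU computes all of these in parallel circuitry and selects one result; the dictionary lookup here is only a software stand-in for that selection.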
Control unit
The control unit (often called a control system or central controller) manages the
computer's various components; it reads and interprets (decodes) the program
instructions, transforming them into a series of control signals which activate other
parts of the computer. Control systems in advanced computers may change the order
of some instructions so as to improve performance.
A key component common to all CPUs is the program counter, a special memory cell
(a register) that keeps track of which location in memory the next instruction is to be
read from.
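The program counter's role can be sketched as a minimal fetch-decode-execute loop; the four-opcode instruction set below is invented for illustration:

```python
def run(program):
    """Minimal fetch-decode-execute loop. `pc` is the program counter:
    a register holding the location of the next instruction to read."""
    pc, acc = 0, 0                     # program counter and an accumulator
    while True:
        opcode, arg = program[pc]      # fetch the instruction pc points at
        pc += 1                        # by default, advance to the next one
        if opcode == "LOAD":
            acc = arg
        elif opcode == "ADD":
            acc += arg
        elif opcode == "JUMPZ":        # control flow rewrites the counter
            if acc == 0:
                pc = arg
        elif opcode == "HALT":
            return acc
```

Note how an advanced control unit's instruction reordering would be invisible here: only the final value of the accumulator matters, not the micro-level order of work.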
Memory
Computer data storage, often called storage or memory, refers to computer
components, devices, and recording media that retain digital data used for computing
for some interval of time. Computer data storage provides one of the core functions
of the modern computer, that of information retention. It is one of the fundamental
components of all modern computers, and coupled with a central processing unit
(CPU, a processor), implements the basic computer model.
Input/output (I/O)
Hard disk drives are common storage devices used with computers.
I/O is the means by which a computer exchanges information with the outside world.
Devices that provide input or output to the computer are called peripherals. On a
typical personal computer, peripherals include input devices like the keyboard and
mouse, and output devices such as the display and printer. Hard disk drives, floppy
disk drives and optical disc drives serve as both input and output devices. Computer
networking is another form of I/O.
Often, I/O devices are complex computers in their own right with their own CPU and
memory. A graphics processing unit might contain fifty or more tiny computers that
perform the calculations necessary to display 3D graphics. Modern desktop
computers contain many smaller computers that assist the main CPU in performing
I/O.
MULTITASKING In computing, Multitasking is a method by which multiple tasks, also known as
processes, share common processing resources such as a CPU. In the case of a
computer with a single CPU, only one task is said to be running at any point in time,
meaning that the CPU is actively executing instructions for that task. Multitasking
solves the problem by scheduling which task may be the one running at any given
time, and when another waiting task gets a turn. The act of reassigning a CPU from
one task to another one is called a context switch. When context switches occur
frequently enough the illusion of parallelism is achieved. Even on computers with
more than one CPU (called multiprocessor machines), multitasking allows many
more tasks to be run than there are CPUs.
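Context switching and round-robin scheduling can be sketched with Python generators, where each yield marks a point at which the scheduler reassigns the single "CPU"; the task names are illustrative:

```python
from collections import deque

def task(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"          # each yield is a context-switch point

def round_robin(tasks):
    """Cooperative round-robin multitasking on a single 'CPU': run the
    front task until it yields, then context-switch it to the back of
    the ready queue until every task has finished."""
    ready = deque(tasks)
    trace = []
    while ready:
        current = ready.popleft()
        try:
            trace.append(next(current))   # the CPU executes this task
            ready.append(current)         # context switch: requeue it
        except StopIteration:
            pass                          # the task has finished
    return trace
```

Interleaving the tasks' steps this way is exactly the illusion of parallelism described above: no two steps ever run at the same instant, yet all tasks make progress.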
MULTIPROCESSING Multiprocessing is the use of two or more central processing units (CPUs) within a
single computer system. The term also refers to the ability of a system to support
more than one processor and/or the ability to allocate tasks between them. There are
many variations on this basic theme, and the definition of multiprocessing can vary
with context, mostly as a function of how CPUs are defined. Multiprocessing
sometimes refers to the execution of multiple concurrent software processes in a
system as opposed to a single process at any one instant. However, the terms
multitasking or multiprogramming are more appropriate to describe this concept,
which is implemented mostly in software, whereas multiprocessing is more
appropriate to describe the use of multiple hardware CPUs. A system can be both
multiprocessing and multiprogramming, only one of the two, or neither of the two.
COMPUTER NETWORKING Computer networking is the engineering discipline concerned with the
communication between computer systems or devices. A computer network is any set
of computers or devices connected to each other with the ability to exchange data.
Computer networking is sometimes considered a sub-discipline of
telecommunications, computer science, information technology and/or computer
engineering since it relies heavily upon the theoretical and practical application of
these scientific and engineering disciplines. The three types of networks are: the
Internet, the intranet, and the extranet.
Examples of different network methods are:
Local area network (LAN), which is usually a small network constrained to a small geographic area. An example of a LAN would be a computer network within a building.
Metropolitan area network (MAN), which is used for a medium-sized area, for example a city or a state.
Wide area network (WAN), which is usually a larger network that covers a large geographic area.
Wireless LANs and WANs (WLAN and WWAN), which are the wireless equivalents of the LAN and WAN.
HARDWARE A personal computer is made up of multiple physical components of computer
hardware, upon which can be installed an operating system and a multitude of
software to perform the operator's desired functions. Though a PC comes in many
different forms, a typical personal computer consists of a case or chassis in a tower
shape (desktop) and the following parts:
1. Monitor
2. Motherboard
3. CPU
4. RAM (memory)
5. Expansion card
6. Power supply
7. Optical disc drives
8. Hard disk
9. Keyboard
10. Mouse
MONITOR A monitor or display (sometimes called a visual display unit) is an electronic visual
display for computers. The monitor comprises the display device, circuitry, and an
enclosure. The display device in modern monitors is typically a thin film transistor
liquid crystal display (TFT-LCD), while older monitors use a cathode ray tube
(CRT). The size of a display is usually given as the distance between two opposite
screen corners. One problem with this method is that it does not distinguish between
the aspect ratios of monitors with identical diagonal sizes, despite the fact that the
area of a given diagonal span decreases as it becomes less square. For example, a 4:3
21-inch (53.3 cm) monitor has an area of about 211 sq in (1,361 cm²), while a 16:9
21-inch widescreen has about 188 sq in (1,213 cm²).
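The area figures above follow from the Pythagorean theorem; a small Python check (the function name is assumed for illustration):

```python
import math

def screen_area(diagonal, ratio_w, ratio_h):
    """Area of a screen from its diagonal and aspect ratio. By Pythagoras,
    a w:h rectangle's diagonal is proportional to sqrt(w**2 + h**2),
    which fixes the actual width and height for a given diagonal."""
    scale = diagonal / math.hypot(ratio_w, ratio_h)
    return (ratio_w * scale) * (ratio_h * scale)
```

Here screen_area(21, 4, 3) gives about 211.7 sq in and screen_area(21, 16, 9) about 188.4 sq in, matching the figures quoted above.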
MOTHERBOARD The motherboard is the main component inside the case. It is a large rectangular
board with integrated circuitry that connects the rest of the parts of the computer
including the CPU, the RAM, the disk drives (CD, DVD, hard disk, or any others) as
well as any peripherals connected via the ports or the expansion slots.
CENTRAL PROCESSING UNIT The Central Processing Unit (CPU) is the portion of a computer system that carries
out the instructions of a computer program, and is the primary element carrying out
the computer's functions. This term has been in use in the computer industry at least
since the early 1960s. The form, design and implementation of CPUs have changed
dramatically since the earliest examples, but their fundamental operation remains
much the same.
RANDOM-ACCESS MEMORY Random-access memory (usually known by its acronym, RAM) is a form of
computer data storage. Today, it takes the form of integrated circuits that allow stored
data to be accessed in any order (i.e., at random). The word random thus refers to the
fact that any piece of data can be returned in a constant time, regardless of its
physical location and whether or not it is related to the previous piece of data.
EXPANSION CARD The Expansion Card (also expansion board, adapter card or accessory card) in
computing is a printed circuit board that can be inserted into an expansion slot of a
computer motherboard to add additional functionality to a computer system.
POWER SUPPLY A power supply is a source of electrical power. A device or system that
supplies electrical or other types of energy to an output load or group of loads is
called a power supply unit or PSU. The term is most commonly applied to electrical
energy supplies, less often to mechanical ones, and rarely to others.
OPTICAL DISC DRIVE An Optical Disc Drive is a disk drive that uses laser light or electromagnetic waves
near the light spectrum as part of the process of reading or writing data to or from
optical discs. Some drives can only read from discs, but recent drives are commonly
both readers and recorders. Recorders are sometimes called burners or writers.
Compact discs, DVDs, HD DVDs and Blu-ray discs are common types of optical
media which can be read and recorded by such drives.
HARD DISK DRIVE A Hard Disk Drive (HDD) is a non-volatile storage device that stores digitally
encoded data on rapidly rotating rigid (i.e. hard) platters with magnetic surfaces.
Strictly speaking, "drive" refers to the motorized mechanical aspect that is distinct
from its medium, such as a tape drive and its tape, or a floppy disk drive and its
floppy disk. Early HDDs had removable media; however, an HDD today is typically
a sealed unit (except for a filtered vent hole to equalize air pressure) with fixed
media.
KEYBOARD A Keyboard is an input device, partially modeled after the typewriter keyboard,
which uses an arrangement of buttons or keys, to act as mechanical levers or
electronic switches. A keyboard typically has characters engraved or printed on the
keys and each press of a key typically corresponds to a single written symbol.
However, to produce some symbols requires pressing and holding several keys
simultaneously or in sequence. Most keyboard keys produce letters, numbers or
signs; other keys or simultaneous key presses can produce actions.
MOUSE A Mouse (plural mice, mouses, or mouse devices) is a pointing device that functions
by detecting two-dimensional motion relative to its supporting surface. Physically, a
mouse consists of an object held under one of the user's hands, with one or more
buttons. It sometimes features other elements, such as "wheels", which allow the user
to perform various system-dependent operations, or extra buttons or features can add
more control or dimensional input. The mouse's motion typically translates into the
motion of a cursor on a display, which allows for fine control of a Graphical User
Interface.
OPERATING SYSTEM
An operating system (OS) is software, consisting of programs and data, that runs on
computers and manages the computer hardware and provides common services for
efficient execution of various application software.
For hardware functions such as input and output and memory allocation, the
operating system acts as an intermediary between application programs and the
computer hardware; the application code is usually executed directly by the
hardware, but it frequently calls the OS or is interrupted by it. Operating systems
are found on almost any device that contains a computer, from cellular phones and video game consoles to supercomputers and web servers.
Examples of popular modern operating systems for personal computers are Microsoft
Windows, Mac OS X, and GNU/Linux.
Examples of operating systems
Microsoft Windows
Microsoft Windows is a family of proprietary operating systems most commonly
used on personal computers. It is the most common family of operating systems for
the personal computer, with about 90% of the market share. Currently, the most
widely used version of the Windows family is Windows XP, released on October 25,
2001. The newest version is Windows 7 for personal computers and Windows Server
2008 R2 for servers.
Microsoft Windows originated in 1981 as an add-on to the older MS-DOS operating
system for the IBM PC. First publicly released in 1985, Windows came to dominate
the business world of personal computers, and went on to set a number of industry
standards and commonplace applications. Beginning with Windows XP, all modern
versions are based on the Windows NT kernel. Current versions of Windows run on
IA-32 and x86-64 processors, although older versions sometimes supported other
architectures.
Windows is also used on servers, supporting applications such as web servers and
database servers. In recent years, Microsoft has spent significant marketing and
research & development money to demonstrate that Windows is capable of running
any enterprise application, which has resulted in consistent price/performance
records (see the TPC) and significant acceptance in the enterprise market. However,
its usage in servers is not as widespread as personal computers, and here Windows
actively competes against Linux and BSD for market share, while still capturing a
steady majority by some accounts.
Unix and Unix-like operating systems
Ken Thompson wrote B, mainly based on BCPL, which he used to write Unix, based
on his experience in the MULTICS project. B was replaced by C, and Unix
developed into a large, complex family of inter-related operating systems which have
been influential in every modern operating system. The Unix-like
family is a diverse group of operating systems, with several major sub-categories
including System V, BSD, and GNU/Linux. The name "UNIX" is a trademark of The
Open Group which licenses it for use with any operating system that has been shown
to conform to their definitions. "Unix-like" is commonly used to refer to the large set
of operating systems which resemble the original Unix.
Unix-like systems run on a wide variety of machine architectures. They are used
heavily for servers in business, as well as workstations in academic and engineering
environments. Free Unix variants, such as GNU/Linux and BSD, are popular in these
areas.
Some Unix variants like HP's HP-UX and IBM's AIX are designed to run only on
that vendor's hardware. Others, such as Solaris, can run on multiple types of
hardware, including x86 servers and PCs. Apple's Mac OS X, a hybrid kernel-based
BSD variant derived from NeXTSTEP, Mach, and FreeBSD, has replaced Apple's
earlier (non-Unix) Mac OS.
Unix interoperability was sought by establishing the POSIX standard. The POSIX
standard can be applied to any operating system, although it was originally created
for various Unix variants.
BSD and its descendants
A subgroup of the Unix family is the Berkeley Software Distribution family, which
includes FreeBSD, NetBSD, and OpenBSD. These operating systems are most
commonly found on webservers, although they can also function as a personal
computer OS. The Internet owes much of its existence to BSD, as many of the
protocols now commonly used by computers to connect, send and receive data over a
network were widely implemented and refined in BSD. The world wide web was
also first demonstrated on a number of computers running an OS based on BSD
called NextStep.
BSD has its roots in Unix. In 1974, University of California, Berkeley installed its
first Unix system. Over time, students and staff in the computer science department
there began adding new programs to make things easier, such as text editors. When
Berkeley received new VAX computers in 1978 with Unix installed, the school's
undergraduates modified Unix even more in order to take advantage of the
computer's hardware possibilities. The Defense Advanced Research Projects Agency
of the US Department of Defense took interest, and decided to fund the project.
Many schools, corporations, and government organizations took notice and started to
use Berkeley's version of Unix instead of the official one distributed by AT&T. Steve
Jobs, upon leaving Apple Inc. in 1985, formed NeXT Inc., a company that
manufactured high-end computers running on a variation of BSD called NeXTSTEP.
One of these computers was used by Tim Berners-Lee as the first webserver to create
the World Wide Web.
Developers like Keith Bostic encouraged the project to replace any non-free code
that originated with Bell Labs. Once this was done, however, AT&T sued.
Eventually, after two years of legal disputes, the BSD project came out ahead and
spawned a number of free derivatives, such as FreeBSD and NetBSD. During this two-year wait, GNU and Linux appeared.
Mac OS X
Mac OS X is a line of partially proprietary graphical operating systems developed,
marketed, and sold by Apple Inc., the latest of which is pre-loaded on all currently
shipping Macintosh computers. Mac OS X is the successor to the original Mac OS,
which had been Apple's primary operating system since 1984. Unlike its predecessor,
Mac OS X is a UNIX operating system built on technology that had been developed
at NeXT through the second half of the 1980s and up until Apple purchased the
company in early 1997.
The operating system was first released in 1999 as Mac OS X Server 1.0, with a
desktop-oriented version (Mac OS X v10.0) following in March 2001. Since then, six
more distinct "client" and "server" editions of Mac OS X have been released, the
most recent being Mac OS X v10.6, which was first made available on August 28,
2009. Releases of Mac OS X are named after big cats; the current version of Mac OS
X is "Snow Leopard".
The server edition, Mac OS X Server, is architecturally identical to its desktop
counterpart but usually runs on Apple's line of Macintosh server hardware. Mac OS
X Server includes work group management and administration software tools that
provide simplified access to key network services, including a mail transfer agent, a
Samba server, an LDAP server, a domain name server, and others.
Plan 9
Ken Thompson, Dennis Ritchie and Douglas McIlroy at Bell Labs designed and
developed the C programming language to build the operating system Unix.
Programmers at Bell Labs went on to develop Plan 9 and Inferno, which were
engineered for modern distributed environments. Plan 9 was designed from the start
to be a networked operating system, and had graphics built-in, unlike Unix, which
added these features to the design later. It is currently released under the Lucent
Public License. Inferno was sold to Vita Nuova Holdings and has been released
under a GPL/MIT license.
Linux and GNU
Linux is the generic name for a UNIX-like operating system that can be used on a
wide range of devices from supercomputers to wristwatches. The Linux kernel is
released under an open source license, so anyone can read and modify its code. It has
been modified to run on a large variety of electronics. Although estimates suggest it
is used on only 0.5-2% of all personal computers, it has been widely adopted for use
in servers and embedded systems (such as cell phones). Linux has superseded Unix
in most places, and is used on the 10 most powerful supercomputers in the world.
The GNU project is a mass collaboration of programmers who seek to create a
completely free and open operating system that is similar to Unix but with
completely original code. It was started in 1983 by Richard Stallman, and is
responsible for many of the parts of most Linux variants. For this reason, Linux is
often called GNU/Linux. Thousands of pieces of software for virtually every
operating system are licensed under the GNU General Public License. Meanwhile,
the Linux kernel began as a side project of Linus Torvalds, a university student from
Finland. In 1991, Torvalds began work on it, and posted information about his
project on a newsgroup for computer students and programmers. He received a wave
of support and volunteers who ended up creating a full-fledged kernel. Programmers
from GNU took notice, and members of both projects worked to integrate the
finished GNU parts into the Linux kernel in order to create a full-fledged operating
system.
Google Chrome OS
Chrome is an operating system based on the Linux kernel and designed by Google.
Chrome targets computer users who spend most of their time on the Internet; it is technically only a web browser with no other applications, and relies on Internet
applications used in the web browser to accomplish tasks such as word processing
and media viewing.
Other
Older operating systems which are still used in niche markets include OS/2 from
IBM and Microsoft; Mac OS, the non-Unix precursor to Apple's Mac OS X; BeOS;
and XTS-300. Some, most notably Haiku, RISC OS, MorphOS, AmigaOS 4 and
FreeMint continue to be developed as minority platforms for enthusiast communities
and specialist applications. OpenVMS, formerly from DEC, is still under active
development by Hewlett-Packard. Yet other operating systems are used almost
exclusively in academia, for operating systems education or to do research on
operating system concepts. A typical example of a system that fulfills both roles is
MINIX; Singularity, for example, is used purely for research.
Components
The components of an operating system all exist in order to make the different parts
of a computer work together. All software, from financial databases to film editors,
needs to go through the operating system in order to use any of the hardware, whether
it be something as simple as a mouse or keyboard or as complex as an Internet
connection.
The user interface
[Figure: an example of the command line. Each command is typed out after the
'prompt', and its output appears below, working its way down the screen; the
current command prompt is at the bottom.]
[Figure: an example of a graphical user interface. Programs take the form of
images on the screen; files, folders, and applications take the form of icons
and symbols, and a mouse is used to navigate the computer.]
Every computer that receives some sort of human input needs a user
interface, which allows a person to interact with the computer. While devices like
keyboards, mice and touchscreens make up the hardware end of this task, the user
interface makes up the software for it. The two most common forms of a user
interface have historically been the Command-line interface, where computer
commands are typed out line-by-line, and the Graphical user interface, where a visual
environment (most commonly with windows, buttons, and icons) is present.
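The command-line model described above can be sketched as a simple read-eval loop. This is an illustrative toy, not a real shell; the prompt string and the `echo` built-in are invented for the example.

```python
import shlex

PROMPT = "$ "   # hypothetical prompt string

def run_command(line):
    """Parse one typed line and return the command's output."""
    tokens = shlex.split(line)      # split the line, respecting quotes
    if not tokens:
        return ""
    cmd, args = tokens[0], tokens[1:]
    if cmd == "echo":               # toy built-in: repeat its arguments
        return " ".join(args)
    return f"{cmd}: command not found"

# Each command appears after the prompt; its output works its way down.
for line in ['echo hello "big world"', "frobnicate"]:
    print(PROMPT + line)
    print(run_command(line))
```

A real shell would additionally search the filesystem for programs and ask the kernel to execute them; here everything stays inside one process for clarity.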
Graphical user interfaces
Most modern computer systems support graphical user interfaces (GUIs), and
often include them. In some computer systems, such as the original implementations
of Microsoft Windows and the Mac OS, the GUI is integrated into the kernel.
While technically a graphical user interface is not an operating system service,
incorporating support for one into the operating system kernel can allow the GUI to
be more responsive by reducing the number of context switches required for the GUI
to perform its output functions. Other operating systems are modular, separating the
graphics subsystem from the kernel and the operating system. In the 1980s, UNIX,
VMS and many others were built this way. GNU/Linux
and Mac OS X are also built this way. Modern releases of Microsoft Windows such
as Windows Vista implement a graphics subsystem that is mostly in user space;
however, the graphics drawing routines of versions between Windows NT 4.0 and
Windows Server 2003 exist mostly in kernel space. Windows 9x had very little
distinction between the interface and the kernel.
Many computer operating systems allow the user to install or create any user
interface they desire. The X Window System in conjunction with GNOME or KDE is
a commonly found setup on most Unix and Unix-like (BSD, GNU/Linux, Solaris)
systems. A number of Windows shell replacements have been released for Microsoft
Windows, which offer alternatives to the included Windows shell, but the shell itself
cannot be separated from Windows.
Numerous Unix-based GUIs have existed over time, most derived from X11.
Competition among the various vendors of Unix (HP, IBM, Sun) led to much
fragmentation, though 1990s efforts to standardize on COSE and CDE failed for
various reasons, and these were eventually eclipsed by the widespread adoption of
GNOME and KDE. Prior to free software-based toolkits and desktop environments,
Motif was the prevalent toolkit/desktop combination (and was the basis upon which
CDE was developed).
Graphical user interfaces evolve over time. For example, Windows has modified its
user interface almost every time a new major version of Windows is released, and the
Mac OS GUI changed dramatically with the introduction of Mac OS X in 1999.[14]
The kernel
With the aid of the firmware and device drivers, the operating system provides the
most basic level of control over all of the computer's hardware devices. It manages
memory access for programs in the RAM, it determines which programs get access
to which hardware resources, it sets up or resets the CPU's operating states for
optimal operation at all times, and it organizes the data for long-term non-volatile
storage with file systems on such media as disks, tapes, flash memory, etc.
Program execution
The operating system acts as an interface between an application and the hardware.
The user interacts with the hardware from "the other side". The operating system is a
set of services which simplifies development of applications. Executing a program
involves the creation of a process by the operating system. The kernel creates a
process by assigning memory and other resources, establishing a priority for the
process (in multi-tasking systems), loading program code into memory, and
executing the program. The program then interacts with the user and/or other devices
and performs its intended function.
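The steps above (create a process, assign it resources, load the program code, execute it) can be observed from user space by asking the operating system to run a child program. A minimal sketch using Python's standard `subprocess` module:

```python
import subprocess, sys

# Ask the OS to create a new process: the kernel allocates memory and other
# resources, loads the interpreter's code, and begins execution.
result = subprocess.run(
    [sys.executable, "-c", "print(6 * 7)"],  # the program to run
    capture_output=True, text=True,
)
print(result.stdout.strip())  # output produced by the child: 42
print(result.returncode)      # exit status reported back through the kernel: 0
```

The parent only sees the interface the operating system exposes; the details of memory assignment and scheduling happen inside the kernel.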
Interrupts
Interrupts are central to operating systems, as they provide an efficient way for the
operating system to interact with and react to its environment. The alternative,
having the operating system "watch" the various sources of input for events that
require action (polling), can be found in older systems with very small stacks
(50 or 60 bytes) but is unusual in modern systems with fairly large stacks.
Interrupt-based
programming is directly supported by most modern CPUs. Interrupts provide a
computer with a way of automatically saving local register contexts, and running
specific code in response to events. Even very basic computers support hardware
interrupts, and allow the programmer to specify code which may be run when that
event takes place.
When an interrupt is received, the computer's hardware automatically suspends
whatever program is currently running, saves its status, and runs computer code
previously associated with the interrupt; this is analogous to placing a bookmark in a
book in response to a phone call. In modern operating systems, interrupts are handled
by the operating system's kernel. Interrupts may come from either the computer's
hardware or from the running program.
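On POSIX systems, signals give user programs an analogous mechanism: the kernel suspends normal execution and runs a previously registered handler, much as a hardware interrupt handler runs in response to an event. A minimal sketch (POSIX-only; assumes `SIGUSR1` is available):

```python
import os, signal

events = []

def handler(signum, frame):
    # Runs when the "interrupt" (signal) arrives; normal flow is suspended,
    # this code executes, then the program resumes where it left off.
    events.append(signum)

signal.signal(signal.SIGUSR1, handler)   # associate code with the event
os.kill(os.getpid(), signal.SIGUSR1)     # trigger the event ourselves
print(events == [signal.SIGUSR1])        # the handler already ran: True
```

The saving and restoring of the program's state around the handler is done for us, just as the hardware and kernel do for a true interrupt.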
When a hardware device triggers an interrupt, the operating system's kernel decides
how to deal with this event, generally by running some processing code. The amount
of code being run depends on the priority of the interrupt (for example: a person
usually responds to a smoke detector alarm before answering the phone). The
processing of hardware interrupts is a task that is usually delegated to software
called a device driver, which may be either part of the operating system's kernel, part of
another program, or both. Device drivers may then relay information to a running
program by various means.
A program may also trigger an interrupt to the operating system. If a program wishes
to access hardware for example, it may interrupt the operating system's kernel, which
causes control to be passed back to the kernel. The kernel will then process the
request. If a program wishes additional resources (or wishes to shed resources) such
as memory, it will trigger an interrupt to get the kernel's attention.
Modes
Modern CPUs support multiple modes of operation. CPUs with this capability use at
least two modes: protected mode and supervisor mode. The supervisor mode is used
by the operating system's kernel for low level tasks that need unrestricted access to
hardware, such as controlling how memory is written and erased, and communication
with devices like graphics cards. Protected mode, in contrast, is used for almost
everything else. Applications operate within protected mode, and can only use
hardware by communicating with the kernel, which controls everything in supervisor
mode. CPUs might have other modes similar to protected mode as well, such as the
virtual modes in order to emulate older processor types, such as 16-bit processors on
a 32-bit one, or 32-bit processors on a 64-bit one.
When a computer first starts up, it is automatically running in supervisor mode. The
first few programs to run on the computer (the BIOS, bootloader and the
operating system) have unlimited access to hardware, and this is required because, by
definition, initializing a protected environment can only be done outside of one.
However, when the operating system passes control to another program, it can place
the CPU into protected mode.
In protected mode, programs may have access to a more limited set of the CPU's
instructions. A user program may leave protected mode only by triggering an
interrupt, causing control to be passed back to the kernel. In this way the operating
system can maintain exclusive control over things like access to hardware and
memory.
The term "protected mode resource" generally refers to one or more CPU registers,
which contain information that the running program isn't allowed to alter. Attempts
to alter these resources generally cause a switch to supervisor mode, where the
operating system can deal with the illegal operation the program was attempting (for
example, by killing the program).
Memory management
Among other things, a multiprogramming operating system kernel must be
responsible for managing all system memory which is currently in use by programs.
This ensures that a program does not interfere with memory already used by another
program. Since programs time share, each program must have independent access to
memory.
Cooperative memory management, used by many early operating systems, assumes
that all programs make voluntary use of the kernel's memory manager, and do not
exceed their allocated memory. This system of memory management is almost never
seen any more, since programs often contain bugs which can cause them to exceed
their allocated memory. If a program fails, it may cause memory used by one or more
other programs to be affected or overwritten. Malicious programs or viruses may
purposefully alter another program's memory, or may affect the operation of the
operating system itself. With cooperative memory management, it takes only one
misbehaved program to crash the system.
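The fragility of the cooperative model can be illustrated with a toy "RAM" in which each program is trusted to stay inside its allocation: a single out-of-bounds write silently corrupts a neighbour's memory, while a kernel-enforced bounds check would have rejected it. All names and sizes here are invented for the illustration.

```python
RAM = bytearray(16)                    # toy physical memory
alloc = {"A": (0, 8), "B": (8, 16)}    # each program's (start, end) region

def coop_write(prog, offset, data):
    """Cooperative model: the program is trusted; no bounds check."""
    start, _ = alloc[prog]
    RAM[start + offset : start + offset + len(data)] = data

coop_write("A", 6, b"XXXX")   # A overruns its 8-byte region...
print(RAM[8:10])              # ...and corrupts the start of B's region

def protected_write(prog, offset, data):
    """Protected model: the 'kernel' enforces the region boundaries."""
    start, end = alloc[prog]
    if start + offset + len(data) > end:
        raise MemoryError(f"{prog}: write exceeds allocated region")
    RAM[start + offset : start + offset + len(data)] = data
```

With `protected_write`, the same overrun raises an error instead of damaging another program's memory, which is the behaviour memory protection provides in hardware.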
Memory protection enables the kernel to limit a process' access to the computer's
memory. Various methods of memory protection exist, including memory
segmentation and paging. All methods require some level of hardware support (such
as the 80286 MMU), which doesn't exist in all computers.
In both segmentation and paging, certain protected mode registers specify to the CPU
what memory address it should allow a running program to access. Attempts to
access other addresses will trigger an interrupt which will cause the CPU to re-enter
supervisor mode, placing the kernel in charge. This is called a segmentation violation
or Seg-V for short, and since it is both difficult to assign a meaningful result to such
an operation, and because it is usually a sign of a misbehaving program, the kernel
will generally resort to terminating the offending program, and will report the error.
Windows 3.1 through Me had some level of memory protection, but programs could easily
circumvent the need to use it. A general protection fault would be produced
indicating a segmentation violation had occurred, however the system would often
crash anyway.
Virtual memory
Many operating systems can "trick" programs into using memory scattered around
the hard disk and RAM as if it were one continuous chunk of memory, called virtual
memory.
The use of virtual memory addressing (such as paging or segmentation) means that
the kernel can choose what memory each program may use at any given time,
allowing the operating system to use the same memory locations for multiple tasks.
If a program tries to access memory that isn't in its current range of accessible
memory, but nonetheless has been allocated to it, the kernel will be interrupted in the
same way as it would if the program were to exceed its allocated memory. (See
section on memory management.) Under UNIX this kind of interrupt is referred to as
a page fault.
When the kernel detects a page fault it will generally adjust the virtual memory range
of the program which triggered it, granting it access to the memory requested. This
gives the kernel discretionary power over where a particular application's memory is
stored, or even whether or not it has actually been allocated yet.
In modern operating systems, memory which is accessed less frequently can be
temporarily stored on disk or other media to make that space available for use by
other programs. This is called swapping, as an area of memory can be used by
multiple programs, and what that memory area contains can be swapped or
exchanged on demand.
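Memory-mapping a file is one user-visible application of this machinery: the file's pages are brought into the program's address space on demand, and the first access to an unmapped page triggers exactly the kind of page fault described above. A sketch using Python's standard `mmap` module:

```python
import mmap, os, tempfile

# Create a small file to map into the process's address space.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello, paging")
os.close(fd)

with open(path, "r+b") as f:
    # Map the file; its pages are loaded lazily, on first access.
    with mmap.mmap(f.fileno(), 0) as mem:
        first = bytes(mem[:5])   # touching the page faults it in
        mem[0:5] = b"HELLO"      # writes go to the mapped pages

with open(path, "rb") as f:
    final = f.read()
os.remove(path)
print(first, final)              # b'hello' b'HELLO, paging'
```

From the program's point of view the file is simply a range of memory; the kernel services the faults and writes the modified pages back to disk.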
Multitasking
Multitasking refers to the running of multiple independent computer programs on the
same computer, giving the appearance that it is performing the tasks at the same
time. Since most computers can do at most one or two things at one time, this is
generally done via time-sharing, which means that each program uses a share of the
computer's time to execute.
An operating system kernel contains a piece of software called a scheduler which
determines how much time each program will spend executing, and in which order
execution control should be passed to programs. Control is passed to a process by the
kernel, which allows the program access to the CPU and memory. Later, control is
returned to the kernel through some mechanism, so that another program may be
allowed to use the CPU. This passing of control between the kernel and
applications is called a context switch.
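Time-sharing can be sketched with Python generators: each `yield` plays the role of a context switch, returning control to a round-robin "scheduler" that grants each program one step per turn. The names are invented for the illustration.

```python
from collections import deque

def program(name, steps):
    """A toy program that runs for a number of steps, yielding between them."""
    for i in range(steps):
        yield f"{name}:{i}"   # yield = give control back to the scheduler

def round_robin(programs):
    """A cooperative round-robin scheduler: one step per program per turn."""
    queue = deque(programs)
    trace = []
    while queue:
        prog = queue.popleft()        # pass control to the next program
        try:
            trace.append(next(prog))  # run it until it yields (context switch)
            queue.append(prog)        # requeue it for another time slice
        except StopIteration:
            pass                      # program finished; drop it
    return trace

print(round_robin([program("A", 2), program("B", 3)]))
# ['A:0', 'B:0', 'A:1', 'B:1', 'B:2']
```

Because control is only relinquished at a `yield`, this models cooperative multitasking; a preemptive kernel would instead interrupt each program after its time slice expires.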
An early model which governed the allocation of time to programs was called
cooperative multitasking. In this model, when control is passed to a program by the
kernel, it may execute for as long as it wants before explicitly returning control to the
kernel. This means that a malicious or malfunctioning program may not only prevent
any other programs from using the CPU, but it can hang the entire system if it enters
an infinite loop.
Modern operating systems extend the concepts of application preemption to device
drivers and kernel code, so that the operating system has preemptive control over
internal run-times as well.
The philosophy governing preemptive multitasking is that of ensuring that all
programs are given regular time on the CPU. This implies that all programs must be
limited in how much time they are allowed to spend on the CPU without being
interrupted. To accomplish this, modern operating system kernels make use of a
timed interrupt. A protected mode timer is set by the kernel which triggers a return to
supervisor mode after the specified time has elapsed. On many single user operating
systems cooperative multitasking is perfectly adequate, as home computers generally
run a small number of well tested programs. Windows NT was the first version of
Microsoft Windows which enforced preemptive multitasking, but it didn't reach the
home user market until Windows XP.
Disk access and file systems
Filesystems allow users and programs to organize and sort files on a computer, often
through the use of directories (or "folders").
Access to data stored on disks is a central feature of all operating systems. Computers
store data on disks using files, which are structured in specific ways in order to allow
for faster access, higher reliability, and to make better use of the drive's available
space. The specific way in which files are stored on a disk is called a file system, and
enables files to have names and attributes. It also allows them to be stored in a
hierarchy of directories or folders arranged in a directory tree.
Early operating systems generally supported a single type of disk drive and only one
kind of file system. Early file systems were limited in their capacity, speed, and in
the kinds of file names and directory structures they could use. These limitations
often reflected limitations in the operating systems they were designed for, making it
very difficult for an operating system to support more than one file system.
While many simpler operating systems support a limited range of options for
accessing storage systems, operating systems like UNIX and GNU/Linux support a
technology known as a virtual file system or VFS. An operating system such as
UNIX supports a wide array of storage devices, regardless of their design or file
systems, allowing them to be accessed through a common application programming
interface (API). This makes it unnecessary for programs to have any knowledge
about the device they are accessing. A VFS allows the operating system to provide
programs with access to a virtually unlimited number of devices with a wide variety of
file systems installed on them, through the use of specific device drivers and file
system drivers.
A connected storage device, such as a hard drive, is accessed through a device driver.
The device driver understands the specific language of the drive and is able to
translate that language into a standard language used by the operating system to
access all disk drives. On UNIX, this is the language of block devices.
When the kernel has an appropriate device driver in place, it can then access the
contents of the disk drive in raw format, which may contain one or more file systems.
A file system driver is used to translate the commands used to access each specific
file system into a standard set of commands that the operating system can use to talk
to all file systems. Programs can then deal with these file systems on the basis of
filenames, and directories/folders, contained within a hierarchical structure. They can
create, delete, open, and close files, as well as gather various information about them,
including access permissions, size, free space, and creation and modification dates.
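These portable operations are exposed to programs through APIs such as Python's `os`, `stat` and `pathlib` modules, which behave the same whatever file system lies underneath. A brief sketch of creating, inspecting and deleting entries in a directory hierarchy:

```python
import os, stat, tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())     # a scratch directory tree
(root / "docs").mkdir()             # create a folder in the hierarchy
note = root / "docs" / "note.txt"
note.write_text("hello")            # create a file and write to it

size = note.stat().st_size          # gather attributes through the OS
is_dir = stat.S_ISDIR((root / "docs").stat().st_mode)
print(size, is_dir)                 # 5 True

note.unlink()                       # delete the file
os.rmdir(root / "docs")             # remove the now-empty directories
os.rmdir(root)
```

Whether the underlying file system is NTFS, ext3 or something else, the program only ever sees filenames, directories and attributes through this common interface.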
Various differences between file systems make supporting all file systems difficult.
Allowed characters in file names, case sensitivity, and the presence of various kinds
of file attributes make the implementation of a single interface for every file system
a daunting task. Operating systems tend to recommend using (and so support
natively) file systems specifically designed for them; for example, NTFS in Windows
and ext3 and ReiserFS in GNU/Linux. However, in practice, third-party drivers are
usually available to give support for the most widely used file systems in most
general-purpose operating systems (for example, NTFS is available in GNU/Linux
through NTFS-3g, and ext2/3 and ReiserFS are available in Windows through FS-
driver and rfstool).
Support for file systems is highly varied among modern operating systems, although
there are several common file systems which almost all operating systems include
support and drivers for. Operating systems vary on file system support and on the
disk formats they may be installed on. Under Windows, each file system is usually
limited in application to certain media; for example, CDs must use ISO 9660 or
UDF, and as of Windows Vista, NTFS is the only file system which the operating
system can be installed on. It is possible to install GNU/Linux onto many types of
file systems. Unlike other operating systems, GNU/Linux and UNIX allow any file
system to be used regardless of the media it is stored in, whether it is a hard drive, a
disc (CD, DVD...), a USB key, or even contained within a file located on another
file system.
Device drivers
A device driver is a specific type of computer software developed to allow
interaction with hardware devices. Typically this constitutes an interface for
communicating with the device, through the specific computer bus or
communications subsystem that the hardware is connected to, providing commands
to and/or receiving data from the device, and on the other end, the requisite interfaces
to the operating system and software applications. It is a specialized,
hardware-dependent program, usually also operating-system-specific, that enables
another program (typically the operating system, an application package, or a
program running under the operating system kernel) to interact transparently with
a hardware device. It usually also provides the interrupt handling required for
asynchronous, time-dependent hardware interfacing.
The key design goal of device drivers is abstraction. Every model of hardware (even
within the same class of device) is different. Manufacturers also release newer
models that provide more reliable or better performance, and these newer models
are often controlled differently. Computers and their operating systems
cannot be expected to know how to control every device, both now and in the future.
To solve this problem, operating systems essentially dictate how every type of device
should be controlled. The function of the device driver is then to translate these
operating-system-mandated function calls into device-specific calls. In theory a new
device, which is controlled in a new manner, should function correctly if a suitable
driver is available. This new driver will ensure that the device appears to operate as
usual from the operating system's point of view.
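That translation layer can be sketched as a common interface that every driver implements, with each concrete driver mapping the standard call onto its own device's protocol. The device names and command strings below are invented for the example.

```python
class BlockDriver:
    """The OS-mandated interface: every disk driver must provide read_block."""
    def read_block(self, n):
        raise NotImplementedError

class OldDiskDriver(BlockDriver):
    def read_block(self, n):
        # Hypothetical device-specific command set for an older drive.
        return f"OLDCMD SEEK {n} READ"

class NewDiskDriver(BlockDriver):
    def read_block(self, n):
        # A newer model is controlled differently, yet exposes the same call.
        return f"NVME_READ(lba={n})"

def os_read(driver, n):
    """The operating system only ever speaks the standard interface."""
    return driver.read_block(n)

print(os_read(OldDiskDriver(), 7))   # OLDCMD SEEK 7 READ
print(os_read(NewDiskDriver(), 7))   # NVME_READ(lba=7)
```

From the operating system's point of view both drives "appear to operate as usual", even though the commands actually sent to the hardware differ completely.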
Under versions of Windows before Vista and versions of Linux before 2.6, all driver
execution was co-operative, meaning that if a driver entered an infinite loop it would
freeze the system. More recent revisions of these operating systems incorporate
kernel preemption, where the kernel interrupts the driver to give it tasks, and then
separates itself from the process until it receives a response from the device driver, or
gives it more tasks to do.
Networking
Currently most operating systems support a variety of networking protocols,
hardware, and applications for using them. This means that computers running
dissimilar operating systems can participate in a common network for sharing
resources such as computing, files, printers, and scanners using either wired or
wireless connections. Networks can essentially allow a computer's operating system
to access the resources of a remote computer to support the same functions as it could
if those resources were connected directly to the local computer. This includes
everything from simple communication, to using networked file systems or even
sharing another computer's graphics or sound hardware. Some network services
allow the resources of a computer to be accessed transparently, such as SSH which
allows networked users direct access to a computer's command line interface.
Client/server networking involves a program on one computer, called a client, which
connects via a network to another computer, called a server. Servers offer (or host)
various services to other network computers and users. These services are usually
provided through ports or numbered access points beyond the server's network
address. Each port number is usually associated with a maximum of one running
program, which is responsible for handling requests to that port. A daemon, being a
user program, can in turn access the local hardware resources of that computer by
passing requests to the operating system kernel.
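A minimal sketch of this model: a server program listens on a numbered port, and a client connects to that address and port over the network (here, the local machine). Only the standard `socket` and `threading` modules are used.

```python
import socket, threading

def serve(listener):
    """A tiny 'daemon': handle one connection on the port, then exit."""
    conn, _ = listener.accept()
    with conn:
        conn.sendall(b"hello from the server")

# The server binds to a port; port 0 asks the OS to pick a free port number.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=serve, args=(listener,)).start()

# The client program connects to the server's address and port.
with socket.create_connection(("127.0.0.1", port)) as client:
    data = client.recv(1024)
listener.close()
print(data)    # b'hello from the server'
```

The port number is the "numbered access point beyond the server's network address" described above: one running program is responsible for handling requests arriving on it.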
Many operating systems support one or more vendor-specific or open networking
protocols as well, for example, SNA on IBM systems, DECnet on systems from
Digital Equipment Corporation, and Microsoft-specific protocols (SMB) on
Windows. Specific protocols for specific tasks may also be supported such as NFS
for file access. Protocols like ESound, or esd can be easily extended over the network
to provide sound from local applications, on a remote system's sound hardware.
Security
A computer being secure depends on a number of technologies working properly. A
modern operating system provides access to a number of resources, which are
available to software running on the system, and to external devices like networks via
the kernel.
The operating system must be capable of distinguishing between requests which
should be allowed to be processed, and others which should not be processed. While
some systems may simply distinguish between "privileged" and "non-privileged",
systems commonly have a form of requester identity, such as a user name. To
establish identity there may be a process of authentication. Often a username must be
quoted, and each username may have a password. Other methods of authentication,
such as magnetic cards or biometric data, might be used instead. In some cases,
especially connections from the network, resources may be accessed with no
authentication at all (such as reading files over a network share). Also covered by the
concept of requester identity is authorization; the particular services and resources
accessible by the requester once logged into a system are tied to either the requester's
user account or to the variously configured groups of users to which the requester
belongs.
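Authentication and authorization can be sketched as two separate checks: first verifying identity against a stored password hash, then consulting group membership to decide access. The users, groups and resources below are invented; real systems use salted, deliberately slow hashes rather than plain SHA-256.

```python
import hashlib

def h(password):
    # Illustration only: real systems use salted, slow hashes (e.g. bcrypt).
    return hashlib.sha256(password.encode()).hexdigest()

USERS  = {"alice": h("s3cret")}       # username -> password hash
GROUPS = {"alice": {"staff"}}         # username -> group memberships
ACL    = {"/payroll": {"staff"}}      # resource -> groups allowed access

def authenticate(user, password):
    """Establish identity: does the quoted password match the stored hash?"""
    return USERS.get(user) == h(password)

def authorize(user, resource):
    """Authorization: is the user in a group permitted on the resource?"""
    return bool(GROUPS.get(user, set()) & ACL.get(resource, set()))

print(authenticate("alice", "s3cret"))   # True
print(authorize("alice", "/payroll"))    # True
print(authenticate("alice", "wrong"))    # False
```

Keeping the two checks separate mirrors the text: who the requester is (authentication) is established once, while what they may touch (authorization) is consulted for every resource.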
In addition to the allow/disallow model of security, a system with a high level of
security will also offer auditing options. These would allow tracking of requests for
access to resources (such as "who has been reading this file?"). Internal security,
or security from an already running program, is only possible if all possibly harmful
requests must be carried out through interrupts to the operating system kernel. If
programs can directly access hardware and resources, they cannot be secured.
External security involves a request from outside the computer, such as a login at a
connected console or some kind of network connection. External requests are often
passed through device drivers to the operating system's kernel, where they can be
passed onto applications, or carried out directly. Security of operating systems has
long been a concern because of highly sensitive data held on computers, both of a
commercial and military nature. The United States Government Department of
Defense (DoD) created the Trusted Computer System Evaluation Criteria (TCSEC)
which is a standard that sets basic requirements for assessing the effectiveness of
security. This became of vital importance to operating system makers, because the
TCSEC was used to evaluate, classify and select computer systems being considered
for the processing, storage and retrieval of sensitive or classified information.
Network services include offerings such as file sharing, print services, email, web
sites, and file transfer protocols (FTP), most of which can have compromised
security. At the front line of security are hardware devices known as firewalls or
intrusion detection/prevention systems. At the operating system level, there are a
number of software firewalls available, as well as intrusion detection/prevention
systems. Most modern operating systems include a software firewall, which is
enabled by default. A software firewall can be configured to allow or deny network
traffic to or from a service or application running on the operating system. Therefore,
one can install and be running an insecure service, such as Telnet or FTP, and not
have to be threatened by a security breach because the firewall would deny all traffic
trying to connect to the service on that port.
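The allow/deny behaviour of such a firewall can be sketched as a rule table consulted for each incoming connection, with a default-deny policy for anything unlisted. The port numbers and rules below are invented for the example.

```python
# Each rule: (port, action); the first matching rule wins.
RULES = [(22, "allow"), (80, "allow"), (23, "deny")]   # 23 = Telnet

def check_packet(port):
    """Return the firewall's decision for traffic arriving on a port."""
    for rule_port, action in RULES:
        if rule_port == port:
            return action
    return "deny"          # default-deny: unknown ports are blocked

print(check_packet(80))    # allow
print(check_packet(23))    # deny (the Telnet service may be running, but unreachable)
print(check_packet(4444))  # deny (no rule: default)
```

This is why an insecure service can be left running behind the firewall: traffic simply never reaches its port.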
An alternative strategy, and the only sandbox strategy available in systems that do
not meet the Popek and Goldberg virtualization requirements, is for the operating
system not to run user programs as native code, but instead to emulate a processor
or to provide a host for a p-code-based system such as Java.
Internal security is especially relevant for multi-user systems; it allows each user of
the system to have private files that the other users cannot tamper with or read.
Internal security is also vital if auditing is to be of any use, since a program can
potentially bypass the operating system, inclusive of bypassing auditing.
Modern operating systems include many built-in security modules to guard against
such threats. For example, Windows 7 offers a program called Microsoft Security
Essentials to help address these security holes.
OPERATING SYSTEM
WINDOWS
Microsoft Windows is a series of software operating systems and graphical user
interfaces produced by Microsoft. Microsoft first introduced an operating
environment named Windows on November 20, 1985 as an add-on to MS-DOS in
response to the growing interest in graphical user interfaces (GUIs). Microsoft
Windows came to dominate the world's personal computer market, overtaking Mac
OS, which had been introduced in 1984. As of October 2009, Windows had
approximately 91% of the market share of the client operating systems for usage on
the Internet. The most recent client version of Windows is Windows 7; the most
recent server version is Windows Server 2008 R2; the most recent mobile OS version
is Windows Phone 7.
WINDOWS XP
Windows XP is an operating system that was produced by Microsoft for use on
personal computers, including home and business desktops, laptops, and media
centers. It was first released in August 2001, and is the most popular version of
Windows, based on installed user base. The name "XP" is short for "eXPerience."
Windows XP was the successor to both Windows 2000 and Windows Me, and was
the first consumer-oriented operating system produced by Microsoft to be built on
the Windows NT kernel and architecture. Windows XP was released for retail sale on
October 25, 2001, and over 400 million copies were in use in January 2006,
according to an estimate in that month by an IDC analyst. It was succeeded by
Windows Vista, which was released to volume license customers on November 8,
2006, and worldwide to the general public on January 30, 2007. Direct OEM and
retail sales of Windows XP ceased on June 30, 2008. Microsoft continued to sell
Windows XP through their System Builders (smaller OEMs who sell assembled
computers) program until January 31, 2009. XP may continue to be available as these
sources run through their inventory or by purchasing Windows 7 Ultimate, Windows
7 Pro, Windows Vista Ultimate or Windows Vista Business, and then downgrading
to Windows XP. The most common editions of the operating system were Windows
XP Home Edition, which was targeted at home users, and Windows XP Professional,
which offered additional features such as support for Windows Server domains and
two physical processors, and was targeted at power users, business and enterprise
clients. Windows XP Media Center Edition has additional multimedia features
enhancing the ability to record and watch TV shows, view DVD movies, and listen to
music. Windows XP Tablet PC Edition was designed to run stylus applications built
using the Tablet PC platform.
Windows XP was eventually released for two additional architectures, Windows XP
64-bit Edition for IA-64 (Itanium) processors and Windows XP Professional x64
Edition for x86-64. There is also Windows XP Embedded, a component version of
the Windows XP Professional, and editions for specific markets such as Windows
XP Starter Edition. By mid-2009, a manufacturer revealed the first Windows
XP-powered cellular telephone.
The NT-based versions of Windows, which are programmed in C, C++, and
assembly, are known for their improved stability and efficiency over the 9x versions
of Microsoft Windows. Windows XP presented a significantly redesigned graphical
user interface, a change Microsoft promoted as more user-friendly than previous
versions of Windows. A new software management facility called Side-by-Side
Assembly was introduced to ameliorate the "DLL hell" that plagued the 9x versions of
Windows. It is also the first version of Windows to use product activation to combat
illegal copying. Windows XP had also been criticized by some users for security
vulnerabilities, tight integration of applications such as Internet Explorer 6 and
Windows Media Player, and for aspects of its default user interface. Later versions
with Service Pack 2, Service Pack 3, and Internet Explorer 8 addressed some of these
concerns.
During development, the project was codenamed "Whistler", after Whistler, British
Columbia, as many Microsoft employees skied at the Whistler-Blackcomb ski resort.
According to web analytics data generated by W3Schools, as of October 2010,
Windows XP is the most widely used operating system for accessing the Internet in
the world with a 48.9% market share, having peaked at 76.1% in January 2007.
WINDOWS VISTA
Windows Vista is an operating system released in several variations developed by
Microsoft for use on personal computers, including home and business desktops,
laptops, tablet PCs, and media center PCs. Prior to its announcement on July 22,
2005, Windows Vista was known by its codename "Longhorn." Development was
completed on November 8, 2006; over the following three months it was released in
stages to computer hardware and software manufacturers, business customers, and
retail channels. On January 30, 2007, it was released worldwide, and was made
available for purchase and download from Microsoft's website. The release of
Windows Vista came more than five years after the introduction of its predecessor,
Windows XP, the longest time span between successive releases of Microsoft
Windows desktop operating systems. It was succeeded by Windows 7 which was
released to manufacturing on July 22, 2009, and for the general public on October 22,
2009.
Windows Vista contains many changes and new features, including an updated
graphical user interface and visual style dubbed Aero, a redesigned search function,
multimedia tools including Windows DVD Maker, and redesigned networking,
audio, print, and display sub-systems. Vista aims to increase the level of
communication between machines on a home network, using peer-to-peer technology
to simplify sharing files and media between computers and devices. Windows Vista
includes version 3.0 of the .NET Framework, allowing software developers to write
applications without traditional Windows APIs.
Microsoft's primary stated objective with Windows Vista has been to improve the
state of security in the Windows operating system. One common criticism of
Windows XP and its predecessors is their commonly exploited security
vulnerabilities and overall susceptibility to malware, viruses and buffer overflows. In
light of this, Microsoft chairman Bill Gates announced in early 2002 a company-
wide "Trustworthy Computing initiative" which aims to incorporate security work
into every aspect of software development at the company. Microsoft stated that it
prioritized improving the security of Windows XP and Windows Server 2003 above
finishing Windows Vista, thus delaying its completion.
While these new features and security improvements have garnered positive reviews,
Vista has also been the target of much criticism and negative press. Criticism of
Windows Vista has targeted its high system requirements, its more restrictive
licensing terms, the inclusion of a number of new digital rights management
technologies aimed at restricting the copying of protected digital media, lack of
compatibility with some pre-Vista hardware and software, and the number of
authorization prompts for User Account Control. As a result of these and other
issues, Windows Vista had seen initial adoption and satisfaction rates lower than
Windows XP. However, with an estimated 330 million Internet users as of January
2009, it was announced that Vista usage had surpassed Microsoft's pre-launch, two-year-out expectation of achieving 200 million users. At the release of Windows
7 (October 2009), Windows Vista (with approximately 400 million Internet users)
was the second most widely used operating system on the Internet with an
approximately 18.6% market share, the most widely used being Windows XP with an
approximately 63.3% market share. As of May 2010, Windows Vista's market share
estimates range from 15.26% to 26.04%.
WINDOWS 7
Windows 7 is the latest release of Microsoft Windows, a series of operating systems
produced by Microsoft for use on personal computers, including home and business
desktops, laptops, netbooks, tablet PCs, and media center PCs. Windows 7 was
released to manufacturing on July 22, 2009, and reached general retail availability on
October 22, 2009, less than three years after the release of its predecessor, Windows
Vista. Windows 7's server counterpart, Windows Server 2008 R2, was released at the
same time.
Unlike its predecessor, Windows Vista, which introduced a large number of new
features, Windows 7 was intended to be a more focused, incremental upgrade to the
Windows line, with the goal of being compatible with applications and hardware
with which Windows Vista was not compatible at the time. Presentations given by Microsoft in 2008
focused on multi-touch support, a redesigned Windows shell with a new taskbar,
referred to as the Superbar, a home networking system called HomeGroup, and
performance improvements.
Some standard applications that have been included with prior releases of Microsoft
Windows, including Windows Calendar, Windows Mail, Windows Movie Maker,
and Windows Photo Gallery, are not included in Windows 7; most are instead
offered separately at no charge as part of the Windows Live Essentials suite.
WINDOWS 8
A roadmap timeline slide shown by Microsoft at the 2009 Professional Developers
Conference shows that a product code-named Windows 8 is scheduled to be released
sometime between 2011 (Beta) and 2012. Development and other aspects of
Windows 8 have not been detailed in public, although job listings have mentioned
improved functionality for file access in branch offices.
A Microsoft KB article confirmed that Windows 8 is the next version of Windows.
The article has since been edited to remove references to Windows 8.
A leaked document from Microsoft indicates that Windows 8 might feature faster
startup, an App Store, integrated web applications, improved digital media support
(including AVC HD and 3D video), faster resumes from low-power states, and USB
3.0 and Bluetooth 3.0 support. Windows 8 is likely to include facial recognition,
owing to the growing number of webcams integrated into computers and to Kinect, the
latest add-on for Microsoft's Xbox 360 console. A Dutch representative posted on the company's blog
that Microsoft is working on the next edition of Windows and that "it will take about
two years before Windows 8 will be on the market". Steve Ballmer hinted at UK
Tech Days that Windows 8 will feature major improvements for touch usage.
On October 22, 2010, Ballmer stated that "the next generation of Windows would be
the riskiest product Microsoft ever made", but he did not add any details on why
Windows 8 would be the "riskiest product".
LINUX
Linux refers to the family of Unix-like computer operating systems using the Linux
kernel. Linux can be installed on a wide variety of computer hardware, ranging from
mobile phones, tablet computers and video game consoles, to mainframes and
supercomputers. Linux is a leading server operating system, and runs the 10 fastest
supercomputers in the world. Use of Linux for desktop computers has increased in
recent years, partly owing to the popular Ubuntu, Fedora, and openSUSE
distributions and the emergence of netbooks with pre-installed Linux systems and
smartphones running embedded Linux.
The development of Linux is one of the most prominent examples of free and open
source software collaboration; typically all the underlying source code can be used,
freely modified, and redistributed, both commercially and non-commercially, by
anyone under licenses such as the GNU General Public License. Typically Linux is
packaged in a format known as a Linux distribution for desktop and server use.
Linux distributions include the Linux kernel and all of the supporting software
required to run a complete system, such as utilities and libraries, the X Window
System, the GNOME and KDE desktop environments, and the Apache HTTP Server.
Commonly used applications on desktop Linux systems include the Mozilla
Firefox web-browser, the OpenOffice.org office application suite and the GIMP
image editor.
The name "Linux" comes from the Linux kernel, originally written in 1991 by Linus
Torvalds. The main supporting user space system tools and libraries from the GNU
Project (announced in 1983 by Richard Stallman) are the basis for the Free Software
Foundation's preferred name GNU/Linux.
UNIX
The Unix operating system was conceived and implemented in 1969 at AT&T's Bell
Laboratories in the United States by Ken Thompson, Dennis Ritchie, Douglas
McIlroy, and Joe Ossanna. It was first released in 1971 and was initially entirely
written in assembly language, a common practice at the time. Later, in a key
pioneering approach in 1973, Unix was re-written in the programming language C by
Dennis Ritchie (with some exceptions in the kernel and I/O code). The availability of an
operating system written in a high-level language allowed easier portability to
different computer platforms. A legal glitch forced AT&T to license the
operating system's source code, and Unix quickly grew and became widely adopted by
academic institutions and businesses.
GNU
The GNU Project, started in 1983 by Richard Stallman, had the goal of creating a
"complete Unix-compatible software system" composed entirely of free software.
Work began in 1984. Later, in 1985, Stallman started the Free Software Foundation
and wrote the GNU General Public License (GNU GPL) in 1989. By the early 1990s,
many of the programs required in an operating system (such as libraries, compilers,
text editors, a Unix shell, and a windowing system) were completed, although low-
level elements such as device drivers, daemons, and the kernel were stalled and
incomplete. Linus Torvalds has said that if the GNU kernel had been available at the
time (1991), he would not have decided to write his own.
Uses
As well as those designed for general purpose use on desktops and servers,
distributions may be specialized for different purposes including: computer
architecture support, embedded systems, stability, security, localization to a specific
region or language, targeting of specific user groups, support for real-time
applications, or commitment to a given desktop environment. Furthermore, some
distributions deliberately include only free software. Currently, over three hundred
distributions are actively developed, with about a dozen distributions being most
popular for general-purpose use.
Linux is a widely ported operating system kernel. The Linux kernel runs on a highly
diverse range of computer architectures, from the hand-held ARM-based iPAQ to the
IBM System z9 and System z10 mainframes, and in devices ranging from mobile phones
to supercomputers. Specialized distributions exist for less mainstream architectures.
The ELKS kernel fork can run on Intel 8086 or Intel 80286 16-bit microprocessors,
while the µClinux kernel fork may run on systems without a memory management
unit. The kernel also runs on architectures that were only ever intended to use a
manufacturer-created operating system, such as Macintosh computers (with both
PowerPC and Intel processors), PDAs, video game consoles, portable music players,
and mobile phones.
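A quick way to check which of these architectures a given Linux system is running on is the standard uname utility; a minimal sketch (the output varies by machine):

```shell
# Print the kernel name and the hardware architecture it was built for;
# on a typical PC this prints something like "Linux x86_64".
uname -s -m
```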
There are several industry associations and hardware conferences devoted to
maintaining and improving support for diverse hardware under Linux, such as
FreedomHEC.
Desktop
The popularity of Linux on standard desktops (and laptops) has been increasing over
the years. Currently most distributions include a graphical user environment. The two
most popular such environments are GNOME and KDE, both of which are mature
and support a wide variety of languages.
The performance of Linux on the desktop has been a controversial topic; for example
in 2007 Con Kolivas accused the Linux community of favoring performance on
servers. He quit Linux kernel development because he was frustrated with this lack
of focus on the desktop, and then gave a "tell all" interview on the topic. Since then a
significant effort has been expended improving the desktop experience. Projects such
as upstart aim for a faster boot time. Several companies port their own or other
companies' games to Linux.
Many types of applications available for Microsoft Windows and Mac OS X are also
available for Linux. Commonly, either a free software application exists that
performs the same functions as an application found on another operating system, or
that application has a version that works on Linux (such as Skype). Furthermore,
the Wine project provides a Windows compatibility layer to run unmodified
Windows applications on Linux. CrossOver is a proprietary solution based on the
open source Wine project that supports running Windows versions of Microsoft
Office, Intuit applications such as Quicken and QuickBooks, Adobe Photoshop
versions through CS2, and many popular games such as World of Warcraft and Team
Fortress 2. In other cases, where there is no Linux port of software in areas such
as desktop publishing and professional audio, equivalent native software is
available on Linux.
Many popular applications are available for a wide variety of operating systems. For
example, Mozilla Firefox and OpenOffice.org have downloadable versions for all
major operating systems. Furthermore, some applications were initially developed for
Linux (such as Pidgin and GIMP) and, due to their popularity, were ported to other
operating systems (including Windows and Mac OS X).
A growing number of proprietary desktop applications are also supported on Linux;
see List of proprietary software for Linux. In the field of animation and visual
effects, most high-end software, such as Autodesk Maya, Softimage XSI, and Apple
Shake, is available for Linux, Windows, and/or Mac OS X.
The collaborative nature of free software development allows distributed teams to
perform language localization of some Linux distributions for use in locales where
localizing proprietary systems would not be cost-effective. For example, the Sinhalese
language version of the Knoppix distribution was available significantly before
Microsoft Windows XP was translated to Sinhalese. In this case the Lanka Linux
User Group played a major part in developing the localized system by combining the
knowledge of university professors, linguists, and local developers.
Installing new software in Linux is typically done through the use of package
managers such as Synaptic Package Manager, PackageKit, and Yum Extender. While
major Linux distributions have extensive repositories (tens of thousands of
packages), not all the software that can run on Linux is available from the official
repositories. Alternatively, users can install packages from unofficial repositories,
download pre-compiled packages directly from websites, or compile the source code
by themselves. These methods involve different degrees of difficulty; compiling
source code is generally considered challenging for new Linux users, but it is
rarely needed in modern distributions.
Servers, mainframes and supercomputers
Linux distributions have long been used as server operating systems, and have risen
to prominence in that area; Netcraft reported in September 2006 that eight of the ten
most reliable Internet hosting companies ran Linux distributions on their web servers.
(since June 2008, Linux distributions represented five of the top ten, FreeBSD three
of ten, and Microsoft two of ten; since February 2010, Linux distributions
represented six of the top ten, FreeBSD two of ten, and Microsoft one of ten.)
Linux distributions are the cornerstone of the LAMP server-software combination
(Linux, Apache, MySQL, Perl/PHP/Python) which has achieved popularity among
developers, and which is one of the more common platforms for website hosting.
Linux distributions have become increasingly popular on mainframes in the last
decade due to pricing, compared to other mainframe operating systems. In December
2009, computer giant IBM reported that it would predominantly market and sell
mainframe-based Enterprise Linux Server.
Linux distributions are also commonly used as operating systems for
supercomputers: since June 2010, out of the top 500 systems, 455 (91%) run a Linux
distribution. Linux was also selected as the operating system for the world's most
powerful supercomputer, IBM's Sequoia which will become