Wednesday, August 4, 2010

Software

Software, by definition, is the collection of computer programs, procedures and documentation that performs different tasks on a computer system. The term 'software' was first used by John Tukey in 1958. At the most basic level, computer software consists of machine language: groups of binary values that specify processor instructions, which change the state of the computer hardware in a predefined sequence. Briefly, computer software is the language in which a computer speaks. There are different types of computer software. What are the major types? Let us see.


Major Types of Software

Programming Software:
This is one of the most commonly known and popularly used forms of computer software. This software comes in the form of tools that assist a programmer in writing computer programs. Computer programs are sets of logical instructions that make a computer system perform certain tasks. The tools that help programmers instruct a computer system include text editors, compilers and interpreters.
A programming tool or software development tool is a program or application that software developers use to create, debug, maintain, or otherwise support other programs and applications. The term usually refers to relatively simple programs that can be combined to accomplish a task, much as one might use multiple hand tools to fix a physical object.

System Software:

System software helps run the computer hardware and the computer system itself. It is a collection of operating systems, device drivers, servers, windowing systems and utilities. System software lets an application programmer abstract away from hardware, memory and other internal complexities of a computer.

The operating system and utility programs are the two major categories of system software. Just as the processor is the nucleus of the computer system, the operating system is the nucleus of all software activity.

Application Software:

It enables end users to accomplish certain specific tasks. Business software, databases and educational software are some forms of application software. Word processors, which are dedicated to specialized tasks performed by the user, are other examples of application software.


The operating system is the most important program that runs on a computer. Every general-purpose computer must have an operating system to run other programs. Operating systems perform basic tasks, such as

i. recognizing input from the keyboard
ii. sending output to the display screen
iii. keeping track of files and directories on the disk
iv. controlling peripheral devices such as disk drives and printers.
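The division of labour described above is visible from any program: application code never touches the disk or the screen directly, but asks the operating system to do so. A minimal sketch in Python (the file name `notes.txt` is just illustrative):

```python
import os
import tempfile

# Each of these calls is a request that the operating system carries out
# on the program's behalf: creating a directory, opening a file,
# scheduling the disk write, and reporting the directory's contents.
workdir = tempfile.mkdtemp()               # ask the OS for a fresh directory

path = os.path.join(workdir, "notes.txt")
with open(path, "w") as f:                 # the OS opens the file, returns a handle
    f.write("hello")                       # the OS buffers and performs the write

print(os.listdir(workdir))                 # ['notes.txt'] - the OS tracks the files
```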

It is the first program loaded into memory when the computer is turned on and, in a sense, brings life to the computer hardware. Without it, you cannot use your word processing software, spreadsheet software, or any other applications.


Without an operating system, you cannot communicate with your computer. When you give the computer a command, the operating system relays the instructions to the 'brain' of the computer, called the microprocessor or CPU. You cannot speak directly to the CPU because it only understands machine language. When you are working in an application software program, such as Microsoft Word, commands that you give the application are sent through the operating system to the CPU. Windows 2000, Windows 95/98, Mac OS, Unix and DOS are all examples of operating systems.




Apart from these three basic types of software, there are some other well-known forms of computer software like inventory management software, ERP, utility software, accounting software and others. Take a look at some of them.

Application software, also known as an application, is computer software designed to help the user to perform singular or multiple related specific tasks. Examples include enterprise software, accounting software, office suites, graphics software and media players.


Application software is contrasted with system software and middleware, which manage and integrate a computer's capabilities, but typically do not directly apply them in the performance of tasks that benefit the user. A simple, if imperfect analogy in the world of hardware would be the relationship of an electric light bulb (an application) to an electric power generation plant (a system). The power plant merely generates electricity, not itself of any real use until harnessed to an application like the electric light that performs a service that benefits the user.
Application software classification


There are many types of application software:

An application suite consists of multiple applications bundled together. They usually have related functions, features and user interfaces, and may be able to interact with each other, e.g. open each other's files. Business applications often come in suites, e.g. Microsoft Office, OpenOffice.org, and iWork, which bundle together a word processor, a spreadsheet, etc.; but suites exist for other purposes, e.g. graphics or music.

Enterprise software addresses the needs of organization processes and data flow, often in a large distributed environment. (Examples include Financial, Customer Relationship Management, and Supply Chain Management). Note that Departmental Software is a sub-type of Enterprise Software with a focus on smaller organizations or groups within a large organization. (Examples include Travel Expense Management, and IT Helpdesk)

Enterprise infrastructure software provides common capabilities needed to support enterprise software systems. (Examples include Databases, Email servers, and Network and Security Management)

Information worker software addresses the needs of individuals to create and manage information, often for individual projects within a department, in contrast to enterprise management. Examples include time management, resource management, documentation tools, analytical, and collaborative. Word processors, spreadsheets, email and blog clients, personal information system, and individual media editors may aid in multiple information worker tasks.

Content access software is software used primarily to access content without editing, but may include software that allows for content editing. Such software addresses the needs of individuals and groups to consume digital entertainment and published digital content. (Examples include Media Players, Web Browsers, Help browsers, and Games)

Educational software is related to content access software, but has the content and/or features adapted for use by educators or students. For example, it may deliver evaluations (tests), track progress through material, or include collaborative capabilities.

Simulation software is software that simulates physical or abstract systems for research, training or entertainment purposes.

Media development software addresses the needs of individuals who generate print and electronic media for others to consume, most often in a commercial or educational setting. This includes Graphic Art software, Desktop Publishing software, Multimedia Development software, HTML editors, Digital Animation editors, Digital Audio and Video composition, and many others.[2]

Product engineering software is used in developing hardware and software products. This includes computer aided design (CAD), computer aided engineering (CAE), computer language editing and compiling tools, Integrated Development Environments, and Application Programmer Interfaces.

Utility Software:

Also known as service routine, utility software helps in the management of computer hardware and application software. It performs a small range of tasks. Disk defragmenters, systems utilities and virus scanners are some of the typical examples of utility software.
Utility software is a kind of system software designed to help analyze, configure, optimize and maintain the computer. A single piece of utility software is usually called a utility (abbr. util) or tool.


Utility software should be contrasted with application software, which allows users to do things like creating text documents, playing games, listening to music or surfing the web. Rather than providing these kinds of user-oriented or output-oriented functionality, utility software usually focuses on how the computer infrastructure (including the computer hardware, operating system, application software and data storage) operates. Due to this focus, utilities are often rather technical and targeted at people with an advanced level of computer knowledge.

Most utilities are highly specialized and designed to perform only a single task or a small range of tasks. However, there are also some utility suites that combine several features in one software.

Most major operating systems come with several pre-installed utilities.

Utility software categories


Disk storage utilities

Disk defragmenters can detect computer files whose contents are broken across several locations on the hard disk, and move the fragments to one location to increase efficiency.

Disk checkers can scan the contents of a hard disk to find files or areas that are corrupted in some way, or were not correctly saved, and eliminate them so the hard drive operates more efficiently.

Disk cleaners can find files that are unnecessary to computer operation, or that take up considerable amounts of space. Disk cleaners help the user decide what to delete when the hard disk is full.

Disk space analyzers visualize disk space usage by computing the size of each folder (including subfolders) and file on a drive, showing the distribution of the used space.
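The core of such an analyzer is a recursive folder-size computation. A minimal sketch using Python's standard library (the demo directory tree and file names are made up for illustration):

```python
import os
import tempfile

def folder_size(root):
    """Total size in bytes of all files under root, including subfolders."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            total += os.path.getsize(os.path.join(dirpath, name))
    return total

# Demo on a throwaway tree: 100 bytes at the top level, 50 in a subfolder.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "sub"))
with open(os.path.join(root, "a.bin"), "wb") as f:
    f.write(b"\0" * 100)
with open(os.path.join(root, "sub", "b.bin"), "wb") as f:
    f.write(b"\0" * 50)

print(folder_size(root))  # 150
```

A real analyzer would run this per-folder and render the results as a chart or treemap; the traversal itself is the same.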

Disk partitioning utilities can divide an individual drive into multiple logical drives, each with its own file system, which can be mounted by the operating system and treated as an individual drive.

Backup utilities can make a copy of all information stored on a disk, and restore either the entire disk (e.g. in the event of disk failure) or selected files (e.g. in the event of accidental deletion).
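The essential trick is that the backup preserves the original folder layout, so a single file can be restored to exactly where it was. A toy sketch with Python's `shutil` (paths and file names are illustrative):

```python
import os
import shutil
import tempfile

src = tempfile.mkdtemp()      # the "disk" being backed up
backup = tempfile.mkdtemp()   # where the backup copy lives

os.makedirs(os.path.join(src, "docs"))
with open(os.path.join(src, "docs", "report.txt"), "w") as f:
    f.write("quarterly figures")

# Back up: copy the whole tree, keeping its directory structure.
shutil.copytree(src, os.path.join(backup, "snapshot"))

# Simulate accidental deletion, then restore just that one file.
os.remove(os.path.join(src, "docs", "report.txt"))
shutil.copy(os.path.join(backup, "snapshot", "docs", "report.txt"),
            os.path.join(src, "docs", "report.txt"))

with open(os.path.join(src, "docs", "report.txt")) as f:
    print(f.read())  # quarterly figures
```

Real backup software adds scheduling, incremental copies and compression on top of this basic copy-and-restore loop.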

Disk compression utilities can transparently compress/uncompress the contents of a disk, increasing the capacity of the disk.

File managers provide a convenient method of performing routine data management tasks, such as deleting, renaming, cataloging, uncataloging, moving, copying, merging, generating and modifying data sets.

Archive utilities output a stream or a single file when provided with a directory or a set of files. Archive utilities, unlike archive suites, usually do not include compression or encryption capabilities. Some archive utilities may even have a separate un-archive utility for the reverse operation.
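Python's `tarfile` module can demonstrate this: opened with mode `"w"` it applies no compression, so the result is a plain archive of the kind described above, and reading it back is the "un-archive" reverse operation. File names here are illustrative:

```python
import os
import tarfile
import tempfile

d = tempfile.mkdtemp()
for name in ("a.txt", "b.txt"):
    with open(os.path.join(d, name), "w") as f:
        f.write(name)

archive = os.path.join(d, "bundle.tar")
with tarfile.open(archive, "w") as tar:   # "w" = archive only, no compression
    tar.add(os.path.join(d, "a.txt"), arcname="a.txt")
    tar.add(os.path.join(d, "b.txt"), arcname="b.txt")

with tarfile.open(archive, "r") as tar:   # the reverse (un-archive) direction
    print(sorted(tar.getnames()))  # ['a.txt', 'b.txt']
```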

System profilers provide detailed information about the software installed and hardware attached to the computer.

Anti-virus utilities scan for computer viruses.

Hex editors directly modify the text or data of a file. These files could be data or an actual program.
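What a hex editor does under the hood is simple: open the file in binary read/write mode, seek to a byte offset, and overwrite raw bytes in place. A minimal sketch (the file content is a made-up example):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(b"HELLO")

with open(path, "r+b") as f:   # read/write binary, no truncation
    f.seek(1)                  # jump to byte offset 1
    f.write(b"A")              # patch a single byte in place

with open(path, "rb") as f:
    print(f.read())  # b'HALLO'
```

Whether the file holds data or an actual program makes no difference at this level; both are just bytes at offsets.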

Data compression utilities output a shorter stream or a smaller file when provided with a stream or file.
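The standard-library `zlib` module shows the round trip in a few lines: the compressed stream is shorter than the input, and decompression recovers the original exactly:

```python
import zlib

# Repetitive data compresses well; the sample text is arbitrary.
data = b"to be or not to be, " * 50

packed = zlib.compress(data)
print(len(data), len(packed))           # the packed form is much smaller
assert zlib.decompress(packed) == data  # lossless round trip
```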

Cryptographic utilities encrypt and decrypt streams and files.
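The encrypt/decrypt round trip can be illustrated with a deliberately toy repeating-key XOR stream. This is only a sketch of the shape of the operation; real cryptographic utilities use vetted ciphers such as AES, and XOR like this must never be used for actual secrecy:

```python
from itertools import cycle

def xor_stream(data: bytes, key: bytes) -> bytes:
    """Toy stream cipher: XOR each byte with a repeating key (insecure!)."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"attack at dawn"
key = b"secret"

ciphertext = xor_stream(plaintext, key)  # encrypt
recovered = xor_stream(ciphertext, key)  # XOR is its own inverse: decrypt

print(ciphertext != plaintext, recovered == plaintext)  # True True
```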

Launcher applications provide a convenient access point for application software.

Registry cleaners clean and optimize the Windows registry by removing old registry keys that are no longer in use.

Network utilities analyze the computer's network connectivity, configure network settings, check data transfer or log events.

Command-line interfaces (CLIs) and graphical user interfaces (GUIs) allow the user to interact with the operating system and make changes to it.


Data Backup and Recovery Software:
Ideal data backup and recovery software provides functionality beyond simple copying of data files. It often lets the user specify what is to be backed up and when. Backup and recovery software preserves the original organization of files and allows easy retrieval of the backed-up data.

This was an overview of the major types of software. Computer software is everywhere today, and we cannot imagine a world of computers without it. We would not be able to use computers at all if not for software. What is fascinating about the world of computers is that it has its own languages and its own ways of communicating with our human world; human interaction with computers is possible thanks to computer software. I wonder if the word 'soft' in 'software' implies 'soft-spokenness', an important quality of pleasant communication.

Types Of Computer

According to size

Microcomputers:

A microcomputer is a computer with a microprocessor as its central processing unit. Microcomputers are physically small compared to mainframes and minicomputers. Many microcomputers (when equipped with a keyboard and screen for input and output) are also personal computers in the generic sense.
This type of computer includes systems for general-purpose and business needs. They are usually called PCs (personal computers), after the microprocessor. Examples are desktop computers, laptops, notebooks, etc.

Monitors, keyboards and other devices for input and output may be integrated or separate. Computer memory in the form of RAM, and at least one other less volatile memory storage device, are usually combined with the CPU on a system bus in a single unit. Other devices that make up a complete microcomputer system include batteries, a power supply unit, a keyboard and various input/output devices used to convey information to and from a human operator (printers, monitors, human interface devices).

Microcomputers are designed to serve only a single user at a time, although they can often be modified with software or hardware to concurrently serve more than one user. Microcomputers fit well on or under desks or tables, so that they are within easy access of the user. Bigger computers like minicomputers, mainframes, and supercomputers take up large cabinets or even a dedicated room.



A microcomputer comes equipped with at least one type of data storage, usually RAM. Although some microcomputers (particularly early 8-bit home micros) perform tasks using RAM alone, some form of secondary storage is normally desirable. In the early days of home micros, this was often a data cassette deck (in many cases as an external unit). Later, secondary storage (particularly in the form of floppy disk and hard disk drives) was built into the microcomputer case itself.



Server:

A server runs a network of computers and provides services to the other computers linked to it. Examples include file servers, print servers, chat servers, etc.
A server computer is a computer, or series of computers, that link other computers or electronic devices together. They often provide essential services across a network, either to private users inside a large organization or to public users via the internet. For example, when you enter a query in a search engine, the query is sent from your computer over the internet to the servers that store all the relevant web pages. The results are sent back by the server to your computer.

Many servers have dedicated functionality such as web servers, print servers, and database servers. Enterprise servers are servers that are used in a business context.

The term server is used quite broadlyly in information technology. Despite the many server-branded products available (such as server editions of hardware, software and operating systems), in theory any computerised process that shares a resource with one or more client processes is a server. To illustrate this, take the common example of file sharing. While the existence of files on a machine does not classify it as a server, the mechanism by which the operating system shares these files with clients is the server.

Similarly, consider a web server application (such as the multiplatform "Apache HTTP Server"). This web server software can be run on any capable computer. For example, while a laptop or Personal Computer is not typically known as a server, they can in these situations fulfil the role of one, and hence be labelled as one. It is in this case that the machine's purpose as a web server classifies it in general as a Server.
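The point that any process sharing a resource with clients is, functionally, a server can be sketched in a few lines of Python: an ordinary process serves a string over TCP on localhost, and a second connection consumes it. Port 0 asks the OS to pick any free port; the payload is arbitrary:

```python
import socket
import threading

def serve_once(sock):
    """Accept one client, hand it the shared resource, and hang up."""
    conn, _addr = sock.accept()
    conn.sendall(b"shared resource")
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS choose a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,)).start()

# The client side: connect and consume the resource.
client = socket.create_connection(("127.0.0.1", port))
reply = client.recv(1024)
client.close()
server.close()
print(reply)  # b'shared resource'
```

Nothing about the machine itself made it a server; the process that shared the resource did.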

In the hardware sense, the word server typically designates computer models intended for running software applications under the heavy demand of a network environment. In this client–server configuration one or more machines, either a computer or a computer appliance, share information with each other with one acting as a host for the other.

While nearly any personal computer is capable of acting as a network server, a dedicated server will contain features making it more suitable for production environments. These features may include a faster CPU, increased high-performance RAM, and typically more than one large hard drive. More obvious distinctions include marked redundancy in power supplies, network connections, and even the servers themselves.

Between the 1990s and 2000s an increase in the use of dedicated hardware saw the advent of self-contained server appliances. One well-known product is the Google Search Appliance, a unit which combines hardware and software in an out-of-the-box packaging. Simpler examples of such appliances include switches, routers, gateways, and print server, all of which are available in a near plug-and-play configuration.

Modern operating systems such as Microsoft Windows or Linux distributions rightfully seem to be designed with a client–server architecture in mind. These OSes attempt to abstract hardware, allowing a wide variety of software to work with components of the computer. In a sense, the operating system can be seen as serving hardware to the software, which in all but low-level programming languages must interact using an API.

WorkStation:


This device is a very powerful microcomputer. It is designed for high speed, large storage capacity and additional memory. A workstation hosts applications that need powerful processing, such as complex scientific calculations, graphics-related tasks and game development.

A workstation is a high-end microcomputer designed for technical or scientific applications. Intended primarily to be used by one person at a time, they are commonly connected to a local area network and run multi-user operating systems. The term workstation has also been used to refer to a mainframe computer terminal or a PC connected to a network.



Historically, workstations had offered higher performance than personal computers, especially with respect to CPU and graphics, memory capacity and multitasking capability. They are optimized for the visualization and manipulation of different types of complex data such as 3D mechanical design, engineering simulation (e.g. computational fluid dynamics), animation and rendering of images, and mathematical plots. Consoles consist of a high resolution display, a keyboard and a mouse at a minimum, but also offer multiple displays, graphics tablets, 3D mice (devices for manipulating and navigating 3D objects and scenes), etc. Workstations are the first segment of the computer market to present advanced accessories and collaboration tools.


Presently, the workstation market is highly commoditized and is dominated by large PC vendors, such as Dell and HP, selling Microsoft Windows/Linux running on Intel Xeon/AMD Opteron. Alternative UNIX based platforms are provided by Apple Inc., Sun Microsystems, and SGI.



Mainframe:


This machine is used by companies or businesses where thousands of instructions execute simultaneously and must be completed in a limited time. Workers are connected with each other and work on the same data. These machines are very large and are rarely encountered outside big organizations.

Mainframes (often colloquially referred to as Big Iron[1]) are powerful computers used mainly by large organizations for critical applications, typically bulk data processing such as census, industry and consumer statistics, enterprise resource planning, and financial transaction processing.



The term originally referred to the large cabinets that housed the central processing unit and main memory of early computers.[2][3] Later the term was used to distinguish high-end commercial machines from less powerful units.


Most large-scale computer system architectures were firmly established in the 1960s and most large computers were based on architecture established during that era up until the advent of Web servers in the 1990s. (The first Web server running anywhere outside Switzerland ran on an IBM mainframe at Stanford University as early as 1991. See History of the World Wide Web for details.)


There were several minicomputer operating systems and architectures that arose in the 1970s and 1980s, but minicomputers are generally not considered mainframes. (UNIX arose as a minicomputer operating system; Unix has scaled up over the years to acquire some mainframe characteristics.)


Many defining characteristics of "mainframe" were established in the 1960s, but those characteristics continue to expand and evolve to the present day.

Modern mainframe computers have abilities not so much defined by their single task computational speed (usually defined as MIPS — Millions of Instructions Per Second) as by their redundant internal engineering and resulting high reliability and security, extensive input-output facilities, strict backward compatibility with older software, and high utilization rates to support massive throughput. These machines often run for years without interruption, with repairs and hardware upgrades taking place during normal operation.



Software upgrades are only non-disruptive when using facilities such as IBM's z/OS and Parallel Sysplex, with workload sharing so one system can take over another's application while it is being refreshed. More recently, there are several IBM mainframe installations that have delivered over a decade of continuous business service as of 2007, with hardware upgrades not interrupting service.[citation needed] Mainframes are defined by high availability, one of the main reasons for their longevity, because they are typically used in applications where downtime would be costly or catastrophic. The term Reliability, Availability and Serviceability (RAS) is a defining characteristic of mainframe computers. Proper planning (and implementation) is required to exploit these features.


In the 1960s, most mainframes had no interactive interface. They accepted sets of punched cards, paper tape, and/or magnetic tape and operated solely in batch mode to support back office functions, such as customer billing. Teletype devices were also common, at least for system operators. By the early 1970s, many mainframes acquired interactive user interfaces and operated as timesharing computers, supporting hundreds of users simultaneously along with batch processing. Users gained access through specialized terminals or, later, from personal computers equipped with terminal emulation software. Many mainframes supported graphical terminals (and terminal emulation) but not graphical user interfaces by the 1980s, but end user computing was largely obsoleted in the 1990s by the personal computer. Nowadays most mainframes have partially or entirely phased out classic terminal access for end-users in favor of Web user interfaces. Developers and operational staff typically continue to use terminals or terminal emulators.[citation needed]


Historically, mainframes acquired their name in part because of their substantial size, and because of requirements for specialized heating, ventilation, and air conditioning (HVAC), and electrical power. Those requirements ended by the mid-1990s with CMOS mainframe designs replacing the older bipolar technology. IBM claims its newer mainframes can reduce data center energy costs for power and cooling, and that they can reduce physical space requirements compared to server farms.



Supercomputers:


These computers are more powerful than all other computers in terms of capability and expense. Usually a supercomputer is a single very large machine, but sometimes several big computers work in parallel to complete a heavy task. Supercomputers take on jobs that other computers lack the capacity to perform: massive scientific calculations, weather forecasting, critical decryption of data, and testing for engineering tasks.

A supercomputer is a computer that is at the frontline of current processing capacity, particularly speed of calculation. Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). In the 1980s a large number of smaller competitors entered the market, in parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash".



Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as Cray, IBM and Hewlett-Packard, who had purchased many of the 1980s companies to gain their experience. As of May 2010[update], the Cray Jaguar is the fastest supercomputer in the world.


The term supercomputer itself is rather fluid, and today's supercomputer tends to become tomorrow's ordinary computer. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were dedicated to running a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. The early and mid-1980s saw machines with a modest number of vector processors working in parallel to become the standard. Typical numbers of processors were in the range of four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massive parallel processing systems with thousands of "ordinary" CPUs, some being off the shelf units and others being custom designs. Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and coprocessors like NVIDIA Tesla GPGPUs, AMD GPUs, IBM Cell, FPGAs. Most modern supercomputers are now highly-tuned computer clusters using commodity processors combined with custom interconnects.


Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion). A particular class of problems, known as Grand Challenge problems, are problems whose full solution requires semi-infinite computing resources.


Supercomputers using custom CPUs traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times — in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.


As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort to eliminating software serialization, and using hardware to address the remaining bottlenecks.
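Amdahl's law can be stated in one formula: if a fraction p of a program's work can be parallelized across n processors, the overall speedup is 1 / ((1 - p) + p / n), so the serial fraction (1 - p) caps the speedup no matter how many processors are added. A short sketch:

```python
def amdahl_speedup(p, n):
    """Speedup when fraction p of the work runs in parallel on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallel, no number of CPUs beats 20x,
# because the 5% serial fraction always runs at single-processor speed.
for n in (16, 1024, 65536):
    print(n, round(amdahl_speedup(0.95, n), 2))

print(round(1 / (1 - 0.95), 1))  # the asymptotic limit: 20.0
```

This is why supercomputer designers spend so much effort eliminating serialization: shrinking (1 - p) raises the ceiling far more than adding processors does.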

INTERNET

The Internet is a global system of interconnected computer networks that use the standard Internet Protocol Suite (TCP/IP) to serve billions of users worldwide. It is a network of networks that consists of millions of private, public, academic, business, and government networks of local to global scope that are linked by a broad array of electronic and optical networking technologies. The Internet carries a vast array of information resources and services, most notably the inter-linked hypertext documents of the World Wide Web (WWW) and the infrastructure to support electronic mail.


Most traditional communications media, such as telephone and television services, are reshaped or redefined using the technologies of the Internet, giving rise to services such as Voice over Internet Protocol (VoIP) and IPTV. Newspaper publishing has been reshaped into Web sites, blogging, and web feeds. The Internet has enabled or accelerated the creation of new forms of human interactions through instant messaging, Internet forums, and social networking sites.

The origins of the Internet reach back to the 1960s when the United States funded research projects of its military agencies to build robust, fault-tolerant, and distributed computer networks. This research and a period of civilian funding of a new U.S. backbone by the National Science Foundation spawned worldwide participation in the development of new networking technologies and led to the commercialization of an international network in the mid 1990s, and resulted in the following popularization of countless applications in virtually every aspect of modern human life. As of 2009, an estimated quarter of Earth's population uses the services of the Internet.

The Internet has no centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own standards. Only the overreaching definitions of the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System, are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.


History of Internet
The USSR's launch of Sputnik spurred the United States to create the Advanced Research Projects Agency (ARPA or DARPA) in February 1958 to regain a technological lead.[2][3] ARPA created the Information Processing Technology Office (IPTO) to further the research of the Semi Automatic Ground Environment (SAGE) program, which had networked country-wide radar systems together for the first time. The IPTO's purpose was to find ways to address the US Military's concern about survivability of their communications networks, and as a first step interconnect their computers at the Pentagon, Cheyenne Mountain, and SAC HQ. J. C. R. Licklider, a promoter of universal networking, was selected to head the IPTO. Licklider moved from the Psycho-Acoustic Laboratory at Harvard University to MIT in 1950, after becoming interested in information technology. At MIT, he served on a committee that established Lincoln Laboratory and worked on the SAGE project. In 1957 he became a Vice President at BBN, where he bought the first production PDP-1 computer and conducted the first public demonstration of time-sharing.


Modern uses

The Internet is allowing greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections and web applications.

The Internet can now be accessed almost anywhere by numerous means, especially through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet from anywhere there is a wireless network supporting that device's technology. Within the limitations imposed by small screens and other limited facilities of such pocket-sized devices, services of the Internet, including email and the web, may be available. Service providers may restrict the services offered and wireless data transmission charges may be significantly higher than other access methods.

The Internet has also become a large market for companies; some of the biggest companies today have grown by taking advantage of the efficient nature of low-cost advertising and commerce through the Internet, also known as e-commerce. It is the fastest way to spread information to a vast number of people simultaneously. The Internet has also revolutionized shopping: for example, a person can order a CD online and receive it in the mail within a couple of days, or download it directly in some cases. The Internet has also greatly facilitated personalized marketing, which allows a company to market a product to a specific person or a specific group of people more effectively than any other advertising medium. Examples of personalized marketing include online communities such as MySpace, Friendster, Facebook, Twitter, Orkut and others which thousands of Internet users join to advertise themselves and make friends online. Many of these users are young teens and adolescents ranging from 13 to 25 years old. In turn, when they advertise themselves they advertise interests and hobbies, which online marketing companies can use as an indication of what those users will purchase online, and to advertise their own products to those users.

The low cost and nearly instantaneous sharing of ideas, knowledge, and skills has made collaborative work dramatically easier, with the help of collaborative software. Not only can a group cheaply communicate and share ideas, but the wide reach of the Internet allows such groups to easily form in the first place. An example of this is the free software movement, which has produced, among other programs, Linux, Mozilla Firefox, and OpenOffice.org. Internet "chat", whether in the form of IRC chat rooms or channels, or via instant messaging systems, allows colleagues to stay in touch in a very convenient way when working at their computers during the day. Messages can be exchanged even more quickly and conveniently than via e-mail. Extensions to these systems may allow files to be exchanged, "whiteboard" drawings to be shared or voice and video contact between team members.

Version control systems allow collaborating teams to work on shared sets of documents without either accidentally overwriting each other's work or having members wait until they get "sent" documents to be able to make their contributions. Business and project teams can share calendars as well as documents and other information. Such collaboration occurs in a wide variety of areas including scientific research, software development, conference planning, political activism and creative writing. Social and political collaboration is also becoming more widespread as both Internet access and computer literacy grow. From the flash mob 'events' of the early 2000s to the use of social networking in the 2009 Iranian election protests, the Internet allows people to work together more effectively and in many more ways than was possible without it.

The Internet allows computer users to remotely access other computers and information stores easily, wherever they may be across the world. They may do this with or without the use of security, authentication and encryption technologies, depending on the requirements. This is encouraging new ways of working from home, collaboration and information sharing in many industries. An accountant sitting at home can audit the books of a company based in another country, on a server situated in a third country that is remotely maintained by IT specialists in a fourth. These accounts could have been created by home-working bookkeepers, in other remote locations, based on information e-mailed to them from offices all over the world. Some of these things were possible before the widespread use of the Internet, but the cost of private leased lines would have made many of them infeasible in practice. An office worker away from their desk, perhaps on the other side of the world on a business trip or a holiday, can open a remote desktop session into their normal office PC using a secure Virtual Private Network (VPN) connection via the Internet. This gives the worker complete access to all of their normal files and data, including e-mail and other applications, while away from the office. This concept has been referred to among system administrators as the Virtual Private Nightmare, because it extends the secure perimeter of a corporate network into its employees' homes.




Professor Leonard Kleinrock with one of the first ARPANET Interface Message Processors at UCLA
At the IPTO, Licklider's successor Ivan Sutherland in 1965 got Lawrence Roberts to start a project to make a network, and Roberts based the technology on the work of Paul Baran,[4] who had written an exhaustive study for the United States Air Force that recommended packet switching (as opposed to circuit switching) to achieve better network robustness and disaster survivability. Roberts had worked at the MIT Lincoln Laboratory originally established to work on the design of the SAGE system. UCLA professor Leonard Kleinrock had provided the theoretical foundations for packet networks in 1962, and later, in the 1970s, for hierarchical routing, concepts which have been the underpinning of the development towards today's Internet.



Sutherland's successor Robert Taylor convinced Roberts to build on his early packet switching successes and come and be the IPTO Chief Scientist. Once there, Roberts prepared a report called Resource Sharing Computer Networks which was approved by Taylor in June 1968 and laid the foundation for the launch of the working ARPANET the following year.



After much work, the first two nodes of what would become the ARPANET were interconnected between Kleinrock's Network Measurement Center at the UCLA's School of Engineering and Applied Science and Douglas Engelbart's NLS system at SRI International (SRI) in Menlo Park, California, on October 29, 1969. The third site on the ARPANET was the Culler-Fried Interactive Mathematics centre at the University of California at Santa Barbara, and the fourth was the University of Utah Graphics Department. In an early sign of future growth, there were already fifteen sites connected to the young ARPANET by the end of 1971.

The ARPANET was one of the "eve" networks of today's Internet. In an independent development, Donald Davies at the UK National Physical Laboratory also arrived at the concept of packet switching in the early 1960s, first giving a talk on the subject in 1965, after which the teams in the new field on the two sides of the Atlantic first became acquainted. It was Davies' coinage of the terms "packet" and "packet switching" that was adopted as the standard terminology. Davies also built a packet-switched network in the UK, called the Mark I, in 1970.[5]
Following the demonstration that packet switching worked on the ARPANET, the British Post Office, Telenet, DATAPAC and TRANSPAC collaborated to create the first international packet-switched network service, referred to in the UK as the International Packet Switched Service (IPSS), in 1978. The collection of X.25-based networks grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981. The X.25 packet switching standard was developed in the CCITT (now called ITU-T) around 1976.

A plaque commemorating the birth of the Internet at Stanford University
X.25 was independent of the TCP/IP protocols that arose from the experimental work of DARPA on the ARPANET, Packet Radio Net and Packet Satellite Net during the same time period.





The early ARPANET ran on the Network Control Program (NCP), a standard designed and first implemented in December 1970 by a team called the Network Working Group (NWG) led by Steve Crocker. To respond to the network's rapid growth as more and more locations connected, Vinton Cerf and Robert Kahn developed the first description of the now widely used TCP protocols during 1973 and published a paper on the subject in May 1974. Use of the term "Internet" to describe a single global TCP/IP network originated in December 1974 with the publication of RFC 675, the first full specification of TCP that was written by Vinton Cerf, Yogen Dalal and Carl Sunshine, then at Stanford University. During the next nine years, work proceeded to refine the protocols and to implement them on a wide range of operating systems. The first TCP/IP-based wide-area network was operational by January 1, 1983 when all hosts on the ARPANET were switched over from the older NCP protocols. In 1985, the United States' National Science Foundation (NSF) commissioned the construction of the NSFNET, a university 56 kilobit/second network backbone using computers called "fuzzballs" by their inventor, David L. Mills. The following year, NSF sponsored the conversion to a higher-speed 1.5 megabit/second network. A key decision to use the DARPA TCP/IP protocols was made by Dennis Jennings, then in charge of the Supercomputer program at NSF.



The opening of the network to commercial interests began in 1988. The US Federal Networking Council approved the interconnection of the NSFNET to the commercial MCI Mail system in that year and the link was made in the summer of 1989. Other commercial e-mail services were soon connected, including OnTyme, Telemail and Compuserve. In that same year, three commercial Internet service providers (ISPs) were created: UUNET, PSINet and CERFNET. Important, separate networks that offered gateways into, then later merged with, the Internet include Usenet and BITNET. Various other commercial and educational networks, such as Telenet, Tymnet, Compuserve and JANET were interconnected with the growing Internet. Telenet (later called Sprintnet) was a large privately funded national computer network with free dial-up access in cities throughout the U.S. that had been in operation since the 1970s. This network was eventually interconnected with the others in the 1980s as the TCP/IP protocol became increasingly popular. The ability of TCP/IP to work over virtually any pre-existing communication network allowed for great ease of growth, although the rapid growth of the Internet was due primarily to the availability of an array of standardized commercial routers from many companies, the availability of commercial Ethernet equipment for local-area networking, and the widespread implementation and rigorous standardization of TCP/IP on UNIX and virtually every other common operating system.



This NeXT Computer was used by Sir Tim Berners-Lee at CERN and became the world's first Web server.
Although the basic applications and guidelines that make the Internet possible had existed for almost two decades, the network did not gain a public face until the 1990s. On 6 August 1991, CERN, a pan-European organization for particle research, publicized the new World Wide Web project. The Web was invented by British scientist Tim Berners-Lee in 1989. An early popular web browser was ViolaWWW, patterned after HyperCard and built using the X Window System. It was eventually replaced in popularity by the Mosaic web browser. In 1993, the National Center for Supercomputing Applications at the University of Illinois released version 1.0 of Mosaic, and by late 1994 there was growing public interest in the previously academic, technical Internet. By 1996 usage of the word Internet had become commonplace, and consequently, so had its use as a synecdoche in reference to the World Wide Web.



Meanwhile, over the course of the decade, the Internet successfully accommodated the majority of previously existing public computer networks (although some networks, such as FidoNet, have remained separate). During the 1990s, it was estimated that the Internet grew by 100 percent per year, with a brief period of explosive growth in 1996 and 1997.[6] This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary open nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network.[7] The estimated population of Internet users is 1.67 billion as of June 30, 2009.[8]

Protocols
The complex communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. While the hardware can often be used to support other software systems, it is the design and the rigorous standardization process of the software architecture that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been delegated to the Internet Engineering Task Force (IETF).[9] The IETF convenes standard-setting working groups, open to any individual, on the various aspects of Internet architecture. Resulting discussions and final standards are published in a series of publications, each called a Request for Comments (RFC), freely available on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices (BCP) when implementing Internet technologies.



The Internet Standards describe a framework known as the Internet Protocol Suite. This is a model architecture that divides methods into a layered system of protocols (RFC 1122, RFC 1123). The layers correspond to the environment or scope in which their services operate. At the top is the Application Layer, the space for the application-specific networking methods used in software applications, e.g., a web browser program. Below this top layer, the Transport Layer connects applications on different hosts via the network (e.g., client–server model) with appropriate data exchange methods. Underlying these layers are the core networking technologies, consisting of two layers. The Internet Layer enables computers to identify and locate each other via Internet Protocol (IP) addresses, and allows them to connect to one another via intermediate (transit) networks. Lastly, at the bottom of the architecture, is a software layer, the Link Layer, that provides connectivity between hosts on the same local network link, such as a local area network (LAN) or a dial-up connection. The model, also known as TCP/IP, is designed to be independent of the underlying hardware, which the model therefore does not concern itself with in any detail. Other models have been developed, such as the Open Systems Interconnection (OSI) model; although the two are not compatible in the details of description or implementation, many similarities exist, and the TCP/IP protocols are usually included in discussions of OSI networking.
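The layering described above can be sketched in miniature. The following Python toy is not a real network stack, and every address and port number in it is an illustrative placeholder; it only shows how each layer wraps the data handed down from the layer above with its own header:

```python
# A toy sketch of layered encapsulation -- NOT a real network stack.
# All addresses and port numbers below are illustrative placeholders.

def encapsulate(payload: str) -> dict:
    # Application Layer: the message produced by, e.g., a web browser
    application = {"data": payload}
    # Transport Layer: adds source and destination ports
    transport = {"src_port": 49152, "dst_port": 80, "segment": application}
    # Internet Layer: adds source and destination IP addresses
    internet = {"src_ip": "192.0.2.1", "dst_ip": "198.51.100.7",
                "packet": transport}
    # Link Layer: adds hardware addressing for the local network link
    return {"src_mac": "aa:bb:cc:00:11:22", "frame": internet}

frame = encapsulate("GET / HTTP/1.1")
# Peeling the layers back off recovers the original application data
print(frame["frame"]["packet"]["segment"]["data"])  # GET / HTTP/1.1
```

On a real host this wrapping and unwrapping is performed by the operating system's protocol stack, not by application code.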



The most prominent component of the Internet model is the Internet Protocol (IP), which provides addressing systems (IP addresses) for computers on the Internet. IP enables internetworking and essentially establishes the Internet itself. IP Version 4 (IPv4) is the initial version used on the first generation of today's Internet and is still in dominant use. It was designed to address up to ~4.3 billion (4.3×10^9) Internet hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which is estimated to enter its final stage in approximately 2011.[10] A new protocol version, IPv6, was developed in the mid-1990s which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 is currently in the commercial deployment phase around the world and Internet address registries (RIRs) have begun to urge all resource managers to plan rapid adoption and conversion.
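The difference in size between the two address spaces can be checked directly with Python's standard `ipaddress` module:

```python
import ipaddress

# The entire IPv4 address space: 2**32 addresses, roughly 4.3 billion
ipv4_space = ipaddress.ip_network("0.0.0.0/0")
print(ipv4_space.num_addresses)   # 4294967296

# The entire IPv6 address space: 2**128 addresses
ipv6_space = ipaddress.ip_network("::/0")
print(ipv6_space.num_addresses)   # about 3.4 x 10**38
```

The 128-bit IPv6 space is so much larger that exhaustion of the kind facing IPv4 is not a practical concern.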



IPv6 is not interoperable with IPv4. It essentially establishes a "parallel" version of the Internet not directly accessible with IPv4 software. This means software upgrades or translator facilities are necessary for every networking device that needs to communicate on the IPv6 Internet. Most modern computer operating systems already support both versions of the Internet Protocol. Network infrastructures, however, are still lagging in this development. Aside from the complex physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe how to exchange data over the network. Indeed, the Internet is defined by its interconnections and routing policies.
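One of the translator facilities mentioned above is the IPv4-mapped IPv6 address (the `::ffff:0:0/96` prefix), which lets dual-stack software represent an IPv4 peer inside IPv6 data structures. A small sketch using Python's standard `ipaddress` module (the address is a documentation example, not a real host):

```python
import ipaddress

# 192.0.2.1 is a reserved documentation/example address
v4 = ipaddress.IPv4Address("192.0.2.1")

# The same address embedded in IPv6 form, so that dual-stack software
# can handle both address families uniformly
mapped = ipaddress.IPv6Address("::ffff:192.0.2.1")

print(mapped.ipv4_mapped)        # 192.0.2.1
print(mapped.ipv4_mapped == v4)  # True
```

Mechanisms like this ease coexistence, but they do not make an IPv4-only device reachable on the IPv6 Internet; that still requires an upgrade or a translating gateway.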

Structure




The Internet structure and its usage characteristics have been studied extensively. It has been determined that both the Internet IP routing structure and hypertext links of the World Wide Web are examples of scale-free networks. Similar to the way the commercial Internet providers connect via Internet exchange points, research networks tend to interconnect into large subnetworks such as GEANT, GLORIAD, Internet2 (successor of the Abilene Network), and the UK's national research and education network JANET. These in turn are built around smaller networks (see also the list of academic computer network organizations).



Many computer scientists describe the Internet as a "prime example of a large-scale, highly engineered, yet highly complex system". The Internet is extremely heterogeneous; for instance, data transfer rates and physical characteristics of connections vary widely. The Internet exhibits "emergent phenomena" that depend on its large-scale organization. For example, data transfer rates exhibit temporal self-similarity. The principles of the routing and addressing methods for traffic in the Internet reach back to their origins in the 1960s, when the eventual scale and popularity of the network could not be anticipated. Thus, the possibility of developing alternative structures is under investigation.

Services of Internet

Information



Many people use the terms Internet and World Wide Web, or just the Web, interchangeably, but the two terms are not synonymous. The World Wide Web is a global set of documents, images and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs). URIs symbolically identify services, web servers, file servers, and other databases, and the documents and resources that they provide; clients use them to locate and access those resources via the Hypertext Transfer Protocol (HTTP), the primary carrier protocol of the Web. HTTP is only one of the hundreds of communication protocols used on the Internet. Web services may also use HTTP to allow software systems to communicate in order to share and exchange business logic and data.
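A URI bundles together everything a client needs to locate a resource. Python's standard `urllib.parse` can split one into its components (the URL below is a made-up example on the reserved `example.com` domain):

```python
from urllib.parse import urlparse

uri = urlparse("http://www.example.com:8080/docs/index.html?lang=en#intro")

print(uri.scheme)    # 'http'            -> which protocol to speak
print(uri.hostname)  # 'www.example.com' -> which server to contact
print(uri.port)      # 8080              -> which port on that server
print(uri.path)      # '/docs/index.html'-> which resource to request
print(uri.query)     # 'lang=en'         -> parameters for the resource
print(uri.fragment)  # 'intro'           -> position within the document
```

The scheme tells the client which protocol to use; everything after it tells the client, and then the server, exactly which resource is wanted.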

World Wide Web browser software, such as Microsoft's Internet Explorer, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, let users navigate from one web page to another via hyperlinks embedded in the documents. These documents may also contain any combination of computer data, including graphics, sounds, text, video, multimedia and interactive content including games, office applications and scientific demonstrations. Through keyword-driven Internet research using search engines like Yahoo! and Google, users worldwide have easy, instant access to a vast and diverse amount of online information. Compared to printed encyclopedias and traditional libraries, the World Wide Web has enabled the decentralization of information.

The Web has also enabled individuals and organizations to publish ideas and information to a potentially large audience online at greatly reduced expense and time delay. Publishing a web page, a blog, or building a website involves little initial cost and many cost-free services are available. Publishing and maintaining large, professional web sites with attractive, diverse and up-to-date information is still a difficult and expensive proposition, however. Many individuals and some companies and groups use web logs or blogs, which are largely used as easily updatable online diaries. Some commercial organizations encourage staff to communicate advice in their areas of specialization in the hope that visitors will be impressed by the expert knowledge and free information, and be attracted to the corporation as a result. One example of this practice is Microsoft, whose product developers publish their personal blogs in order to pique the public's interest in their work. Collections of personal web pages published by large service providers remain popular, and have become increasingly sophisticated. Whereas operations such as Angelfire and GeoCities have existed since the early days of the Web, newer offerings from, for example, Facebook and MySpace currently have large followings. These operations often brand themselves as social network services rather than simply as web page hosts.

Advertising on popular web pages can be lucrative, and e-commerce or the sale of products and services directly via the Web continues to grow. In the early days, web pages were usually created as sets of complete and isolated HTML text files stored on a web server. More recently, websites are more often created using content management or wiki software with, initially, very little content. Contributors to these systems, who may be paid staff, members of a club or other organization or members of the public, fill underlying databases with content using editing pages designed for that purpose, while casual visitors view and read this content in its final HTML form. There may or may not be editorial, approval and security systems built into the process of taking newly entered content and making it available to the target visitors.

Communication

E-mail is an important communications service available on the Internet. The concept of sending electronic text messages between parties in a way analogous to mailing letters or memos predates the creation of the Internet. Today it can be important to distinguish between Internet and internal e-mail systems. Internet e-mail may travel and be stored unencrypted on many other networks and machines out of both the sender's and the recipient's control. During this time it is quite possible for the content to be read and even tampered with by third parties, if anyone considers it important enough. Purely internal or intranet mail systems, where the information never leaves the corporate or organization's network, are much more secure, although in any organization there will be IT and other personnel whose job may involve monitoring, and occasionally accessing, the e-mail of other employees not addressed to them. Pictures, documents and other files can be sent as e-mail attachments. E-mails can be cc-ed to multiple e-mail addresses.
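The structure of such a message, with Cc recipients and a file attachment, can be sketched with Python's standard `email` package. All addresses, the subject and the attachment contents here are placeholders:

```python
from email.message import EmailMessage

# Build an Internet e-mail message with a Cc list and an attachment
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Cc"] = "carol@example.com, dave@example.com"  # cc-ed recipients
msg["Subject"] = "Quarterly report"
msg.set_content("The report is attached.")

# Attach an arbitrary binary file; adding an attachment turns the
# message into a multipart MIME structure
msg.add_attachment(b"binary report data", maintype="application",
                   subtype="octet-stream", filename="report.bin")

print(msg["Cc"])
print(msg.is_multipart())  # True
```

Actually sending such a message would be done over SMTP (e.g., with `smtplib`), which is omitted here since it requires a live mail server.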

Internet telephony is another common communications service made possible by the creation of the Internet. VoIP stands for Voice-over-Internet Protocol, referring to the protocol that underlies all Internet communication. The idea began in the early 1990s with walkie-talkie-like voice applications for personal computers. In recent years many VoIP systems have become as easy to use and as convenient as a normal telephone. The benefit is that, as the Internet carries the voice traffic, VoIP can be free or cost much less than a traditional telephone call, especially over long distances and especially for those with always-on Internet connections such as cable or ADSL. VoIP is maturing into a competitive alternative to traditional telephone service. Interoperability between different providers has improved and the ability to call or receive a call from a traditional telephone is available. Simple, inexpensive VoIP network adapters are available that eliminate the need for a personal computer.
Voice quality can still vary from call to call but is often equal to and can even exceed that of traditional calls. Remaining problems for VoIP include emergency telephone number dialling and reliability. Currently, a few VoIP providers provide an emergency service, but it is not universally available. Traditional phones are line-powered and operate during a power failure; VoIP does not do so without a backup power source for the phone equipment and the Internet access devices. VoIP has also become increasingly popular for gaming applications, as a form of communication between players. Popular VoIP clients for gaming include Ventrilo and Teamspeak. Wii, PlayStation 3, and Xbox 360 also offer VoIP chat features.

Data transfer

File sharing is an example of transferring large amounts of data across the Internet. A computer file can be e-mailed to customers, colleagues and friends as an attachment. It can be uploaded to a website or FTP server for easy download by others. It can be put into a "shared location" or onto a file server for instant use by colleagues. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. In any of these cases, access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—usually fully encrypted—across the Internet. The origin and authenticity of the file received may be checked by digital signatures or by MD5 or other message digests. These simple features of the Internet, over a worldwide basis, are changing the production, sale, and distribution of anything that can be reduced to a computer file for transmission. This includes all manner of print publications, software products, news, music, film, video, photography, graphics and the other arts. This in turn has caused seismic shifts in each of the existing industries that previously controlled the production and distribution of these products.
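Checking a received file against a published digest, as described above, takes only a few lines with Python's standard `hashlib`; the file contents below are a stand-in for a real download:

```python
import hashlib

# Stand-in for a downloaded file's contents
data = b"example file contents"

# The publisher computes and publishes a digest of the file...
published_md5 = hashlib.md5(data).hexdigest()

# ...and the recipient recomputes it locally; a match indicates the
# file was not corrupted in transit
print(hashlib.md5(data).hexdigest() == published_md5)  # True

# MD5 is now considered weak against deliberate tampering; SHA-256 is
# the usual modern choice for the same purpose
print(hashlib.sha256(data).hexdigest())
```

A matching digest guards against accidental corruption; guarding against a malicious substitute file additionally requires a digital signature, since an attacker who replaces the file could also replace an unsigned digest.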

Streaming media refers to the continuous delivery of audio and video over the Internet; many existing radio and television broadcasters now promote Internet "feeds" of their live audio and video streams (for example, the BBC). They may also allow time-shift viewing or listening such as Preview, Classic Clips and Listen Again features. These providers have been joined by a range of pure Internet "broadcasters" who never had on-air licenses. This means that an Internet-connected device, such as a computer or something more specific, can be used to access on-line media in much the same way as was previously possible only with a television or radio receiver. The range of available types of content is much wider, from specialized technical webcasts to on-demand popular multimedia services. Podcasting is a variation on this theme, where the material, usually audio, is downloaded and played back on a computer or shifted to a portable media player to be listened to on the move. These techniques using simple equipment allow anybody, with little censorship or licensing control, to broadcast audio-visual material worldwide.

Webcams can be seen as an even lower-budget extension of this phenomenon. While some webcams can give full-frame-rate video, the picture is usually either small or updates slowly. Internet users can watch animals around an African waterhole, ships in the Panama Canal, traffic at a local roundabout or monitor their own premises, live and in real time. Video chat rooms and video conferencing are also popular with many uses being found for personal webcams, with and without two-way sound. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with a vast number of users. It uses a flash-based web player to stream and show video files. Registered users may upload an unlimited amount of video and build their own personal profile. YouTube claims that its users watch hundreds of millions of videos, and upload hundreds of thousands of new videos, daily.

Social impact
 
The Internet has enabled entirely new forms of social interaction, activities, and organizing, thanks to its basic features such as widespread usability and access. Social networking websites such as Facebook, Twitter and MySpace have created new ways to socialize and interact. Users of these sites are able to add a wide variety of information to pages, to pursue common interests, and to connect with others. It is also possible to find existing acquaintances, to allow communication among existing groups of people. Sites like LinkedIn foster commercial and business connections. YouTube and Flickr specialize in users' videos and photographs.


In the first decade of the 21st century, the first generation was raised with widespread availability of Internet connectivity, bringing consequences and concerns in areas such as personal privacy and identity, and the distribution of copyrighted materials. These "digital natives" face a variety of challenges that were not present for prior generations.
The Internet has achieved new relevance as a political tool, leading to Internet censorship by some states. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing in order to carry out their mission, giving rise to Internet activism. Some governments, such as those of Iran, North Korea, Myanmar, the People's Republic of China, and Saudi Arabia, restrict what people in their countries can access on the Internet, especially political and religious content.[citation needed] This is accomplished through software that filters domains and content so that they may not be easily accessed or obtained without elaborate circumvention.[original research?]

In Norway, Denmark, Finland[19] and Sweden, major Internet service providers have voluntarily, possibly to avoid such an arrangement being turned into law, agreed to restrict access to sites listed by authorities. While this list of forbidden URLs is only supposed to contain addresses of known child pornography sites, the content of the list is secret.[citation needed] Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet, but do not mandate filtering software. There are many free and commercially available software programs, called content-control software, with which a user can choose to block offensive websites on individual computers or networks, in order to limit a child's access to pornographic materials or depiction of violence.

The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Today, many Internet forums have sections devoted to games and funny videos; short cartoons in the form of Flash movies are also popular. Over 6 million people use blogs or message boards as a means of communication and for the sharing of ideas. The pornography and gambling industries have taken advantage of the World Wide Web, and often provide a significant source of advertising revenue for other websites.[citation needed] Although many governments have attempted to restrict both industries' use of the Internet, this has generally failed to stop their widespread popularity.[citation needed]

One main area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing games to online gambling. This has revolutionized the way many people interact while spending their free time on the Internet. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Non-subscribers were limited to certain types of game play or certain games. Many people use the Internet to access and download music, movies and other works for their enjoyment and relaxation. Free and fee-based services exist for all of these activities, using centralized servers and distributed peer-to-peer technologies. Some of these sources exercise more care with respect to the original artists' copyrights than others.

Many people use the World Wide Web to access news, weather and sports reports, to plan and book vacations and to find out more about their interests. People use chat, messaging and e-mail to make and stay in touch with friends worldwide, sometimes in the same way as some previously had pen pals. The Internet has seen a growing number of Web desktops, where users can access their files and settings via the Internet.

Cyberslacking can become a serious drain on corporate resources; the average UK employee spent 57 minutes a day surfing the Web while at work, according to a 2003 study by Peninsula Business Services.[20] Internet addiction disorder is excessive computer use that interferes with daily life. Some psychologists believe that ordinary Internet use has other effects on individuals, for instance by interfering with the deep thinking that leads to true creativity.

Tuesday, August 3, 2010

Windows

INTRODUCTION

Windows is the most widely used operating system for desktop and laptop computers. Developed by Microsoft, it primarily runs on x86-based CPUs, although some versions run on Intel's Itanium CPUs. Windows provides a graphical user interface and desktop environment in which applications are displayed in resizable, movable windows on screen.

Windows comes in both client and server versions, all of which support networking, the difference being that the server versions are designed to be dedicated servers. The client versions of Windows may also share data over the network and can be configured to grant access to all or specific files only. Windows PCs are used to access a variety of servers on the network, including Windows servers, UNIX, Linux and NetWare servers and mainframes.



Microsoft introduced the graphical user interface (GUI) version of Windows in the 1980s. It achieved widespread success with version 3.0 in the early 1990s. Subsequent versions were Windows 3.1 (the upgrade that introduced TrueType fonts, making desktop publishing possible for many more users); 3.11 (Windows for Workgroups, which included network support); 95 (the first 32-bit version); and 98. Windows 95 introduced many usability improvements, including long filenames; Windows 98 is very similar internally, except that it uses a new file system that supports larger hard disks, and it added a modified user interface more closely integrated with Web browsing.



TYPES OF WINDOWS

There are two types of windows that can appear on our desktop: the application window and the document window. An application window contains a running program and has a menu bar. A document window may appear inside an application window and may contain documents, files, or groups of items; document windows do not have menu bars.


HISTORY OF MICROSOFT WINDOWS

Following is a summary of Windows versions, from the original announcement in 1983 through Windows 7, the most recent version of Windows.

1983 Microsoft Windows was announced November 10, 1983.
1985 Microsoft Windows 1.0 is introduced on November 20, 1985 and is initially sold for $100.00.
1987 Microsoft Windows 2.0 was released December 9, 1987 and is initially sold for $100.00.
1987 Microsoft Windows/386 or Windows 386 is introduced December 9, 1987 and is initially sold for $100.00.
1988 Microsoft Windows/286 or Windows 286 is introduced June, 1988 and is initially sold for $100.00.
1990 Microsoft Windows 3.0 was released May 22, 1990. Microsoft Windows 3.0 full version was priced at $149.95 and the upgrade version was priced at $79.95.
1991 Following its decision not to develop operating systems cooperatively with IBM, Microsoft renames its version of OS/2 to Windows NT.
1991 Microsoft Windows 3.0 or Windows 3.0a with multimedia was released October, 1991.
1992 Microsoft Windows 3.1 was released April, 1992 and sells more than 1 Million copies within the first two months of its release.
1992 Microsoft Windows for Workgroups 3.1 was released October, 1992.
1993 Microsoft Windows NT 3.1 was released July 27, 1993.
1993 The number of licensed users of Microsoft Windows now totals more than 25 Million.
1994 Microsoft Windows for Workgroups 3.11 was released February, 1994.
1994 Microsoft Windows NT 3.5 was released September 21, 1994.
1995 Microsoft Windows NT 3.51 was released May 30, 1995.
1995 Microsoft Windows 95 was released August 24, 1995 and sells more than 1 Million copies within 4 days.
1996 Microsoft Windows NT 4.0 was released July 29, 1996.
1996 Microsoft Windows CE 1.0 was released November, 1996.
1997 Microsoft Windows CE 2.0 was released November, 1997.
1998 Microsoft Windows 98 was released June, 1998.
1998 Microsoft Windows CE 2.1 was released July, 1998.
1998 In October 1998, Microsoft announced that future releases of Windows NT would no longer carry the NT initials and that the next edition would be Windows 2000.
1999 Microsoft Windows 98 SE (Second Edition) was released May 5, 1999.
1999 Microsoft Windows CE 3.0 was released 1999.
2000 On January 4th at CES Bill Gates announces the new version of Windows CE will be called Pocket PC.
2000 Microsoft Windows 2000 was released February 17, 2000.
2000 Microsoft Windows ME (Millennium) released June 19, 2000.
2001 Microsoft Windows XP is released October 25, 2001.
2001 Microsoft Windows XP 64-Bit Edition (Version 2002) for Itanium systems is released in 2001.
2003 Microsoft Windows Server 2003 is released March 28, 2003.
2003 Microsoft Windows XP 64-Bit Edition (Version 2003) for Itanium 2 systems is released on March 28, 2003.
2003 Microsoft Windows XP Media Center Edition 2003 is released on December 18, 2003.
2004 Microsoft Windows XP Media Center Edition 2005 is released on October 12, 2004.
2005 Microsoft Windows XP Professional x64 Edition is released on April 24, 2005.
2005 On July 23, 2005, Microsoft announces that its next operating system, codenamed "Longhorn," will be named Windows Vista.
2006 Microsoft releases Microsoft Windows Vista to corporations on November 30, 2006.
2007 Microsoft releases Microsoft Windows Vista and Office 2007 to the general public January 30, 2007.
2009 Microsoft releases Windows 7 October 22, 2009.

 Windows 7 2009
Windows 7 is a version of Microsoft Windows, a series of operating systems produced by Microsoft for use on personal computers, including home and business desktops, laptops, netbooks, tablet PCs, and media center PCs.[4] Windows 7 was released to manufacturing on July 22, 2009,[5] and reached general retail availability on October 22, 2009,[6] less than three years after the release of its predecessor, Windows Vista. Windows 7's server counterpart, Windows Server 2008 R2, was released at the same time.
Unlike Windows Vista, which introduced a large number of new features, Windows 7 was intended to be a more focused, incremental upgrade to the Windows line, with the goal of being compatible with applications and hardware with which Windows Vista is already compatible.[7] Presentations given by Microsoft in 2008 focused on multi-touch support, a redesigned Windows Shell with a new taskbar, referred to as the Superbar, a home networking system called HomeGroup,[8] and performance improvements. Some standard applications that have been included with prior releases of Microsoft Windows, including Windows Calendar, Windows Mail, Windows Movie Maker, and Windows Photo Gallery, are not included in Windows 7;[9][10] most are instead offered separately at no charge as part of the Windows Live Essentials suite.[11]

Windows Server 2008 (2007)

Windows Server 2008 is a server operating system from Microsoft and the successor to Windows Server 2003. It was known by the codename Windows Server “Longhorn” until May 15, 2007, when Bill Gates announced its official title during his keynote address at WinHEC.
Windows Server 2008 is the server operating system containing many of the new client features from Windows Vista. This is a similar relationship to that between Windows Server 2003 and Windows XP.
Beta 1 was released on July 27, 2005. Beta 2 was announced and released on May 23, 2006 at WinHEC 2006, and Beta 3 was released publicly on April 25, 2007.



 Windows Vista 2007
Windows Vista is a line of graphical operating systems used on personal computers, including home and business desktops, notebook computers, Tablet PCs, and media centers. Prior to its announcement on July 22, 2005, Windows Vista was known by its codename “Longhorn”. Development was completed on November 8, 2006; over the following three months it was released in stages to computer hardware and software manufacturers, business customers, and retail channels. On January 30, 2007, it was released worldwide to the general public, and was made available for purchase and downloading from Microsoft’s web site. The release of Windows Vista comes more than five years after the introduction of its predecessor, Windows XP, making it the longest time span between two releases of Microsoft Windows.



Windows XP Media Center Edition (October 2005)

Windows XP Media Center Edition (MCE) is a version of Windows XP designed to serve as a home-entertainment hub. The last version, Windows XP Media Center Edition 2005, was released on October 12, 2004.

 
Windows Server 2003 (April 2003)

Windows Server 2003 is a server operating system produced by Microsoft. Introduced on April 24, 2003 as the successor to Windows 2000 Server, it is considered by Microsoft to be the cornerstone of their Windows Server System line of business server products.

According to Microsoft, Windows Server 2003 is more scalable and delivers better performance than its predecessor, Windows 2000.

 
Windows XP (October 2001)

Windows XP is a line of proprietary operating systems developed by Microsoft for use on general-purpose computer systems, including home and business desktops, notebook computers, and media centers. The letters “XP” stand for eXPerience. Codenamed “Whistler”, after Whistler, British Columbia, as many Microsoft employees skied at the Whistler-Blackcomb ski resort during its development, Windows XP is the successor to both Windows 2000 and Windows Me, and is the first consumer-oriented operating system produced by Microsoft to be built on the Windows NT kernel and architecture. Windows XP was first released on October 25, 2001, and over 400 million copies are in use, according to a January 2006 estimate by an IDC analyst. It is succeeded by Windows Vista, which was released to volume license customers on November 8, 2006, and worldwide to the general public on January 30, 2007.

Windows ME Millennium Edition (July 2000)


Windows Millennium Edition, or Windows Me, is a hybrid 16-bit/32-bit graphical operating system released on September 14, 2000 by Microsoft.





Windows 2000 ( Feb 2000 )

Windows 2000 (also referred to as Win2K) is an interruptible, graphical and business-oriented operating system that was designed to work with either uniprocessor or symmetric multi-processor 32-bit Intel x86 computers. It is part of the Microsoft Windows NT line of operating systems and was released on February 17, 2000. It was succeeded by Windows XP in October 2001 and Windows Server 2003 in April 2003. Windows 2000 is classified as a hybrid kernel operating system.

Windows 98 Second Edition (May 1999)


Windows 98 Second Edition (SE) is an update to Windows 98, released on May 5, 1999. It includes fixes for many minor issues, improved USB support, and the replacement of Internet Explorer 4.0 with the significantly faster Internet Explorer 5. Also included is Internet Connection Sharing, which allows multiple computers on a LAN to share a single Internet connection through Network Address Translation. Other features in the update include Microsoft NetMeeting 3.0 and integrated support for DVD-ROM drives. However, it is not a free upgrade for Windows 98, but a stand-alone product. This can cause problems if programs specifically request Windows 98 SE, but the user only owns Windows 98.



Windows 98 (June 1998)

Windows 98 (codenamed Memphis and formerly known as Windows 97) is a graphical operating system released on June 25, 1998 by Microsoft and the successor to Windows 95. Like its predecessor, it is a hybrid 16-bit/32-bit monolithic product based on MS-DOS.







Windows NT 4.0 (July 1996)


Windows NT 4.0 is the fourth release of Microsoft’s Windows NT line of operating systems, released to manufacturing on July 29, 1996. It is a 32-bit Windows system available in both workstation and server editions with a graphical environment similar to that of Windows 95. The “NT” designation in the product’s title initially stood for “New Technology” according to Bill Gates, but now no longer has any specific meaning.



Windows 95 (August 1995)


Windows 95 was a consumer-oriented graphical user interface-based operating system. It was released on August 24, 1995 by Microsoft, and was a significant progression from the company’s previous Windows products. During development it was referred to as Windows 4.0 or by the internal codename Chicago.


 
Windows NT 3.51 (May 1995)


Windows NT 3.51 is the third release of Microsoft’s Windows NT line of operating systems. It was released on May 30, 1995, nine months after Windows NT 3.5. The release provided two notable improvements: first, NT 3.51 began a short-lived outing of Microsoft Windows on the PowerPC CPU architecture; second, it provided client/server support for interoperating with Windows 95, which was released three months after NT 3.51. Windows NT 4.0 became its successor a year later; Microsoft continued to support 3.51 until December 31, 2001.


Windows NT 3.5 (September 1994)


Windows NT 3.5 is the second release of the Microsoft Windows NT operating system. It was released on September 21, 1994.

One of the primary goals during Windows NT 3.5’s development was to increase the speed of the operating system; as a result, the project was given the codename “Daytona” in reference to the Daytona International Speedway in Daytona Beach, Florida.




Windows for Workgroups 3.11 (November 1993)


Windows for Workgroups 3.11 (originally codenamed Snowball) was released in December 1993. It supported 32-bit file access, full 32-bit network redirectors, and the VCACHE.386 file cache, shared between them. The standard execution mode of the Windows kernel was discontinued in Windows for Workgroups 3.11.





Windows NT 3.1 (August 1993)


Windows NT 3.1 is the first release of Microsoft’s Windows NT line of server and business desktop operating systems, and was released to manufacturing on July 27, 1993. The version number was chosen to match that of Windows 3.1, the then-latest GUI from Microsoft, on account of the similar visual appearance of the user interface. Two editions of NT 3.1 were made available, Windows NT 3.1 and Windows NT Advanced Server.



Windows for Workgroups 3.1 (October 1992)


Windows for Workgroups 3.1 (originally codenamed Kato), released in October 1992, features native networking support. Windows for Workgroups 3.1 is an extended version of Windows 3.1 which comes with SMB file sharing support via the NetBEUI and/or IPX network protocols, includes the Hearts card game, and introduced VSHARE.386, the Virtual Device Driver version of the SHARE.EXE Terminate and Stay Resident program.



Windows 3.1 (April 1992)


Windows 3.1x is a graphical user interface and a part of the Microsoft Windows software family. Several editions were released between 1992 and 1994, succeeding Windows 3.0. This family of Windows can run in either Standard or 386 Enhanced memory modes. The exception is Windows for Workgroups 3.11, which can only officially run in 386 Enhanced mode.


Windows 3.0 (May 1990)


Windows 3.0 is the third major release of Microsoft Windows, and came out on May 22, 1990. It became the first widely successful version of Windows and a powerful rival to Apple Macintosh and the Commodore Amiga on the GUI front. It was succeeded by Windows 3.1.


Windows 2.1 (June 1988)


Windows 2.1x is a family of Microsoft Windows graphical user interface-based operating environments.

Less than a year after the release of Windows 2.0, Windows/286 2.1 and Windows/386 2.1 were released on May 27, 1988.


Windows 2.03 (December 1987)


Windows 2.0 is a version of the Microsoft Windows graphical user interface-based operating environment that superseded Windows 1.0. Windows 2.0 was said to more closely match Microsoft’s pre-release publicity for Windows 1.0 than Windows 1.0 did.

Windows 1.01 (November 1985)


Windows 1.0 is a 16-bit graphical operating environment released on November 20, 1985. It was Microsoft’s first attempt to implement a multi-tasking graphical user interface-based operating environment on the PC platform.






Hardware basics
Monitor: The big TV-like thing. Probably has its own on/off switch as well as brightness and contrast controls.
Screen: The part of the monitor where all the action takes place — similar to a TV set screen.
System unit: The main body of the computer. Houses the main on/off switch plus access to the floppy disk and CD-ROM drives.
Mouse: Our main tool for navigating (getting around) and for making the computer do what we want it to do. I’ll talk about mice in more detail in a moment.
Keyboard: Laid out like a standard typewriter, the keyboard is used for typing and, in some cases, can also be used as an alternative to the mouse.

Computer software refers to the somewhat invisible stuff that makes the computer do whatever it is we want it to do. Any program that we purchase or download, as well as any pictures, music, or other stuff we put “in our computer” is software. Software is information that’s recorded to some kind of disk, such as a floppy disk, CD-ROM, or the hard disk that resides permanently inside our computer. So with the basic concepts of hardware and software covered, let’s start talking about how we use that stuff.

Mouse Basics

The one piece of hardware we need to get comfy with right off the bat is the mouse. To use the mouse, rest our hand comfortably on it, with our index finger resting (but not pressing) on the left mouse button. When the computer is on, we’ll see a little arrow, called the mouse pointer, on the screen. As we roll the mouse around on a mouse pad or on our desktop, the mouse pointer moves in the same direction as we move the mouse.

The following list explains basic mouse terminology we need to know:

• Mouse button (or primary mouse button): Usually the mouse button on the left — the one that rests comfortably under our index finger when we rest our right hand on the mouse.
• Right mouse button (or secondary mouse button): The mouse button on the right.
• Point: To move the mouse so that the mouse pointer is touching, or “hovering over,” some object on the screen.
• Click: To point to an item and then press and release the primary mouse button.
• Double-click: To point to an item and then click the primary mouse button twice in rapid succession — click click!
• Right-click: To point to an item and then press and release the secondary mouse button.
• Drag: To hold down the primary mouse button while moving the mouse.
• Right-drag: To hold down the secondary mouse button while moving the mouse.
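Under the hood, the system distinguishes a double-click from two separate clicks purely by timing: if the second click arrives within a set interval, the pair counts as a double-click. Here is a minimal Python sketch of that logic; the 500 ms threshold mirrors the Windows default (the value `GetDoubleClickTime` returns), but the constant and function name here are just assumptions for illustration.

```python
# Classify a stream of button-press timestamps (in milliseconds) as
# clicks or double-clicks, based only on the gap between presses.

DOUBLE_CLICK_MS = 500  # assumed threshold; Windows lets users change it

def classify_clicks(timestamps_ms):
    """Return 'click' or 'double-click' labels for each press sequence."""
    events = []
    last = None
    for t in timestamps_ms:
        if last is not None and t - last <= DOUBLE_CLICK_MS:
            events[-1] = "double-click"  # promote the preceding click
            last = None                  # a third click starts a new pair
        else:
            events.append("click")
            last = t
    return events

print(classify_clicks([0, 200, 2000]))  # ['double-click', 'click']
```

This is also why a slow "click ... click" opens nothing: each press falls outside the interval, so the system sees two ordinary clicks rather than one double-click.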

Windows XP is geared toward two-button mouse operation. If our mouse has a little wheel in the middle, we can use that for scrolling. If our mouse has three buttons on it, we can ignore the button in the middle for now. I’ll show how we can get some hands-on experience using our mouse in a moment. If we’re a lefty, we can configure the mouse for left-hand use. Doing so makes the button on the right the primary mouse button and the button on the left the secondary mouse button (so our index finger is still over the primary mouse button).

Components of Microsoft Windows

The desktop
The desktop, proper, is the large area of the screen. Everything else we see on the screen is actually resting on top of this virtual desktop. As mentioned, from the moment we start our computer to the moment we turn it off, the desktop is always there, even when it’s completely covered by some large program window.

The mouse pointer
The mouse pointer is the little indicator that moves when we move the mouse. As mentioned, to point to something, we rest this mouse pointer on it. Sometimes the mouse pointer appears as a hollow arrow. Other times, it has a different shape, depending on where it’s currently resting. When the computer is busy doing something, the mouse pointer turns to a little hourglass symbol. That means “Wait — the computer is doing something.” Wait until the mouse pointer changes back to a little arrow (or some other symbol) before we try clicking anything else on-screen.

The desktop icons
Each little picture on the desktop is an icon. Each icon, in turn, represents some program we can run, or some location on our computer where things are stored. The desktop icons on our computer probably won’t match the ones shown in the figure, because different computers have different programs installed. And all Windows users (including us) can easily add new desktop icons, and delete unused ones, to their liking. To open an icon, we either click or double-click it, depending on how our copy of Windows XP is currently configured. If we click a desktop icon and it doesn’t open up into a window, our computer is set up for double-clicking, and we’ll have to double-click icons to open them for the time being.

The taskbar
The taskbar is the colored strip along the bottom of the desktop. In a sense, the taskbar is like the center drawer of a real desk. It provides quick access to frequently used programs and features of Windows. Even when some large program window is covering the Windows desktop and its icons, the taskbar can remain visible on the screen so that we can get to the things it offers. As discussed in the sections that follow, the taskbar contains the Start button, the Quick Launch toolbar, and the Notifications area. If we don’t see the taskbar at all, it’s probably hidden (out of the way for the moment). Typically, to bring the taskbar into view, we must move the mouse pointer down to the very bottom of the screen. If the taskbar doesn’t slide into view automatically, we may have to drag it up. To do so, move the mouse pointer to the very bottom of the screen, hold down the primary (left) mouse button, drag the mouse pointer upward a half inch or so, and then release the mouse button.

The Start button
The Start button, as the name implies, is where we can start any program on our computer. When we click the Start button, the Start menu opens. The Start menu is divided into two sections. The left half of the menu provides access to frequently used programs. The right side provides access to frequently used folders (places where things that are “in our computer” are stored), as well as access to Help and Support and other features of Windows. Our Start menu won’t look exactly like the one in the figure. Again, that’s because it provides options, programs, and features that might be unique to our computer.

Icons
A pea-sized object on our computer screen is called an icon. There are probably some icons right on top of our desktop, as well as some smaller icons in the Quick Launch toolbar and Notifications area of the taskbar. Icons also appear within many of the program windows we open on our desktop. The appearance of an icon often gives us some clue about what kind of stuff is inside the icon and what is likely to appear when we open the icon. The following list summarizes the main types of icons we’ll come across:

Folder icon: Represents a folder, a place on the computer where files are stored. Opening a folder icon displays the contents of that folder. For example, the My Documents, My Music, My Pictures, XP Bible on Max, and 01Chap desktop icons are all folder icons. Two of those folders, My Pictures and 01Chap, are currently open on the desktop. Each of those folders contains still more icons.

Program icon: Represents a program. When we open a program icon, we start the program it represents. For example, opening the Internet Explorer icon launches the Microsoft Internet Explorer program. There’s no real consistency to program icons. Each is just a “logo” of the underlying program.

Document icon: Represents a document; typically this is something we can change and print. The icon usually has a little dog-ear fold in the upper-right corner to resemble a paper document. For example, inside the window in the lower-right corner, many of the icons represent Microsoft Word documents (hence the letter W in the icon). The Grandmom icon in the upper My Pictures window is also a document icon. It represents a picture stored on disk. That folder is currently shown in Thumbnails view, which displays a small thumbnail-sized image of the actual photo, as opposed to some generic icon.

Shortcut icon: The little arrow in the lower-left corner of an icon identifies that icon as a shortcut to some program, document, folder, or Web site. Unlike most icons, which generally represent an actual file or location on our disk, shortcut icons just provide quick access to things.


We can manipulate virtually all icons by using the set of basic skills in the following list:
• As we know, we can open any icon by double-clicking it. If we’ve opted to switch to the single-click approach, we can also open the icon with a single click. Whatever the icon represents will open in a window atop the desktop, as discussed in a moment.
• To move an icon, drag it to any new location on the screen. To move a bunch of icons, first select the icons we want to move by dragging the mouse pointer. Then drag the whole selection to a new place on the screen.

Tip Remember, to drag something means to rest the mouse pointer on the item we want to move, and then to hold down the mouse button as we move the mouse pointer to the new location. To drop the item at the new location, just release the mouse button.

• To see all the options available for an icon, right-click the icon to open its shortcut menu.
• To organize all the icons on the desktop, right-click an empty part of the desktop and choose Arrange Icons By on the shortcut menu that appears. Then click whichever option we prefer (Name, Type, and so forth). Choosing Name will arrange the icons into (roughly) alphabetic order (although some icons, such as My Documents, My Computer, and Recycle Bin, tend to stay near the upper-left corner of the screen).
• To have Windows XP automatically arrange icons for us, right-click an empty part of the desktop or the window, choose Arrange Icons By from the menu, and then choose Auto Arrange from the submenu that appears. After we have done this, however, we cannot move icons, because they will immediately jump back into their original place. To turn off the automatic arrangement, repeat this step. When Auto Arrange has a check mark next to it, that feature is currently turned on.
• If we prefer to put icons into our own order, and want them neatly arranged, choose Arrange Icons By → Align to Grid. After we do so, the icons will align on an invisible grid, creating a neater appearance.
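Align to Grid works by rounding each icon's free-form position to the nearest intersection of an invisible grid. A tiny Python sketch of the idea (the 75-pixel spacing is an assumption, not Windows' actual value):

```python
# Snap a dropped icon's (x, y) position to the nearest grid intersection,
# which is all "Align to Grid" conceptually does.

GRID = 75  # assumed grid spacing in pixels

def snap(x: int, y: int, grid: int = GRID) -> tuple[int, int]:
    """Round a free-form icon position to the nearest grid point."""
    return (round(x / grid) * grid, round(y / grid) * grid)

print(snap(80, 160))  # (75, 150)
print(snap(10, 40))   # (0, 75)
```

Because every icon is snapped the same way, icons dropped anywhere near a grid point end up in tidy rows and columns.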


FOLDERS

If we have worked with DOS or earlier versions of Windows, we are probably familiar with directories and subdirectories. Windows replaced the concepts of directories and subdirectories with the concept of folders. A folder is like a directory in that it holds programs and files. We can also put folders within folders just as we used to put directories within directories (subdirectories). We will want to use folders to organize our computer and the work we do on the computer.

Creating folder: Creating a folder is easy. If we are on the Desktop, click the right mouse button and select New. Choose Folder from the pop-up menu and a new folder will be displayed on the Desktop. The current name of the folder, “New Folder,” will be highlighted, so all we have to do is type in the folder name we prefer. We can use any characters in the name except the following: \ / : * ? " < > |
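For readers who script their file management, the same naming rules can be applied programmatically. The helper below, in Python, is a hypothetical sketch, not how Explorer itself validates names; the character set it checks is the full set Windows rejects in file and folder names.

```python
import os

# The full set of characters Windows rejects in file and folder names.
INVALID = set('\\/:*?"<>|')

def create_folder(parent: str, name: str) -> str:
    """Validate a folder name against Windows' rules, then create it.

    `create_folder` is an illustrative helper name, not a standard API.
    """
    bad = INVALID & set(name)
    if bad:
        raise ValueError(f"name contains invalid characters: {sorted(bad)}")
    path = os.path.join(parent, name)
    os.makedirs(path, exist_ok=True)  # like choosing New > Folder
    return path
```

On other operating systems some of these characters are technically legal, but rejecting them anyway keeps folder names portable to Windows.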


General operations related to folder
To move or copy an item, click on the item with the right mouse button and hold it down. Drag the item to the new location. To move an item, select the Move Here option. To make a copy of the item, select the Copy Here option. This technique works when we are working on the Desktop, in Windows Explorer and in My Computer.
To see what resides in the folder, double click on the folder.
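The Move Here and Copy Here menu choices correspond to two basic file operations, which Python's standard library exposes directly. A small sketch with hypothetical helper names:

```python
import os
import shutil

def move_item(src: str, dst_dir: str) -> str:
    """Like dragging with 'Move Here': the item leaves its old location."""
    return shutil.move(src, dst_dir)

def copy_item(src: str, dst_dir: str) -> str:
    """Like dragging with 'Copy Here': the original stays put."""
    return shutil.copy(src, os.path.join(dst_dir, os.path.basename(src)))
```

The difference between the two menu options is exactly the difference between these calls: after a move the source path no longer exists, while after a copy both paths do.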


RECYCLE BIN
Sometimes we will create documents or folders and later we will find that we no longer need them. It’s a good idea to keep our computer free of files and folders that are no longer needed. We can get rid of unneeded files and folders three ways:

Keyboard
The easiest way to delete a file, folder or program is to highlight the item by clicking on it with the left mouse button. Once the item is highlighted, press the Delete key on the keyboard.

Right Mouse Button
We can delete a file or folder by clicking on the unwanted item with the right mouse button. Choose Delete from the pop-up menu. We will be shown a confirmation message. To send the item and its contents to the Recycle Bin, click Yes and the item will be deleted.

Drag and Drop
We can drag the unwanted item directly to the Recycle Bin. Point at the file or folder with the mouse, click the left mouse button and hold it down, and drag the file or folder to the Recycle Bin on the Desktop.

Restore from Recycle Bin
We can also restore deleted files to the location from which they were deleted. To do so, open the Recycle Bin and right-click the icon of the deleted item, then choose Restore. The item will be returned to its original place.
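Conceptually, the Recycle Bin is just a holding folder plus a record of where each item originally lived, which is what makes Restore possible. The toy Python class below sketches that idea; the real Recycle Bin stores its metadata in hidden system folders, not in a dictionary.

```python
import os
import shutil

class RecycleBin:
    """Toy model: a holding folder plus a memory of original locations."""

    def __init__(self, bin_dir: str):
        self.bin_dir = bin_dir
        self.origins = {}  # item name -> original full path
        os.makedirs(bin_dir, exist_ok=True)

    def delete(self, path: str) -> None:
        """Move an item into the bin, remembering where it lived."""
        name = os.path.basename(path)
        self.origins[name] = path
        shutil.move(path, os.path.join(self.bin_dir, name))

    def restore(self, name: str) -> None:
        """Put a deleted item back in its original place."""
        original = self.origins.pop(name)
        shutil.move(os.path.join(self.bin_dir, name), original)
```

Note that deleting only moves the item, so restoring is cheap; the disk space is reclaimed only when the bin itself is emptied.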

DIALOG BOX
A dialog box is sort of like a window. Instead of representing an entire program, however, a dialog box generally contains some simple settings from which we can choose. The term dialog box comes from the fact that we carry on a kind of “dialogue” with the box by making selections from the options it presents. Controls within a dialog box are similar to the controls on any other kind of machine, be it a car, dishwasher, or stereo. Controls enable us to control how a program behaves and looks.

TITLE BAR
The title bar shows the System Menu icon, the title of the window or name of the program being run in the window, and the buttons for resizing and closing the window.

It consists of following options:

Minimize button:
When we click the Minimize button, the window disappears, shrinking to a button on the taskbar.

Maximize/Restore button:
 Clicking the Maximize button expands the window to full-screen size (a quick way to hide other windows that may be distracting us). When the window is full-screen size, the Maximize button turns into the Restore button. To return the window to its previous size, click the Restore button.

Close button:
 Clicking the Close button closes the window, taking it off the screen and out of the taskbar as well. To restart the program in the future, we’ll need to go through whatever procedure we usually perform to start that program.

Sizing pad
The sizing pad in the lower-right corner of the window enables us to size the window. Just point to the sizing pad and then drag it outward to enlarge the window, or inward to shrink the window. We can actually size a window by dragging any edge or any corner of the window. The sizing pad just provides a slightly larger target on which to rest the mouse pointer.


MENU BAR
Many windows that we open will have a menu bar across the top. The menu bar offers access to all the features that the program within the window has to offer. When we click on a menu option, a menu drops down.

TOOLBAR
Some windows also have a toolbar just below the menu bar. The toolbar provides one-click access to the most frequently used menu commands. Most toolbars provide ToolTips, a brief description that appears on the screen after we rest the mouse pointer on the button for a few seconds. Other programs, including WordPad, might show the descriptive text for the button we’re pointing to down in the status bar.

Toolbars are optional in most programs. We can turn them on and off using options from that program’s View menu. Some programs even offer customizable toolbars (although WordPad isn’t one of them). If a toolbar can be customized, right-clicking the toolbar and choosing Customize from its shortcut menu will take us to the options for customizing the toolbar. For future reference, keep in mind that if we’re looking to learn more about the toolbars in a specific program, we can open that program’s help system and search for the word toolbar.

Status bar
The status bar along the bottom of a window plays different roles in different programs. However, a common role is to display helpful information. For example, the status bar at the bottom of the WordPad window often displays the helpful message For Help, press F1 to let us know that help is available for the program. When we point to a toolbar button in WordPad, the status bar message changes to describe the purpose of that button.

System menu
The System menu enables us to move, size, and close the window by using the keyboard rather than the mouse. We might find this handy if we do a lot of typing and prefer not to take our hands off the keyboard to manage a window. To open the System menu, press Alt + Spacebar (hold down the Alt key, press and release the spacebar, and then release the Alt key) or click the System menu icon in the upper-left corner of the window. When the System menu is open, we can choose options in the usual manner. Click the option we want. Alternatively, on the keyboard, type the underlined letter of the option we want; for example, type the letter N to choose the Minimize option.


Networking in windows
Windows comes in both client and server versions, all of which support networking, the difference being that the server versions are designed to be dedicated servers. The client versions of Windows may also share data over the network and can be configured to grant access to all or specific files only. Windows PCs are used to access a variety of servers on the network, including Windows servers, UNIX, Linux and NetWare servers and mainframes.
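The client–server arrangement described above can be illustrated with a minimal example. The sketch below uses plain Python sockets (not any Windows-specific API): a tiny echo server runs in a background thread, and a client connects to it and exchanges data, which is the essence of what a Windows PC does when it talks to a file or application server on the network.

```python
import socket
import threading

def run_echo_server(server_sock):
    """Accept one client connection and echo back whatever it sends."""
    conn, _addr = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# The "server": listens on an OS-assigned port on the local machine.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=run_echo_server, args=(server,), daemon=True).start()

# The "client": connects to the server, sends a request, reads the reply.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello server")
    reply = client.recv(1024)

server.close()
```

Real Windows networking layers protocols such as SMB file sharing on top of this same socket model.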

Some shortcut keys used in Windows
General keyboard shortcuts:
• CTRL+C Copy.
• CTRL+X Cut.
• CTRL+V Paste.
• CTRL+Z Undo.
• DELETE Delete.
• SHIFT+DELETE Delete selected item permanently without placing the item in the Recycle Bin.
• CTRL while dragging an item Copy selected item.
• CTRL+SHIFT while dragging an item Create a shortcut to the selected item.
• F2 Rename selected item.
• CTRL+RIGHT ARROW Move the insertion point to the beginning of the next word.
• CTRL+LEFT ARROW Move the insertion point to the beginning of the previous word.
• CTRL+DOWN ARROW Move the insertion point to the beginning of the next paragraph.
• CTRL+UP ARROW Move the insertion point to the beginning of the previous paragraph.
• CTRL+SHIFT with any of the arrow keys Highlight a block of text.
• SHIFT with any of the arrow keys Select more than one item in a window or on the desktop, or select text within a document.
• CTRL+A Select all.
• F3 Search for a file or folder.
• ALT+ENTER View properties for the selected item.
• ALT+F4 Close the active item, or quit the active program.
• ALT+SPACEBAR Opens the shortcut menu for the active window.
• CTRL+F4 Close the active document in programs that allow us to have multiple documents open simultaneously.
• ALT+TAB Switch between open items.
• ALT+ESC Cycle through items in the order they were opened.
• F6 Cycle through screen elements in a window or on the desktop.
• F4 Display the Address bar list in My Computer or Windows Explorer.
• SHIFT+F10 Display the shortcut menu for the selected item.
• CTRL+ESC Display the Start menu.
• ALT+Underlined letter in menu name Display the corresponding menu.
• Underlined letter in a command name on an open menu Carry out the corresponding command.
• F10 Activate the menu bar in the active program.
• RIGHT ARROW Open the next menu to the right, or open a submenu.
• LEFT ARROW Open the next menu to the left, or close a submenu.
• F5 Refresh the active window.
• BACKSPACE View the folder one level up in My Computer or Windows Explorer.
• ESC Cancel the current task.
• SHIFT when we insert a CD into the CD-ROM drive Prevent the CD from automatically playing.


Dialog box keyboard shortcuts:
• CTRL+TAB Move forward through tabs.
• CTRL+SHIFT+TAB Move backward through tabs.
• TAB Move forward through options.
• SHIFT+TAB Move backward through options.
• ALT+ Underlined letter Carry out the corresponding command or select the corresponding option.
• ENTER Carry out the command for the active option or button.
• SPACEBAR Select or clear the check box if the active option is a check box.
• Arrow keys Select a button if the active option is a group of option buttons.
• F1 Display Help.
• F4 Display the items in the active list.
• BACKSPACE Open a folder one level up if a folder is selected in the Save As or Open dialog box.