Wednesday, August 4, 2010

Software

Software, by definition, is the collection of computer programs, procedures and documentation that performs different tasks on a computer system. The term 'software' was first used by John Tukey in 1958. At the most basic level, computer software consists of machine language: groups of binary values that specify processor instructions, and those instructions change the state of the computer hardware in a predefined sequence. Briefly, computer software is the language in which a computer speaks. There are different types of computer software. What are the major types? Let us see.


Major Types of Software

Programming Software:
This is one of the most commonly known and popularly used forms of computer software. It comes in the form of tools that assist a programmer in writing computer programs. Computer programs are sets of logical instructions that make a computer system perform certain tasks. The tools that help programmers instruct a computer system include text editors, compilers and interpreters.
A programming tool or software development tool is a program or application that software developers use to create, debug, maintain, or otherwise support other programs and applications. The term usually refers to relatively simple programs that can be combined to accomplish a task, much as one might use multiple hand tools to fix a physical object.
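
As a minimal sketch of the edit-compile-run cycle, in Python (whose built-in compile() and exec() stand in for a compiler and an interpreter; the one-line sample program is ours):

    # A tiny "program" as plain text, just as a text editor would save it.
    source = 'print("2 + 2 =", 2 + 2)'

    # A compiler translates the source text into executable instructions...
    code_object = compile(source, "<example>", "exec")

    # ...and an interpreter carries those instructions out.
    exec(code_object)   # prints: 2 + 2 = 4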

System Software:

It helps in running the computer hardware and the computer system itself. System software is a collection of operating systems, device drivers, servers, windowing systems and utilities. System software shields the application programmer from the hardware, memory and other internal complexities of a computer.

The operating system and utility programs are the two major categories of system software. Just as the processor is the nucleus of the computer system, the operating system is the nucleus of all software activity.

Application Software:

It enables end users to accomplish certain specific tasks. Business software, databases and educational software are some forms of application software. Word processors dedicated to specialized tasks performed by the user are other examples of application software.


The operating system is the most important program that runs on a computer. Every general-purpose computer must have an operating system to run other programs. Operating systems perform basic tasks, such as

i. recognizing input from the keyboard
ii. sending output to the display screen
iii. keeping track of files and directories on the disk
iv. controlling peripheral devices such as disk drives and printers.

It is the first program loaded into memory when the computer is turned on and, in a sense, brings life to the computer hardware. Without it, you cannot use your word processing software, spreadsheet software, or any other applications.
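
As a small illustration of such services (a minimal sketch in Python; the script simply asks the operating system, through the standard os module, to do the real work):

    import os

    # Task iii: ask the operating system for the files and directories here.
    for name in os.listdir("."):
        # The OS also reports per-file details, such as size on disk.
        size = os.path.getsize(name)
        # Task ii: send output to the display screen, again via the OS.
        print(name, size, "bytes")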


Without an operating system, you cannot communicate with your computer. When you give the computer a command, the operating system relays the instructions to the 'brain' of the computer, the microprocessor or CPU. You cannot speak directly to the CPU because it understands only machine language. When you are working in an application program, such as Microsoft Word, commands that you give the application are sent through the operating system to the CPU. Windows 2000, Windows 95/98, Mac OS, Unix and DOS are all examples of operating systems.




Apart from these three basic types of software, there are some other well-known forms of computer software, such as inventory management software, ERP, utility software and accounting software. Take a look at some of them.

Application software, also known as an application, is computer software designed to help the user perform a single specific task or multiple related tasks. Examples include enterprise software, accounting software, office suites, graphics software and media players.


Application software is contrasted with system software and middleware, which manage and integrate a computer's capabilities but typically do not directly apply them to tasks that benefit the user. A simple, if imperfect, analogy in the world of hardware is the relationship of an electric light bulb (an application) to an electric power plant (a system): the power plant merely generates electricity, which is of no real use until harnessed by an application like the electric light that performs a service benefiting the user.
Application software classification


There are many types of application software:

An application suite consists of multiple applications bundled together. They usually have related functions, features and user interfaces, and may be able to interact with each other, e.g. open each other's files. Business applications often come in suites, e.g. Microsoft Office, OpenOffice.org, and iWork, which bundle together a word processor, a spreadsheet, etc.; but suites exist for other purposes, e.g. graphics or music.

Enterprise software addresses the needs of organization processes and data flow, often in a large distributed environment. (Examples include financial, customer relationship management and supply chain management systems.) Departmental software is a sub-type of enterprise software with a focus on smaller organizations or groups within a large organization. (Examples include travel expense management and IT helpdesk systems.)

Enterprise infrastructure software provides common capabilities needed to support enterprise software systems. (Examples include databases, email servers, and network and security management tools.)

Information worker software addresses the needs of individuals to create and manage information, often for individual projects within a department, in contrast to enterprise management. Examples include time and resource management tools, documentation tools, and analytical and collaborative software. Word processors, spreadsheets, email and blog clients, personal information managers, and individual media editors may each aid in multiple information worker tasks.

Content access software is software used primarily to access content without editing, but may include software that allows for content editing. Such software addresses the needs of individuals and groups to consume digital entertainment and published digital content. (Examples include Media Players, Web Browsers, Help browsers, and Games)

Educational software is related to content access software, but has the content and/or features adapted for use by educators or students. For example, it may deliver evaluations (tests), track progress through material, or include collaborative capabilities.

Simulation software simulates physical or abstract systems for research, training or entertainment purposes.

Media development software addresses the needs of individuals who generate print and electronic media for others to consume, most often in a commercial or educational setting. This includes Graphic Art software, Desktop Publishing software, Multimedia Development software, HTML editors, Digital Animation editors, Digital Audio and Video composition, and many others.[2]

Product engineering software is used in developing hardware and software products. This includes computer aided design (CAD), computer aided engineering (CAE), computer language editing and compiling tools, Integrated Development Environments, and Application Programming Interfaces.

Utility Software:

Also known as service routines, utility software helps in the management of computer hardware and application software, performing a small range of tasks. Disk defragmenters, system utilities and virus scanners are typical examples of utility software.
Utility software is a kind of system software designed to help analyze, configure, optimize and maintain the computer. A single piece of utility software is usually called a utility (abbr. util) or tool.


Utility software should be contrasted with application software, which allows users to do things like creating text documents, playing games, listening to music or surfing the web. Rather than providing these kinds of user-oriented or output-oriented functionality, utility software usually focuses on how the computer infrastructure (including the computer hardware, operating system, application software and data storage) operates. Due to this focus, utilities are often rather technical and targeted at people with an advanced level of computer knowledge.

Most utilities are highly specialized and designed to perform only a single task or a small range of tasks. However, there are also utility suites that combine several such features in one package.

Most major operating systems come with several pre-installed utilities.

Utility software categories


Disk storage utilities

Disk defragmenters can detect computer files whose contents are broken across several locations on the hard disk, and move the fragments to one location to increase efficiency.

Disk checkers can scan the contents of a hard disk to find files or areas that are corrupted in some way, or were not correctly saved, and eliminate them so that the hard drive operates more efficiently.

Disk cleaners can find files that are unnecessary to computer operation or that take up considerable amounts of space, helping the user decide what to delete when the hard disk is full.

Disk space analyzers visualize disk space usage by computing the size of every folder (including its subfolders) and file on a drive, showing how the used space is distributed.
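
A minimal sketch of that idea, assuming Python and a readable directory tree (the starting folder "." is illustrative):

    import os

    def folder_size(path):
        """Total bytes used by a folder, including all of its subfolders."""
        total = 0
        for root, dirs, files in os.walk(path):
            for name in files:
                full = os.path.join(root, name)
                if os.path.isfile(full):   # skip dangling links
                    total += os.path.getsize(full)
        return total

    print(folder_size("."), "bytes used under the current folder")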

Disk partitioners can divide an individual drive into multiple logical drives, each with its own file system, which can be mounted by the operating system and treated as an individual drive.

Backup utilities can make a copy of all information stored on a disk, and restore either the entire disk (e.g. in the event of disk failure) or selected files (e.g. in the event of accidental deletion).
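
In its simplest form, a backup is a recursive copy that preserves the original organization of the files. A hedged sketch using Python's standard shutil module (the folder and file names are illustrative):

    import shutil

    # Back up: copy an entire directory tree, keeping its structure intact.
    shutil.copytree("my_documents", "my_documents_backup")

    # Restore a single accidentally deleted file from the backup.
    shutil.copy2("my_documents_backup/letter.txt", "my_documents/letter.txt")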

Disk compression utilities can transparently compress/uncompress the contents of a disk, increasing the capacity of the disk.

File managers provide a convenient method of performing routine data management tasks, such as deleting, renaming, cataloging, uncataloging, moving, copying, merging, generating and modifying data sets.

Archive utilities output a stream or a single file when provided with a directory or a set of files. Archive utilities, unlike archive suites, usually do not include compression or encryption capabilities. Some archive utilities may even have a separate un-archive utility for the reverse operation.
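
For example, a directory can be turned into one archive file, and back, with Python's standard tarfile module (a sketch; the paths are illustrative):

    import tarfile

    # Archive: many files in, a single file out (no compression here).
    with tarfile.open("project.tar", "w") as archive:
        archive.add("project")   # a directory and everything inside it

    # The reverse, "un-archive", operation.
    with tarfile.open("project.tar", "r") as archive:
        archive.extractall("restored")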

System profilers provide detailed information about the software installed and hardware attached to the computer.
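
A small sketch of the kind of facts a profiler gathers, using Python's standard platform module:

    import platform

    # Basic information about the machine and its operating system.
    print("System:   ", platform.system(), platform.release())
    print("Machine:  ", platform.machine())
    print("Processor:", platform.processor())
    print("Python:   ", platform.python_version())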

Anti-virus utilities scan for computer viruses.

Hex editors directly modify the raw bytes of a file, whether the file holds data or an actual program.
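
A sketch of what a hex editor displays, reading a file's raw bytes with Python (the file name is illustrative):

    # Read the first 16 bytes of a file, the way a hex editor sees them.
    with open("program.exe", "rb") as f:
        data = f.read(16)

    # Show each byte as two hexadecimal digits.
    print(" ".join(f"{byte:02x}" for byte in data))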

Data compression utilities output a shorter stream or a smaller file when provided with a stream or file.
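
The core operation in one call, sketched with Python's standard zlib module:

    import zlib

    original = b"the same words repeated, " * 50   # 1250 bytes
    smaller = zlib.compress(original)

    print(len(original), "->", len(smaller), "bytes")   # far fewer bytes out
    assert zlib.decompress(smaller) == original         # lossless round trip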

Cryptographic utilities encrypt and decrypt streams and files.
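
A minimal sketch using the third-party cryptography package (an assumption on our part; the post names no particular tool):

    from cryptography.fernet import Fernet   # pip install cryptography

    key = Fernet.generate_key()      # the secret key; keep it safe
    cipher = Fernet(key)

    token = cipher.encrypt(b"confidential notes")
    print(cipher.decrypt(token))     # b'confidential notes'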

Launcher applications provide a convenient access point for application software.

Registry cleaners clean and optimize the Windows registry by removing old registry keys that are no longer in use.

Network utilities analyze the computer's network connectivity, configure network settings, check data transfer or log events.
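
A basic connectivity check, sketched with Python's standard socket module (the host and port are illustrative):

    import socket

    # Try to open a TCP connection; success means the host is reachable.
    try:
        with socket.create_connection(("example.com", 80), timeout=5):
            print("example.com is reachable on port 80")
    except OSError as error:
        print("connection failed:", error)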

Command-line interfaces (CLIs) and graphical user interfaces (GUIs) allow the user to interact with and make changes to the operating system.


Data Backup and Recovery Software:
An ideal data backup and recovery program provides functionality beyond the simple copying of data files. It lets the user specify what is to be backed up and when, preserves the original organization of files, and allows easy retrieval of the backed-up data.

This was an overview of the major types of software. Computer software is so widespread today that we cannot imagine a world of computers without it; we would not be able to use computers at all if not for software. What is fascinating about the world of computers is that it has its own languages and its own ways of communicating with our human world, and human interaction with computers is possible thanks to software. I wonder if the word 'soft' in 'software' implies 'soft-spokenness', which is an important quality of pleasant communication.

Types of Computers

According to size

Microcomputers:

A microcomputer is a computer with a microprocessor as its central processing unit. Microcomputers are physically small compared to mainframes and minicomputers, and many (when equipped with a keyboard and screen for input and output) are also personal computers in the generic sense.
This type of computer includes systems for general-purpose and business use. They are usually called PCs (personal computers) after the microprocessor at their core. Examples are desktop computers, laptops and notebooks.

Monitors, keyboards and other devices for input and output may be integrated or separate. Computer memory in the form of RAM, and at least one other, less volatile storage device, are usually combined with the CPU on a system bus in a single unit. Other devices that make up a complete microcomputer system include batteries, a power supply unit, a keyboard and various input/output devices used to convey information to and from a human operator (printers, monitors, human interface devices). Microcomputers are designed to serve only a single user at a time, although they can often be modified with software or hardware to serve more than one user concurrently. Microcomputers fit well on or under desks or tables, so that they are within easy reach of the user. Bigger computers like minicomputers, mainframes and supercomputers take up large cabinets or even a dedicated room.



A microcomputer comes equipped with at least one type of data storage, usually RAM. Although some microcomputers (particularly early 8-bit home micros) performed tasks using RAM alone, some form of secondary storage is normally desirable. In the early days of home micros, this was often a data cassette deck (in many cases an external unit). Later, secondary storage (particularly floppy disk and hard disk drives) was built into the microcomputer case itself.



Server:

Basically, a server runs a network of computers and provides services to the other computers interlinked with it. Examples include file servers, print servers and chat servers.
A server computer is a computer, or series of computers, that link other computers or electronic devices together. They often provide essential services across a network, either to private users inside a large organization or to public users via the internet. For example, when you enter a query in a search engine, the query is sent from your computer over the internet to the servers that store all the relevant web pages. The results are sent back by the server to your computer.

Many servers have dedicated functionality such as web servers, print servers, and database servers. Enterprise servers are servers that are used in a business context.

The term server is used quite broadly in information technology. Despite the many server-branded products available (such as server editions of hardware, software and operating systems), in theory any computerised process that shares a resource with one or more client processes is a server. To illustrate this, take the common example of file sharing. While the existence of files on a machine does not classify it as a server, the mechanism in the operating system that shares these files with clients is the server.

Similarly, consider a web server application (such as the multiplatform Apache HTTP Server). This web server software can run on any capable computer. For example, while a laptop or personal computer is not typically known as a server, it can in this situation fulfil the role of one, and hence be labelled as one. In this case the machine's purpose as a web server classifies it in general as a server.
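
To make this concrete: a hedged sketch of a tiny file-sharing server using Python's standard http.server module. Run on any capable machine, it shares the current directory with any browser on the network (the port number is illustrative):

    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Share the current directory over HTTP; clients connect on port 8000.
    address = ("0.0.0.0", 8000)
    HTTPServer(address, SimpleHTTPRequestHandler).serve_forever()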

In the hardware sense, the word server typically designates computer models intended for running software applications under the heavy demand of a network environment. In this client–server configuration one or more machines, either a computer or a computer appliance, share information with each other with one acting as a host for the other.

While nearly any personal computer is capable of acting as a network server, a dedicated server will contain features making it more suitable for production environments. These features may include a faster CPU, increased high-performance RAM, and typically more than one large hard drive. More obvious distinctions include marked redundancy in power supplies, network connections, and even the servers themselves.

Between the 1990s and 2000s an increase in the use of dedicated hardware saw the advent of self-contained server appliances. One well-known product is the Google Search Appliance, a unit which combines hardware and software in an out-of-the-box packaging. Simpler examples of such appliances include switches, routers, gateways and print servers, all of which are available in a near plug-and-play configuration.

Modern operating systems such as Microsoft Windows or Linux distributions seem to be designed with a client–server architecture in mind. These OSes attempt to abstract the hardware, allowing a wide variety of software to work with components of the computer. In a sense, the operating system can be seen as serving hardware to the software, which in all but low-level programming languages must interact with it through an API.

Workstation:


This device is a very powerful microcomputer, designed for high speed, large storage capacity and additional memory. Workstations host applications that need powerful processing, such as complex scientific calculations, graphics-related tasks and game development.

A workstation is a high-end microcomputer designed for technical or scientific applications. Intended primarily to be used by one person at a time, they are commonly connected to a local area network and run multi-user operating systems. The term workstation has also been used to refer to a mainframe computer terminal or a PC connected to a network.



Historically, workstations have offered higher performance than personal computers, especially with respect to CPU, graphics, memory capacity and multitasking capability. They are optimized for the visualization and manipulation of complex data such as 3D mechanical designs, engineering simulations (e.g. computational fluid dynamics), animation and rendering of images, and mathematical plots. Consoles consist of at least a high-resolution display, a keyboard and a mouse, but may also offer multiple displays, graphics tablets, 3D mice (devices for manipulating and navigating 3D objects and scenes), etc. Workstations were the first segment of the computer market to present advanced accessories and collaboration tools.


Presently, the workstation market is highly commoditized and is dominated by large PC vendors, such as Dell and HP, selling Microsoft Windows/Linux running on Intel Xeon/AMD Opteron. Alternative UNIX-based platforms are provided by Apple Inc., Sun Microsystems, and SGI.



Mainframe:


This machine is used by companies or businesses where thousands of instructions execute simultaneously and must be completed in a limited time, and where many workers are connected with each other and work on the same data. These machines are very large and comparatively rare.

Mainframes (often colloquially referred to as Big Iron[1]) are powerful computers used mainly by large organizations for critical applications, typically bulk data processing such as census, industry and consumer statistics, enterprise resource planning, and financial transaction processing.



The term originally referred to the large cabinets that housed the central processing unit and main memory of early computers.[2][3] Later the term was used to distinguish high-end commercial machines from less powerful units.


Most large-scale computer system architectures were firmly established in the 1960s and most large computers were based on architecture established during that era up until the advent of Web servers in the 1990s. (The first Web server running anywhere outside Switzerland ran on an IBM mainframe at Stanford University as early as 1991. See History of the World Wide Web for details.)


There were several minicomputer operating systems and architectures that arose in the 1970s and 1980s, but minicomputers are generally not considered mainframes. (UNIX arose as a minicomputer operating system; Unix has scaled up over the years to acquire some mainframe characteristics.)


Many defining characteristics of "mainframe" were established in the 1960s, but those characteristics continue to expand and evolve to the present day.

Modern mainframe computers have abilities not so much defined by their single task computational speed (usually defined as MIPS — Millions of Instructions Per Second) as by their redundant internal engineering and resulting high reliability and security, extensive input-output facilities, strict backward compatibility with older software, and high utilization rates to support massive throughput. These machines often run for years without interruption, with repairs and hardware upgrades taking place during normal operation.



Software upgrades are only non-disruptive when using facilities such as IBM's z/OS and Parallel Sysplex, with workload sharing so one system can take over another's application while it is being refreshed. More recently, several IBM mainframe installations have delivered over a decade of continuous business service as of 2007, with hardware upgrades not interrupting service. Mainframes are defined by high availability, one of the main reasons for their longevity, because they are typically used in applications where downtime would be costly or catastrophic. The term Reliability, Availability and Serviceability (RAS) is a defining characteristic of mainframe computers. Proper planning and implementation are required to exploit these features.


In the 1960s, most mainframes had no interactive interface. They accepted sets of punched cards, paper tape, and/or magnetic tape and operated solely in batch mode to support back office functions such as customer billing. Teletype devices were also common, at least for system operators. By the early 1970s, many mainframes had acquired interactive user interfaces and operated as timesharing computers, supporting hundreds of users simultaneously along with batch processing. Users gained access through specialized terminals or, later, from personal computers equipped with terminal emulation software. By the 1980s, many mainframes supported graphical terminals (and terminal emulation) but not graphical user interfaces; this form of end-user computing was largely made obsolete in the 1990s by the personal computer. Nowadays most mainframes have partially or entirely phased out classic terminal access for end users in favor of Web user interfaces. Developers and operational staff typically continue to use terminals or terminal emulators.


Historically, mainframes acquired their name in part because of their substantial size, and because of requirements for specialized heating, ventilation, and air conditioning (HVAC), and electrical power. Those requirements ended by the mid-1990s with CMOS mainframe designs replacing the older bipolar technology. IBM claims its newer mainframes can reduce data center energy costs for power and cooling, and that they can reduce physical space requirements compared to server farms.



Supercomputers:


These computers are the most powerful and most expensive of all. Usually a supercomputer is one very large machine, but sometimes several big computers work in parallel on a single heavy task. Supercomputers take on jobs that other computers do not have the capability to handle: massive scientific calculations, weather forecasting, critical decryption of data and engineering tests.

A supercomputer is a computer that is at the frontline of current processing capacity, particularly speed of calculation. Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). In the 1980s a large number of smaller competitors entered the market, in parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash".



Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as Cray, IBM and Hewlett-Packard, who had purchased many of the 1980s companies to gain their experience. As of May 2010, the Cray Jaguar is the fastest supercomputer in the world.


The term supercomputer itself is rather fluid, and today's supercomputer tends to become tomorrow's ordinary computer. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were dedicated to running a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. The early and mid-1980s saw machines with a modest number of vector processors working in parallel to become the standard. Typical numbers of processors were in the range of four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massive parallel processing systems with thousands of "ordinary" CPUs, some being off the shelf units and others being custom designs. Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and coprocessors like NVIDIA Tesla GPGPUs, AMD GPUs, IBM Cell, FPGAs. Most modern supercomputers are now highly-tuned computer clusters using commodity processors combined with custom interconnects.


Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion). A particular class of problems, known as Grand Challenge problems, are problems whose full solution requires semi-infinite computing resources.


Supercomputers using custom CPUs traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times — in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.


As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort to eliminating software serialization, and using hardware to address the remaining bottlenecks.
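
To get a rough feel for why that serialization matters (a sketch; the fraction and processor count below are illustrative, not from any particular machine): if a fraction p of a program's work can run in parallel on n processors, Amdahl's law caps the overall speedup at 1 / ((1 - p) + p / n).

    def amdahl_speedup(p, n):
        """Best-case speedup on n processors when a fraction p is parallel."""
        return 1.0 / ((1.0 - p) + p / n)

    # Even with 1000 processors, a 5% serial portion caps speedup near 20x.
    print(amdahl_speedup(0.95, 1000))   # ~19.6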