1. Introduction
In developed countries, life in the twenty-first century depends on many kinds of
computers: from the coffeepot and the microwave that cooks your breakfast, to the
automobile that you drive to work, to the automated teller machine you stop at for cash,
virtually every aspect of life depends on computers.
What is a computer?
Definition:
A computer is an electronic device that
manipulates information, or data. It has the ability to store, retrieve,
and process data. You probably already know that you can use a computer
to type documents, send email, play games, and browse the Web. You
can also use it to edit or create spreadsheets, presentations, and
even videos.
(Or)
A computer is a device that accepts information (in the form of digitalized data) and
manipulates it for some result based on a program or sequence of instructions on how
the data is to be processed. Complex computers also include the means for storing
data (including the program, which is also a form of data) for some necessary duration.
A program may be invariable and built into the computer (and called logic circuitry as it
is on microprocessors) or different programs may be provided to the computer (loaded
into its storage and then started by an administrator or user). Today's computers have
both kinds of programming.
The following are some of the more common computer abbreviations that you may have heard
but may not know exactly what they mean.
Common Computer Abbreviations
• BIOS - This is the Basic Input Output System which controls the computer, telling it what
operations to perform. These instructions are on a chip that connects to the motherboard.
• BYTE - A byte is a storage unit for data.
• "K" is a kilobyte, which is 1024 bytes.
• "MB" is a megabyte, which is 1024 kilobytes (about a million bytes).
• "GB" is a gigabyte, which is 1024 megabytes (about a billion bytes).
• CPU - This stands for the Central Processing Unit of the computer. This is like the computer’s
brain.
• MAC - This is an abbreviation for Macintosh, which is a type of personal computer made by the
Apple Computer company.
• OS - This is the Operating System of the computer. It is the main program that runs on a
computer and begins automatically when the computer is turned on.
• PC - This is the abbreviation for personal computer. It refers to computers that are IBM
compatible.
• PDF - This represents the Portable Document Format which displays files in a format that is
ready for the web.
• RAM - This stands for Random Access Memory, the working memory that the computer can
access directly. Increasing the amount of RAM can make the computer faster, because
more of a program can be loaded into memory at one time.
• ROM - This is Read Only Memory, which holds built-in instructions for the computer and cannot
be altered.
• VGA - The Video Graphics Array is a system for displaying graphics. It was developed by IBM.
• WYSIWYG - This initialism stands for What You See Is What You Get. It is pronounced
"wizziwig" and basically means that the printer will print what you see on your monitor.
Program Definition
Computer programs are collections of instructions that tell a computer how to interact
with the user, interact with the computer hardware and process data. In the modern
computer that John von Neumann outlined in 1945, the program contains a one-at-a-
time sequence of instructions that the computer follows.
or
A computer program, or just a program, is a sequence of instructions, written to perform
a specified task on a computer. A computer requires programs to function,
typically executing the program's instructions in a central processor.
Learn C Programming Language with Example - Tutorial for Beginners
What is a programming language?
We human beings need language to communicate with one another. Similarly, to
communicate with computers we need a medium, and that is exactly what a
programming language is. All our needs can be communicated to computers with the
help of a programming language. And now, coming to our topic: C is the basis of many
programming languages. Before the introduction of high-level languages such as C,
programmers had only machine-level languages.
Once you are thorough with the C programming language, you can learn other
programming languages with ease. To learn C we just need to know the basic syntax of
C programs, i.e., the order and form in which we have to write the actual code. This
material is intended to introduce C programming to novice programmers.
Computer programs are collections of instructions that tell a computer how to interact
with the user, interact with the computer hardware and process data. The first
programmable computers required the programmers to write explicit instructions to
directly manipulate the hardware of the computer. This “machine language” was very
tedious to write by hand, since even simple tasks such as printing some output on the
screen required 10 or 20 machine language commands. Machine language is often
referred to as a “low level language” since the code directly manipulates the hardware
of the computer.
Language types
Machine and assembly languages
A machine language consists of the numeric codes for the operations that a particular
computer can execute directly. The codes are strings of 0s and 1s, or binary digits
(“bits”), which are frequently converted both from and to hexadecimal (base 16) for
human viewing and modification. Machine language instructions typically use some bits
to represent operations, such as addition, and some to represent operands, or perhaps
the location of the next instruction. Machine language is difficult to read and write, since
it does not resemble conventional mathematical notation or human language, and its
codes vary from computer to computer.
Assembly language is one level above machine language. It uses short mnemonic
codes for instructions and allows the programmer to introduce names for blocks of
memory that hold data. One might thus write “add pay, total” instead of
“0110101100101000” for an instruction that adds two numbers.
Assembly language is designed to be easily translated into machine language.
Although blocks of data may be referred to by name instead of by their machine
addresses, assembly language does not provide more sophisticated means of
organizing complex information. Like machine language, assembly language requires
detailed knowledge of internal computer architecture. It is useful when such details are
important, as in programming a computer to interact with input/output devices (printers,
scanners, storage devices, and so forth).
Algorithmic languages
Algorithmic languages are designed to express mathematical or symbolic computations.
They can express algebraic operations in notation similar to mathematics and allow the
use of subprograms that package commonly used operations for reuse. They were the
first high-level languages.
FORTRAN
The first important algorithmic language was FORTRAN (formula translation), designed
in 1957 by an IBM team led by John Backus. It was intended for scientific computations
with real numbers and collections of them organized as one- or multidimensional
arrays. Its control structures included conditional IF statements, repetitive loops (so-
called DO loops), and a GOTO statement that allowed nonsequential execution of
program code. FORTRAN made it convenient to have subprograms for common
mathematical operations, and built libraries of them.
FORTRAN was also designed to translate into efficient machine language. It was
immediately successful and continues to evolve.
ALGOL
ALGOL (algorithmic language) was designed by a committee of American and
European computer scientists during 1958–60 for publishing algorithms, as well as for
doing computations. Like LISP (described in the next section), ALGOL had recursive
subprograms—procedures that could invoke themselves to solve a problem by reducing
it to a smaller problem of the same kind. ALGOL introduced block structure, in which a
program is composed of blocks that might contain both data and instructions and have
the same structure as an entire program. Block structure became a powerful tool for
building large programs out of small components.
ALGOL contributed a notation for describing the structure of a programming language,
Backus–Naur Form, which in some variation became the standard tool for stating
the syntax (grammar) of programming languages. ALGOL was widely used in Europe,
and for many years it remained the language in which computer algorithms were
published. Many important languages, such as Pascal and Ada (both described later),
are its descendants.
LISP
LISP (list processing) was developed about 1960 by John McCarthy at the
Massachusetts Institute of Technology (MIT) and was founded on the mathematical
theory of recursive functions (in which a function appears in its own definition). A LISP
program is a function applied to data, rather than being a sequence of procedural steps
as in FORTRAN and ALGOL. LISP uses a very simple notation in which operations and
their operands are given in a parenthesized list. For example, (+ a (* b c)) stands
for a + b*c. Although this appears awkward, the notation works well for computers. LISP
also uses the list structure to represent data, and, because programs and data use the
same structure, it is easy for a LISP program to operate on other programs as data.
LISP became a common language for artificial intelligence (AI) programming, partly
owing to the confluence of LISP and AI work at MIT and partly because AI programs
capable of “learning” could be written in LISP as self-modifying programs. LISP has
evolved through numerous dialects, such as Scheme and Common LISP.
C
The C programming language was developed in 1972 by Dennis Ritchie at Bell
Laboratories (then part of AT&T) for programming computer operating systems. Its
capacity to structure data and programs through the composition of smaller units is
comparable to that of ALGOL. It uses a compact notation and provides the programmer
with the ability to operate with the addresses of data as well as with their values. This
ability is important in systems programming, and C shares with assembly language the
power to exploit all the features of a computer’s internal architecture. C, along with
its descendant C++, remains one of the most common languages.
Business-oriented languages
COBOL
COBOL (common business oriented language) has been heavily used by businesses
since its inception in 1959. A committee of computer manufacturers and users and U.S.
government organizations established CODASYL (Committee on Data Systems
and Languages) to develop and oversee the language standard in order to ensure its
portability across diverse systems.
COBOL uses an English-like notation—novel when introduced. Business computations
organize and manipulate large quantities of data, and COBOL introduced
the record data structure for such tasks. A record clusters heterogeneous data such as
a name, ID number, age, and address into a single unit. This contrasts with scientific
languages, in which homogeneous arrays of numbers are common. Records are an
important example of “chunking” data into a single object, and they appear in nearly all
modern languages.
Electronic Computers Then and Now
In our everyday life we come in contact with computers frequently: some of
us use computers for creating presentations and other documents,
tabulating data in spreadsheets, or have even studied programming in
school or college.
The first electronic computer was built in the late 1930s by Dr. John
Atanasoff and Clifford Berry at Iowa State University. Atanasoff designed his
computer to assist graduate students in nuclear physics with their
mathematical computations.
The first general-purpose electronic digital computer, called the ENIAC
(Electronic Numerical Integrator And Computer), was completed in 1946 at the
University of Pennsylvania with funding from the U.S. Army. Weighing 30
tons and occupying a 30-by-50-foot space, the ENIAC was used to
compute ballistics tables (ballistics, from the Greek "to throw," is the science of mechanics that
deals with the launching, flight, behavior, and effects of projectiles, especially bullets, gravity bombs,
rockets, and the like, and the science or art of designing and accelerating projectiles so as to achieve a
desired performance), predict the weather, and make atomic energy calculations.
These early computers used vacuum tubes as their basic electronic component.
Technological advances in the design and manufacture of electronic components led to
new generations of computers that were considerably smaller, faster and less
expensive than previous ones.
With advances in technology, the entire circuitry of a computer processor can now be
packed into a single electronic component called a microprocessor chip, which is less
than one fourth the size of a postage stamp. This small size enables computer chips to be
installed in watches, cell phones, GPS systems, cameras, home appliances,
automobiles, and computers.
Today, personal computers are used in offices and homes. They can cost less than
$1000 and sit on a desk, yet have more computational power than a computer that 40 years
ago cost more than $100,000 and filled a 9-by-12-foot room. Even smaller computers can fit
inside a briefcase, a purse, or your hand.
Modern computers are categorized according to their size and performance.
Personal computers are used by a single person at a time. Large real-time transaction
processing systems, such as ATMs and other banking networks and corporate
reservation systems for hotels, airlines, and rental cars, use mainframes, which are very
powerful and reliable computers. The largest-capacity and fastest computers are called
supercomputers and are used by research laboratories and in computationally intensive
applications such as weather forecasting.
The Elements of a computer system fall into two major categories:
Hardware and Software
HARDWARE: is the equipment used to perform the necessary computations and
includes the central processing unit (CPU), monitor, keyboard, mouse, printer, and
speakers
SOFTWARE: consists of the programs that enable us to solve problems with a
computer by providing it with lists of instructions to perform.
COMPUTER HARDWARE
Some of the hardware components are:
1. Main memory.
2. Secondary memory, which includes storage devices such as hard disks, CDs, DVDs,
and flash drives.
3. Central processing unit.
4. Input devices, such as keyboards, mice, touch pads, scanners, and joysticks.
5. Output devices, such as monitors, printers, and speakers.
The above figure shows how these components interact in a computer, with the arrows
pointing in the direction of information flow. The program must first be transferred from
secondary storage to main memory before it can be executed.