9.1.2 Computer operation
As has already been mentioned, the
fundamental role of a computer is the manipulation of data. Numbers are used
both to quantify items of data and as codes that define the computational
operations to be executed. All numbers that are used for
these two purposes must be stored within the computer memory and also
transported along the communication buses. A detailed consideration of the
conventions used for representing numbers within the computer is therefore
required.
Number systems
The decimal system is the best known
number system, but it is not very suitable for use by digital computers. It
uses a base of ten, such that each digit in a number can have any one of ten
values within the range 0–9. Items of electronic equipment such as the digital
counter, which are often used as computer peripherals, have liquid crystal
display elements that can each display any of the ten decimal digits, and
therefore a four-element display can directly represent decimal numbers in the
range 0–9999. The decimal system is therefore perfectly suitable for use with
such output devices.
The fundamental unit of data storage
within a digital computer is a memory element known as a bit. This holds
information by switching between one of two possible states. Each storage unit
can therefore only represent two possible values and all data to be entered
into memory must be organized into a format that recognizes this restriction.
This means that numbers must be entered in binary format, where each digit in
the number can only have one of two values, 0 or 1. The binary representation
is particularly convenient for computers because bits can be represented very
simply electronically as either zero or non-zero voltages. However, the
conversion is tedious for humans. Starting from the right-hand side of a binary
number, where the first digit represents 2⁰ (i.e. 1), each successive binary
digit represents a progressively higher power of 2. For example, in the binary
number 1111, the first digit (starting from the right-hand side) represents 1,
the next 2, the next 4 and the final, leftmost digit represents 8. Thus the
decimal equivalent is 1 + 2 + 4 + 8 = 15.
Example 9.1
Convert the following 8-bit binary
number to its decimal equivalent: 10110011
Solution
Starting at the right-hand side, we
have:
(1 × 2⁰) + (1 × 2¹) + (0 × 2²) + (0 × 2³) + (1 × 2⁴) + (1 × 2⁵) + (0 × 2⁶) + (1 × 2⁷)
= 1 + 2 + 0 + 0 + 16 + 32 + 0 + 128 = 179
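As a minimal sketch, the positional weighting used in this solution can be expressed in a few lines of Python; the function name binary_to_decimal is simply a convenient label, and Python's built-in int('10110011', 2) gives the same answer directly.

def binary_to_decimal(bits: str) -> int:
    """Sum each binary digit weighted by the power of 2 for its position,
    starting from 2^0 at the right-hand end."""
    total = 0
    for position, digit in enumerate(reversed(bits)):
        total += int(digit) * (2 ** position)
    return total

print(binary_to_decimal("1111"))      # 1 + 2 + 4 + 8 = 15
print(binary_to_decimal("10110011"))  # 179, as in Example 9.1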
For data storage purposes, memory
elements are combined into larger units known as bytes, which are usually
considered to consist of 8 bits each. Each bit holds one binary digit, and
therefore a memory unit consisting of 8 bits can store 8-digit binary numbers
in the range 00000000 to 11111111 (equivalent to decimal numbers in the range 0
to 255). The binary number 10010011 in this system, for instance, corresponds
to the decimal number 147.
This range is clearly inadequate for
most purposes, including measurement systems, because even if all data could be
conveniently scaled the maximum resolution obtainable is only 1 part in 128.
Numbers are therefore normally stored in units of either 2 or 4 bytes, which
allow the storage of integer (whole) numbers in the range 0–65 535 or
0–4 294 967 295 respectively.
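As a brief illustration, the following Python fragment simply evaluates the largest unsigned integer that 1-, 2- and 4-byte units can hold.

# The largest unsigned integer representable in n bytes is 2**(8*n) - 1.
for n_bytes in (1, 2, 4):
    print(n_bytes, "byte(s): 0 to", 2 ** (8 * n_bytes) - 1)
# 1 byte(s): 0 to 255
# 2 byte(s): 0 to 65535
# 4 byte(s): 0 to 4294967295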
No means have been suggested so far
for expressing the sign of numbers, which is clearly necessary in the real
world where negative as well as positive numbers occur. A simple way to do this
is to reserve the most significant (left-hand) bit in a storage unit to define
the sign of a number, with ‘0’ representing a positive number and ‘1’ a
negative number. This alters the range of numbers representable in a 1-byte
storage unit to -127 to +127, as only 7 bits are left to express the magnitude
of the number, and it also means that there are two representations of the
value 0. In this system the binary number 10010011 translates to the decimal
number -19 and 00010011 translates to +19. For reasons dictated by the mode of operation
of the CPU, however, most computers use an alternative representation known as
the two’s complement form.
The two’s complement of a number is
most easily formed by going via an intermediate stage of the one’s complement.
The one’s complement of a number is formed by reversing all digits in the
binary representation of the magnitude of a number, changing 1s to 0s and 0s to
1s, and then changing the left-hand bit to a 1 if the original number was
negative. The two’s complement is then formed by adding 1 at the least significant
(right-hand) end of the one’s complement. As before for a 1-byte storage unit,
only 7 bits are available for representing the magnitude of a number, but,
because there is now only one representation of zero, the decimal range
representable is -128 to +127.
Example 9.2
Find the one’s and two’s complement
8-bit binary representations of the following decimal numbers: 56, -56, 73, 119, 27 and -47
Method of Solution
Take first the decimal value of 56
Form 7-bit binary representation of the magnitude: 0111000
The number is positive, so add a sign bit of 0 to the left-hand end to form
the one's complement: 00111000
For a positive number the two's complement is identical to the one's
complement: 00111000
Take next the decimal value of -56
Form the representation of the corresponding positive number, +56: 00111000
Reverse all digits in this to form the one's complement: 11000111
Form the two's complement by adding one to the one's complement: 11000111 + 1 = 11001000
The remaining numbers in the list are converted in exactly the same way.
We have therefore established the
binary code in which the computer stores positive and negative integers (whole
numbers). However, it is frequently necessary also to handle real numbers
(those with fractional parts). These are most commonly stored using the
floating-point representation.
The floating-point representation
divides each memory storage unit (notionally, not physically) into three
fields, known as the sign field, the exponent field and the mantissa field. The
sign field is always 1 bit wide but there is no formal definition for the
relative sizes of the other fields. However, a common subdivision of a 32-bit
(4-byte) storage unit is to have a 7-bit exponent field and a 24-bit mantissa
field, as shown in Figure 9.3.
The value contained in the storage
unit is evaluated by multiplying the number in the mantissa field by 2 raised
to the power of the number in the exponent field. Negative as well as positive
exponents are obtained by biasing the exponent field by 64 (for a 7-bit field),
such that a value of 64 is interpreted as an exponent of 0, a value of 65 as an
exponent of 1, a value of 63 as an exponent of -1 etc. Suppose therefore that
the sign bit field has a zero, the exponent field has a value of 0111110
(decimal 62) and the mantissa field has a value of 000000000000000001110111
(decimal 119), i.e. the contents of the storage unit are
00111110000000000000000001110111. The number stored is +119 × 2⁻².
Changing the first (sign) bit to a 1 would change the number stored to -119 × 2⁻².
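A minimal Python sketch of this decoding, assuming the illustrative 1-bit sign, 7-bit exponent and 24-bit mantissa layout with a bias of 64 shown in Figure 9.3 (not the IEEE 754 format used by most modern machines):

def decode_float(bits: str) -> float:
    """Decode a 32-bit word laid out as sign (1 bit), exponent (7 bits,
    biased by 64) and mantissa (24 bits)."""
    sign = -1.0 if bits[0] == "1" else 1.0
    exponent = int(bits[1:8], 2) - 64      # remove the bias of 64
    mantissa = int(bits[8:], 2)
    return sign * mantissa * 2.0 ** exponent

print(decode_float("00111110000000000000000001110111"))  # 119 * 2**-2 = 29.75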
However, if a human being were asked
to enter numbers in these binary forms, the procedure would be both highly
tedious and very prone to error. In consequence, simpler ways of entering
binary numbers have been developed. Two such ways are to use octal and
hexadecimal numbers, which are translated to binary numbers at the input–output
interface to the computer.
Octal numbers use a base of 8 and
consist of decimal digits in the range 0–7 that each represent 3 binary digits.
Thus 8 octal digits represent a 24-bit binary number.
Hexadecimal numbers have a base of 16
and are used much more commonly than octal numbers. They use the decimal
digits 0–9 and the letters A–F, each of which represents 4 binary digits.
The decimal digits 0–9 translate directly to the decimal values 0–9 and the
letters A–F translate respectively to the decimal values 10–15. A 24-bit binary
number requires 6 hexadecimal digits to represent it. The following table shows
the octal, hexadecimal and binary equivalents of decimal numbers in the range
0–15.

Decimal   Octal   Hexadecimal   Binary
0         0       0             0000
1         1       1             0001
2         2       2             0010
3         3       3             0011
4         4       4             0100
5         5       5             0101
6         6       6             0110
7         7       7             0111
8         10      8             1000
9         11      9             1001
10        12      A             1010
11        13      B             1011
12        14      C             1100
13        15      D             1101
14        16      E             1110
15        17      F             1111
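As a brief illustration, Python's int() and format() functions accept an explicit base, so the octal and hexadecimal equivalents above can be checked directly:

# Each octal digit stands for 3 binary digits and each hexadecimal digit for 4.
print(int("17", 8))          # octal 17 -> decimal 15
print(int("F", 16))          # hexadecimal F -> decimal 15
print(format(0xB3, "08b"))   # hexadecimal B3 -> binary 10110011
print(format(0o263, "08b"))  # octal 263 -> the same 8-bit pattern, 10110011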