University of London
International Programmes
CO1110 – Introduction
to Computing and the Internet
Coursework assignment
1
2017 – 2018
Sunil Bothra
SRN: 01305384
Question 1 (a): Convert the positive decimal number 17.45 to IEEE 754 single precision representation. Show all of your working out.
The positive, base ten, decimal number 17.45 may be converted to a
32-bit single precision IEEE 754 binary floating-point standard representation
as follows:
A. To convert the integer part, 17, into binary (base 2), it is divided iteratively by 2 until a quotient equal to zero is arrived at. The remainders are tracked for later use.
(integer / 2 = quotient + remainder)
1) 17 / 2 = 8 + 1
2) 8 / 2 = 4 + 0
3) 4 / 2 = 2 + 0
4) 2 / 2 = 1 + 0
5) 1 / 2 = 0 + 1
B. To construct the base 2 representation of the integer part of the number (17), the remainders are read from the last step to the first. The resulting base 2 representation is
17(10) = 1 0001(2)
C. To convert the fractional part (0.45) into binary (base 2), it is multiplied iteratively by 2, until a fractional part equal to zero is arrived at. The integer part of each step is tracked for later use.
(fractional part × 2 = integer + fractional remainder)
1) 0.45 × 2 = 0 + 0.9
2) 0.9 × 2 = 1 + 0.8
3) 0.8 × 2 = 1 + 0.6
4) 0.6 × 2 = 1 + 0.2
5) 0.2 × 2 = 0 + 0.4
6) 0.4 × 2 = 0 + 0.8
7) 0.8 × 2 = 1 + 0.6
8) 0.6 × 2 = 1 + 0.2
9) 0.2 × 2 = 0 + 0.4
10) 0.4 × 2 = 0 + 0.8
11) 0.8 × 2 = 1 + 0.6
12) 0.6 × 2 = 1 + 0.2
13) 0.2 × 2 = 0 + 0.4
14) 0.4 × 2 = 0 + 0.8
15) 0.8 × 2 = 1 + 0.6
16) 0.6 × 2 = 1 + 0.2
17) 0.2 × 2 = 0 + 0.4
18) 0.4 × 2 = 0 + 0.8
19) 0.8 × 2 = 1 + 0.6
20) 0.6 × 2 = 1 + 0.2
21) 0.2 × 2 = 0 + 0.4
22) 0.4 × 2 = 0 + 0.8
23) 0.8 × 2 = 1 + 0.6
24) 0.6 × 2 = 1 + 0.2
D. Despite the numerous iterations, no fractional part equal to zero was arrived at: steps 4 through 23 form an infinitely recurring loop, since step 8 is identical to step 4. The iterations were stopped once their number exceeded the mantissa limit (23 bits) and at least one integer different from zero had been found. To construct the base 2 representation of the fractional part of the number (0.45), the integer parts are read from the first step to the last. The resulting base 2 representation is obtained (the portion "1001" recurs infinitely):
0.45(10) = 0.0111 0011 0011 0011 0011 0011…(2)
E. By combining the binary representations obtained in steps B and D above, the full binary representation of 17.45, before normalisation, is
17.45(10) = 1 0001.0111 0011 0011 0011 0011 0011…(2)
F. The above binary representation can be normalised by shifting the binary point 4 places to the left, thus leaving only one non-zero digit to the left of it:
17.45(10) = 1 0001.0111 0011 0011…(2) × 2^0 = 1.0001 0111 0011 0011…(2) × 2^4
G. Based on steps A through F, the following are the elements needed for an IEEE 754 single precision representation:
1) Sign: 0 (since the number in question is positive).
2) Exponent (unadjusted): 4 (derived from the factor 2^4 in step F above).
3) Mantissa (not normalised): 1.0001 0111 0011 0011 0011 0011 0011 (step F).
These will be fed into the following single precision (32-bit) format (bias = 127):
Sign (1 bit) | Exponent (8 bits) | Fraction (23 bits)
H. To convert the unadjusted exponent (4) into binary representation, it is first adjusted using the 8-bit bias and the result is converted from decimal (base 10) to 8-bit binary format, by iteratively dividing by 2 as in step A above.
1) Exponent (adjusted) = Exponent (unadjusted) + (2^(8−1) − 1) = 4 + 127 = 131(10)
2) Dividing iteratively by 2 as shown in step A above:
1) 131 / 2 = 65 + 1
2) 65 / 2 = 32 + 1
3) 32 / 2 = 16 + 0
4) 16 / 2 = 8 + 0
5) 8 / 2 = 4 + 0
6) 4 / 2 = 2 + 0
7) 2 / 2 = 1 + 0
8) 1 / 2 = 0 + 1
3) Reading the remainders from the last step to the first, the binary representation of the adjusted exponent is
131(10) = 1000 0011(2)
I. To normalise the mantissa, the leading bit (leftmost, given that it is always 1) and the binary point are removed, and the length is adjusted to 23 bits by removing the extra bits from the right:
Mantissa (normalised) = 1.0001 0111 0011 0011 0011 0011 0011 → 0001 0111 0011 0011 0011 001
J. Based on the calculations made in steps A through I, the positive decimal number 17.45 can be converted to IEEE 754 single precision representation as shown below (bit 31 on the left, bit 0 on the right):
Sign (1 bit): 0
Exponent (8 bits): 1000 0011
Mantissa (23 bits): 0001 0111 0011 0011 0011 001
or
17.45(10) = 0 10000011 00010111001100110011001 (32-bit IEEE 754)
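The working above can be checked against a machine's own single precision encoding. The short Python sketch below uses only the standard `struct` module; the function name is illustrative. One caveat: hardware rounds the recurring fraction to nearest rather than truncating it, so the last mantissa bit it produces differs by one unit from the truncated hand calculation.

```python
import struct

def float_to_ieee754_bits(value: float) -> str:
    """Pack a value as IEEE 754 single precision (big-endian)
    and return its 32 bits as a string."""
    packed = struct.pack(">f", value)  # 4 bytes, most significant first
    return "".join(f"{byte:08b}" for byte in packed)

bits = float_to_ieee754_bits(17.45)
print(bits[0], bits[1:9], bits[9:])
# sign = 0, exponent = 10000011,
# mantissa = 00010111001100110011010 (rounded to nearest;
# the truncated hand working ends in ...001 instead)
```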
Question 1 (b): In IEEE 754 single precision, 1.25 is represented as: 0 01111111 01000000000000000000000. In IEEE 754 single precision, 1.26 is represented as: 0 01111111 01000010100011110101110. Explain why there are many more 1's in the mantissa of the IEEE 754 representation of 1.26 than 1.25.
The mantissa is the portion of a floating-point number that contains its significant digits. It is arrived at by converting both the integer and the fractional part of a given number into base 2 representation, combining the results, writing the sum (the non-normalised mantissa) in scientific notation, and then normalising it: the leading bit (leftmost, given that it is always 1) and the binary point are removed, and the length is adjusted to 23 bits by removing the extra bits from the right (if in excess of 23) or adding the necessary number of zeros (if fewer than 23). The following steps derive the mantissas in IEEE 754 single precision for 1.25 and 1.26:
A. The integer part of both numbers is the same (1), so its base 2 representation is
1) 1 / 2 = 0 + 1
2) 1(10) = 1(2)
B. The base 2 representation of the fractional part (0.25) of the first number is
1) 0.25 × 2 = 0 + 0.5
2) 0.5 × 2 = 1 + 0 (last multiplication step, as a fractional part equal to 0 is found)
3) 0.25(10) = 0.0100 0000 0000 0000 0000 000(2) (padded with extra 0s to 23 bits)
C. The base 2 representation of the fractional part (0.26) of the second number is
1) 0.26 × 2 = 0 + 0.52
2) 0.52 × 2 = 1 + 0.04
3) 0.04 × 2 = 0 + 0.08
4) 0.08 × 2 = 0 + 0.16
5) 0.16 × 2 = 0 + 0.32
6) 0.32 × 2 = 0 + 0.64
7) 0.64 × 2 = 1 + 0.28
8) 0.28 × 2 = 0 + 0.56
9) 0.56 × 2 = 1 + 0.12
10) 0.12 × 2 = 0 + 0.24
11) 0.24 × 2 = 0 + 0.48
12) 0.48 × 2 = 0 + 0.96
13) 0.96 × 2 = 1 + 0.92
14) 0.92 × 2 = 1 + 0.84
15) 0.84 × 2 = 1 + 0.68
16) 0.68 × 2 = 1 + 0.36
17) 0.36 × 2 = 0 + 0.72
18) 0.72 × 2 = 1 + 0.44
19) 0.44 × 2 = 0 + 0.88
20) 0.88 × 2 = 1 + 0.76
21) 0.76 × 2 = 1 + 0.52
22) 0.52 × 2 = 1 + 0.04
23) 0.04 × 2 = 0 + 0.08
24) 0.08 × 2 = 0 + 0.16
0.26(10) = 0.0100 0010 1000 1111 0101 1100…(2)
(The bits from steps 4 through 23 recur infinitely, since the fractional part in step 24 repeats that of step 4.)
Conclusion
Since the integer part of both numbers (1) is the same, the difference between the two mantissas is entirely due to the base 2 representation of the fractional parts. As the results in steps B and C show, the recurring zeros after the first two bits of 1.25's mantissa arise because only two multiplication steps are needed to reach a fractional part equal to 0; the rest of 1.25's mantissa is padded with 0s to fill the 23 bits required.
In the case of 1.26's mantissa, the iterative multiplication of 0.26 by 2 never produces a fractional part equal to 0. Whenever the residual fraction is equal to or larger than 0.5, the multiplication yields an integer part of 1, and since an infinitely recurring pattern exists (steps C 4-23), there are many more 1s in the mantissa of the IEEE 754 single precision representation of 1.26 than there are in that of 1.25.
In summary:
0.25(10) = 0.0100 0000 0000 0000 0000 000(2), mantissa of 1.25 = 0100 0000 0000 0000 0000 000
0.26(10) = 0.0100 0010 1000 1111 0101 110…(2), mantissa of 1.26 = 0100 0010 1000 1111 0101 110
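The two mantissas quoted in the question can be reproduced directly; below is a minimal Python sketch using the standard `struct` module (the function name is illustrative).

```python
import struct

def mantissa_bits(value: float) -> str:
    """Return the 23-bit mantissa field of the IEEE 754
    single precision encoding of value."""
    bits = "".join(f"{b:08b}" for b in struct.pack(">f", value))
    return bits[9:]  # skip 1 sign bit and 8 exponent bits

m125 = mantissa_bits(1.25)  # 0.25 = 2**-2, so the fraction terminates
m126 = mantissa_bits(1.26)  # recurring fraction, many 1s survive
print(m125)  # 01000000000000000000000
print(m126)  # 01000010100011110101110
print(m125.count("1"), m126.count("1"))  # 1 11
```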
Question 1 (c): Why is the exponent biased in IEEE representation?
The IEEE 754 single precision format comprises 32 bits, laid out as follows (bit 31 on the left, bit 0 on the right):
Sign (1 bit, bit 31) | Exponent (8 bits, bits 30-23) | Mantissa (23 bits, bits 22-0)
Within the 8 bits of the exponent, the total number of possible values is
2^8 = 256
If unsigned numbers were stored in the exponent, they would range over
0, 1, 2, …, 127, 128, …, 255
But the exponent can be negative or positive, so the signed values that must fit into those 8 bits range over
−127, −126, −125, …, −0, +0, 1, 2, …, 128
(negative numbers, then positive numbers). As can be seen, there are two 0s, one negative and one positive. To solve this problem of the extra, redundant 0, the signed values are mapped onto the unsigned range: the stored values 0 through 127 map to the signed values −127 through 0 (128 numbers), and the stored values 128 through 255 map to the signed values 1 through 128 (128 numbers). However, in IEEE 754 the extreme exponents −127 (stored pattern all 0s) and +128 (stored pattern all 1s) are reserved for special values such as ±∞ (infinity), denormalised numbers and NaNs (Not a Number). Hence, the actual exponent range is −126 … 127, where −126 is stored as 1 and 127 is stored as 254. This maps every usable exponent uniquely onto a stored value and avoids encoding 0 twice: the exponent 0 is stored as 127 (0 + 127), the exponent −126 is stored as 1 (−126 + 127), the exponent 127 is stored as 254 (127 + 127), and so on.
Biasing is essential because exponents must be signed values in order to represent both very small and very large magnitudes. However, two's complement (the usual representation of signed values) would complicate comparisons between floating-point numbers. This is why, to store the exponent in 8 bits, a bias of +127 is added to the base 10 exponent before it is converted into an 8-bit base 2 representation (e.g., an exponent of 4 is adjusted with a bias of 127 to become 131(10), which is then stored as 1000 0011(2)). With the bias applied, exponent fields can be compared as plain unsigned integers.
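The bias arithmetic described above can be sketched in a few lines of Python; `encode_exponent` and `decode_exponent` are illustrative names, not part of any standard library.

```python
BIAS = 127  # 2**(8 - 1) - 1 for single precision

def encode_exponent(exponent: int) -> str:
    """Store a signed exponent as its biased 8-bit pattern
    (usable range -126..127; all-0s and all-1s patterns are reserved)."""
    assert -126 <= exponent <= 127
    return format(exponent + BIAS, "08b")

def decode_exponent(pattern: str) -> int:
    """Recover the signed exponent from a biased 8-bit pattern."""
    return int(pattern, 2) - BIAS

print(encode_exponent(4))           # 10000011  (4 + 127 = 131)
print(encode_exponent(-126))        # 00000001
print(decode_exponent("10000011"))  # 4

# The mapping is order-preserving, so biased patterns compare as
# plain unsigned values -- the property two's complement lacks:
assert (encode_exponent(-5) < encode_exponent(3)) == (-5 < 3)
```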
Question 2: Consider
the problem of finding the total time to access and read data stored as tracks
on a computer disk. This can be described, at its simplest, as access time plus
transfer time. Access time is comprised of the average time for the reading
head to find the correct track (known as seek time) plus the rotational delay,
or latency. Seek time is zero for fixed-head systems. The rotational latency is
the average time taken for the disc to spin around to the correct place to
start reading. This is considered to be, on average, half the time taken for
one complete disk rotation. The time taken to read data, the transfer time, is
given by the amount of data on the track to be read, divided by the total data
on the track, all multiplied by the time taken for one disk rotation. Clearly
if all of the data on a track is to be read, then this reduces to the time
taken for one disk rotation. There may, in fact, be delays caused by I/O
queuing (see Stallings, section Disk Performance Parameters, pages 225-227,
tenth global edition), but in the following problem we will just consider
access time plus transfer time. Given a moveable-head system with a constant
disk rotation speed of 12,000 revolutions per minute (rpm), an average seek
time of 6 milliseconds and 512 byte sectors with 500 sectors per track, answer
the following questions, giving all your working. Give your final answers to
(a) and (b) in milliseconds (ms).
Question 2 (a): The file is sequentially organised, and is stored on 6 complete tracks followed by exactly one half of a track.
Data provided:
· Disk rotation speed = 12,000 rpm
· Avg. seek time = 6 ms
· Sector size = 512 bytes
· Sectors / track = 500
· File is sequentially and fully distributed across 6.5 tracks.
Formulae used:
· Avg. Rotational Latency = (minimum latency + maximum latency) / 2 = (0 + 1) / 2 × time for one disk rotation = 1/2 × time for one disk rotation (in ms)
(Assumption: minimum latency is 0, corresponding to the disk head arriving exactly at the sector sought; maximum latency is equal to one full disk rotation time, corresponding to the head just missing the sector sought.)
· Transfer Time = (data to be read / total data on the track) × time for one disk rotation (in ms)
To calculate: Total time (to access and read data)
Total time = Access Time + Transfer Time
(Note: the calculations of average access and transfer time below are for reading the first track.)
Avg. Access Time = Avg. Seek Time + Avg. Rotational Latency
= 6 ms + 1/2 × (60 / 12,000 × 1000) ms
= 6 ms + 0.5 × 5 ms
= 6 ms + 2.5 ms
= 8.5 ms
Transfer Time = (data to be read / total data on the track) × time for one disk rotation
= (500 / 500) × 5 ms
= 5 ms
Total Time to read first track = 8.5 + 5 = 13.5 ms
Assumptions: Since the file is sequentially organised (stored on 6 complete tracks followed by exactly one half of a track), it can be assumed that after reading the first track the remaining 5.5 tracks can be read without any additional seek time, implying that the I/O operation, as mentioned in the question, is able to cope with the data flow from the disk. If so, only the rotational delay (plus transfer time) for each subsequent track needs to be taken into consideration, viz.
– Tracks 2 through 6 (full tracks) are each read in: 2.5 + 5 = 7.50 ms.
– Track 7 (half full of file data) is read in: 2.5 + (250 / 500) × 5 = 2.5 + 2.5 = 5.00 ms (the average rotational latency is still half a rotation; only the transfer time is halved).
Therefore, to read the full file, distributed across 6.5 sequential tracks, the following total time is required:
Total time to read first track = 13.50 ms
Total time to read tracks 2-6 = 7.5 × 5 = 37.50 ms
Total time to read track 7 = 5.00 ms
Total time to read 6.5 tracks = 56.00 ms
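The figures above can be reproduced with a short Python calculation (a sketch under the same assumptions: one seek for the first track, then latency plus transfer per track, with the full 2.5 ms average latency still applying to the final half track):

```python
RPM = 12_000
ROTATION_MS = 60_000 / RPM        # 60,000 ms per minute / 12,000 rev = 5 ms/rev
SEEK_MS = 6.0
LATENCY_MS = ROTATION_MS / 2      # average rotational latency = 2.5 ms

first_track = SEEK_MS + LATENCY_MS + ROTATION_MS   # 6 + 2.5 + 5 = 13.5 ms
full_track = LATENCY_MS + ROTATION_MS              # 2.5 + 5 = 7.5 ms (no seek)
half_track = LATENCY_MS + 0.5 * ROTATION_MS        # 2.5 + 2.5 = 5.0 ms

total_ms = first_track + 5 * full_track + half_track
print(total_ms)  # 56.0
```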
Question 2 (b): The same file as that in part (a) is now distributed at random across the disk, i.e. each sector of the file is randomly placed on the disk.
Data provided:
· Disk rotation speed = 12,000 rpm
· Avg. seek time = 6 ms
· Sector size = 512 bytes
· Sectors / track = 500
· File's sectors are randomly distributed across the disk.
Formulae used: Same as answer 2 (a) above.
Assumptions: Since the degree of fragmentation of the file
is unknown, a worst-case scenario is assumed where each sector of the file is
in a separate, non-contiguous location, implying that to access the whole file,
each single sector will need to be individually accessed (average seek time + average
rotational latency) and then read. Furthermore, as indicated in the question, delays
caused by I/O queuing are not considered for the purposes of this calculation.
To calculate: Total time (to access and read data)
Total time = Access Time + Transfer Time
Time to read one sector = time for one disk rotation / sectors per track
= 5 ms / 500
= 0.01 ms
Avg. seek time = 6.00 ms
Avg. rotational latency = 2.50 ms
Total time per sector = 8.51 ms
Total time (to access and read file) = no. of sectors to read × total time per sector
= (6.5 tracks × 500 sectors) × 8.51 ms
= 3,250 × 8.51 ms
= 27,657.5 ms, or approximately 27.66 seconds
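The per-sector cost and the total can be checked the same way (a sketch under the worst-case assumption stated above: one full seek plus average latency for every sector):

```python
RPM = 12_000
ROTATION_MS = 60_000 / RPM               # 5 ms per revolution
SECTORS_PER_TRACK = 500
SEEK_MS = 6.0

sector_read = ROTATION_MS / SECTORS_PER_TRACK          # 0.01 ms per sector
per_sector = SEEK_MS + ROTATION_MS / 2 + sector_read   # 6 + 2.5 + 0.01 = 8.51 ms
total_ms = 6.5 * SECTORS_PER_TRACK * per_sector        # 3,250 sectors in the file
print(per_sector, total_ms)  # approximately 8.51 and 27,657.5
```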
Question 2 (c): If the answer to part (a) is X, and the answer to part (b) is Y, express X as a percentage of Y, giving your answer to one significant figure.
X = time needed to access and read data in part (a) = 56.00 ms
Y = time needed to access and read data in part (b) = 27,657.50 ms
X expressed as a percentage of Y = (X / Y) × 100
= (56.00 / 27,657.50) × 100
= 0.20248…%
The leading zeros are not significant; they only fix the position of the remaining digits, so the first significant figure is the 2. The digit to its right is 0, which is less than 5, so no rounding up is needed:
X as a % of Y to one significant figure = 0.2%
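The one-significant-figure rounding can be sketched in Python (the values of X and Y are those derived in parts (a) and (b) above):

```python
from math import floor, log10

x_ms = 56.0        # part (a): sequential file
y_ms = 27_657.5    # part (b): randomly scattered file

pct = x_ms / y_ms * 100          # approximately 0.2025 %

# Round to one significant figure: find the decimal position of the
# leading non-zero digit, then round to that position.
digits = -int(floor(log10(abs(pct))))
print(round(pct, digits))  # 0.2
```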
Question 3 (a): Explain what is meant by locality of reference and how this is exploited in cache memory to improve performance. Your answer should be at most two paragraphs long – a paragraph is considered to consist of no more than 8 sentences here.
Locality of reference is the tendency of a processor to access the same set of memory locations repetitively over a short period of time.
A distinction is made in the literature between spatial locality and temporal locality.
Spatial locality refers to the tendency of execution to involve a number of memory
locations that are clustered. This reflects the tendency of a processor to access
instructions sequentially. Spatial locality also reflects the tendency of a
program to access data locations sequentially, such as when processing a table
of data. Temporal locality refers to the tendency for a processor to access
memory locations that have been used recently. For example, when an iteration
loop is executed, the processor executes the same set of instructions
repeatedly. Traditionally, temporal locality is exploited by keeping recently
used instruction and data values in cache memory and by exploiting a cache
hierarchy. Spatial locality is generally exploited by using larger cache blocks
and by incorporating prefetching mechanisms (fetching items of anticipated use)
into the cache control logic.
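The two kinds of locality can be illustrated with a short sketch. Traversing a 2-D table row by row visits neighbouring elements consecutively (spatial locality), while the running total is reused on every iteration (temporal locality); in a language with raw memory access, such as C, the row-wise order is the one the cache rewards. The names below are illustrative.

```python
import random

N = 200
table = [[random.random() for _ in range(N)] for _ in range(N)]

def row_major_sum(m):
    """Visit elements in the order each row stores them:
    consecutive accesses touch neighbouring locations (spatial locality)."""
    total = 0.0                  # reused on every iteration (temporal locality)
    for row in m:
        for x in row:
            total += x
    return total

def column_major_sum(m):
    """Jump to a different row on every access, defeating spatial locality."""
    total = 0.0
    for j in range(len(m)):
        for i in range(len(m)):
            total += m[i][j]
    return total

# Both orders compute the same sum; on cached hardware the
# row-major order is typically the faster of the two.
```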