Introduction To Parallel Processing
Handout_1
Parallel Processing
1. Definition
Parallel Processing is the use of a collection of integrated and tightly coupled processing
elements or processors that are cooperating and communicating on a single task to speed up
its solution.
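As a minimal sketch of this definition (an illustrative example, not from the handout), several worker processes can cooperate on a single task to speed up its solution. Here the task is summing a range of numbers, and the chunking scheme is an assumed detail:

```python
# Illustrative sketch only: cooperating processes speed up one task.
# The task (summing 0..n-1) and the chunking scheme are assumptions
# made for this example, not part of the handout.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    # Split the single task into equal chunks, one per worker process.
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs any remainder
    with Pool(workers) as pool:
        # Workers compute partial sums; combining them solves the task.
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(1_000_000))  # same answer as a sequential sum
```

The combination step (summing the partial results) is the communication part of the definition; on a real parallel machine each chunk would run on a separate processing element.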
2. Motivation
• Higher Speed, or Solving Problems Faster
This is important when applications have hard or soft deadlines (such systems are
called real-time systems)
o For example, we have at most a few hours to compute a 24-hour weather
forecast or to produce a timely tornado warning
• Higher Throughput, or Solving More Instances of Given Problems
E.g. Transaction processing for banks and airlines
• Higher Computational Power, or Solving Larger Problems
Generate more detailed, accurate, and longer simulations, e.g., 5-day weather
forecasting
3. Large-Scale Problems
These are problems that require enormous computational power (usually expressed in
OPS, MFLOPS, etc.). They are thus regarded as High Performance Computing (HPC)
applications.
• Examples
Weather Forecasting
o Assume that the target area is roughly 3,000 miles by 3,000 miles. Over this
area we lay a rectangular grid and predict weather conditions only at the grid
intersection points.
o Since we only forecast the weather at intersection points, we should not use
too coarse a grid as our predictions will be poor for locations far from these
points. Conversely, we should not use too fine a grid, as that will create too
much work.
o Let’s use a grid spacing of 0.25 mile. If we assume that the atmosphere
extends to 20 miles, then we will end up with a three-dimensional grid of size
12,000 x 12,000 x 80, a total of 1.15 x 10^10 grid intersection points.
o At each point assume that we have initial values for the following six pieces
of meteorological data, obtained from a weather satellite:
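The grid-size arithmetic above can be reproduced with a short script (all values taken directly from the text):

```python
# Reproduce the handout's grid-size calculation.
area_side_miles = 3_000      # target area is 3,000 miles by 3,000 miles
spacing_miles = 0.25         # grid spacing
atmosphere_miles = 20        # assumed extent of the atmosphere

points_per_side = int(area_side_miles / spacing_miles)   # 12,000
vertical_levels = int(atmosphere_miles / spacing_miles)  # 80
total_points = points_per_side ** 2 * vertical_levels

print(total_points)  # 11,520,000,000, i.e. about 1.15 x 10^10
```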
Page - 1 - of 6
CS-421 Parallel Processing BE(CIS) Batch 2004-05
Handout_1
iii. A computer model can change the time scale to make the system more
convenient to observe. This is particularly important for systems that change
too slowly (galactic models of the formation of the universe) or too quickly
(elementary particle decay) to study in real life.
iv. Finally, we can model objects that do not yet exist in order to predict their
future behavior. For example, we could “walk through” a building that has
not yet been built or study the effect of increased concentrations of CO2 on
the climate of the 21st and 22nd centuries.
Because of these advantages, researchers are using computational modeling in
such diverse areas as economics, physics, architecture, chemistry, and medicine,
as well as building programs to simulate the behavior of everything from
automobiles, airplanes, and rockets, to planets, oceans, cities, and the human
body. Computational modeling is becoming an important and widely used
scientific paradigm.
However, there is no “free lunch”: a serious issue must be addressed when using
simulation models, namely the amount of computation required to execute the
modeling program. It can be truly enormous, well beyond the capabilities of all
but the largest parallel computers.
• Grand Challenge Problems
A fundamental problem in science or engineering with broad economic and scientific
impact whose solution would be advanced by new developments in high performance
computing and communications technology.
The U.S. government formed a High Performance Computing and Communications
(HPCC) Initiative in 1991 to identify and plan solutions for such problems.
E.g.
• studying the formation and evolution of galaxies
• designing new drugs and studying their physiological properties
• studying atmospheric pollution and its effect on temperature
• predicting long term global climate and ocean temperature changes
• designing new manufacturing materials with specific, well-defined properties
4. Distributed Processing
This is the use of a collection of independent workstations acting as a single logical system
and cooperating and communicating on a single task to speed up its solution.
• Stagnating clock rates are now being compensated for by placing multiple processor
cores on the same chip, i.e., multi-core architectures
• Consequently, uniprocessors are disappearing even from desktops, making it
imperative for programmers to learn parallel programming techniques to exploit the
hardware parallelism available in state-of-the-art machines.
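As a minimal sketch of such techniques (an assumed example, not from the handout; the workload of squaring numbers is a placeholder), the Python standard library can spread independent work across all available cores:

```python
# Illustrative only: one worker process per hardware core.
import os
from concurrent.futures import ProcessPoolExecutor

def square(x):
    return x * x

def parallel_squares(values):
    cores = os.cpu_count() or 1  # number of cores the OS reports
    with ProcessPoolExecutor(max_workers=cores) as pool:
        return list(pool.map(square, values))

if __name__ == "__main__":
    print(parallel_squares(range(5)))  # [0, 1, 4, 9, 16]
```

On a multi-core machine the chunks of work run concurrently on separate cores; on a uniprocessor the same code still runs, just without the speedup.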
d. Von-Neumann Bottleneck
• The speed disparity between processor and memory keeps growing with the passage of
time, causing severe performance bottlenecks.
• A parallel system (e.g. a COW) overcomes this shortcoming by providing more and
more aggregate memory and cache capacity as well as boosting the memory
bandwidth required by HPC applications.
• Some of the fastest-growing applications of parallel computing exploit not raw
computational speed but rather the ability to move data to and from memory and disk faster.
6. Economic Impact
Businesses are investing more and more money in parallel and distributed solutions as they
have realized the competitive potential of these technologies.
******