OR 9th Question - 3rd Internals

Why is parallel and distributed computing used in the design of metaheuristics? Explain with an example.

Parallel and Distributed Architectures
The traditional Flynn classification of parallel architectures is based on two criteria:
the number of instruction streams and the number of data streams that define the
following four classes (Table 6.4):
SISD (single instruction stream—single data stream):
This class represents the traditional monoprocessor architecture executing a sequential program. This class tends to disappear. Nowadays, most of the processors composing our workstations and laptops are multicore processors (e.g., Intel or AMD multicore processors), which are multiprocessor machines on a single chip.
SIMD (single instruction stream—multiple data streams):
This class represents parallel architectures where the same instruction flow is executed on different data streams. The processors are restricted to executing the same program. This architecture is generally composed of specific processors (nonstandard components). It has a synchronous programming model based on data decomposition (data parallelism). SIMD machines are very efficient at executing synchronized parallel algorithms that contain regular computations and regular data transfers. The SIMD architecture was popular in the past for its simplicity and scalability, but it tends to disappear because of its high cost and particular programming model. When the computations or the data transfers become irregular or asynchronous, SIMD machines become much less efficient.
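The SIMD idea of data parallelism described above can be sketched in plain Python using NumPy: a single vectorized operation is applied uniformly across many data elements at once (the array contents here are arbitrary illustrative values).

```python
import numpy as np

# SIMD-style data parallelism: one "instruction" (the addition)
# is applied uniformly across all elements of the data streams.
a = np.arange(8, dtype=np.float64)   # data stream 1: 0.0 .. 7.0
b = np.full(8, 2.0)                  # data stream 2: eight copies of 2.0
c = a + b                            # single operation, multiple data
```

This is the regular, synchronized kind of computation SIMD architectures handle well; an irregular, data-dependent loop would not map onto this model as cleanly.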
MISD (multiple instruction streams—single data stream):
In MISD architectures, multiple instruction streams are executed on a single data stream. This class does not exist in practice. Sometimes pipeline vector processors are considered to belong to this class.
MIMD (multiple instruction streams—multiple data streams):
In MIMD architectures, multiple instruction streams are executed on multiple data streams. The processors are allowed to perform different types of instructions on different data. The tendency is to use standard components (processors, network). Our focus here is mainly on the MIMD class, which represents the most general model of parallel architectures.
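A common use of MIMD-style parallelism in metaheuristics is evaluating a population of candidate solutions concurrently: each worker runs its own instruction stream on a different candidate. A minimal sketch, assuming a toy "sphere" fitness function (sum of squares) and a hypothetical three-candidate population:

```python
from concurrent.futures import ThreadPoolExecutor

def sphere(x):
    # Toy fitness function (minimization): sum of squared components.
    return sum(v * v for v in x)

# Hypothetical population of candidate solutions.
population = [[1.0, 2.0], [0.0, 0.0], [3.0, -1.0]]

# MIMD-style: independent workers execute their own instruction
# streams on different data (candidate solutions).
with ThreadPoolExecutor(max_workers=3) as pool:
    fitnesses = list(pool.map(sphere, population))
```

Because the fitness evaluations are independent, this pattern scales naturally: it is the basis of the widely used parallel evaluation model for population-based metaheuristics.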
Parallel architectures are evolving quickly. Nowadays, Flynn's classification is not sufficient to describe the different types of parallel architectures and their characteristics.
Shared memory/distributed memory architectures:
The development of parallel architectures is dominated by two types of architectures: shared memory architectures and distributed memory architectures. In shared memory parallel architectures, the processors communicate through a common shared memory.
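The shared memory model can be sketched with threads: all workers read and write the same memory, so concurrent updates (here, to a hypothetical shared best-so-far value in a search) must be protected by a lock.

```python
import threading

best = {"value": float("inf")}   # shared memory visible to all threads
lock = threading.Lock()

def report(candidate_value):
    # All threads access the same `best` object; the lock
    # prevents races on the read-compare-write sequence.
    with lock:
        if candidate_value < best["value"]:
            best["value"] = candidate_value

# Hypothetical candidate fitness values found by parallel workers.
threads = [threading.Thread(target=report, args=(v,))
           for v in [4.0, 1.5, 3.0]]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The appeal of this model is that communication is implicit (just read the shared data); the cost is that synchronization must be managed explicitly.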
In distributed memory architectures, each processor has its own memory (Fig. 6.16b). The processors are connected by an interconnection network using different topologies (e.g., hypercube, 2D or 3D torus, fat tree, multistage crossbars) (Fig. 6.17). This architecture is harder to program; data and/or tasks have to be explicitly distributed to the processors. Exchanging information is also handled explicitly, using message passing between nodes.
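The message-passing style of a distributed memory architecture can be mimicked in a single process: each "node" below keeps its state private and communicates only through an explicit message channel (real distributed-memory programs would typically use MPI for this).

```python
import threading
import queue

# Mailbox for node B; the only communication channel between nodes.
inbox_b = queue.Queue()

def node_a():
    local = 21              # private memory of node A, invisible to B
    inbox_b.put(local * 2)  # explicit send of a message to node B

def node_b(result):
    # Explicit receive: blocks until a message arrives from node A.
    result.append(inbox_b.get())

result = []
ta = threading.Thread(target=node_a)
tb = threading.Thread(target=node_b, args=(result,))
ta.start(); tb.start()
ta.join(); tb.join()
```

Note how nothing is shared between the nodes except the messages themselves, which is exactly why distributed-memory programs require the explicit data and task distribution mentioned above.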