Parallel Random Access Machine - Parallel Algorithm

What is Parallel Random Access Machine (PRAM)?

The Parallel Random Access Machine (PRAM) is a model in which multiple processors are attached to a single block of memory. This model contains:

  • A set of processors of the same type
  • A common memory unit shared by all the processors, through which they communicate with one another.
  • A memory access unit (MAU), which connects the processors to the single shared memory.

PRAM Architecture

Any number of processors can perform independent operations in a given unit of time. As a result, different processors may attempt to access the same memory location at the same time.

This conflict is resolved by enforcing one of the following constraints on the PRAM model −

  • Exclusive Read Exclusive Write (EREW) – No two processors are allowed to read from or write to the same memory location at the same time.
  • Exclusive Read Concurrent Write (ERCW) – No two processors are allowed to read from the same memory location at the same time, but they are allowed to write to the same memory location at the same time.
  • Concurrent Read Exclusive Write (CREW) − All the processors are allowed to read from the same memory location at the same time, but are not allowed to write to the same memory location at the same time (the sketch after this list mimics this discipline).
  • Concurrent Read Concurrent Write (CRCW) − All the processors are allowed to read from or write to the same memory location at the same time.
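
For illustration, the CREW discipline can be mimicked with threads in C: every thread concurrently reads one shared location, but each writes only to its own slot, so writes remain exclusive. A minimal sketch, assuming a compiler with OpenMP support:

    #include <stdio.h>
    #include <omp.h>

    #define N 8

    int main(void) {
        int shared_input = 42;     /* every thread reads this location */
        int result[N];             /* each thread writes a distinct slot */

        /* CREW: concurrent reads of shared_input are allowed, but no two
           threads ever write to the same element of result[]. */
        #pragma omp parallel for num_threads(N)
        for (int i = 0; i < N; i++)
            result[i] = shared_input * i;   /* concurrent read, exclusive write */

        for (int i = 0; i < N; i++)
            printf("result[%d] = %d\n", i, result[i]);
        return 0;
    }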

PRAM can be implemented using several models. The important ones are:

  • Shared memory model
  • Message passing model
  • Data parallel model

Shared Memory Model

In this model, more emphasis is placed on data parallelism than on control parallelism. Multiple processes execute independently on different processors and share a common memory space; any change one process makes to a memory location is visible to all the other processors.

Because the same memory location can be accessed by multiple processors, conflicts can arise: one processor may read a location while another writes to it at the same time, producing inconsistent results. This is avoided by a control mechanism such as a lock or semaphore.
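
A minimal sketch of such a lock in C, using a POSIX threads mutex (the counter and worker names are only illustrative):

    #include <stdio.h>
    #include <pthread.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);    /* only one thread may enter */
            counter++;                    /* read-modify-write is now safe */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        for (int i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        printf("counter = %ld\n", counter);   /* always 400000 with the lock */
        return 0;
    }

Without the lock, the concurrent read-modify-write of counter would produce an unpredictable total.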


Shared memory programming has been implemented in the following −

  • Thread libraries – Multiple threads of control run in the same memory space. The library provides an interface that supports multithreading, with subroutines for:
    • Creating and destroying threads
    • Scheduling thread execution
    • Passing data and messages between threads
    • Saving and restoring thread contexts

Examples of thread libraries include Solaris™ threads for Solaris, POSIX threads as implemented in Linux, Win32 threads available in Windows NT and Windows 2000, and Java™ threads as part of the standard Java Development Kit (JDK).
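
As an illustration of the first and third subroutines, the following POSIX threads sketch creates threads, passes data to them through the void* argument, and joins them; the struct task type is invented for the example:

    #include <stdio.h>
    #include <pthread.h>

    /* Data handed to each thread through the void* argument. */
    struct task {
        int id;
        int input;
        int output;
    };

    static void *run(void *arg) {
        struct task *t = arg;
        t->output = t->input * t->input;   /* do this thread's work */
        return NULL;
    }

    int main(void) {
        struct task tasks[3] = { {0, 2, 0}, {1, 3, 0}, {2, 4, 0} };
        pthread_t tid[3];

        for (int i = 0; i < 3; i++)        /* create the threads */
            pthread_create(&tid[i], NULL, run, &tasks[i]);
        for (int i = 0; i < 3; i++)        /* wait for completion */
            pthread_join(tid[i], NULL);

        for (int i = 0; i < 3; i++)
            printf("task %d: %d^2 = %d\n", tasks[i].id, tasks[i].input, tasks[i].output);
        return 0;
    }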

  • Distributed Shared Memory (DSM) Systems – DSM systems create an abstraction of shared memory on a loosely coupled architecture, implementing shared memory programming without hardware support. They implement standard libraries and use the advanced user-level memory management features of modern operating systems. Examples include TreadMarks, Munin, IVY, Shasta, Brazos, and Cashmere.
  • Program Annotation Packages – These packages are implemented on architectures that have uniform memory access characteristics. The best-known example is OpenMP, which implements functional parallelism with a focus on loop parallelization (see the sketch after this list).
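
For example, in OpenMP a serial loop is parallelized with a single annotation; a minimal sketch, in which reduction(+:sum) gives each thread a private partial sum that is combined at the end:

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        double sum = 0.0;
        /* The annotation asks the compiler to split the loop
           iterations across the available threads. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 1; i <= 1000000; i++)
            sum += 1.0 / (double)i;
        printf("harmonic(1000000) = %f\n", sum);
        return 0;
    }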

Shared memory programming offers low-level control over the memory system, but it tends to be tedious and error-prone. It is better suited to system programming than to application programming.

What are the merits of Shared Memory Programming?

  • The global address space gives a user-friendly programming approach.
  • Data sharing among processes is fast and uniform because the memory is close to the CPU.
  • Data communication among processes does not need to be specified explicitly.
  • Process-communication overhead is negligible.
  • It is easy to learn.

What are the demerits of Shared Memory Programming?

  • It is not portable.
  • Managing data locality is very difficult.

Message Passing Model

Message passing is the most common parallel programming approach used in distributed memory systems. Here, the programmer determines the parallelism. Each processor has its own local memory unit, and data is exchanged through a communication network.


Message-passing libraries are used for communication. In addition to the data, a message contains the following components (the sketch after this list shows how they map onto the arguments of MPI send and receive calls):

  • The address of the processor sending the message;
  • The starting address of the memory location of the data being sent;
  • The data type of the data being sent;
  • The size of the data being sent;
  • The address of the processor to which the message is being sent;
  • The starting address of the memory location on the receiving processor.
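
These components map almost one-to-one onto the arguments of MPI's point-to-point calls; a minimal sketch of a two-process exchange (run with, for example, mpirun -np 2):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, value;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 99;
            /* buffer address, size, data type, destination, tag, communicator */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* buffer address, size, data type, source, tag, communicator */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }
        MPI_Finalize();
        return 0;
    }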

The processors use the following methods for communication −

  • Point-to-Point Communication
  • Collective Communication
  • Message Passing Interface

Point-to-Point Communication

Point-to-point communication is the simplest form of message passing. Here, the sending processor sends a message to the receiving processor using one of the following transfer modes −

  • Synchronous mode – The next message is sent only after receiving a confirmation that the previous message has been delivered, so the message sequence is maintained.
  • Asynchronous mode – The next message can be sent without waiting for confirmation that the previous message has been received (both modes are sketched below).
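
In MPI, the two modes correspond roughly to MPI_Ssend and MPI_Isend. A minimal two-process sketch of both, assuming rank 0 sends to rank 1:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, data = 5;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Synchronous mode: MPI_Ssend returns only once the receiver
               has started receiving, confirming delivery and ordering. */
            MPI_Ssend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);

            /* Asynchronous mode: MPI_Isend returns immediately; the sender
               keeps working and checks for completion later. */
            MPI_Request req;
            MPI_Isend(&data, 1, MPI_INT, 1, 1, MPI_COMM_WORLD, &req);
            /* ... useful computation could overlap the transfer here ... */
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Recv(&data, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received both messages\n");
        }
        MPI_Finalize();
        return 0;
    }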

Collective Communication

In collective communication, message passing involves more than two processors. Collective communication takes place in the following modes:

  • Barrier – All the processes involved in the communication run a particular block (known as the barrier block); no process continues with message passing until every process has reached the barrier.
  • Broadcast − Broadcasting is of two types −
    • One-to-all – One processor sends the same message to all the other processors in a single operation.
    • All-to-all − Every processor sends a message to every other processor.

There are three types of broadcast messages −

  • Personalized – A unique message is sent to each destination processor.
  • Non-personalized – All the destination processors receive the same message.
  • Reduction – One processor of the group collects all the messages from the other processors and combines them into a single message, which is then made accessible to all the other processors in the group (a sketch combining barrier, broadcast, and reduction follows this list).
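
These modes map directly onto MPI's collective operations; a minimal sketch combining a one-to-all broadcast, a reduction, and a barrier:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, nprocs, seed, sum;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        if (rank == 0) seed = 10;
        /* One-to-all broadcast: rank 0's seed reaches every process. */
        MPI_Bcast(&seed, 1, MPI_INT, 0, MPI_COMM_WORLD);

        int local = seed + rank;          /* each process computes a piece */

        /* Reduction: combine all the pieces into one message on rank 0. */
        MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        /* Barrier: every process waits here until all have arrived. */
        MPI_Barrier(MPI_COMM_WORLD);

        if (rank == 0) printf("sum = %d\n", sum);
        MPI_Finalize();
        return 0;
    }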

What are the merits of Message Passing?

  • It provides low-level control of parallelism;
  • It is portable;
  • It is less error-prone;
  • The overhead of parallel synchronization and data distribution is very low.

What are the demerits of Message Passing?

  • Compared to parallel shared-memory code, message-passing code generally needs more software overhead.

Message Passing Libraries

Several message passing libraries are available, of which the following two are the most relevant −

  • Message Passing Interface (MPI)
  • Parallel Virtual Machine (PVM)

Message Passing Interface (MPI)

MPI is the universal standard for providing communication among all the concurrent processes in a distributed memory system. Most parallel computing platforms provide at least one implementation of it. MPI is implemented as a library of predefined functions that can be called from languages such as C, C++, and Fortran.
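
The canonical first MPI program in C shows this structure; each process prints its own rank when the program is launched with, for example, mpirun -np 4:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);                  /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total process count */
        printf("hello from process %d of %d\n", rank, size);
        MPI_Finalize();                          /* shut the runtime down */
        return 0;
    }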

Merits of Message Passing Interface

  • The interface is supported on both shared memory and distributed memory architectures.
  • Each processor possesses its own local variables.
  • Distributed memory computers are less expensive than shared memory computers.

Demerits of Message Passing Interface

  • A parallel algorithm requires more programming changes;
  • Debugging is difficult;
  • Performance is limited by the communication network between the nodes.

Parallel Virtual Machine (PVM)

PVM is a message passing system that connects separate host machines to form a single virtual machine − a single, manageable parallel computing resource. PVM manages message routing, data conversion, and task scheduling across the network.
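
A minimal sketch of a PVM master task in C, assuming a separately built executable named worker that unpacks the integer, squares it, and replies with message tag 2 (the executable name and the tags are illustrative):

    #include <stdio.h>
    #include <pvm3.h>

    int main(void) {
        int tid, value = 7, result;

        pvm_mytid();                              /* enroll this task in PVM */
        pvm_spawn("worker", NULL, PvmTaskDefault, "", 1, &tid);

        pvm_initsend(PvmDataDefault);             /* start a new send buffer */
        pvm_pkint(&value, 1, 1);                  /* pack the integer */
        pvm_send(tid, 1);                         /* tag 1: the request */

        pvm_recv(tid, 2);                         /* tag 2: the reply */
        pvm_upkint(&result, 1, 1);                /* unpack the answer */
        printf("worker returned %d\n", result);

        pvm_exit();                               /* leave the virtual machine */
        return 0;
    }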

Features of PVM

  • It is very easy and simple to install and configure;
  • Multiple users can use PVM at the same time;
  • One user can execute multiple applications;
  • It is a small package;
  • The languages supported are C, C++, and Fortran;
  • Users can select the group of machines for a given run;
  • It is a message-passing model;
  • Computation is process-based;
  • Heterogeneous architectures are supported.

Data Parallel Programming

In the data parallel programming model, operations on a data set are performed simultaneously. The processors perform operations collectively on the same data structure, with each task working on a different partition of it.

However, not all algorithms can be specified in terms of data parallelism, which is why data parallelism is not universal.

Data parallel languages specify the decomposition of the data and its mapping to the processes. They contain data distribution statements that let the programmer control which data goes to which processor, in order to reduce the communication cost (an explicit version is sketched below).
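
Data parallel languages such as HPF express this distribution with directives; in C with MPI, the same block decomposition can be written out explicitly. A minimal sketch, assuming the program is run with at least two (and at most sixteen) processes:

    #include <stdio.h>
    #include <mpi.h>

    #define CHUNK 4   /* elements mapped to each process */

    int main(int argc, char **argv) {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int data[64];                 /* assumes nprocs * CHUNK <= 64 */
        int local[CHUNK];
        if (rank == 0)
            for (int i = 0; i < nprocs * CHUNK; i++) data[i] = i;

        /* Data mapping: block i of CHUNK elements goes to process i. */
        MPI_Scatter(data, CHUNK, MPI_INT, local, CHUNK, MPI_INT, 0, MPI_COMM_WORLD);

        for (int i = 0; i < CHUNK; i++)   /* same operation on each partition */
            local[i] *= 2;

        MPI_Gather(local, CHUNK, MPI_INT, data, CHUNK, MPI_INT, 0, MPI_COMM_WORLD);
        if (rank == 0) printf("data[5] = %d\n", data[5]);   /* prints 10 */
        MPI_Finalize();
        return 0;
    }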
