Convergence of Parallel Architectures - Parallel Computer Architecture

What are the different parallel computer architectures?

Several types of architecture are used to design and develop parallel computers. The main types of architecture are as follows -

Communication Architecture

Parallel architecture enhances the conventional concepts of computer architecture with a communication architecture. Computer architecture defines critical abstractions (such as the user-system boundary and the hardware-software boundary) and the organizational structure, whereas communication architecture additionally defines the basic communication and synchronization operations, as well as the organizational structure that supports them.

Layers of abstraction

The diagram above depicts the layers of abstraction in a communication architecture. The top layer is the programming model, which is used to develop applications. This model includes the following aspects -

  • Shared address space
  • Message passing
  • Data parallel programming

Shared address programming – This is like using a bulletin board: a process communicates by posting information at a shared location, where it can be viewed by all the others, and individual activities are coordinated through these shared locations.
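As an illustration (not part of the original text), here is a minimal sketch of the shared address space model, assuming POSIX threads in C: both threads name the same variable, shared_total, and a mutex coordinates their accesses.

    /* Minimal shared-address-space sketch using POSIX threads.
       Both threads see the same variable `shared_total`; a mutex
       orders their accesses. Compile with: gcc demo.c -pthread */
    #include <pthread.h>
    #include <stdio.h>

    static long shared_total = 0;                 /* visible to all threads */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        long my_part = (long)arg;
        pthread_mutex_lock(&lock);                /* synchronize access */
        shared_total += my_part;                  /* "post" to the shared location */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)10);
        pthread_create(&t2, NULL, worker, (void *)32);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared_total = %ld\n", shared_total);  /* prints 42 */
        return 0;
    }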

Message passing – This is like sending a letter: a specific sender transmits information addressed to a specific receiver, and only that receiver obtains the message. (A code sketch is given in the Message-Passing Architecture section below.)

Data parallel programming – Each participant executes the same action simultaneously, but on a separate part of a large data set, and the results are shared globally. (A code sketch is given in the Data Parallel Processing section below.)

Shared Memory

Shared memory multiprocessors are one of the most important classes of parallel machines. They give better throughput on multiprogramming workloads and conveniently support parallel programs.

Shared Memory Multiprocessor

In this case, the computer system allows any processor and any I/O controller to access a collection of memory modules through some hardware interconnect. Memory capacity is increased by adding memory modules, I/O capacity is increased by adding I/O controllers, and processing capacity is increased by adding processors.

All the resources are organized around a central memory bus, and through the bus access mechanism any processor can access any physical address in the system. As all the processors are equidistant from all the memory locations, the access time, or latency, is the same for every processor on any memory location. This arrangement is called a symmetric multiprocessor (SMP).

Message-Passing Architecture

In a message-passing architecture, communication between processors takes the form of explicit I/O operations: communication is combined at the I/O level rather than through the memory system.

User-level communication is executed through operating system or library calls that perform many lower-level actions, including the actual communication operation. This creates a gap between the programming model and the communication operations performed by the hardware.

In message-passing architecture, the user-level communication operations are 'send' and 'receive'. Send specifies a local data buffer to be transmitted and the receiving remote processor. Receive specifies the sending process and a local data buffer in which the transmitted data will be placed. A tag, also called an identifier, is attached to the message being sent, and the receive operation specifies a matching rule, so that the message is delivered only to a receive with a matching tag.

Together, a send and a matching receive complete a memory-to-memory copy. Each end supplies its local data address, and the pair also acts as a synchronization event.
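As a concrete sketch of these operations, the following C program uses the standard MPI send and receive calls; the buffer contents and the tag value 7 are illustrative choices, not anything from the original text.

    /* Minimal message-passing sketch using MPI.
       Rank 0 sends a buffer to rank 1; the tag (here 7) must match
       between the send and the receive. Compile with: mpicc demo.c
       Run with: mpirun -np 2 ./a.out */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, data[4] = {1, 2, 3, 4};
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            /* send: local address + destination + matching tag */
            MPI_Send(data, 4, MPI_INT, 1, 7, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int buf[4];
            /* receive: local buffer + expected sender + expected tag */
            MPI_Recv(buf, 4, MPI_INT, 0, 7, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 got %d %d %d %d\n", buf[0], buf[1], buf[2], buf[3]);
        }
        MPI_Finalize();
        return 0;
    }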

Convergence

Development of hardware and software has largely removed the clear boundary between the shared memory and message-passing camps. Although the two present distinct programming models, each offering a transparent paradigm for sharing and communication, the machine structures of both types of architecture have converged towards a common organization.

Data Parallel Processing

Data parallel processing is another important class of parallel machines, also known as single-instruction multiple-data (SIMD) machines. The main feature of this programming model is that operations can be executed in parallel on each element of a large regular data structure, such as an array or matrix.

Data parallel programming languages are usually enforced by viewing the local address spaces of a group of processes, one per processor, as forming an explicit global space. As all the processors communicate with each other, there is a global view of the operations; this global view can be built on either a shared address space or a message-passing architecture.
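A minimal sketch of the data parallel model, assuming C with OpenMP (the array and the scaling operation are illustrative): the same operation is applied to every element of an array, and the iterations are divided among the processors.

    /* Minimal data-parallel sketch using OpenMP.
       The same operation (scale by 2) is applied to every element
       of the array; iterations are divided among the processors.
       Compile with: gcc demo.c -fopenmp */
    #include <stdio.h>

    #define N 8

    int main(void) {
        double a[N];
        for (int i = 0; i < N; i++) a[i] = i;   /* initialize the data set */

        #pragma omp parallel for                /* each element handled in parallel */
        for (int i = 0; i < N; i++)
            a[i] = 2.0 * a[i];

        for (int i = 0; i < N; i++) printf("%.1f ", a[i]);
        printf("\n");
        return 0;
    }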

What are the fundamental issues in parallel architecture design?

Computer performance cannot be improved by developing the programming model alone, nor by developing the hardware alone; it is the development of the computer architecture as a whole that makes the difference. The design issues can be resolved by focusing on how programs use the machine and on the basic technologies that are provided.

Communication Abstraction

Communication abstraction is the main interface between the programming model and the system implementation. It is like an instruction set: it provides a platform on which the same program can run correctly on many different implementations.

Communication abstraction acts as a contract between the hardware and the software, allowing each side the flexibility to improve without affecting the work of the other, and thereby increasing the performance of both.
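To make the contract idea concrete, the following is a hypothetical C sketch; comm_ops, platform_comm, and broadcast_value are invented names for illustration, not a real API. A program written against the abstract operations runs unchanged on any implementation that honors the contract.

    /* Hypothetical sketch of a communication abstraction as a contract.
       All names here are invented for illustration: programs call only
       the abstract operations, and each machine supplies its own
       implementation behind the same interface. */
    #include <stddef.h>

    struct comm_ops {
        int (*send)(int dest, const void *buf, size_t len, int tag);
        int (*recv)(int src, void *buf, size_t len, int tag);
    };

    /* Provided differently by each platform: a shared-memory machine
       may implement send/recv with loads and stores, a message-passing
       machine with network transfers. The program below is unchanged. */
    extern const struct comm_ops *platform_comm(void);

    int broadcast_value(int value, int nprocs) {
        const struct comm_ops *c = platform_comm();
        for (int dest = 1; dest < nprocs; dest++)
            if (c->send(dest, &value, sizeof value, /*tag=*/0) != 0)
                return -1;   /* contract: nonzero return means failure */
        return 0;
    }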

Programming Model Requirements

A parallel program has one or more threads operating on data. A parallel programming model defines what data the threads can name, what operations can be performed on the named data, and what order is followed by the operations.

To ensure that the dependencies between the operations are enforced, the activities of the threads must be coordinated.
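As a minimal sketch of such coordination, assuming POSIX threads in C: a condition variable enforces the dependency that the consumer reads the value only after the producer has written it.

    /* Minimal sketch of enforcing an order (dependency) between threads:
       the consumer must not read `value` until the producer has set it.
       Compile with: gcc demo.c -pthread */
    #include <pthread.h>
    #include <stdio.h>

    static int value;
    static int ready = 0;
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;

    static void *producer(void *arg) {
        pthread_mutex_lock(&m);
        value = 99;               /* operation that must happen first */
        ready = 1;
        pthread_cond_signal(&cv); /* announce that the dependency is satisfied */
        pthread_mutex_unlock(&m);
        return NULL;
    }

    static void *consumer(void *arg) {
        pthread_mutex_lock(&m);
        while (!ready)            /* wait until the producer has run */
            pthread_cond_wait(&cv, &m);
        printf("consumed %d\n", value);
        pthread_mutex_unlock(&m);
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }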
