Parallel processing is the technology developed to minimize cost, increase performance and efficiency, and produce accurate results in real-life applications. The most widely used practices are multiprogramming, multiprocessing, and multicomputing.
The hardware and software used by modern computers have become extensive and powerful. Computer performance can be analyzed by tracing this development of hardware and software.
Computing problems fall into three categories: numerical computing, logical reasoning, and transaction processing. In some instances, all three models need to be combined.
The different types of parallel computers discussed below are multiprocessors, vector supercomputers, and SIMD computers.
Multiprocessor models are basically of three different types: Uniform Memory Access (UMA), Non-Uniform Memory Access (NUMA), and Cache-Only Memory Access (COMA).
In the UMA model, the physical memory is shared uniformly by all the processors, and every processor has an identical access time to every memory word. Each processor may have a private cache memory, and the same applies to the peripheral devices.
The multiprocessor is considered to be symmetric when all the processors have equal access to all the peripheral devices, and asymmetric when only a few of the processors can access the peripheral devices.
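As a minimal sketch of the UMA shared-memory style (illustrative C++, not tied to any particular machine): several threads update a single shared counter, with a mutex standing in for coordinated access to the uniformly shared memory.

```cpp
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    long counter = 0;          // one word in the shared, uniformly accessible memory
    std::mutex m;              // serializes updates from all "processors"
    std::vector<std::thread> workers;

    // Four threads play the role of four processors in a UMA multiprocessor:
    // each sees the same shared memory with the same access cost.
    for (int p = 0; p < 4; ++p) {
        workers.emplace_back([&] {
            for (int i = 0; i < 100000; ++i) {
                std::lock_guard<std::mutex> lock(m);
                ++counter;
            }
        });
    }
    for (auto& t : workers) t.join();

    std::cout << "counter = " << counter << '\n';  // prints 400000
    return 0;
}
```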
In the NUMA model, the memory access time varies with the location of the memory word. The physical memory is distributed among the processors as local memories; together, all the local memories form a global address space that every processor can access.
The COMA model is a special case of the NUMA model in which all the distributed main memories are converted into cache memories.
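The local-versus-remote distinction of NUMA can be sketched in code. The following toy C++ model (the node layout and latency numbers are invented for illustration) shows a global address space stitched together from per-node local memories, where the cost of an access depends on which node owns the word.

```cpp
#include <cstddef>
#include <iostream>

// Toy NUMA model: WORDS_PER_NODE consecutive addresses live on each node.
constexpr std::size_t NODES = 4;
constexpr std::size_t WORDS_PER_NODE = 1024;

// Hypothetical latencies, in arbitrary cycles, chosen only for illustration.
constexpr int LOCAL_LATENCY = 1;
constexpr int REMOTE_LATENCY = 4;

// Which node owns a given word of the combined global address space?
std::size_t owner(std::size_t addr) { return addr / WORDS_PER_NODE; }

// Access cost as seen from `node`: cheap if the word is in local memory,
// expensive if it must be fetched from another node's local memory.
int access_cost(std::size_t node, std::size_t addr) {
    return owner(addr) == node ? LOCAL_LATENCY : REMOTE_LATENCY;
}

int main() {
    std::size_t local = 100;                  // word owned by node 0
    std::size_t remote = 3 * WORDS_PER_NODE;  // word owned by node 3
    std::cout << "node 0 -> local word:  " << access_cost(0, local)  << " cycles\n";
    std::cout << "node 0 -> remote word: " << access_cost(0, remote) << " cycles\n";
    return 0;
}
```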
Vector supercomputers and SIMD parallel computers are used to incorporate data parallelism and vector processing.
A vector supercomputer adds a vector processor as an optional feature attached to the scalar processor. The host computer loads the program and data into the main memory. The scalar control unit decodes all the instructions; if a decoded instruction is a scalar operation or a program operation, the scalar processor executes it using the scalar functional pipelines.
If a decoded instruction turns out to be a vector operation, it is sent to the vector control unit.
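This decode-and-route step can be sketched abstractly (a toy model, not any real instruction set): the scalar control unit inspects each decoded instruction and either executes it itself or hands it to the vector control unit.

```cpp
#include <iostream>
#include <vector>

// Toy instruction stream for illustration; real vector ISAs are far richer.
enum class Kind { Scalar, Vector };
struct Instr { Kind kind; const char* name; };

void scalar_unit(const Instr& i) { std::cout << "scalar unit executes " << i.name << '\n'; }
void vector_unit(const Instr& i) { std::cout << "vector unit executes " << i.name << '\n'; }

int main() {
    std::vector<Instr> program = {
        {Kind::Scalar, "add r1, r2"},
        {Kind::Vector, "vadd v1, v2"},   // routed to the vector control unit
        {Kind::Scalar, "branch loop"},
    };

    // The scalar control unit decodes every instruction; vector operations
    // are forwarded to the vector control unit, everything else stays local.
    for (const Instr& i : program)
        (i.kind == Kind::Vector ? vector_unit : scalar_unit)(i);

    return 0;
}
```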
In SIMD computers, many processors, each with its own memory unit, are connected to a single control unit through an interconnection network.
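A data-parallel (SIMD-style) operation can be sketched as follows: the same instruction, an add, is applied across whole vectors of elements. This is an illustrative sketch of the programming style, not code for any specific vector machine.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    const std::size_t n = 8;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

    // One logical vector instruction: c = a + b applied to every element.
    // On a vector or SIMD machine, all n lanes would execute this add in lockstep.
    for (std::size_t i = 0; i < n; ++i)
        c[i] = a[i] + b[i];

    for (float x : c) std::cout << x << ' ';   // prints 3 3 3 3 3 3 3 3
    std::cout << '\n';
    return 0;
}
```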
An ideal model provides the best framework for developing a parallel algorithm without considering physical constraints or implementation details.
These models are used to evaluate the performance of parallel computers and to forecast the VLSI complexity of a chip before it is fabricated.
The Parallel Random Access Machine (PRAM) was developed as a model of an ideal parallel computer with zero memory access overhead.
The memory of a PRAM is shared and centralized, divided among the processors, which operate in a synchronized cycle of read-memory, compute, and write-memory steps.
There can be four different memory update operations - Exclusive Read (ER), Exclusive Write (EW), Concurrent Read (CR), and Concurrent Write (CW).
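A PRAM-style synchronized cycle can be simulated with threads. The sketch below is illustrative only (PRAM is an abstract model, and joining the threads each round stands in for its lockstep timing): it performs an EREW-style parallel sum, where in each round every active "processor" reads two cells, computes, and writes one cell, and all processors synchronize between rounds.

```cpp
#include <cstddef>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    // Shared PRAM memory: 8 cells to be summed into cell 0.
    std::vector<long> mem = {1, 2, 3, 4, 5, 6, 7, 8};

    // log2(n) synchronized rounds; in the round with stride s, processor p
    // reads mem[p] and mem[p + s] and writes their sum back to mem[p].
    // Exclusive read / exclusive write: no two processors touch the same cell.
    for (std::size_t s = 1; s < mem.size(); s *= 2) {
        std::vector<std::thread> procs;
        for (std::size_t p = 0; p + s < mem.size(); p += 2 * s) {
            procs.emplace_back([&mem, p, s] { mem[p] += mem[p + s]; });
        }
        for (auto& t : procs) t.join();  // barrier: end of the PRAM cycle
    }

    std::cout << "sum = " << mem[0] << '\n';  // prints 36
    return 0;
}
```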
Parallel computers use VLSI chips to fabricate processor arrays, memory arrays, and large-scale switching networks.
The amount of memory space on a VLSI chip is proportional to the area of that chip, and VLSI technologies are essentially two-dimensional.
The chip area A is a measure of the space complexity of an algorithm. If T is the time required to execute the algorithm, then A.T gives an upper bound on the total number of bits processed through the chip.
The lower bound is calculated as follows -
A.T^2 >= O(f(s)), where A is the chip area and T is the time needed to execute the algorithm.
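As a hedged numerical illustration (the figures are invented): suppose f(s) = s^2 for a problem of size s = 1000, so any chip must satisfy A.T^2 >= 10^6 in normalized units. A design with T = 100 needs area A >= 100, while a design that halves the time to T = 50 needs A >= 400; the bound trades area for time quadratically.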
The different tracks of parallel computers are the multiple processor track, the multiple data track, and the multiple threads track.
The multiple processor track assumes that different threads of an application execute on different processors, which communicate and interact through shared memory (the multiprocessor track) or through message passing (the multicomputer track).
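Communication by message passing, the multicomputer side of this track, can be sketched with a thread-safe queue standing in for the interconnection network; the mailbox type and message format here are invented for illustration.

```cpp
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

// Minimal mailbox standing in for a message-passing interconnect.
class Mailbox {
    std::queue<int> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void send(int msg) {
        { std::lock_guard<std::mutex> lock(m_); q_.push(msg); }
        cv_.notify_one();
    }
    int receive() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        int msg = q_.front(); q_.pop();
        return msg;
    }
};

int main() {
    Mailbox box;
    // "Processor" 1 sends its partial result; "processor" 2 consumes it.
    std::thread sender([&box] { box.send(7); });
    std::thread receiver([&box] { std::cout << "received " << box.receive() << '\n'; });
    sender.join();
    receiver.join();
    return 0;
}
```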
The multiple data track assumes that the same code is executed on a huge amount of data: the same instructions are applied either to a sequence of data elements (the vector track) or to a similar set of data elements (the SIMD track).
The multiple threads track assumes that threads executing on separate processors may suffer delays while waiting for each other; these delays can be hidden by interleaving the execution of several threads on the same processor.
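Latency hiding by overlapping threads can be sketched with asynchronous tasks: while one logical thread waits on a slow (simulated remote) memory fetch, another makes progress. This is an illustrative C++ sketch; the 50 ms "remote access" delay is invented.

```cpp
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

// Simulated remote memory access: a long, unpredictable delay.
int remote_fetch() {
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    return 42;
}

int main() {
    // Thread 1 issues the slow remote access asynchronously...
    std::future<int> pending = std::async(std::launch::async, remote_fetch);

    // ...and useful work from another thread proceeds in the meantime
    // instead of the processor stalling for the duration of the fetch.
    long local_work = 0;
    for (int i = 0; i < 1000000; ++i) local_work += i;

    // The remote value is consumed once it arrives; the delay was hidden.
    std::cout << "remote = " << pending.get()
              << ", local = " << local_work << '\n';
    return 0;
}
```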