Processor in Parallel Systems - Parallel Computer Architecture

What are the different processors used in parallel systems?

The transputer is a processor that was widely used in the early 1980s for building multicomputers. A single transputer chip included the following items – a core processor, a small on-chip SRAM memory, an interface to DRAM main memory, and four communication channels (links). A transputer network is built by connecting these links to one another, enabling the processors to communicate in parallel. However, the transputer could not keep up with parallel applications because it lacked computational power. To overcome this problem, cheaper and more powerful processors, known as RISC processors, took its place.

Modern microprocessors exploit parallelism at several levels, most notably at the instruction level and at the data level.

High Performance Processors

Some of the high-performance processors in use are RISC and RISC-like (RISCy) processors.

Traditional RISC processors have the following features -

  • There are only a few addressing modes.
  • Instruction formats are usually fixed at 32 or 64 bits.
  • Data is loaded from memory into registers using dedicated load instructions, and stored from registers back to memory using dedicated store instructions.
  • Arithmetic operations are performed only on registers.
  • Pipelining is used.
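The load/store discipline in the list above can be sketched with a toy Python model (illustrative only, not real RISC assembly): memory is touched only by loads and stores, and arithmetic happens strictly between registers.

```python
# Toy model of a load/store (RISC-style) machine: arithmetic happens
# only between registers; memory is accessed only by loads and stores.
memory = {0x10: 5, 0x14: 7, 0x18: 0}  # word-addressed main memory
regs = [0] * 4                         # register file r0..r3

regs[0] = memory[0x10]         # LOAD  r0, [0x10]
regs[1] = memory[0x14]         # LOAD  r1, [0x14]
regs[2] = regs[0] + regs[1]    # ADD   r2, r0, r1  (register-to-register)
memory[0x18] = regs[2]         # STORE r2, [0x18]
```

Note that the addition never reads memory directly; both operands were loaded into registers first, which is exactly what distinguishes a load/store architecture from one with memory operands.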

Superscalar microprocessors, which issue instructions through multiple instruction pipelines, are the most widely used processors in parallel computers. The effectiveness of a superscalar processor is determined by the amount of instruction-level parallelism (ILP) available in a particular application. To keep the pipelines full, hardware-level instructions may be executed out of program order.

Most microprocessors also use a superpipelining approach: the number of pipeline stages is increased while the amount of work done within each stage is decreased, which allows the clock frequency to be raised.

Very Large Instruction Word (VLIW) Processors

As the name implies, Very Long Instruction Word (VLIW) processors execute very long instruction words. Each instruction word bundles several operations that can be executed in parallel. When a VLIW instruction is received, its operations are decoded and then dispatched to the respective functional units, where they execute in parallel.

Vector Processors

Vector processors are co-processors attached to general-purpose microprocessors. When a vector instruction is received and decoded, the operation is performed on each element of the vector. In normal operation, many vector operations are chained together: the outcome of one vector operation is forwarded to another vector operation, where it is received as an operand.
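This chaining can be sketched in Python (a software model only; a real vector unit forwards results between functional units in hardware). Each helper stands in for one element-wise vector instruction, and the output of the multiply is fed directly as an operand to the add.

```python
def vmul(a, b):
    """Element-wise vector multiply (models one vector instruction)."""
    return [x * y for x, y in zip(a, b)]

def vadd(a, b):
    """Element-wise vector add (models one vector instruction)."""
    return [x + y for x, y in zip(a, b)]

a, b, c = [1, 2, 3], [4, 5, 6], [10, 10, 10]
# Chaining: the result of vmul is forwarded as an operand to vadd,
# computing d = a*b + c without writing the intermediate back to memory.
d = vadd(vmul(a, b), c)   # [14, 20, 28]
```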


The speed of microprocessors doubles roughly every 18 months, making it increasingly difficult for main memory built from DRAM chips to keep pace with the processor. To overcome this, the speed gap between the microprocessor and memory is bridged by using caches. A cache is a small, fast SRAM memory. The caches used in modern processors include Translation Look-aside Buffers (TLBs), instruction caches, and data caches.

What are the different types of caches?

The following are some of the types of caches.

Direct Mapped Cache

Each main-memory address is mapped to exactly one location in the cache. Because multiple blocks of main memory map to the same cache entry, the processor must check a tag to determine whether the entry currently holds the desired data block.
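The mapping is simple modular arithmetic, as the following sketch shows (the cache geometry here – 8 lines of 16 bytes – is a hypothetical example, not from the text):

```python
def cache_line(address, num_lines=8, block_size=16):
    """Map a main-memory byte address to its single possible cache line."""
    block_number = address // block_size   # which memory block the byte is in
    return block_number % num_lines        # the one fixed line for that block

# Addresses 0 and 128 lie in different memory blocks (0 and 8), yet both
# map to line 0 -- a conflict the hardware must detect by comparing tags.
```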

Fully Associative Cache

This cache allows a block to be placed anywhere in the cache. The cache entry that will hold the block is chosen by following a replacement policy. Because any block can go in any entry, fully associative caches minimize the conflicts that arise from fixed cache-entry mapping. However, fully associative caches are not suggested for large caches, as the hardware cost of comparing every entry's tag in parallel is high.

Set-associative Cache

A set-associative cache combines direct mapping with the fully associative cache. The entries of the cache are split into groups known as cache sets. Each memory block is mapped to a fixed cache set, and within that cache set the placement of the block follows the fully associative scheme.
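Set selection works like direct mapping, but the selected set holds several blocks. A sketch, again with a hypothetical geometry (4 sets of 16-byte blocks):

```python
def cache_set(address, num_sets=4, block_size=16):
    """Select the cache set for a byte address. Within the chosen set,
    any of the set's entries may hold the block (fully associative)."""
    return (address // block_size) % num_sets

# Addresses 0 and 64 both select set 0, but in a 2-way set-associative
# cache they can coexist in the two entries of that set, avoiding the
# conflict a direct-mapped cache would suffer.
```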

What are the different Cache strategies?

Cache strategies specify the course of action when a miss occurs and a resident block must make way for a new one. The cache block to be replaced by the new block is chosen by the cache according to a replacement strategy. Widely used replacement strategies are -

  • First-In First Out (FIFO)
  • Least Recently Used (LRU)
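The LRU policy above can be sketched with Python's `collections.OrderedDict` (a software model of the recency bookkeeping that the cache hardware performs):

```python
from collections import OrderedDict

class LRUCache:
    """Toy fully associative cache with LRU replacement."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # insertion order tracks recency

    def access(self, block):
        """Return True on a hit, False on a miss (which fills the block)."""
        if block in self.blocks:
            self.blocks.move_to_end(block)   # hit: now most recently used
            return True
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used
        self.blocks[block] = True
        return False

cache = LRUCache(2)
cache.access('A')   # miss: A loaded
cache.access('B')   # miss: B loaded
cache.access('A')   # hit:  A becomes most recently used
cache.access('C')   # miss: evicts B, the least recently used block
```

A FIFO cache would differ only in the hit case: it would not call `move_to_end`, so blocks are evicted strictly in the order they were loaded regardless of reuse.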

