The transputer was a processor widely used in the early 1980s for building multicomputers. A single transputer chip contains a core processor, a small on-chip SRAM, an interface to external DRAM main memory, and four communication links. A transputer network is built by wiring these links together, enabling the processors to communicate in parallel. However, the transputer lacked the computational power that parallel applications demanded, and it was displaced by cheaper, more powerful alternatives known as RISC processors.
Modern microprocessors instead exploit parallelism at several levels, most notably at the instruction level and at the data level.
Most high-performance processors are RISC or RISC-like (RISCy) designs.
Traditional RISC processors share features such as fixed-length instructions, a load/store architecture, simple addressing modes, and a large general-purpose register file.
Superscalar microprocessors provide multiple instruction pipelines and are widely used in parallel computers. The effectiveness of a superscalar processor depends on the amount of instruction-level parallelism (ILP) available in a given application. To keep the pipelines full, the hardware may execute instructions out of program order.
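The hazard check that limits ILP can be sketched as follows. This is a toy model, not real hardware: instructions are represented as hypothetical (destination, source, source) register tuples, and a second instruction may issue in the same cycle as the first only if it neither reads nor overwrites the first one's destination.

```python
# Toy 2-wide issue sketch: instructions are (dest, src1, src2) tuples.
# Register names and the function name are illustrative, not a real ISA.

def can_dual_issue(first, second):
    """True if `second` is independent of `first` and can issue alongside it."""
    dest1, _, _ = first
    dest2, src1, src2 = second
    raw = dest1 in (src1, src2)   # read-after-write hazard
    waw = dest1 == dest2          # write-after-write hazard
    return not (raw or waw)

# r1 = r2 + r3 and r4 = r5 + r6 share no registers, so a superscalar
# core can issue them in the same cycle.
print(can_dual_issue(("r1", "r2", "r3"), ("r4", "r5", "r6")))  # True
# r4 = r1 + r5 reads r1, so it must wait for the first instruction.
print(can_dual_issue(("r1", "r2", "r3"), ("r4", "r1", "r5")))  # False
```

An application with many such dependent pairs exposes little ILP, which is why superscalar effectiveness varies from program to program.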
Most microprocessors also use a superpipelined approach: the pipeline is divided into a larger number of stages, each performing less work, so that the clock frequency can be increased.
As the name implies, Very Long Instruction Word (VLIW) processors execute very long instructions. Each instruction word is wide enough to encode several operations that execute in parallel. When a VLIW instruction is received, its operations are decoded and dispatched to the functional units, which execute them in parallel.
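A minimal sketch of the VLIW idea, with made-up unit and register names: each instruction word bundles one operation per functional unit, and all slots in a bundle are issued in the same cycle. The compiler, not the hardware, is responsible for ensuring the slots are independent.

```python
# Toy VLIW sketch: a bundle holds one operation per functional unit
# (slot names like "alu0" and the register file are illustrative).

regs = {"r1": 2, "r2": 3, "r3": 4, "r4": 5, "r5": 0, "r6": 0}

def execute_bundle(bundle):
    """All slots issue in the same cycle; the compiler scheduled them
    so they are independent of one another."""
    results = {}
    for unit, (op, dest, a, b) in bundle.items():
        if op == "add":
            results[dest] = regs[a] + regs[b]
        elif op == "mul":
            results[dest] = regs[a] * regs[b]
    regs.update(results)  # commit all slots together

# One instruction word: an add and a multiply execute in parallel.
execute_bundle({"alu0": ("add", "r5", "r1", "r2"),
                "alu1": ("mul", "r6", "r3", "r4")})
print(regs["r5"], regs["r6"])  # 5 20
```

The key design choice is that scheduling moves from hardware (as in a superscalar core) to the compiler, which simplifies the processor at the cost of binary compatibility across implementations.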
Vector processors began as coprocessors attached to general-purpose microprocessors. When a vector instruction is received and decoded, the operation is performed on every element of the vector. In normal operation, several vector operations are chained together: the result of one vector operation is forwarded as an operand to the next.
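The chaining of vector operations can be sketched in plain Python (function names are illustrative): the result vector of one operation feeds the next as an operand, here computing a*x + y element by element.

```python
# Vector chaining sketch: each function stands in for one vector
# instruction operating on all elements of its operand vectors.

def vadd(u, v):
    """Element-wise vector add."""
    return [a + b for a, b in zip(u, v)]

def vscale(s, v):
    """Multiply every element of a vector by a scalar."""
    return [s * a for a in v]

x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
# The output of vscale is chained straight into vadd as an operand:
result = vadd(vscale(2.0, x), y)
print(result)  # [12.0, 24.0, 36.0]
```

In real vector hardware, chaining lets the second operation start consuming elements before the first has finished the whole vector, keeping the functional units busy.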
Microprocessor speed doubles roughly every 18 months, while main memory built from DRAM chips improves far more slowly, making it increasingly difficult for memory to keep pace with the processor. Caches are used to bridge this growing speed gap: a cache is a small, fast SRAM placed between processor and memory. Modern processors use several kinds of caches, including instruction caches, data caches, and Translation Look-aside Buffer (TLB) caches.
A cache can be organized in the following ways.
In a direct-mapped cache, each main-memory address is mapped to exactly one cache location, so multiple memory blocks compete for the same cache entry. The processor determines which block currently occupies an entry by comparing a stored tag.
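The mapping can be sketched in a few lines (cache size and names are illustrative): the cache line is fixed by the block number modulo the number of lines, and the remaining bits form the tag that distinguishes the blocks sharing that line.

```python
# Direct-mapped sketch: a block's cache line is fixed by
# index = block_number mod NUM_LINES; the tag identifies which
# of the colliding blocks currently occupies the line.

NUM_LINES = 8  # illustrative cache size

def cache_line(block_number):
    index = block_number % NUM_LINES   # which cache line
    tag = block_number // NUM_LINES    # distinguishes colliding blocks
    return index, tag

print(cache_line(3))   # (3, 0)
print(cache_line(11))  # (3, 1) -- collides with block 3 on line 3
```

Blocks 3 and 11 can never reside in the cache at the same time here, which is exactly the conflict problem the associative organizations below address.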
A fully associative cache allows a block to be placed in any cache entry; when the cache is full, the entry to evict is chosen by a replacement policy. Fully associative caches minimize the conflicts caused by fixed cache-entry mapping, but the cost of comparing every entry makes them impractical at large sizes.
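A minimal sketch of full associativity, with illustrative names: a lookup must compare the requested block against every entry, which hardware does with one comparator per entry; here a linear scan stands in for that parallel comparison.

```python
# Fully associative sketch: a block may occupy any entry, so a lookup
# must check every entry (hardware uses parallel tag comparators).

cache = [None] * 4  # 4 entries, each holding a block number or None

def lookup(block_number):
    return block_number in cache  # stands in for parallel comparison

def insert(block_number):
    # Place in any free entry; a real cache would invoke its
    # replacement policy when no entry is free.
    for i, entry in enumerate(cache):
        if entry is None:
            cache[i] = block_number
            return

insert(42)
insert(7)
print(lookup(42), lookup(99))  # True False
```

The per-entry comparator is what makes this design expensive: its hardware cost grows with the number of entries, which is why full associativity is reserved for small structures such as TLBs.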
A set-associative cache combines direct mapping with full associativity. The cache entries are divided into groups called cache sets. Each memory block is mapped to a fixed cache set by direct mapping, but within that set the block may be placed in any entry, following the fully associative scheme.
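A 2-way set-associative version of the earlier sketches (sizes and names illustrative): the set is chosen by direct mapping, and within the set either way may hold the block.

```python
# 2-way set-associative sketch: the set is picked directly
# (block mod NUM_SETS); within the set, placement is associative.

NUM_SETS = 4
WAYS = 2
cache = [[None] * WAYS for _ in range(NUM_SETS)]

def lookup(block_number):
    s = block_number % NUM_SETS
    return block_number in cache[s]  # search only this set's ways

def insert(block_number):
    s = block_number % NUM_SETS
    ways = cache[s]
    if None in ways:
        ways[ways.index(None)] = block_number
    else:
        ways[0] = block_number  # placeholder; a real cache applies its replacement policy

insert(5)   # set 1, first way
insert(9)   # set 1, second way -- would collide in a direct-mapped cache
print(lookup(5), lookup(9))  # True True
```

Blocks 5 and 9 map to the same set but coexist here, showing how even modest associativity removes the worst direct-mapped conflicts while only two entries per set need comparing.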
Cache strategies specify the course of action when particular events occur, such as a miss in a full cache. The block to be replaced by the incoming block is chosen according to a replacement strategy; widely used strategies include Least Recently Used (LRU), First-In First-Out (FIFO), and random replacement.
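The LRU strategy can be sketched with an ordered dictionary (the class is illustrative, not any particular library's API): on a hit, the block is moved to the most-recently-used position; on a miss in a full cache, the least recently used block is evicted.

```python
# LRU replacement sketch: keys are kept in access order, so the
# oldest key is always the least recently used block.

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # ordered oldest -> newest

    def access(self, block):
        """Return 'hit' or 'miss', updating recency and occupancy."""
        if block in self.entries:
            self.entries.move_to_end(block)  # now most recently used
            return "hit"
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        self.entries[block] = True
        return "miss"

c = LRUCache(2)
print(c.access("A"), c.access("B"), c.access("A"))  # miss miss hit
print(c.access("C"))  # miss -- evicts B, the least recently used
print(c.access("B"))  # miss -- B was indeed evicted
```

FIFO differs only in that a hit does not update recency, and random replacement needs no bookkeeping at all, which is why simpler policies are sometimes preferred in hardware despite LRU's better hit rates.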