Alternative Mechanisms to Achieve Parallel Speedup and Efficient Use of Processing Resources
This paper examines alternative mechanisms for achieving parallel speedup and making efficient use of processing resources. Computer architectures in common use exploit low-level parallelism through multiple pipelines to achieve high instruction throughput. The next generations of integrated circuits will continue to provide increasing numbers of transistors, raising a hardware allocation problem: how to use those additional transistors efficiently.
Computer manufacturers and researchers are therefore looking to capture additional levels of parallelism beyond multiple pipelines by adding multiple processors or processing components within a single chip or package. Each individual level of parallelism is subject to the diminishing returns described by Amdahl's law. This paper explains how combining multiple levels of parallelism in a single system yields higher overall performance and efficiency.
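The diminishing returns referred to above follow directly from Amdahl's law, which bounds the speedup of a workload by its serial fraction. A minimal sketch (the function name and the 90%-parallel example workload are illustrative choices, not from the paper):

```python
def amdahl_speedup(p, n):
    """Speedup predicted by Amdahl's law for a workload whose
    parallelizable fraction is p, run on n processing units."""
    return 1.0 / ((1.0 - p) + p / n)

# Diminishing returns: with 90% of the work parallelizable, each
# doubling of processor count buys less than the previous one, and
# the speedup can never exceed 1 / (1 - p) = 10x.
for n in (1, 2, 4, 8, 16, 32):
    print(n, round(amdahl_speedup(0.90, n), 2))
```

The shrinking gap between successive lines of output is the motivation for attacking several levels of parallelism at once rather than scaling any single level further.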