What implications would increased parallelism have for OS design?
Parallel computing is a form of computation in which many calculations are carried out simultaneously, based on the principle that large problems can often be divided into smaller ones, which are then solved concurrently. Parallelism has been used for many years, primarily in high-performance computing, but interest in it has grown in recent years because of the physical limits that hinder frequency scaling. As power consumption and heat dissipation have become major concerns, parallel computing has become the dominant paradigm in computer architecture, chiefly in the form of multi-core processors.
While fabrication technology continues to advance, shrinking individual gates further, the physical constraints of microelectronics have become a major design problem: denser circuits generate substantial heat and complicate information synchronization across the chip. The demand for more efficient microprocessors therefore pushes CPU designers toward other ways of maximizing performance. Increased parallelism benefits most applications, but it is ineffective for those with hard-to-predict, serially dependent code. Because most applications are well suited to parallel execution, adding multiple independent CPUs is one common way to improve a system's overall parallelism. The combination of extra die space made available by improved fabrication processes and the demand for greater parallelism is the rationale behind multi-core CPUs.
Dastikop explains cloud computing fundamentals in a one-minute video lesson, framing cloud computing in terms of parallel and distributed computing.