Pablo Halpern has been writing software since 1978 and is currently a software engineer at Intel Corporation, as well as a long-time member of the ISO C++ Standards Committee. As chairman of the Parallel Programming Models Working Group at Intel, he coordinated the efforts of teams working on Cilk Plus, TBB, OpenMP, and other parallelism languages, frameworks, and tools targeted at C++, C, and Fortran users.
His current work is focused on developing compiler technology for Intel's next-generation processors, promoting adoption of parallel and vector constructs into the C++ and C standards, and creating simpler and more powerful parallel programming languages, compilers, and tools for Intel's customers.
Parallel programming was once considered the exclusive realm of weather forecasters and particle physicists working on multi-million-dollar supercomputers, while the rest of us relied on chip manufacturers to produce faster CPUs every year. That era has come to an end.
Clock speedups have largely been replaced by more CPU cores per chip and more chips per system. A typical smartphone now has 2 to 4 cores, a typical laptop or tablet has 4 to 8 cores, servers have dozens of cores, and supercomputers have thousands of cores. Each of these cores has 4 to 16 SIMD (single instruction, multiple data) lanes, and many systems also have GPUs (graphics processing units) capable of massively parallel computations.
If you want to speed up a computation on modern hardware, you need to take advantage of the multiple cores, SIMD units, and GPUs available. This talk provides an overview of the parallelism landscape. We'll explore the what, why, and how of parallel programming, discuss the distinction between parallelism and concurrency and where they overlap, and survey the pitfalls that parallel programmers run into. We'll conclude with a brief exploration of how to begin designing for parallelism.