High-Performance Compilers for Parallel Computing - Paperback

Michael Wolfe

 
9780805327304: High-Performance Compilers for Parallel Computing

Synopsis

By the author of the classic 1989 monograph Optimizing Supercompilers for Supercomputers, this book covers the knowledge and skills necessary to build a competitive, advanced compiler for parallel or high-performance computers. Starting with a review of basic terms and algorithms used, such as graphs, trees, and matrix algebra, Wolfe shares the lessons of his 20 years' experience developing compiler products such as KAP, the capstone product of Kuck and Associates, Inc., of Champaign, Illinois.

"Sinopsis" puede pertenecer a otra edición de este libro.

About the Author

As co-founder in 1979 of Kuck and Associates, Inc., Michael Wolfe helped develop KAP, a restructuring, parallelizing compiler. In 1988, Wolfe joined the faculty of the Oregon Graduate Institute of Science and Technology, directing research on language and compiler issues for high-performance computer systems. His current research includes the development and implementation of program restructuring transformations to optimize programs for execution on parallel computers, the refinement and application of recent results in analysis techniques to low-level compiler optimizations, and the analysis of data dependence decision algorithms.


From the Back Cover

High Performance Compilers for Parallel Computing provides a clear understanding of the analysis and optimization methods used in modern commercial and research compilers for parallel systems. By the author of the classic 1989 monograph Optimizing Supercompilers for Supercomputers, this book covers the knowledge and skills necessary to build a competitive, advanced compiler for parallel or high-performance computers. Starting with a review of basic terms and algorithms used, such as graphs, trees, and matrix algebra, Wolfe shares the lessons of his 20 years' experience developing compiler products. He provides a complete catalog of program restructuring methods that have proven useful in the discovery of parallelism or performance optimization, and discusses compiling details for each type of parallel system described, from simple code generation through basic and aggressive optimizations. A wide variety of parallel systems are presented, from bus-based cache-coherent shared-memory multiprocessors and vector computers to message-passing multicomputers and large-scale shared-memory systems.
