Parallel Systems COMP4300
Course overview
Course description
A practically oriented introduction to programming paradigms for parallel computers. It considers definitions of program efficiency on parallel computers and addresses the modelling, analysis and measurement of program performance. It also covers the description, implementation and use of parallel programming languages, parallel features of operating systems, library routines and applications.
Rationale
The leading edge of high performance computing is in computers with highly parallel architectures, such as the ASCI machines and BEOWULF-style clusters. The high-end examples of these computers cost millions of dollars each and the low end hundreds of thousands, but they have important uses in high-speed computation: weather forecasting, financial modelling and information databases, so-called "electronic wind tunnels", realistic high-speed graphics such as film and video animation sequences, scientific visualisation, machine learning, and virtual reality research. Similar computers using parallel processing will become much more widely used in many areas as they are better understood and the cost of building them continues to fall. The ANU has a stable of high performance parallel computers in the Department of Computer Science, the Supercomputing Facility, and the Research School of Information Science and Engineering, which provides later-year students with an unparalleled opportunity to work with state-of-the-art computing systems.
Parallel processing is the key to harnessing the power of modern cheap, high-powered processing and memory chips. Many computer designs attempt to answer the central architectural question: how to combine processors and memory into a parallel computer that makes effective use of their potential power at an acceptable cost. The techniques for programming the resulting machines include many new models of constructing, debugging and measuring the performance of programs that are quite different from those of conventional computing.
Ideas
This course presents the ideas of parallel efficiency, speedup and load balancing, and the associated difficulties of performance evaluation; practical parallel programming with existing languages; data-parallel and process-parallel programming; and common parallel programming paradigms and problem decomposition.
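As a small flavour of the speedup and efficiency measures mentioned above, here is a sketch under Amdahl's fixed-workload model (the function names are illustrative only, not taken from any course material):

```python
# Illustrative sketch of speedup and efficiency under Amdahl's
# fixed-workload model: a fraction s of the program is serial and
# the remaining (1 - s) parallelises perfectly over p processors.

def amdahl_speedup(serial_fraction: float, p: int) -> float:
    """Predicted speedup on p processors."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

def efficiency(speedup: float, p: int) -> float:
    """Efficiency = speedup per processor (1.0 is ideal)."""
    return speedup / p

if __name__ == "__main__":
    s = 0.05  # suppose 5% of the work is inherently serial
    for p in (1, 4, 16, 64):
        sp = amdahl_speedup(s, p)
        print(f"p={p:3d}  speedup={sp:6.2f}  efficiency={efficiency(sp, p):.2f}")
```

Even this toy model shows why load balancing matters: with only 5% serial work, efficiency falls well below one as processors are added.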
Topics
A selection will be made from the following topics:
- Foundations: machine architectures and programming applications.
- Parallel programming issues: driving forces and enabling factors, sample applications (scientific, engineering, AI and database). Efficiency, speedup, load balancing, performance measurement and comparisons. Introduction to parallel algorithms (reduction, sorting). Matrix manipulation.
- Programming paradigms: geometric and process decomposition, worker-farming, spatial decomposition, particle decomposition.
- Parallel software: languages and coordination constructs, parallelising compilers, programming environments.
- Practical parallelism: creating and evaluating programs for systems at ANU Supercomputer Facility and within the School of Computer Science as available.
- Technology trends: the future of high performance and parallel computing.
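As a small taste of the parallel algorithms topic above, a reduction (combining distributed values into one result) can be sketched with Python's standard multiprocessing module; this illustrates the idea only and is not the MPI-based approach used in the course:

```python
# Sketch of a parallel reduction: partition the data, sum each
# chunk in a separate worker process, then combine the partial sums.
from multiprocessing import Pool

def partial_sum(chunk):
    return sum(chunk)

def parallel_sum(data, nworkers=4):
    # Split the data into roughly equal contiguous chunks.
    size = (len(data) + nworkers - 1) // nworkers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(nworkers) as pool:
        partials = pool.map(partial_sum, chunks)  # one partial sum per worker
    return sum(partials)  # final combine step

if __name__ == "__main__":
    print(parallel_sum(list(range(1000))))  # prints 499500, same as sum(range(1000))
```

The same decomposition pattern (partition, compute locally, combine) underlies tree-structured reductions such as MPI_Reduce.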
Technical skills
- familiarity with data-parallel languages.
- competence in parallel programming using message passing libraries.
- programming of shared memory systems using OpenMP and/or pthreads.
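The message-passing skill above is taught using MPI; as a minimal sketch of the underlying send/receive idea, here is an exchange between two processes using only the Python standard library (multiprocessing.Pipe stands in for an MPI channel; this is not MPI syntax):

```python
# Minimal message-passing sketch: two processes exchange messages
# over an explicit channel, the core idea behind MPI_Send/MPI_Recv.
from multiprocessing import Process, Pipe

def worker(conn):
    msg = conn.recv()        # blocking receive, analogous to MPI_Recv
    conn.send(msg.upper())   # reply to the parent, analogous to MPI_Send
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p = Process(target=worker, args=(child_end,))
    p.start()
    parent_end.send("hello from rank 0")
    print(parent_end.recv())  # prints HELLO FROM RANK 0
    p.join()
```

The essential point carried over to MPI is that the processes share no memory: all communication is through explicit send and receive operations.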
Textbooks
- Lin, C. & Snyder, L., Principles of Parallel Programming, Pearson International edition.
- Grama, A., Gupta, A., Karypis, G. & Kumar, V., Introduction to Parallel Computing, 2nd edition, Addison-Wesley, 2003.
Other Reading Material
- Wilkinson, B. & Allen, M., Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers, 2nd edition, Prentice Hall, 2004.
- Bryant, R.E. & O'Hallaron, D., Computer Systems: A Programmer's Perspective, Pearson/Prentice Hall.
- Dowd, K. & Severance, C., High Performance Computing, 2nd edition, O'Reilly & Associates, 1998.
- Gropp, W., Lusk, E. & Skjellum, A., Using MPI: Portable Parallel Programming with the Message-Passing Interface, MIT Press, 1999.
- Gropp, W., Lusk, E. & Thakur, R., Using MPI-2: Advanced Features of the Message-Passing Interface, MIT Press, 1999, ISBN 0-262-057133-1.
- Butenhof, D.R., Programming with POSIX Threads, Addison-Wesley, 1997.
Workload
Thirty one-hour lectures and six two-hour tutorial/laboratory sessions.