COMP4300: Formal Course Description
(6 credit points) Group C, Second Semester
Thirty one-hour lectures, six two-hour laboratory/tutorials.
Lecturers: Alistair Rendell and Josh Milthorpe
Prerequisites
COMP2310; 6 units of 2000-series COMP courses; and 6 units of 2000-series MATH courses or COMP2600
Syllabus
A practically oriented introduction to programming paradigms for parallel computers. Considers definitions of program efficiency on parallel computers and addresses the modelling, analysis and measurement of program performance. Description, implementation and use of parallel programming languages, parallel features of operating systems, library routines and applications.
Description
A mainly practical introduction to the art of programming high-performance parallel computers for representative problems, with the emphasis on performance and programming paradigms.
Rationale
The leading edge of high performance computing lies in computers with highly parallel architectures. Increasingly these machines are heterogeneous, incorporating more than one instruction set architecture. While the very high-end examples of these computers cost more than one hundred million dollars each, they generally use commodity technology, so smaller and cheaper systems are widespread. Typical applications include weather forecasting, financial modelling and information databases, so-called "electronic wind tunnels", realistic high-speed graphics such as film and video animation sequences, scientific visualisation, machine learning, and virtual reality research. The Research School of Computer Science has a number of high-end computer systems, including experimental and prototype systems. More generally, the ANU is host to the National Computational Infrastructure, which in 2013 houses a system with over 50,000 compute cores.

Parallel processing is the key to harnessing the power of modern cheap high-powered processing and memory chips. There are many computer designs that attempt to answer the central architectural question: how to combine processors and memory into a parallel computer that can make effective use of their potential power at an acceptable cost. The techniques for programming the resulting machines include many new models for constructing, debugging and measuring the performance of programs that are quite different from those of conventional computing.
Objectives
At the completion of this unit the student will:
- be able to program more than one parallel machine in more than one specialised programming language or programming system; generic graduate attributes: 1, 4
- be able to descriptively compare the performance of different programs and methods on one machine; generic graduate attributes: 3, 5
- be aware of the elements of parallel programming language and system implementation; generic graduate attributes: 3
- be aware of the history and developments in the field; generic graduate attributes: 3
Assessment
The following assessment modes are used:
- mid-semester examination: this tests your progress in order to show up any problems early, and motivates you not to leave too much until the end of semester.
- final examination: this tests objectives 1, 2, 3, and 4.
- assignments: each assignment will require an in-depth task which interacts with the larger system. This tests objectives 1 and 2.
Technical Skills
- familiarity with data-parallel languages.
- competence in parallel programming using message passing libraries.
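
As an illustration of programming with a message-passing library, the sketch below uses MPI, one widely used such library (chosen here purely as an example, not as a statement of which library the unit uses). Rank 0 sends an integer to rank 1, which receives and prints it.

#include <mpi.h>
#include <stdio.h>

/* Minimal message-passing sketch: rank 0 sends one integer to rank 1.
 * Build with an MPI wrapper compiler (e.g. mpicc) and run with at
 * least two processes, e.g. mpirun -np 2 ./a.out */
int main(int argc, char *argv[])
{
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}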
Ideas
This unit presents the ideas of parallel efficiency, speedup and load balancing, and the associated difficulties of performance evaluation; practical parallel programming with existing languages; data-parallel and process-parallel programming; and common parallel programming paradigms and problem decomposition.
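
For reference, the conventional definitions behind these ideas: if a program takes time T(1) on one processor and T(p) on p processors, its speedup is S(p) = T(1) / T(p) and its parallel efficiency is E(p) = S(p) / p; load balancing is the problem of keeping E(p) close to 1 as p grows.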
Topics
A selection will be made from the following topics:
- Foundations:
- machine architectures and programming applications.
- Parallel programming issues:
- driving forces and enabling factors, sample applications (scientific, engineering, AI and database). Efficiency, speedup, load balancing, performance measurement and comparisons. Introduction to parallel algorithms (reduction, sorting; see the sketch after this list). Matrix manipulation.
- Programming paradigms:
- geometric and process decomposition, worker-farming, spatial decomposition, particle decomposition.
- Parallel software:
- languages and coordination constructs, parallelising compilers, programming environments.
- Practical parallelism:
- creating and evaluating programs for distributed and shared memory parallel computer systems.
- Technology trends:
- the future of high performance and parallel computing.
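
To make the reduction topic above concrete, the sketch below sums an array on a shared-memory system using OpenMP, one common shared-memory programming system used here only as an illustration; the loop is timed so that speedup can be estimated by rerunning with different thread counts.

#include <omp.h>
#include <stdio.h>

/* Shared-memory parallel reduction sketch: sum an array with an
 * OpenMP reduction clause and time it, so speedup can be estimated
 * by varying OMP_NUM_THREADS. Build with, e.g., gcc -fopenmp */
#define N 10000000

int main(void)
{
    static double a[N];          /* static so the array is not on the stack */
    double sum = 0.0, t0, t1;
    int i;

    for (i = 0; i < N; i++)
        a[i] = 1.0;              /* known answer: sum should equal N */

    t0 = omp_get_wtime();
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < N; i++)
        sum += a[i];
    t1 = omp_get_wtime();

    printf("sum = %.0f  time = %.4f s  threads = %d\n",
           sum, t1 - t0, omp_get_max_threads());
    return 0;
}

Comparing the reported times for 1, 2, 4, ... threads gives the speedup and efficiency figures described under Ideas.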


