Alistair Rendell

   

Appointment and Contact Details

Deputy Dean, ANU College of Engineering and Computer Science
and Professor, Research School of Computer Science
Australian National University
Canberra, ACT 0200
AUSTRALIA
T: +61-2-6125 4386
F: +61-2-6125 0010
E: Alistair.Rendell@anu.edu.au
PGP Public Key
Rm N226 Building 108
Rm A309 Building 115

Previous appointments

Research

   
  The High Performance Computing Research Group (June 2006). From left: Jin Wong (PhD student), Pete Janes (Honours student), Warren Armstrong (PhD student), Rui Wang (Postdoctoral Fellow), Bill Clarke (Research Associate), Peter Strazdins (Senior Lecturer), Alistair Rendell (Associate Professor), Joseph Antony (PhD student), David Barr (Alexander Technology), Andrew Over (PhD student)

I have broad interests in the area of high performance computing. Some of the projects we are currently working on include:

  • Computer Architecture and Performance Modelling: Modern computer systems increasingly incorporate multi-core processors and memory architectures with non-uniform access times. We are interested in developing software tools and methods appropriate for such machines; a small illustrative sketch follows the reference below. This work is undertaken in partnership with Sun Microsystems and Gaussian Inc as part of Australian Research Council (ARC) Linkage Grant LP0347178 (2002-2006).
      J. Antony, P.P. Janes and A.P. Rendell, "Exploring Thread and Memory Placement on NUMA Architectures: Solaris and Linux, UltraSPARC/FirePlane and Opteron/HyperTransport", 13th IEEE International Conference on High Performance Computing, Lecture Notes in Computer Science 4297, 338-352 (2006).
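      The sketch below illustrates the basic idea of explicit thread and memory placement using the Linux libnuma interface: run a thread on a chosen node and allocate its data on that same node so accesses stay local. The interface choice and sizes are illustrative assumptions only, not the tools developed in this project.

        /* Minimal sketch of thread and memory placement on a NUMA machine
         * using the Linux libnuma API (illustrative only).
         * Compile with:  gcc numa_place.c -lnuma
         */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <numa.h>

        int main(void)
        {
            if (numa_available() < 0) {
                fprintf(stderr, "libnuma not supported on this system\n");
                return EXIT_FAILURE;
            }

            int node = numa_max_node();      /* pick the highest-numbered node */
            size_t bytes = 64 * 1024 * 1024;

            /* Run the current thread on 'node' and place its data there,
             * so accesses stay local rather than crossing the interconnect. */
            numa_run_on_node(node);
            double *buf = numa_alloc_onnode(bytes, node);
            if (buf == NULL) {
                fprintf(stderr, "allocation on node %d failed\n", node);
                return EXIT_FAILURE;
            }

            memset(buf, 0, bytes);           /* first touch happens on 'node' */
            printf("initialised %zu MB on NUMA node %d\n", bytes >> 20, node);

            numa_free(buf, bytes);
            return EXIT_SUCCESS;
        }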

  • Cluster and Grid Computing: Computers assembled from commodity parts have revolutionised high performance computing. Much work remains, however, to improve the usability and efficiency of these systems. We are interested in developing queueing systems that can optimally allocate user processes to heterogeneous cluster systems, and in the use of virtualization to provide compute environments tailored to a given application. This work is in partnership with Alexander Technology and is supported by ARC Linkage Grant LP0669726 (2006-2009).

  • Interval Arithmetic: In contrast to computing with a single machine-representable floating point number, interval computations carry a range of floating point numbers enclosed by a lower and an upper bound. We are using interval arithmetic to study the effects of rounding and truncation errors in a variety of computational science applications. We are also interested in a variety of novel interval algorithms, such as those that rigorously determine global minima; a small sketch of the basic operation follows the reference below. This work is supported by ARC Discovery Grant DP0558228 (2005-2007).
      A.P. Rendell, B. Clarke and J. Milthorpe, "Interval Arithmetic and Computational Science: Performance Considerations", 2006 International Conference on Computational Science, Lecture Notes in Computer Science 3991, 218-225 (2006).
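      The sketch below shows the essential mechanism for interval addition using the C99 <fenv.h> directed rounding modes: round the lower bound down and the upper bound up so the true result is always enclosed. It is an illustration of the idea only, not the library used in our work.

        /* Minimal sketch of interval addition with outward rounding.
         * Compile with:  gcc -std=c99 -frounding-math interval.c
         */
        #include <stdio.h>
        #include <fenv.h>

        #pragma STDC FENV_ACCESS ON

        typedef struct { double lo, hi; } interval;

        /* [a.lo, a.hi] + [b.lo, b.hi]: directed rounding guarantees the
         * exact sum lies inside the returned interval despite rounding error. */
        static interval int_add(interval a, interval b)
        {
            interval r;
            fesetround(FE_DOWNWARD);
            r.lo = a.lo + b.lo;
            fesetround(FE_UPWARD);
            r.hi = a.hi + b.hi;
            fesetround(FE_TONEAREST);    /* restore the default mode */
            return r;
        }

        int main(void)
        {
            interval x = {0.1, 0.1};     /* 0.1 is not exactly representable */
            interval sum = {0.0, 0.0};
            for (int i = 0; i < 10; i++)
                sum = int_add(sum, x);
            /* The enclosure shows how far ten additions of 0.1 can drift from 1.0. */
            printf("sum in [%.17g, %.17g]\n", sum.lo, sum.hi);
            return 0;
        }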

  • Software Distributed Shared Memory: The majority of computations performed on parallel systems use message passing models such as MPI, which is often referred to as the assembly language of parallel programming. We are interested in alternative approaches. Current work is focused on adapting shared memory parallel programming models, such as OpenMP, to run on clusters; a representative kernel is sketched after the reference below.
      H'sien J. Wong and A.P. Rendell, "The Design of MPI Based Distributed Shared Memory Systems to Support OpenMP on Clusters", Cluster07 (accepted, July 07).
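      The kernel below is a typical OpenMP loop of the kind such a system must support unchanged; on a cluster the shared arrays would be backed by distributed shared memory pages rather than a single node's memory. It is a generic example, not code from the system itself.

        /* A simple shared-memory OpenMP kernel (daxpy).
         * Compile with:  gcc -fopenmp daxpy_omp.c
         */
        #include <stdio.h>
        #include <stdlib.h>
        #include <omp.h>

        int main(void)
        {
            const int n = 1 << 20;
            double *x = malloc(n * sizeof *x);
            double *y = malloc(n * sizeof *y);
            const double a = 2.0;

            for (int i = 0; i < n; i++) { x[i] = 1.0; y[i] = 2.0; }

            /* On a cluster-based OpenMP system, each node's threads would
             * work on the iterations whose pages are local to that node. */
            #pragma omp parallel for schedule(static)
            for (int i = 0; i < n; i++)
                y[i] = a * x[i] + y[i];

            printf("y[0] = %f using %d threads\n", y[0], omp_get_max_threads());
            free(x);
            free(y);
            return 0;
        }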

  • Intelligent Scientific Software: Modern computing systems are becoming increasingly complex, and the optimal computational algorithm often depends on the runtime conditions of the moment. We are interested in developing intelligent runtime environments that can be used with the large body of existing scientific software. Current work uses DynInst to perform dynamic code modification, coupled with reinforcement learning techniques to direct that modification; a simplified sketch of the selection policy follows the reference below.
      W. Armstrong, P. Christen, E. McCreath and A.P. Rendell, "Dynamic Algorithm Selection Using Reinforcement Learning", Workshop on Integrating AI and Data Mining (AIDM 2006), held as part of the 19th Australian Joint Conference on Artificial Intelligence, Hobart, 4-8 December 2006.
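      The sketch below shows one simple reinforcement learning policy (epsilon-greedy value estimation) choosing between two hypothetical algorithm variants on the basis of measured runtime. It is a much reduced illustration of the idea; the actual work couples such learning with DynInst-driven code modification.

        /* Epsilon-greedy selection between algorithm variants (illustrative). */
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define NVARIANTS 2
        #define EPSILON   0.1

        static double value[NVARIANTS];  /* running average reward per variant */
        static int    count[NVARIANTS];

        static int select_variant(void)
        {
            if ((double)rand() / RAND_MAX < EPSILON)
                return rand() % NVARIANTS;   /* explore */
            int best = 0;
            for (int i = 1; i < NVARIANTS; i++)
                if (value[i] > value[best])
                    best = i;
            return best;                     /* exploit the current best */
        }

        static void update(int v, double reward)
        {
            count[v]++;
            value[v] += (reward - value[v]) / count[v];
        }

        /* Hypothetical variants: in practice these would be, e.g., different
         * loop orderings or library kernels for the same computation. */
        static double run_variant(int v) { return v == 0 ? 1.0 : 1.3; /* secs */ }

        int main(void)
        {
            srand((unsigned)time(NULL));
            for (int step = 0; step < 100; step++) {
                int v = select_variant();
                double elapsed = run_variant(v);
                update(v, -elapsed);         /* faster runs earn higher reward */
            }
            printf("estimated reward: variant0=%.3f variant1=%.3f\n",
                   value[0], value[1]);
            return 0;
        }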

  • Computational Science: We are interested in the development and application of large scale computational science codes. In current work we are studying the cleaning of the Al2O3 surface using Ga; these calculations run for days on high performance computing platforms. Related work is considering improved ways to generate fitting basis sets for density functional calculations that use periodic boundary conditions.
      R. Yang and A.P. Rendell, "First Principles Study of Gallium Atoms Adsorption on the alpha-Al2O3(0001) Surface", J. Phys. Chem. B 110, 9608-9618 (2006).

All publications

Research Opportunities

If you are interested in working in any area related to high performance computing, distributed computing, or computational science, please send me an email describing your interests.

  • PhD: The majority of scholarships are offered towards the end of each year for study starting between January and March of the following year. A small number of scholarships are available for students beginning mid-year. Further information can be found at the graduate school web site. We may also have funding for scholarships from other sources.

  • Undergraduate/Honours Projects: A list of possible projects that I have proposed for 2007 is available here. Other projects are available for honours or software engineering students, or as part of the Masters (eScience) program.

  • Vacation Scholarships: From November to February the College of Engineering and Computer Science offers a number of summer vacation scholarships. The closing date for these scholarships is typically the end of August each year. In addition to these scholarships we may have other funds to support enthusiastic and able vacation scholars.

  • Novel Hardware Projects for 2009: some background information

Teaching

   
  Cluster constructed by students of COMP3320 in 2005  

I have taught into the following courses:

Related publications:
  • A.P. Rendell, A Project Based Approach to Teaching Parallel Systems, Presented at the computational science education workshop at the 2006 International Conference on Computational Science, Reading, UK. Published in Lecture Notes in Computer Science, Springer Verlag, 3992, 155-160 (2006). (Preprint available here)

  • J. Roper and A.P. Rendell, Introducing Design Patterns, Graphical User Interfaces and Threads within the Context of a High Performance Computing Application. Presented at the computational science education workshop at the 2005 International Conference on Computational Science, Atlanta, USA. Published in Lecture Notes in Computer Science, Springer Verlag, 3515, 18 (2005) (Preprint available here)

  • R. Garg, I. Sharapov, and A.P. Rendell, Performance Programming: Theory, Practice and Case Study, Tutorial M5 presented at SC02, Baltimore, USA. (Copy available here)

Some on-line teaching modules