- Computer Architecture and Performance Modelling: Modern computer
systems increasingly incorporate multi-core processors and
non-uniform memory access (NUMA) memory architectures. We are interested
in developing software tools and methods that are appropriate for such
machines. This work is undertaken in partnership with Sun Microsystems
and Gaussian Inc as part of Australian Research Council (ARC) Linkage Grant
LP0347178 (2002-2006).
J. Antony, P.P. Janes and A.P. Rendell, "Exploring Thread and
Memory Placement on NUMA Architectures: Solaris
and Linux, UltraSPARC/FirePlane and Opteron/HyperTransport",
13th IEEE International Conference on High Performance Computing,
Lecture Notes in Computer Science 4297, 338-352 (2006).
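The thread-placement idea can be illustrated with a minimal, Linux-only sketch (a hypothetical helper, not the tooling used in the paper): restricting a thread to one CPU means that, under Linux's default first-touch policy, memory it subsequently allocates tends to land on that CPU's local NUMA node.

```python
import os

def pin_to_cpu(cpu):
    """Pin the calling thread to a single CPU (Linux only) and
    return the resulting affinity mask."""
    os.sched_setaffinity(0, {cpu})   # pid 0 = the calling thread
    return os.sched_getaffinity(0)

# Pick one CPU from the set we are currently allowed to run on,
# then restrict ourselves to it.
target = min(os.sched_getaffinity(0))
mask = pin_to_cpu(target)
```

Real NUMA studies combine this with an explicit memory-placement API (e.g. libnuma) rather than relying on first touch alone.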
- Cluster and Grid Computing: Computers assembled from commodity
parts have revolutionised high performance computing. Much work
remains, however, to improve the usability and efficiency of these
systems. We are interested in developing queueing systems that can
optimally allocate user processes to heterogeneous cluster systems, and
the use of virtualization to provide compute environments tailored for
a given application. This work is in partnership with Alexander
Technology and is supported by ARC Linkage Grant LP0669726 (2006-2009).
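As a toy illustration of the allocation problem (hypothetical code, not our queueing system): on a heterogeneous cluster a job's runtime depends on which node it lands on, so a simple greedy heuristic assigns each job, largest first, to the node on which it would finish earliest.

```python
def allocate(jobs, speeds):
    """Greedy scheduler sketch for heterogeneous nodes.
    jobs: list of work amounts; speeds: work-per-unit-time per node.
    Returns (placement dict job->node, makespan)."""
    free = [0.0] * len(speeds)   # time at which each node becomes idle
    placement = {}
    for j, work in sorted(enumerate(jobs), key=lambda p: -p[1]):
        # Finish time of this job on each candidate node.
        finish = [free[n] + work / speeds[n] for n in range(len(speeds))]
        n = finish.index(min(finish))
        free[n] = finish[n]
        placement[j] = n
    return placement, max(free)

# Two fast nodes and one slow node; the small job goes to the slow node.
plan, makespan = allocate([4.0, 4.0, 2.0], [2.0, 2.0, 1.0])
```

Production queueing systems must additionally handle job arrival over time, uncertain runtimes, and fairness, which is where the research problems lie.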
- Interval Arithmetic: In contrast to computing with a single
machine-representable floating-point number, interval computations
represent each quantity as a range of floating-point numbers between lower and
upper bounds. We are using interval arithmetic to study the effects of
rounding and truncation errors for a variety of computational science
applications. We are also interested in a variety of novel interval
algorithms, such as methods to rigorously determine global
minima. This work is supported by ARC Discovery Grant
DP0558228 (2005-2007).
A.P. Rendell, B. Clarke and J. Milthorpe,
"Interval Arithmetic and Computational Science: Performance
Considerations", 2006 International Conference on Computational
Science, Lecture Notes in Computer Science 3991, 218-225 (2006).
- Software Distributed Shared Memory: The majority of computations
performed on parallel systems use message passing models such as
MPI. MPI is often referred to as the assembly language of parallel
programming. We are interested in alternative approaches. Current work
is focused on adapting shared memory parallel programming models, such
as OpenMP to run on clusters.
H'sien J. Wong and A.P. Rendell, "The Design of MPI Based
Distributed Shared Memory Systems to Support OpenMP on Clusters",
Cluster07 (accepted, July 2007).
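The core difficulty in running OpenMP over a cluster is keeping cached copies of shared data coherent across nodes. The following toy simulation (hypothetical code, standing in for what a real system does with MPI messages) models one page under a single-writer invalidation protocol, as used by page-based software DSM:

```python
class DSMPage:
    """Toy model of one page in a page-based software DSM.
    A write invalidates every other node's cached copy, so the next
    read on those nodes must re-fetch from the home node (a remote
    MPI transfer in a real system; here just a counter)."""
    def __init__(self, n_nodes):
        self.home = None                   # authoritative value at home node
        self.valid = [False] * n_nodes     # per-node cache validity
        self.cache = [None] * n_nodes
        self.fetches = 0                   # simulated remote fetches

    def read(self, node):
        if not self.valid[node]:           # miss: fetch from home
            self.cache[node] = self.home
            self.valid[node] = True
            self.fetches += 1
        return self.cache[node]

    def write(self, node, value):
        self.home = value
        self.valid = [False] * len(self.valid)  # invalidate all copies...
        self.valid[node] = True                 # ...except the writer's
        self.cache[node] = value

page = DSMPage(n_nodes=2)
page.write(0, 42)
v = page.read(1)   # node 1 misses and fetches the new value
```

Minimising how often such invalidations and fetches occur, while preserving OpenMP's memory-consistency semantics, is the central design problem.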
- Intelligent Scientific Software: Modern computing systems are
becoming increasingly complex. Often the optimal computational algorithm
will depend on the current runtime conditions. We are interested in
developing intelligent runtime environments that can be used with the
large body of existing scientific software. Current work is using
DynInst to perform dynamic code modification coupled with
reinforcement learning techniques to direct the code modification.
W. Armstrong, P. Christen, E. McCreath,
A.P. Rendell, "Dynamic Algorithm Selection Using Reinforcement
Learning", Workshop on Integrating AI and Data Mining (AIDM
2006), held as part of the 19th Australian Joint Conference on
Artificial Intelligence, Hobart, 4-8 December, 2006.
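The selection idea can be sketched with a simple epsilon-greedy bandit (a toy stand-in for the reinforcement-learning machinery, with negative runtime as the reward signal):

```python
import random

class EpsilonGreedySelector:
    """Toy epsilon-greedy bandit for runtime algorithm selection:
    track a running mean reward per algorithm variant, usually pick
    the best-so-far, and explore a random variant occasionally."""
    def __init__(self, n_arms, epsilon=0.1, seed=0):
        self.counts = [0] * n_arms
        self.means = [0.0] * n_arms
        self.epsilon = epsilon
        self.rng = random.Random(seed)

    def select(self):
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.means))   # explore
        return max(range(len(self.means)), key=lambda a: self.means[a])

    def update(self, arm, reward):
        self.counts[arm] += 1
        # Incremental running mean of observed rewards.
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]

sel = EpsilonGreedySelector(n_arms=2, epsilon=0.1, seed=1)
# Pretend variant 1 is consistently faster: reward = -runtime.
for _ in range(200):
    arm = sel.select()
    sel.update(arm, -1.0 if arm == 0 else -0.2)
```

In the actual work the "arms" are alternative code paths patched in at runtime with DynInst, and the rewards come from measured execution times rather than fixed constants.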
- Computational Science: We are
interested in the development and application of large scale
computational science applications. In current work we are studying
the cleaning of the Al2O3 surface using Ga. These calculations run for
days on high performance computing platforms. Related work is
considering improved ways to generate fitting basis sets for
density functional calculations that use periodic boundary
conditions.
R. Yang and A.P. Rendell, "First
Principles Study of Gallium Atoms adsorption on the alpha-Al2O3(0001)
Surface", J. Phys. Chem. B 110, 9608-9618 (2006).
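A small ingredient of any periodic-boundary-condition calculation can be made concrete (a standard minimum-image distance routine for an orthorhombic cell, illustrative only and unrelated to the production codes used above):

```python
def minimum_image_distance(a, b, box):
    """Distance between points a and b under periodic boundary
    conditions in an orthorhombic box, using the minimum-image
    convention: each displacement component is wrapped to the
    nearest periodic image."""
    d2 = 0.0
    for ai, bi, L in zip(a, b, box):
        d = ai - bi
        d -= L * round(d / L)   # shift by whole box lengths
        d2 += d * d
    return d2 ** 0.5

# Two atoms near opposite faces of a 10-unit box are actually
# close neighbours through the periodic boundary.
r = minimum_image_distance((0.5, 0.0, 0.0), (9.5, 0.0, 0.0),
                           (10.0, 10.0, 10.0))
```

In plane-wave and Gaussian-basis periodic DFT codes this periodicity is handled in reciprocal space, but the same wrap-around geometry underlies the real-space picture.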