Honours topics
Half of your time as an honours student is spent working on a project. But first you have to find a project topic.
The "official" reference for projects proposed by potential supervisors is the CECS projects database.
There are projects available there for all levels of research, including short projects, summer scholarship projects, Honours projects, Masters projects and PhD projects. All potential research students at any level are urged to browse it.
If you see a project that looks interesting, email the potential supervisor about it. Don't be afraid to discuss possible variations to the listed project: what appears on the web site is generally more of a suggestion than a rigid specification.
You don't have to be constrained by what you see on the project website. If you have something that you would like to work on as a project, then feel free to discuss it with the honours convener to see if it could form the basis of an honours project and to identify a possible supervisor. Look at the web pages of Computer Science staff to find out their research interests. Remember that projects may also be supervised by people outside the College, or even outside the University: from CSIRO or NICTA, for instance.
Former Project Topics
For your interest, here is an archive of Honours project proposals from previous years. Some of them are quite ancient now, of course, but they may help to give you ideas of the kind of thing which is suitable.
2002 Honours project proposals
Where am I? - Get the machine connected to the real world (Localization)
This topic was taken by a student.
Contact: Uwe R. Zimmer
The key issue for physical agents (robots), in contrast to software agents, is that they operate in real-world space and real time. The benefit they gain is a rich model of the world, which is the (local) real world itself, provided they can keep their internal representations in synchronization with the environment around them. We therefore need localization algorithms which are highly efficient and robust, so as to keep pace with the stream of sensations passing by, and which are powerful enough to handle complex sensor data and adequate spatio-temporal models.
There is a set of known methods on which your project can be based, and just as many challenging questions which you might address. The direction you choose for your honours project depends on the degree of robustness and global consistency, as well as the level of dynamics and sensor complexity, that you are going to allow for.
Multiple vehicles in RSISE might be employed to test your strategy. Among them are an autonomous submersible, an indoor land robot and an autonomous car.
Besides good programming skills (your system will be concurrent and will have real-time features), you should be able to identify and locate problems in complex systems and you should have some relationship with the real world yourself :-) (especially where you are right now).
The abilities you will gain could convince your future employer that you are able to handle physical, real world systems (e.g. embedded systems) or your future PhD supervisor that you have already taken a peek outside your main discipline and are able to think across faculty borders.
Please see me to get more information on your possibilities in this field.
Object recognition from Video sequences
Potential supervisor: Richard Hartley
Object recognition is one of the basic problems of computer vision, with wide applications in manufacturing, navigation and robot-world interaction. To date, most of the research in object recognition has been directed at recognition from a single image. In this project, we wish to develop algorithms for recognition of objects from video sequences, taken with a hand-held camera. This means that no knowledge is available as to the motion of the camera, which must be deduced from the input image sequence - however, software is available which will carry out this part of the task. Steps in the recognition process include edge finding, contour detection and contour tracking.
A student working on this project will need to have good programming skills in C++, and a reasonable mathematical background.
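As a rough illustration of the early stages of such a pipeline, here is a minimal sketch of edge finding and contour detection on a single frame, written in Python with the OpenCV library (the project itself calls for C++; the file name and thresholds are placeholders):

```python
import cv2

cap = cv2.VideoCapture("sequence.avi")      # placeholder input video
ok, frame = cap.read()
if ok:
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(grey, 50, 150)         # edge finding
    # contour detection (OpenCV 4 return signature)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    print("found", len(contours), "contours in the first frame")
cap.release()
```

Contour tracking would then associate these contours across successive frames of the sequence.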
Efficient Tableaux Provers for Modal Logic Using Mercury
Supervisor: Rajeev Gore, DCS and CSL, ANU
Mercury is a new logic programming language invented at the University of Melbourne (http://www.cs.mu.oz.au/research/mercury/). It produces C code and combines the advantages of functional and logic programming. It has a built-in backtracking mechanism which makes it ideal for search problems.
Tableau calculi are now routinely used to determine whether or not a particular statement A is a logical consequence of another collection of statements Gamma in some (non-classical) logic L. Such provers have applications in Hardware Verification, Artificial Intelligence and Hybrid Systems.
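To give a flavour of the method, here is a toy tableau-style satisfiability check for classical propositional logic, with formulas written as nested tuples over 'not', 'and' and 'or'. The project itself targets a modal logic and an implementation in Mercury, so this Python sketch is purely an illustration of the branching idea:

```python
def satisfiable(branch):
    """True iff some fully expanded tableau branch for `branch` stays open."""
    branch = frozenset(branch)
    if any(('not', f) in branch for f in branch):   # branch closes on F and not-F
        return False
    for f in branch:
        if isinstance(f, tuple):
            rest, op = branch - {f}, f[0]
            if op == 'and':
                return satisfiable(rest | {f[1], f[2]})
            if op == 'or':                           # branching rule
                return satisfiable(rest | {f[1]}) or satisfiable(rest | {f[2]})
            if op == 'not' and isinstance(f[1], tuple):
                g = f[1]
                if g[0] == 'not':
                    return satisfiable(rest | {g[1]})
                if g[0] == 'and':                    # de Morgan, branching
                    return satisfiable(rest | {('not', g[1])}) or satisfiable(rest | {('not', g[2])})
                if g[0] == 'or':
                    return satisfiable(rest | {('not', g[1]), ('not', g[2])})
    return True                                      # only literals left, no clash

def consequence(gamma, a):
    # A is a logical consequence of Gamma iff Gamma + {not A} has no open branch
    return not satisfiable(set(gamma) | {('not', a)})

print(consequence({'p', ('or', ('not', 'p'), 'q')}, 'q'))   # True
```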
Previous research of ours has shown that Mercury is an ideal language for programming such provers. The project is to continue this research by implementing a prover for a particular logic in Mercury. The logic is of fundamental importance in Hybrid Systems.
Background: You will need a strong background in theoretical computer science or mathematics. A grounding in logic would be useful but is not essential as there is plenty of local expertise in this area. This project is ideal for students who wish to pursue a PhD in any area of theoretical computer science.
Details: You will need to become familiar with sequent and tableau proof calculi for various nonclassical logics, and become familiar with modal logic (local expertise abounds). You will need to become familiar with Mercury (non-local expertise available). This project is likely to be extendible to a PhD topic.
Implementing a Prover for Transitive Tense Logics using the Logics Work Bench
Supervisor: Rajeev Gore, DCS and CSL, ANU
Project: The Logics Work Bench (http://www.lwb.unibe.ch/) is a suite of efficient theorem provers for classical and various nonclassical propositional logics. All procedures are based upon sequent or tableau calculi. The LWB contains a programming language with which implementors can write further procedures for their own favourite propositional logic. Recent research has led to the definition of decision procedures for transitive tense logics which model time as a sequence of branching or linear points. Such a model of time is of fundamental importance in applications in Artificial Intelligence, Hardware Verification, and Hybrid Systems. The project is to use the in-built programming language of the LWB to implement these new decision procedures for tense logic Kt.S4.
Background: You will need a strong background in theoretical computer science or mathematics. A grounding in logic would be useful but is not essential as there is plenty of local expertise in this area. This project is ideal for students who wish to pursue a PhD in any area of theoretical computer science.
Details: You will need to become familiar with sequent and tableau proof calculi for various nonclassical logics, and become familiar with modal logic (local expertise abounds). You will need to become familiar with the inner workings of the LWB (non-local expertise available). You will have to understand the theoretical algorithms, and translate them into a working prototype to be included in the LWB.
This project is likely to be extendible to a PhD topic.
Object-Oriented Computational Science Software Development
Supervisor: Alistair Rendell
E-mail: Alistair.Rendell@anu.edu.au
Phone: 6125 4386
Development of a Flexible Interface for Molecular Science Codes
Often we have a large molecular system that we wish to break down into regions that are treated using different computational methods - so-called hybrid models. In practice this can mean using different codes to do the different parts of the computation. Some of these codes may be commercial packages, in which case our ability to modify them may be limited. The aim of this project is to build a flexible "glue" that we have control over, and that we can use to couple the third-party programs together. But this is more than a complicated shell script - the glue will do real work, like computing optimal structures for molecular configurations.
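A minimal sketch of what such a glue layer might look like, assuming each third-party package can be driven from the command line and prints a line such as "TOTAL ENERGY = ..."; the program names, output format and the simple additive combination are all assumptions for illustration:

```python
import re
import subprocess

def external_energy(program, xyz_file):
    """Run a third-party code on a geometry file and scrape its total energy."""
    out = subprocess.run([program, xyz_file], capture_output=True, text=True, check=True)
    match = re.search(r"TOTAL ENERGY\s*=\s*(-?\d+\.\d+)", out.stdout)
    return float(match.group(1))

def hybrid_energy(inner_xyz, outer_xyz):
    # naive additive hybrid: an expensive code for the inner region,
    # a cheaper one for the rest (program names are placeholders)
    return external_energy("quantum_code", inner_xyz) + \
           external_energy("cheap_code", outer_xyz)
```

A real glue layer would also map gradients between regions and drive a geometry optimiser, which is where the project goes beyond a shell script.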
Genetic Algorithms for Optimisation of Real Parameters
Genetic algorithms (GA) came on the scene 10-15 years ago. The basic idea is that you approach an optimisation problem in the same way that we, over time, have been optimised for the climate in which we now live (well, some of us!). The question is, are these methods more hype than substance? We are interested in using GA methods to fit a bunch of real parameters to a rather complex hypersurface. But how best to represent and mutate our real parameters is a non-trivial question. Also, should we combine GA with other optimisation strategies?
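A minimal real-coded GA sketch, with Gaussian mutation and blend crossover on a toy objective; representation choices like these (how to mutate, how to recombine real values) are exactly the design questions the project would explore:

```python
import random

def fitness(params):                       # toy objective: minimise sum of squares
    return -sum(p * p for p in params)

def mutate(params, sigma=0.1):
    return [p + random.gauss(0.0, sigma) for p in params]

def crossover(a, b):
    return [(x + y) / 2.0 for x, y in zip(a, b)]     # simple blend crossover

def ga(dim=4, pop_size=30, generations=200):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]               # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

print(ga())
```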
An Object-Oriented Integral Code
For the mathematically inclined. Computation of integrals over Gaussian functions forms the core of almost all quantum chemistry and molecular physics codes. Schemes to evaluate these integrals have evolved over many years and are complex, involving, for example, tree searches, interpolation, recursion, and numerical quadrature. As a consequence of this evolutionary path, today's "production" codes are now extremely hard to follow and modify. The aim of this project is to design and develop a well-documented, moderately efficient, O-O integral code that can be used in future work to explore novel computational methods.
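For orientation, the very simplest case such a code must handle is the overlap integral between two s-type Gaussians, which has a closed form via the Gaussian product theorem; a sketch with unnormalised primitives and illustrative exponents is below. The complexity of real integral codes comes from generalising this to higher angular momenta via recursion:

```python
import math

def s_overlap(a, centre_a, b, centre_b):
    """Overlap of exp(-a|r-A|^2) and exp(-b|r-B|^2) via the Gaussian product theorem."""
    ab2 = sum((x - y) ** 2 for x, y in zip(centre_a, centre_b))
    p = a + b
    return (math.pi / p) ** 1.5 * math.exp(-a * b / p * ab2)

print(s_overlap(0.5, (0.0, 0.0, 0.0), 0.8, (0.0, 0.0, 1.4)))
```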
High Performance Parallel Programming
Supervisor: Alistair Rendell
E-mail: Alistair.Rendell@anu.edu.au
Phone: 6125 4386
OpenMP for Non-Uniform Memory Access (NUMA) Computers
This topic was taken by a student.
Jointly supervised by Compaq's man on the ground (Lindsay Hood), this project will target programming models for the Compaq GS system. Specifically, Compaq has proposed a series of extensions to the OpenMP shared memory programming paradigm that are designed to account for the NUMA architecture of the GS. But information on, for example, how useful these extensions are, what performance enhancements they offer, and how they compare with traditional message passing is, at best, limited. The aim of this project will be to consider these issues. You will be working closely with a major computer company, and early access to Compaq's next generation hardware based on the Alpha EV7 processor may be negotiable.
Shared Arrays via one-sided MPI-2 Communications
The second Message Passing Interface standard (MPI-2) supports so-called "one-sided" communications. In this model the transfer of data from one process to another does not require the cooperation of both processes, but can be completely specified by just one process. The aim of this project will be to use these one-sided communications to build a restricted version of either the Global Array (GA) library or the Distributed Data Interface (DDI). That is, we wish to define physically distributed array objects but permit access to these objects by any process in our parallel job. The advantage of using MPI-2 is that it is a standard that is portable between machines; in contrast, current GA or DDI implementations are hardware-specific.
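A minimal sketch of the idea using the mpi4py bindings: each process exposes its slice of a distributed array in an RMA window, and any process can read a remote slice with a one-sided Get. The launcher command, slice size and access pattern are illustrative only:

```python
# run with, e.g., `mpirun -np 4 python shared_array.py` (assumed launcher)
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local_len = 8                                         # elements owned per rank
local = np.full(local_len, float(rank))               # this rank's slice of the array
win = MPI.Win.Create(local, disp_unit=local.itemsize, comm=comm)   # expose it in a window

# one-sided read of element 3 of the slice owned by the next rank
owner = (rank + 1) % size
buf = np.zeros(1)
win.Lock(owner, MPI.LOCK_SHARED)
win.Get([buf, MPI.DOUBLE], owner, target=[3, 1, MPI.DOUBLE])
win.Unlock(owner)

print(f"rank {rank} read {buf[0]} from rank {owner}")
win.Free()
```

A GA/DDI-style library would wrap this in global-index bookkeeping and accumulate operations, but the one-sided transfer is the core primitive.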
Distributed Shared Memory (DSM)
A number of models exist for providing the illusion of shared memory on physically distributed memory machines. Two examples are TreadMarks (Rice University) and Adsmith (National University of Taiwan). Both implementations must consider issues like how memory pages are replicated and how to maintain consistency between the memory pages on different processors. The aim of this project would be to research current DSM implementations, then install one model on the Bunyip cluster and explore its utility for a small number of application kernels.
Agent Negotiation
| Supervisor: | Roger Clarke |
|---|---|
| E-mail: | Roger.Clarke@anu.edu.au |
| Phone: | (02) 6288 1472 |
Investigate the practicability of implementing agents, and negotiations between agents, in particular by measuring the increase in the complexity of code as the complexity of the interactions between the two agents increases.
The suggested implementation context is the W3C's Platform for Privacy Preferences (P3P) specification. This defines how client-side software can store people's privacy preferences, and server-side software can store privacy policy statements by corporations and government agencies. The two agents can then negotiate with one another in order to permit a transaction to be entered into (such as the provision of shoe-size and credit-card details), to prevent the transaction, or to refer a mismatch to the consumer for a decision.
It is envisaged that a succession of prototypes of increasing completeness would be implemented, and key process and product factors would be measured. This would depend on a thorough appreciation of theories relating to agents, P3P, software development and maintenance, software complexity, and development and maintenance productivity.
Some background reading is at:
- http://www.anu.edu.au/people/Roger.Clarke/DV/P3POview.html
- http://www.anu.edu.au/people/Roger.Clarke/DV/P3PCrit.html
Conception, Design and Implementation of Nyms
This topic was taken by a student.
Supervisor: Roger Clarke (Visiting Fellow), http://www.anu.edu.au/people/Roger.Clarke
E-mail: Roger.Clarke@anu.edu.au
Phone: (02) 6288 1472
Most network protocols deal in Entities and Identifiers.
An Entity (which covers things as diverse as a person, a company, a network-connected device, and a process) has precisely one Identity (which is, very roughly speaking, its 'essence'. In the case of a device or process, that might be operationalised as the specification of the functions it performs).
An Entity has one or more Identifiers, each of which is a data-item or group of data-items which reliably distinguishes it from other Entities, especially those of the same class.
In complex networks, this model is too simplistic, and two additional concepts are necessary.
A Role is a particular presentation of an Entity. An Entity may have many Roles; and a Role may be associated with more than one Entity. As examples, think of a SIM Card as an Entity, and the multiple Mobile-Phone housings into which it is successively placed as Roles; and then there are the many Roles that you play yourself, as student, worker, sportsperson, voter, dole-bludger, scout-master, tax-payer, lover ... There are various ways in which you can accidentally or on purpose enable someone else to adopt one of your Roles (e.g. give them your password; although there are juicier examples than that).
A Nym is a data-item or group of data-items which reliably distinguishes a Role. However, because a Role is not reliably related to an Entity, there is no reliable mapping between a Nym and the underlying Entity or Entities (i.e. the mapping is not only m:n, but it's not determinable).
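A rough sketch of this data model in code form, just to make the relationships concrete; the field names and sample values are illustrative only, and the essential point is that nothing stored in a Nym links it back to the underlying Entity:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    identifiers: list                               # e.g. passport number, device serial
    roles: list = field(default_factory=list)

@dataclass
class Role:
    nym: str                                        # the data-item that distinguishes the Role
    entities: list = field(default_factory=list)    # m:n, and not reliably determinable

person  = Entity(identifiers=["passport N1234567"])
student = Role(nym="u1234567")
worker  = Role(nym="casual_tutor_42")
person.roles.extend([student, worker])              # one Entity, many Roles
student.entities.append(person)                     # a Role may map to several Entities
print(student.nym, len(person.roles))
```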
I'm particularly interested in Nyms because of the vital part they are going to play in keeping us reasonably sane and reasonably free, as the State and the Hypercorps increasingly abuse personal data in order to impose themselves more and more on individuals. But of course they could turn out to be important within open networks too, quite independently of privacy concerns. I'd like to see some serious research done, drawing on the emergent literature, and performing some laboratory experimentation.
Here are some starting materials (these happen to be on my own site, but they point to plenty of other sources as well):
- Concepts
- human identity
- tools
- digital persona
- PKI
- Notes on the relevant section of the Computers, Freedom & Privacy Conference in 1999
- some references
- Intro (needs some updating)
- Inet
- tracking crims
File-Sharing Technologies
Supervisor: Roger Clarke (Visiting Fellow), http://www.anu.edu.au/people/Roger.Clarke
E-mail: Roger.Clarke@anu.edu.au
Phone: (02) 6288 1472
If you didn't do COMP3410 - Information Technology in Electronic Commerce in Semester 2, 2000, the assignment that I set was: "Enormous tensions currently exist between, on the one hand, the need for musicians and music publishers to earn revenue and, on the other, the desire of consumers to get their music for free. Provide constructive suggestions as to how technology might be used to address these problems".
There are some leads in the slides for Lecture 5, at: http://www.anu.edu.au/people/Roger.Clarke/EC/ETIntro.html#LOutline.
File-sharing technologies started out as centralised repositories, then became centralised directories of dispersed repositories (the Napster model), and are rapidly maturing into forms in which both the repositories and the directories are dispersed (the Gnutella model).
But do they work? On the one hand, are there still choke-points? And on the other, is it feasible to run both anarchic, revenue-denying schemes and paid services all using the one architecture? Are there significant differences among the emergent products? Is a taxonomy feasible? And does such a taxonomy lead to the discovery of variants that no-one's implemented yet?
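A toy in-process sketch contrasting the two lookup architectures: a central index (the Napster-style choke-point) versus a TTL-limited flood over peers that each hold only their own directory (the Gnutella model). The peer graph and file names are invented:

```python
peers = {
    "a": {"files": {"song1.mp3"}, "neighbours": ["b", "c"]},
    "b": {"files": {"song2.mp3"}, "neighbours": ["a", "d"]},
    "c": {"files": set(),         "neighbours": ["a", "d"]},
    "d": {"files": {"song3.mp3"}, "neighbours": ["b", "c"]},
}

# Napster model: one central index mapping file -> holder (a choke-point)
central_index = {f: p for p, info in peers.items() for f in info["files"]}

def napster_lookup(filename):
    return central_index.get(filename)

# Gnutella model: flood the query to neighbours with a time-to-live
def gnutella_lookup(start, filename, ttl=3):
    frontier, seen = [start], {start}
    for _ in range(ttl):
        nxt = []
        for p in frontier:
            if filename in peers[p]["files"]:
                return p
            for n in peers[p]["neighbours"]:
                if n not in seen:
                    seen.add(n)
                    nxt.append(n)
        frontier = nxt
    return None

print(napster_lookup("song3.mp3"), gnutella_lookup("a", "song3.mp3"))
```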
Here are some starting materials:
- http://www.anu.edu.au/people/Roger.Clarke/EC/FDST.html
- http://www.anu.edu.au/people/Roger.Clarke/EC/KingEP.html
- A (still growing, probably incomplete) catalogue of technologies: http://www.anu.edu.au/people/Roger.Clarke/EC/FDST.html#Friends
- http://www.anu.edu.au/people/Roger.Clarke/EC/Bled2K.html
Keyword-Based Approaches To Text Comparison
| Supervisor: | Peter Strazdins |
|---|---|
| E-mail: | Peter.Strazdins@anu.edu.au |
| Phone: | (02) 6125 5041 |
Extending the Sparc-Sulima Simulator for Clusters
| Supervisor: | Peter Strazdins |
|---|---|
| E-mail: | Peter.Strazdins@anu.edu.au |
| Phone: | (02) 6125 5041 |
Symbolic Optimization of Robot Dynamics Calculations
Supervisor: Roy Featherstone
Background: Many scientific and engineering calculations involve matrices, and these matrices often have special properties; for example, they might be symmetric, diagonal, or contain lots of zeros and ones. If the software performs matrix arithmetic by calling generic matrix multiplication and addition routines then the computer does a lot of unnecessary calculation, perhaps a factor of ten more than necessary. Symbolic optimization is a way to solve this problem: first the calculation is performed symbolically, then an optimizer simplifies the expressions and removes all unnecessary calculations, then a code generator converts the remaining expressions into computer source code which, when compiled and run, will perform only the necessary calculations.
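A small sketch of the idea using the SymPy library: a symbolic product in which the structural zeros and ones disappear, an operation count showing the saving over a generic multiply, and C source emitted for one of the surviving expressions. The project itself applies this to robot dynamics algorithms rather than this toy rotation:

```python
import sympy as sp

c, s, x, y = sp.symbols('c s x y')
R = sp.Matrix([[c, -s, 0],
               [s,  c, 0],
               [0,  0, 1]])           # rotation matrix: many structural zeros and ones
v = sp.Matrix([x, y, 0])

w = R * v                             # symbolic product; zeros and ones vanish
print(w)                              # Matrix([[c*x - s*y], [c*y + s*x], [0]])
print(sum(sp.count_ops(e) for e in w))   # far fewer operations than a generic 3x3 multiply
print(sp.ccode(w[0]))                 # emit C source for one component: "c*x - s*y"
```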
Project: Write a program to symbolically optimize the dynamics calculations for several robots, and compare your results with existing literature on the subject. Write a report on your findings. (A sufficiently good report could potentially be presented at a robotics conference.) Algorithms will be supplied (and explained), along with pointers to the literature and some source code. The recommended programming language is C++.
Required Skills: You must be fluent with object-oriented programming concepts, algorithms that search and operate on binary trees, and basic matrix arithmetic (addition, subtraction, etc.). You should be familiar with C++; and it is highly desirable that you understand the basics of rigid-body mechanics (mass, force, acceleration, 3D vectors, centre of mass, Newton's laws, etc.). You will also need to exercise some scientific judgement in comparing your results with those published by other researchers.
Benefits: This would be a suitable project for anyone considering a career at the mathematical or computational end of the physical sciences or engineering. It could also be a valuable stepping stone towards a career in compiler writing, computer algebra, scientific computing, simulation or robotics.
Exploring Object Placement for Cluster JVM Performance
This topic was taken by a student.
| Supervisor: | Ramesh Sankaranarayana and John N Zigman |
|---|---|
| E-mail: | john@cs.anu.edu.au |
| Phone: | (02) 6125 8196 |
Induction in a large temporal domain - the intelligent file pre-fetcher for Linux
Supervisor: Eric McCreath
Machine learning may be defined as learning a description of a concept from a set of training examples. This description may then be used for prediction. This project involves investigating the application of machine learning within an operating system to pre-fetch files. The central outcome of this research will be an analysis of machine learning approaches for undertaking such a task. This would also involve forming a theoretical model of the domain in question, which would be vital for the analysis.
Details:
Operating Systems often employ read-ahead windowing approaches for sequentially accessed files. Basically the next part of a file is pre-fetched into memory even before it is requested. This greatly improves performance of the system.
Often when we interact with computers the same files are required during our interaction. For example we may always sit down and start-up our mail reader. If the computer could learn this pattern of usage and pre-fetch these files the responsiveness of the system could be improved. This project involves developing such a system and investigating its potential.
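One very simple form such learning could take is a first-order model that counts which file tends to be opened after which, sketched below; the real project would have to work at kernel level with far larger and noisier traces, and the file names here are invented:

```python
from collections import defaultdict, Counter

class NextFilePredictor:
    def __init__(self):
        self.followers = defaultdict(Counter)
        self.previous = None

    def observe(self, path):                     # called on every file open
        if self.previous is not None:
            self.followers[self.previous][path] += 1
        self.previous = path

    def prefetch_candidates(self, path, k=2):    # files to read ahead of demand
        return [f for f, _ in self.followers[path].most_common(k)]

p = NextFilePredictor()
for f in ["~/.muttrc", "~/Mail/inbox", "~/.muttrc", "~/Mail/inbox", "~/.muttrc", "~/sig"]:
    p.observe(f)
print(p.prefetch_candidates("~/.muttrc"))        # ['~/Mail/inbox', '~/sig']
```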
Within machine learning there are a number of issues that make this project both challenging and interesting. These include: the temporal nature of the data, the large number of examples, and how the learnt hypothesis could be applied to recommend which files to pre-fetch. Also within Operating Systems there are a number of challenges, such as how the system hooks into the kernel and how to measure the performance of the system when it is functioning with the operating system.
The results of this research have the potential to be published in both Operating Systems and Machine Learning venues. The project would also form a good starting point for graduate study.
Students will need to be able to reason logically and should have done well in theory subjects (either within Mathematics or Computer Science). Successful completion will require both determination and imagination. Finally, students considering embarking on this project must have good programming skills and will need to gain an understanding of the Linux kernel.
Parallel Techniques for High-Performance Record Linkage
This topic was taken by a student.
Supervisor: Peter Christen
E-mail: Peter.Christen@anu.edu.au
Many organisations today collect massive amounts of data in their daily businesses. Examples include credit card and insurance companies, the health sector (e.g. Medicare), police/intelligence or telecommunications. Data mining techniques are used to analyse such large data sets to find patterns and rules, or to detect outliers. Often several data sets have to be linked to obtain more detailed information. As most data is not primarily collected for data analysis purposes, a common unique identifier (like a patient number) is missing in many cases, so that probabilistic techniques have to be applied to link data sets. Record linkage is a rapidly growing field with applications in many areas and it is an important initial step in many data mining projects.
The ANU Data Mining Group is currently working in collaboration with the NSW Health Department, Epidemiology and Surveillance Branch on the improvement of probabilistic techniques for record linkage. We are mainly interested in developing high-performance techniques for linkage that can be run on parallel computers like the APAC National Facility (a Compaq supercomputer with 480 processors).
Students involved in this project would contribute with prototype and parallel algorithm development. The tools we are using are the scripting language Python and OpenMP and MPI for parallel programming.
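As a toy illustration of probabilistic linkage without a shared identifier, the sketch below compares a few fields, adds an agreement weight for each match, and declares a link when the total clears a threshold. The fields, weights and threshold are invented; real systems derive such weights statistically:

```python
def field_score(a, b, weight):
    # agreement adds the weight, disagreement (or a missing value) subtracts it
    return weight if a and b and a.strip().lower() == b.strip().lower() else -weight

def link_score(rec_a, rec_b):
    return (field_score(rec_a["surname"],   rec_b["surname"],   4.0) +
            field_score(rec_a["birthdate"], rec_b["birthdate"], 5.0) +
            field_score(rec_a["postcode"],  rec_b["postcode"],  2.0))

a = {"surname": "Smith", "birthdate": "1970-01-02", "postcode": "2601"}
b = {"surname": "SMITH", "birthdate": "1970-01-02", "postcode": "2600"}
print(link_score(a, b), "-> link" if link_score(a, b) > 5.0 else "-> no link")
```

On parallel machines the expensive part is comparing the huge number of candidate record pairs, which is what the high-performance side of the project addresses.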
Multi-format Document Standards
Contact: Tom Worthington , Ian Barnes, Roger Clarke, Ramesh Sankaranarayana
Investigate open standards to allow an academic "paper" and accompanying audio-visual presentation to be prepared as one electronic document. Implement an open source software prototype demonstrating similar features to a word processor, web tool, presentation package and AV package. All functions should work on the one document, rendered as a typeset printed document, as a web page, as a live "slide show" and as a pre-recorded audio-visual presentation with audio, video and synchronised slides. The software should generate documents incorporating accessibility features for the disabled in conformance with the W3C Web Content Accessibility Guidelines. Document text, images and other content would be shared by all tools (for example, the text of the WP document would be the default notes for the slide show and also the default captions for the deaf on the video).
Part of the Scholarly Communications System Prototype. See: http://www.tomw.net.au/2000/scsp.html
Server/Browser Protocols for Available Bandwidth
Contact: Tom Worthington , Ian Barnes, Roger Clarke, Ramesh Sankaranarayana
Investigate open standards for web servers and browsers to negotiate content formats to suit the user's requirements and bandwidth available. Implement an open source demonstration. Implement content translation tools, where servers and browsers do not support suitable formats. As an example the resolution of images would be reduced to suit small screens and low bandwidth links, video would be converted to low resolution still key frames and synchronised audio. The system would be capable of displaying a multi-media presentation with audio and "talking head" video in real time on a hand held device with a medium speed wireless Internet connection and on a set-top box web browser, as well as more conventional desktop computers. Accessibility features, as described in W3C Web Content Accessibility Guidelines, would be integrated with bandwidth and multimedia features (for example the notes of a live presentation would be the default closed caption for the video presentation and also replace the audio where bandwidth was limited).
Part of the Scholarly Communications System Prototype. See: http://www.tomw.net.au/2000/scsp.html
Automatic Web Page Layout
Contact: Tom Worthington , Ian Barnes, Roger Clarke, Ramesh Sankaranarayana
Investigate artificial intelligence algorithms for automatically laying out web pages and produce open source prototype software. Document layout "hints" for different renderings of the document (print, web, slideshow and AV) would be explicitly encoded in the document (using XML or a similar format) or would be inferred from an existing screen layout. Documents would be rendered to suit the user's requirements and the capabilities of their display device and communications link, through features in the display device and/or in a server (for low capability display devices). As an example, multiple frames would be used on large screens and one frame with links on small screens. The software would generate documents incorporating accessibility features for the disabled as described in the W3C Web Content Accessibility Guidelines. Multiple renderings of information objects (for example multiple language versions for text, text captions for images) would be available.
Part of the Scholarly Communications System Prototype. See: http://www.tomw.net.au/2000/scsp.html
Performance Analysis using Hardware Counters
Supervisors: Alistair Rendell and Peter Christen
E-mail: Alistair.Rendell@anu.edu.au / Peter.Christen@anu.edu.au
Modern processors and computer systems are designed to be efficient and achieve high performance with applications that have regular memory access patterns. For example, matrix-matrix multiplication and other dense linear algebra routines can usually be programmed to achieve near peak performance. While such routines have traditionally formed the core of many scientific and engineering applications, efforts to extend these computations to much larger systems have involved the use of sparse data structures. The irregular memory access patterns associated with such structures, however, often give rise to a marked decrease in the fraction of peak performance that is achieved. Similar irregular memory access issues affect many commercial applications like database servers and decision support systems (data mining).
This project aims to analyse the performance of various applications by using hardware performance counters via libraries like the Solaris CPC or PAPI. Such counters are based on processor registers that can be set by a user to count events like cache hits and misses, the number of load, store or floating-point instructions, etc.
The first part of this project will involve instrumenting various applications with performance counters and understanding the measured data. Work will start with simple programs but progress to real scientific, engineering and commercial applications from various areas like:
- Linear Algebra (eg. BLAS, LAPACK, ATLAS, FFT)
- Chemistry/Physics (eg. the Gaussian code)
- Data Mining and Databases
The second part of the project will aim to develop an interactive graphical user interface (GUI) that will allow the user to dynamically change the counted hardware events during run time, and display the results. Such a program would communicate with the application via signals, and could be implemented in Python/Tkinter.
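A minimal sketch of that control path: a Tkinter button that sends a signal to the instrumented application, whose handler would then re-program the counters and report results. The target PID argument and the convention that SIGUSR1 means "switch to the next event set" are assumptions for illustration:

```python
import os
import signal
import sys
import tkinter as tk

target_pid = int(sys.argv[1])            # PID of the instrumented application

def rotate_counters():
    # the application's signal handler would re-program the hardware counters
    os.kill(target_pid, signal.SIGUSR1)

root = tk.Tk()
root.title("Counter control")
tk.Button(root, text="Next event set", command=rotate_counters).pack(padx=20, pady=20)
root.mainloop()
```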
This project is in collaboration with Sun Microsystems and prospective students are eligible to apply for an industrially sponsored honours scholarship.
Biometrics: The Scope for Masquerade
Supervisor: Roger Clarke (Visiting Fellow)
E-mail: Roger.Clarke@anu.edu.au
Phone: (02) 6288 1472
Biometrics is a generic term encompassing a wide range of measures of human physiography and behaviour. Measures of relatively stable aspects of the body include fingerprints, thumb geometry, aspects of the iris and ear-lobes, and perhaps DNA. Dynamic measures of behaviour include the process (as distinct from the product) of creating a hand-written signature, and the process of keying a password.
Schemes can be devised that apply biometrics in such a manner that the measure is only ever known to a chip held by the individual, and the device currently measuring the person concerned. (This is analogous to the mechanism used for protecting the secure PINs that we key into ATM and EFT/POS keyboards).
It is very common, however, for proposals for biometric schemes to involve central storage of the biometrics, as police fingerprint records do now, and as proposals by the Australian Government would do in relation to DNA records. This raises the question as to whether a person who gains access to the store could masquerade as that individual. Possible uses would be to gain access to buildings, software or data, to digitally sign messages and transactions, to capture the person's identity, to harm the person's reputation, or to `frame' the person.
Honours work in 2001 by Chris Hill laid a firm theoretical foundation for further work in this area, and applied the theory to fingerprinting. The opportunity exists to continue investigations into the extent to which the centralised storage of biometric measures of humans creates the risk of masquerade. Possibilities include:
1) Synthesis of High-Quality Fingerprint Images
The 2001 project generated fingerprint images that were satisfactory mathematically, but not visually. Software available from Optel and the University of Bologna demonstrates that it is possible to synthesise fingerprint images. It may be possible to combine their techniques with Hill's strategy, in order to create high-quality fingerprint images that have pre-defined minutiae points, and that can be used to conduct masquerade even when the technology is supplemented by visual checks.
2) Classification of Fingerprint Images Based on Minutiae Points
Hill used a neural network approach to classify fingerprint images. Scope exists to improve on that work, and to apply other forms of machine-learning to the problem.
3) Development of a Secure Fingerprint Template
Masquerade could be prevented by the transmission and storage not of the biometric itself, but of a hash of the biometric. This requires a hashing algorithm that is provably one-way, but which supports the aims of achieving very low false-acceptances and very low false-rejections. This topic would require a strong background in the relevant maths.
4) Application of Hill's Generic Masquerade Method to Other Biometrics
In addition to fingerprints, the scope for masquerade needs to be investigated using intercepted images of other biometric forms, such as thumb geometry, iris scans, face recognition, DNA, etc.
Background reading is at:
- http://www.anu.edu.au/people/Roger.Clarke/DV/HumanID.html
- http://www.anu.edu.au/people/Roger.Clarke/DV/IDCards97.html
- http://www.anu.edu.au/people/Roger.Clarke/DV/PLT.html
- http://www.anu.edu.au/people/Roger.Clarke/DV/Biometrics.html
- Hill C. (2001) 'Risk of Masquerade Arising from the Storage of Biometrics', Honours Thesis, Dept of Computer Science, Australian National University, November 2001


