Advanced Topics in Artificial Intelligence COMP4620/COMP8620
Welcome to the Advanced AI course at the ANU!
This year (2013) the course will focus on the Foundations of AI, including inductive inference, decision-making under uncertainty, reinforcement learning, intelligent agents, information theory, philosophical foundations, and more. Note that the course traditionally varies significantly from year to year; material from other years is available from the left menu.
News
28May12: website contents created
Formalities/Miscellaneous/Summary
Offered By: The AI Group @ Research School of Computer Science @ Australian National University
Offered In: Second Semester, 2013 (22 July to 31 October). See Schedule below.
Lecturer: Marcus Hutter
Tutors/Labs/Assistance: Wen Shao and Mayank Daswani and Peter Sunehag
Target: Undergraduate (COMP4620) and Graduate (COMP8620) students. Others welcome.
Enrollment: Undergraduates: the usual way via ISIS. Honors, graduate, and other students: contact the lecturer.
Admin: Bindi Mamouney
Course Subjects: Computer Science & Mathematics & Statistics
Unit Value: 6 units
Time Table: See Schedule below for details
Office hours: Wed 9:00-10:00, RSISE Bld 115, Room B259.
Assessment: Assignments (45%); Seminar (10%); Examination (45%) [details]
Indicative Workload: 25h lectures, 10h tutorial, 10h lab, ~50h assignments, lots of self-study
Prescribed texts: Excerpts from (see resources for details)
- Shane Legg (2008) Machine Super Intelligence
- Marcus Hutter (2005) Universal Artificial Intelligence
- Joel Veness et al. (2011) A Monte Carlo AIXI Approximation
Study@ANU page: http://studyat.anu.edu.au/courses/COMP4620;details.html
Wattle page: http://wattleprep.anu.edu.au/course/view.php?id=945
This page: http://cs.anu.edu.au/courses/COMP4620/2013.html
Prerequisites: If you have completed the Machine Learning course COMP4670, the Artificial Intelligence course COMP3620, or the Information Theory course COMP2610, you should have the necessary background for this course. Otherwise you can acquire the necessary background, e.g., from the AI book Russell & Norvig (2010), Chp. 2, 3, 5.2, 5.5, 13, 15.1-2, 17.1-3, 21.
Chapter 1 of Li&Vitanyi (2008) is a great refresher of basic computer, information, and probability theory.
Course Description
This is an advanced undergraduate and graduate course covering advanced topics in Artificial Intelligence. Topics vary from one offering to the next (see the Study@ANU page). This year (2013) the course will focus on the foundations of AI, including inductive inference, decision-making, reinforcement learning, information theory, and some game and agent theory.
The dream of creating artificial devices that reach or outperform human intelligence is many centuries old. This course presents an elegant parameter-free theory of an optimal reinforcement learning agent embedded in an arbitrary unknown environment that possesses essentially all aspects of rational intelligence. The theory reduces all conceptual AI problems to pure computational questions and is key to addressing many theoretical, philosophical, and practical AI questions.
The problem of how to perform inductive inference is closely related to the AI problem. The course covers Solomonoff's theory, which solves the induction problem, at least from a philosophical and statistical perspective.
Both theories are based on Occam's razor quantified by Kolmogorov complexity, Bayesian probability theory, and sequential decision theory.
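To give a first taste of where the course is heading (the notation here roughly follows the UAI book; the precise definitions are developed carefully in the lectures): Solomonoff's universal prior weights each program $p$ that outputs a string starting with the observed data $x$ on a universal Turing machine $U$ by $2^{-\ell(p)}$, and the AIXI agent combines this Occam-style weighting with expectimax planning over future rewards up to horizon $m$:

$$ M(x) \;:=\; \sum_{p\,:\,U(p)=x*} 2^{-\ell(p)}, \qquad a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[r_k + \cdots + r_m\big] \sum_{q\,:\,U(q,a_{1:m}) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)} $$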
Learning Outcomes
While the Introduction to AI course introduced a variety of methods for solving a range of AI problems, this Advanced AI course emphasizes the foundational, unifying, and general aspects of (artificial) intelligence. Course highlights are:
- Formal definitions of (general rational) intelligence;
- Optimal rational agents for arbitrary problems;
- Philosophical, mathematical, and computational background;
- Some approximations, implementations, and applications;
- State-of-the-art artificial general intelligence.
Topics covered include:
- measures, tests, and definitions of intelligence;
- Occam's razor;
- universal Turing machines;
- algorithmic information theory;
- probability theory;
- universal induction;
- Bayesian sequence prediction;
- minimum description length principle;
- intelligent agents;
- sequential decision theory;
- reinforcement learning;
- planning under uncertainty;
- universal search;
- Monte-Carlo tree search;
- philosophical foundations.
Schedule
Below is the schedule of 2012! The schedule for 2013 will be similar and will be provided by the end of June.
| Week | Lecture | Tutorial/Lab |
|---|---|---|
| to be updated throughout the course | Monday 13:00-14:00 & Wednesday 15:00-16:00, Chemistry Lecture Theatre T2 in Building 34 | Thu 15:00-17:00, Tut. in R214/R221, Ian Ross Bld. 31 / Thu 10:00-12:00, Lab in N114, CSIT Bld. 108 |
| 23 Jul - 27 Jul | Overview & Introduction [Advertizement] [Slides] Reading: [Legg08.Chp.1] | --- |
| 30 Jul - 3 Aug | Information Theory & Kolmogorov Complexity [Slides] Reading: [UAIBook.Sec.2.2] | tutorial |
| 6 Aug - 10 Aug | Bayesian Probability Theory [Slides] Reading: [UAIBook.Sec.2.3] | tutorial + get assignment 1 |
| 13 Aug - 17 Aug | Algorithmic Probability & Universal Induction [Slides] Reading: [UAIBook.Sec.2.4] | tutorial |
| 20 Aug - 24 Aug | Minimum Description Length & Universal Similarity [Slides] Optional Reading: [MDL.Chp.1,USM] | tutorial |
| 27 Aug - 31 Aug | Bayesian Sequence Prediction & CTW [Slides, Slides] Reading: parts of [UAIBook.Chp.3,CTW] | tutorial |
| 3 Sep - 7 Sep | Rational Agents [Slides] Reading: [UAIBook.Chp.4.1&4.2] | hand in assignment 1 + get assignment 2 |
| 10 Sep - 21 Sep | break | --- |
| 24 Sep - 28 Sep | Universal Artificial Intelligence [Slides] Reading: try [UAIBook.Chp.5] | lab |
| 1 Oct - 5 Oct | Approximations and Applications [Slides] | tutorial: solutions to assignment 1 |
| 8 Oct - 12 Oct | MC-AIXI-CTW [Slides] Reading: [MC-AIXI-CTW] | lab |
| 15 Oct - 19 Oct | Discussion [Slides] Reading: [UAIBook.Chp.8] | lab |
| 22 Oct - 26 Oct | Discussion | lab + hand in assignment 2 |
| 29 Oct - 2 Nov | Student presentation of individual contribution to the practical assignment. Send slides in advance to Mayank Daswani. | lab |
Assignments
Theory Assignment 1: The theory assignment is to be done individually and will involve various mathematical exercises that will deepen the understanding of the lectured material. Wen Shao will be tutor and primary contact for the theory assignment.
Practical Group Assignment 2: The practical assignment will be a group project. The goal is to implement the MC-AIXI-CTW model, a recent practical scaled-down version of the theoretical universal AI agent AIXI. Students will acquire first-hand experience of how a single algorithm can autonomously learn to solve various toy problems, such as playing Tic-Tac-Toe, PacMan, or Poker, purely from experience and reward feedback, without ever being told the rules of the game. The implementation should be completely stand-alone in lightweight C++. Particular emphasis is on ease of use (installation, compilation, running, modification) and good documentation. The project involves programming various sophisticated functions, and requires and furthers the understanding of the theoretical material taught in the main class.
Each group will consist of 6-9 students. A group can self-organize and distribute work internally. The various modules/tasks/domains can be implemented by different students, each responsible for delivering a well-tested module including source and documentation. The group is responsible for delivering a final product consisting of documented source code, experimental results, and a final joint report.
Lab director Mayank Daswani will supervise the practical group project during lab sessions.
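To give an idea of the structure involved, here is a minimal sketch of the kind of agent-environment interaction loop the project revolves around. The class and method names (Environment, Agent, perform, act, CoinFlip, and so on) are illustrative placeholders, not a required interface; the actual design is up to each group and should follow the MC-AIXI-CTW paper.

```cpp
#include <iostream>
#include <random>

// Illustrative types: percepts are (observation, reward) pairs, actions are small integers.
using action_t = unsigned int;
using obs_t    = unsigned int;
using reward_t = double;

// A minimal environment interface: the agent never sees the rules,
// only a stream of observations and rewards.
class Environment {
public:
    virtual ~Environment() = default;
    virtual obs_t    observation() const = 0;        // current observation
    virtual reward_t reward() const = 0;             // reward for the previous action
    virtual void     perform(action_t action) = 0;   // apply the agent's chosen action
};

// A minimal agent interface: in the project this would wrap CTW-based
// prediction plus Monte-Carlo tree search over future action/percept sequences.
class Agent {
public:
    virtual ~Agent() = default;
    virtual action_t act(obs_t obs, reward_t rew) = 0;  // next action given latest percept
};

// Toy stand-in domain: guess the outcome of a biased coin; reward 1 if correct.
class CoinFlip : public Environment {
    std::mt19937 rng{42};
    obs_t    obs_ = 0;
    reward_t rew_ = 0.0;
public:
    obs_t    observation() const override { return obs_; }
    reward_t reward() const override { return rew_; }
    void perform(action_t guess) override {
        obs_ = std::bernoulli_distribution(0.7)(rng) ? 1u : 0u;  // coin biased towards heads
        rew_ = (guess == obs_) ? 1.0 : 0.0;
    }
};

// Placeholder agent: always guesses heads. A real MC-AIXI-CTW agent would
// learn the bias from the percept stream instead of having it hard-coded.
class AlwaysHeads : public Agent {
public:
    action_t act(obs_t, reward_t) override { return 1u; }
};

// The generic interaction loop: percept in, action out, repeat.
reward_t run(Agent& agent, Environment& env, unsigned int cycles) {
    reward_t total = 0.0;
    for (unsigned int t = 0; t < cycles; ++t) {
        obs_t    o = env.observation();
        reward_t r = env.reward();
        total += r;
        env.perform(agent.act(o, r));  // the agent learns only from (o, r) feedback
    }
    return total;
}

int main() {
    CoinFlip env;
    AlwaysHeads agent;
    std::cout << "average reward: " << run(agent, env, 1000) / 1000.0 << "\n";
}
```

In the actual project, the predictive model (CTW), the planner (Monte-Carlo tree search), and the individual test domains would naturally become separate, well-tested modules, matching the per-student module structure described above.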
Tutorials/Labs
Rehearsal of lecture material and help with assignments: see Wattle.
Assessment
Theory: Individual Theory Assignments (20%).
Practice: Practical Group Assignment (25%).
Seminar: A 5-minute presentation of the individual contribution to the group assignment (10%).
Exam: Final written or oral examination, depending on number of students (45%).
Know: What to know for the exam: the material in the course slides.
The other provided reading material should help you better understand the slides, but will not itself be examined.
Pass: To pass the course, students must pass each assignment and the final exam.
Plagiarism: Misconduct will result in failure of the course and disciplinary consequences (no exceptions).
See: [AcademicHonesty@ANU] [Student Handbook@RSCS] [Example@comp1100]
Resources
Slides and assignments: See links in the schedule.
Marcus Hutter (2005) Universal Artificial Intelligence
The lectures will draw heavily from this (tough) book, but only the easier parts will be covered.
It is recommended that students have a copy of this book (available at the ANU bookshop or cheaper here).
Shane Legg (2008) Machine Super Intelligence
This is a gentler, more philosophical, less mathematical introduction to the subject. It is highly recommended. It costs less than $20, and the PDF is even free.
Joel Veness et al. (2011) A Monte Carlo AIXI Approximation
This is a (tough and hot) research paper, which forms the basis for the group implementation project.
The lectures will also draw from the following paper:
F. Willems, Y. Shtarkov, and T. Tjalkens (1995) "The context-tree weighting method: Basic properties", IEEE Transactions on Information Theory, 41:653-664.
A more readable version of the same paper is here.
If you're curious about what else is out there (clearly beyond the scope of this course), see the further recommended AI books and the papers read in the RL reading group.