Advanced Topics in Artificial Intelligence COMP4620/COMP8620
Welcome to the Advanced AI course at the ANU! This year (2012) the course will focus on the Foundations of AI, including inductive inference, decision-making under uncertainty, reinforcement learning, intelligent agents, information theory, and philosophical foundations, among other topics. Note that the course traditionally varies significantly from year to year. Material from other years is available from the left menu.
News
07Sep12: Assignment 2 available
06Aug12: Assignment 1 available
02Aug12: Change of tutorial to Thu 15ºº-17ºº: 9.Aug. in R214 & 16.Aug.-30.Aug. in R221
30May12: website contents created
Formalities/Miscellaneous/Summary
Offered By: The AI Group @ Research School of Computer Science @ Australian National University
Offered In: Second Semester, 2012 (23 July to 2 November). See Schedule below
Lecturer: Marcus Hutter
Tutors/Labs/Assistance: Wen Shao and Mayank Daswani and Peter Sunehag
Target: Undergraduate (COMP4620) and Graduate (COMP8620) students. Others welcome.
Enrollment: Undergraduates: The usual way via ISIS. Honors & Graduates & Others: Contact lecturer.
Admin: Bindi Mamouney and Kathy MacDonald
Course Subjects: Computer Science & Mathematics & Statistics
Unit Value: 6 units
Time Table: See Schedule below for details
Office hours: Wed 9ºº-10ºº, RSISE Bld 115, Room B259.
Indicative Assessment: Assignments (45%); Seminar (10%); Examination (45%)
Indicative Workload: 25h lectures, 10h tutorial, 10h lab, ~50h assignments, lots of self-study
Prescribed texts: Excerpts from (see resources for details)
- Shane Legg (2008) Machine Super Intelligence
- Marcus Hutter (2005) Universal Artificial Intelligence
- Joel Veness et al. (2011) A Monte Carlo AIXI Approximation
Study@ANU page: http://studyat.anu.edu.au/courses/COMP4620;details.html
Wattle page: http://wattleprep.anu.edu.au/course/view.php?id=945
This page: http://cs.anu.edu.au/courses/COMP4620/2012.html
Prerequisites: If you have completed the Machine Learning course COMP4670, the Artificial Intelligence course COMP3620, or the Information Theory course COMP2610, you should have the necessary background for this course. Otherwise you can acquire the necessary background, e.g. from the book Russell & Norvig (2010), Chp. 2, 3, 5.2, 5.5, 13, 15.1-2, 17.1-3, 21.
Chapter 1 of Li&Vitanyi (2008) is a great refresher of basic computer, information, and probability theory.
Course Description
This is an advanced undergraduate and graduate course that covers advanced topics in Artificial Intelligence. Topics vary from one offering to the next (see Study@ANU page).
This year (2012) the course will focus on the foundations of AI, including inductive inference, decision-making, reinforcement learning, information theory, and some game and agent theory.
The dream of creating artificial devices that reach or outperform human intelligence is many centuries old. This course presents an elegant parameter-free theory of an optimal reinforcement learning agent embedded in an arbitrary unknown environment that possesses essentially all aspects of rational intelligence. The theory reduces all conceptual AI problems to pure computational questions.
The problem of inductive inference is closely related to the AI problem. The course covers Solomonoff's theory of induction, which solves the induction problem, at least from a philosophical and statistical perspective.
Both theories are based on Occam's razor quantified by Kolmogorov complexity, Bayesian probability theory, and sequential decision theory.
Learning Outcomes
Despite the grand vision above, most of the course is necessarily devoted to introducing the key ingredients of this theory, which are important subjects in their own right. On completing this course students will have a solid understanding of:
- measures, tests, and definitions of intelligence;
- Occam's razor;
- universal Turing machines;
- algorithmic information theory;
- probability theory;
- universal induction;
- Bayesian sequence prediction;
- minimum description length principle;
- intelligent agents;
- sequential decision theory;
- reinforcement learning;
- planning under uncertainty;
- universal search;
- philosophical foundations.
The intention is to run tutorials throughout the first half of the course to consolidate the knowledge via theoretical exercises. In the second half, a group project will approximate, implement, and test the theory on applications such as Tic-Tac-Toe, Poker, or Pacman.
Schedule
(to be updated throughout the course)
Lectures: Monday 13ºº-14ºº & Wednesday 15ºº-16ºº, Chemistry Lecture Theatre T2 in Building 34
Tutorials: Thu 15ºº-17ºº in R214/R221, Ian Ross Bld. 31 / Labs: Thu 10ºº-12ºº in N114, CSIT Bld. 108

23Jul - 27Jul: Overview & Introduction [Advertisement]
30Jul - 3Aug: Information Theory & Kolmogorov Complexity
6Aug - 10Aug: Bayesian Probability Theory (get assignment 1)
13Aug - 17Aug: Algorithmic Probability & Universal Induction
20Aug - 24Aug: Minimum Description Length & Universal Similarity [Slides] Optional Reading: [MDL.Chp.1, USM]
27Aug - 31Aug: Bayesian Sequence Prediction & CTW [Slides, Slides] Reading: Parts of [UAIBook.Chp.3, CTW]
3Sep - 7Sep: Rational Agents (hand in assignment 1; get assignment 2)
10Sep - 21Sep: break
24Sep - 28Sep: Universal Artificial Intelligence
1Oct - 5Oct: Approximations and Applications (tutorial: solutions to assignment 1)
8Oct - 12Oct: MC-AIXI-CTW
15Oct - 19Oct: Discussion
22Oct - 26Oct: Discussion (lab+; hand in assignment 2)
29Oct - 2Nov: Student Presentation of Individual Contribution to Practical Assignment. Send slides in advance to Mayank Daswani. (lab)
Assignments
Theory Assignment 1: The theory assignment is to be done individually and will involve various mathematical exercises that deepen the understanding of the lecture material. Wen Shao will be the tutor and primary contact for the theory assignment.
Practical Group Assignment 2: The practical assignment will be a group project. The goal is to implement the MC-AIXI-CTW model, a recent practical scaled-down version of the theoretical universal AI agent AIXI. Students will acquire first-hand experience of how a single algorithm can autonomously learn to solve various toy problems, such as playing Tic-Tac-Toe, PacMan, or Poker, purely from experience and reward feedback, without ever being told the rules of the game. The implementation should be completely stand-alone in very light C++. Particular emphasis is on ease of use (installation, compilation, running, modification) and good documentation. The project involves programming various sophisticated functions, and both requires and furthers the understanding of the theoretical material taught in the main class.
Each group will consist of 6-9 students. A group can self-organize and distribute work internally. The various modules/tasks/domains can be implemented by different students, each responsible for delivering a well-tested module including source and documentation. The group is responsible for delivering a final product consisting of documented source code, experimental results, and a final joint report.
Lab director Mayank Daswani will supervise the practical group project during lab sessions.
Tutorials/Labs
Rehearsal of lecture material and help with assignments: See Wattle
Assessment
Theory: Individual Theory Assignments (20%).
Practice: Practical Group Assignment (25%).
Seminar: 5-minute presentation of individual contribution to the group assignment (10%).
Exam: Final written examination (45%): 120 min, closed-book, informal & math questions.
Know: What to know for the exam: Material in the course slides.
The other provided reading material should help you to better understand the slides, but will itself not be examined.
Pass: To pass the course, students must pass each assignment and the final exam.
Resources
Slides and assignments: See links in schedule.
Marcus Hutter (2005) Universal Artificial Intelligence
The lectures will draw heavily from this (tough) book, but only the easier parts will be covered.
It is recommended that students have a copy of this book (available at the ANU bookshop or cheaper here).
Shane Legg (2008) Machine Super Intelligence
This is a gentler, more philosophical, less mathematical introduction to the subject. It is highly recommended. It costs less than $20, and the PDF is even free.
Joel Veness et al. (2011) A Monte Carlo AIXI Approximation
This is a (tough and hot) research paper that forms the basis for the group implementation project.
The lectures will also draw from the following paper:
F. Willems, Y. Shtarkov, and T. Tjalkens (1995). The context-tree weighting method: Basic properties. IEEE Transactions on Information Theory, 41, 653-664.
A more readable version of the same paper is here.
If you're curious what else is out there (clearly beyond the scope of this course), see the further recommended AI books and the papers read in the RL reading group.