Eliciting, Evaluating, and Aggregating Predictions
Mark Reid
ARTIFICIAL INTELLIGENCE SEMINAR
DATE: 2012-04-27
TIME: 15:00:00 - 16:00:00
LOCATION: RSISE Seminar Room, ground floor, building 115, cnr. North and Daley Roads, ANU
ABSTRACT:
Suppose you know an expert forecaster and would like to know what they think tomorrow's weather might be. How can you _elicit_ this information, that is, motivate them to truthfully tell you what they believe? Suppose the expert tells you there is a 40% chance of rain tomorrow. How should you _evaluate_ the quality of their prediction 24 hours later? What do you do if you know 10 experts instead of just one? How might you _aggregate_ their possibly conflicting predictions? Can you make use of their past reliability?
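To make the evaluation question concrete, here is a minimal Python sketch (illustrative only, not taken from the talk) that scores the 40% rain forecast with two standard proper losses, the Brier (squared) loss and the log loss, once the outcome is observed:

import math

def brier_loss(p, outcome):
    # Squared difference between the forecast probability and the 0/1 outcome.
    return (p - outcome) ** 2

def log_loss(p, outcome):
    # Negative log-likelihood of the observed 0/1 outcome under forecast p.
    return -math.log(p if outcome == 1 else 1.0 - p)

forecast = 0.4   # the expert's 40% chance of rain
outcome = 1      # suppose it did rain the next day

print(brier_loss(forecast, outcome))  # 0.36
print(log_loss(forecast, outcome))    # about 0.916

Both losses are proper, so an expert who expects to be scored this way minimizes their expected loss by reporting exactly what they believe.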
The answers to these questions are all closely related through what are known
as "proper losses" (or "proper scoring rules"). These are, in a very precise
sense, the "right" losses to use when predicting with probabilities. In this
talk I will give an overview of what is currently known about proper losses,
their geometry, and many of their interesting connections with sequential
prediction, divergences, information theory, and prediction markets. Some of
these connections are the result of recent joint work with several collaborators.
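For readers unfamiliar with the term, properness has a standard textbook definition (stated here for reference, not specific to this talk): a loss \ell(p, y) for a probability forecast p \in [0,1] of a binary outcome y \in \{0,1\} is proper when truthful reporting minimizes expected loss,

\[
  \mathbb{E}_{Y \sim q}\big[\ell(q, Y)\big] \;\le\; \mathbb{E}_{Y \sim q}\big[\ell(p, Y)\big]
  \qquad \text{for all } p, q \in [0,1],
\]

with the log loss \ell(p, y) = -y \log p - (1 - y)\log(1 - p) and the Brier loss \ell(p, y) = (p - y)^2 as classical examples.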


