Loss functions, and relations between machine learning problems
Bob Williamson (NICTA)
NICTA SML SEMINAR
DATE: 2011-08-11
TIME: 11:00:00 - 12:00:00
LOCATION: NICTA - 7 London Circuit
ABSTRACT:
Loss functions are central to supervised machine learning problems, but there has been little work in the recent machine learning literature on systematically understanding the effect of the choice of loss function. I will motivate the study of loss functions, arguing why the choice of loss function matters and showing how a systematic study of loss functions in machine learning is a good starting point for a more comprehensive mapping of the relations between machine learning problems.
Specifically, I will summarise some recent work, starting with consideration of proper losses for classification problems (binary and multiclass). I will consider relationships to divergences, surrogate regret bounds, composite losses (the composition of a proper loss with an invertible link function), existence and uniqueness results for such representations, integral representations, and characterisation of mixability and convexity.
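
To make the notion of a composite loss concrete, here is a minimal sketch (my own illustration, not material from the talk, and the function names are hypothetical): composing the proper log loss with the inverse of the standard logit link turns a loss on probability estimates into the familiar logistic loss on real-valued scores.

    import numpy as np

    def log_loss(y, p, eps=1e-12):
        # Proper loss for class probability estimates: -log p if y = 1, -log(1 - p) if y = 0.
        p = np.clip(p, eps, 1 - eps)
        return -(y * np.log(p) + (1 - y) * np.log(1 - p))

    def sigmoid(v):
        # Inverse of the logit link psi(p) = log(p / (1 - p)).
        return 1.0 / (1.0 + np.exp(-v))

    def composite_logistic_loss(y, v):
        # Composite loss: the proper log loss composed with the inverse logit link,
        # evaluated on a real-valued score v rather than a probability.
        return log_loss(y, sigmoid(v))

    # A score of v = 0 corresponds to a predicted probability of 0.5,
    # so the loss for a positive example is log 2 (about 0.693).
    print(composite_logistic_loss(1, 0.0))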


