AI, ML and Friends is a weekly seminar series within the School of Computing on Artificial Intelligence, Machine Learning, and related topics. We are open to attendees and presenters external to the school. Please sign up to the mailing list to receive weekly announcements, including Zoom details, and email the seminar organiser to schedule a talk.
Upcoming Seminars
24 April 2026, 12:00
3D Reconstruction Methods in Challenging Real-World Scenarios
Speaker: Jinguang Tong
Abstract: Accurate 3D reconstruction from images remains challenging when common assumptions—such as Lambertian appearance, reliable correspondences, and accurate segmentation—are violated. This thesis addresses four recurring failure modes in real-world capture: refraction, weak texture and thin structures, strong specular reflections, and imperfect foreground masking. I present a set of methods that integrate differentiable rendering with scenario-specific physical modeling and robust geometric priors, including refractive-aware reconstruction, multi-level multi-view consistency, physically grounded Gaussian splatting, and self-supervised foreground separation. Together, these approaches improve reconstruction fidelity and stability across challenging settings, expanding the practical applicability of modern 3D vision systems.
Bio: Jinguang Tong is a final-year PhD candidate at the Australian National University and Data61, CSIRO, advised by Chuong Nguyen, Hongdong Li, and Kaihao Zhang. His research focuses on 3D computer vision, with an emphasis on developing robust reconstruction methods for challenging real-world scenarios. He is also interested in adapting 3D knowledge for vision-language models and generative AI, aiming to bridge accurate 3D understanding with higher-level multimodal reasoning and content generation.
Where: Birch Building, 2.02
24 April 2026, 16:00
A Formal Abductive Explanation Framework for the Audit of AI Ethics Principles in Individual Decisions
Speaker: Belona Sonna
Abstract: The increasing deployment of Artificial Intelligence (AI) systems in high-stakes domains such as healthcare, finance, and public services has amplified the need for robust auditing mechanisms to ensure that these systems are reliable, fair, and privacy-preserving. Despite the growing number of AI ethics principles proposed in regulatory and academic contexts, their practical adoption remains limited due to the lack of unified formal definitions and the absence of a unified tool capable of assessing multiple criteria. This thesis proposes abductive explanations as a unifying and formal tool for auditing AI systems. Unlike conventional explainability techniques, abductive explanations identify minimal sets of features that are sufficient to guarantee a given decision, thereby providing both formal guarantees and interpretable justifications. Building on this foundation, the thesis demonstrates how abductive explanations can serve not only as explanatory tools but also as diagnostic instruments for assessing key AI ethics principles.
Bio: Belona is a PhD candidate at the Australian National University. Her research lies at the intersection of formal methods, explainable AI, and AI ethics, focusing on the formal verification of ethical properties in AI-based decision-making. She has developed abductive explanation–based frameworks to audit proxy discrimination, unfairness, and privacy leakage, with particular interest in healthcare systems.
Where: Skaidrite Darius Building, N101