University of Massachusetts Amherst
Bio
Emery Berger is a Professor in the College of Information and Computer Sciences at the University of Massachusetts Amherst, the flagship campus of the UMass system. He graduated with a Ph.D. in Computer Science from the University of Texas at Austin in 2002. Professor Berger has been a Visiting Scientist at Microsoft Research and at the Universitat Politecnica de Catalunya (UPC) / Barcelona Supercomputing Center (BSC). Professor Berger’s research spans programming languages, runtime systems, and operating systems, with a particular focus on systems that transparently improve reliability, security, and performance. He is the creator of a number of influential software systems including Hoard, a fast and scalable memory manager that accelerates multithreaded applications (used by companies including British Telecom, Cisco, Crédit Suisse, Reuters, Royal Bank of Canada, SAP, and Tata, and on which the Mac OS X memory manager is based); DieHard, an error-avoiding memory manager that directly influenced the design of the Windows 7 Fault-Tolerant Heap; and DieHarder, a secure memory manager that was an inspiration for hardening changes made to the Windows 8 heap.
His honors include a Microsoft Research Fellowship, an NSF CAREER Award, a Lilly Teaching Fellowship, Most Influential Paper Awards at PLDI 2016 and OOPSLA 2012, a Google Research Award, and a Microsoft SEIF Award. Professor Berger currently serves as an elected member-at-large of the SIGPLAN Executive Committee. He recently served as Program Chair for PLDI 2016 and spent a decade as Associate Editor of the ACM Transactions on Programming Languages and Systems.
Low-level Systems Security: Attacks and Defenses
The original attacks on C/C++ applications depended on "smashing" the stack; because modern compilers include effective and cheap countermeasures to stack-smashing attacks, the focus of this lecture is on the heap, which is much more complex and expensive to protect. In this class, I will explain a range of heap-based attacks and how they interact with existing runtime systems and memory managers. I will talk about the challenges of coping with these issues, and describe our DieHard/DieHarder systems, which leverage randomization and novel allocation algorithms to efficiently and provably reduce the vulnerabilities of C/C++ applications to a wide class of heap-based attacks; this work directly inspired reliability and security enhancements in Microsoft Windows. I will talk about how these techniques enable self-healing from attack via statistical inference, and describe recent advances that further close the performance gap between unprotected and protected code while providing a high degree of protection.
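As a rough illustration of the randomization idea behind DieHard (a toy model, not the actual allocator), the sketch below over-provisions the heap and places each object in a uniformly random empty slot, so an attacker cannot predict which object a heap overflow will corrupt:

```cpp
#include <cassert>
#include <cstddef>
#include <random>
#include <vector>

// Toy model of randomized allocation: the heap has many more slots
// than live objects, and each allocation picks an empty slot
// uniformly at random. An overflow from one object therefore hits an
// unpredictable (and often unused) neighbor.
class ToyRandomHeap {
public:
    ToyRandomHeap(std::size_t slots, unsigned seed)
        : used_(slots, false), rng_(seed) {}

    // Returns a slot index standing in for an object's address.
    std::size_t allocate() {
        std::uniform_int_distribution<std::size_t> pick(0, used_.size() - 1);
        for (;;) {
            std::size_t s = pick(rng_);
            if (!used_[s]) { used_[s] = true; return s; }
        }
    }

    void free_slot(std::size_t s) { used_[s] = false; }

private:
    std::vector<bool> used_;
    std::mt19937 rng_;
};
```

The over-provisioning is what makes the probabilistic guarantees provable: with M times more slots than objects, the chance that a given overflow lands on a live object shrinks accordingly.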
Bio
Vitaly Chipounov is co-founder and chief architect at Cyberhaven, where he is designing novel security products to keep enterprise data safe from malware and malicious insiders. He got his MSc and PhD at EPFL, Switzerland, at the Dependable Systems Laboratory led by Prof. George Candea. During his PhD, he built S2E – a platform for multi-path analysis of software systems – currently used by dozens of researchers and companies around the world to build tools that find bugs and security vulnerabilities in computer systems. He participated in the DARPA Cyber Grand Challenge as part of the CodeJitsu team, where S2E was one of the fastest at finding vulnerabilities and generating exploits.
Vulnerability Analysis with S2E
S2E is a platform for in-vivo analysis of software systems that combines a virtual machine with symbolic execution. Users install and run in S2E an unmodified software stack, including programs, libraries, the OS kernel, and drivers. Symbolic execution then automatically explores thousands of paths through the system, allowing users to check desired properties even in corner-case situations and at any level of the software stack. S2E works on unmodified binaries and requires neither writing code in special languages nor modeling the environment.
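To give a feel for what "exploring thousands of paths" means (this is a toy illustration, not how S2E works internally; S2E forks symbolic states and consults an SMT solver rather than enumerating concrete inputs), consider a small function whose paths we enumerate exhaustively:

```cpp
#include <cassert>
#include <set>
#include <string>

// Each input drives the program down one "path", identified here by
// the sequence of branch outcomes. Multi-path analysis aims to cover
// all such paths, including the rare "bug" path.
std::string path_taken(int x, int y) {
    std::string path;
    if (x > 10) path += 'T'; else path += 'F';
    if (y == x * 2) path += 'T'; else path += 'F';
    if (x > 10 && y == x * 2 && x + y == 33) path += '!';  // rare bug path
    return path;
}

// Brute-force stand-in for path exploration over a tiny input space.
// Symbolic execution reaches the same paths by solving the branch
// constraints (e.g. x > 10 AND y == 2x AND x + y == 33) directly.
std::set<std::string> explore_paths() {
    std::set<std::string> paths;
    for (int x = 0; x < 32; ++x)
        for (int y = 0; y < 32; ++y)
            paths.insert(path_taken(x, y));
    return paths;
}
```

The bug path requires x = 11, y = 22; random testing over a large input space would rarely find it, whereas a constraint solver derives it directly, which is the core advantage of symbolic execution.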
In this tutorial, you will learn how to use S2E to find bugs and vulnerabilities in Windows applications and device drivers. In the first part of the tutorial, you will learn the basics of symbolic execution with S2E with hands-on testing of Windows applications. In the second part of the tutorial, we will cover Windows kernel driver testing and show how S2E can help you build well-tested drivers with robust error recovery code using symbolic fault injection. You will try integrated code coverage tools and profilers to quickly diagnose symbolic execution bottlenecks, which you will then resolve with techniques such as state merging and function modeling. Finally, we will show how to automatically generate proofs of vulnerabilities with S2E.
To get the most out of this tutorial, you will need a Windows installation (either in a VM or native) with Visual Studio 2017 and the Windows Driver Kit version 1709. You will use it to hack on and build the sample drivers we provide. We will provide remote access to Linux machines with S2E and guest images pre-installed. You may of course install S2E and build the Windows images on your own machine ahead of time if you wish; the process is fully automated.
University of Arizona
Bio
Christian Collberg is a Professor in the Department of Computer Science at the University of Arizona. He got his PhD from Lund University, Sweden, and taught at the University of Auckland, New Zealand, before joining Arizona. His current research interests lie in the protection of software from reverse engineering and tampering, as well as in the reproducibility of computer science research (see FindResearch.org). He is the author of "Surreptitious Software: Obfuscation, Watermarking, and Tamperproofing for Software Protection".
Introduction to Software Protection and Obfuscation
We will look at traditional ways of obfuscating code, i.e. ways to make software harder to understand and analyze. There will be a significant hands-on practical component to this part of the course, where we will obfuscate code using automatic tools.
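One classic obfuscation building block is the opaque predicate: an expression whose value the obfuscator knows at obfuscation time but which is hard for an analyzer to resolve. A well-known number-theoretic example (a sketch, not output of any particular tool) is shown below:

```cpp
#include <cassert>
#include <cstdint>

// Opaque predicate: x*x != 7*y*y - 1 holds for ALL integers, because
// squares are congruent to 0, 1, 2, or 4 mod 7, while 7*y*y - 1 is
// congruent to 6 mod 7. The obfuscator knows the branch is always
// taken; a static analyzer sees a data-dependent condition.
bool opaque_true(int64_t x, int64_t y) {
    return x * x != 7 * y * y - 1;
}

// Example use: the 'else' arm is dead code that bloats the control
// flow graph and misleads reverse engineering, but never executes.
int obfuscated_add(int a, int b) {
    if (opaque_true(a, b))
        return a + b;          // real computation
    else
        return a * b - 42;     // bogus decoy path, never reached
}
```

Automatic obfuscators chain many such predicates with control-flow flattening and bogus branches, which is the kind of transformation the hands-on part of the course applies with tools.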
University of Virginia
Bio
I am a Professor of Computer Science at the University of Virginia. I received my Ph.D. from the University of Arizona in 1981 and joined the University of Virginia in 1982. Over the years, I have worked in many areas including compilers, embedded systems, high-performance computing, and computer security. My current research focus is on software protection and computer security. In the area of software security, my group has developed dynamic methods to prevent adversaries from compromising intellectual property. In the area of computer security, we have developed methods for "hardening" binaries to make them impervious to cyber attack. In 2014–2016, I led a team at the University of Virginia that competed in the DARPA Cyber Grand Challenge (CGC), a contest to build an autonomous supercomputer to automatically analyze vulnerable binaries and patch them, all without human intervention. Our entry, produced in collaboration with GrammaTech, Inc., finished second, winning a $1M prize. I am also leading a DARPA project, "Double Helix: High Assurance N-variant Systems." Double Helix is a binary analysis and transformation system that processes binary applications and produces variants with diverse binary structures that are intended to be deployed within a multi-variant system. A unique aspect of Double Helix is that it employs structured diversity to guarantee that variants behave differently, yielding high-assurance systems. Both CGC and Double Helix rely on high-precision binary analysis and rewriting to reverse engineer and transform binaries.
VM-based Software Protection and A League of Extraordinary Machines
Process-level virtual machines (PVMs) are often used to provide software protection. In this tutorial, we discuss the role and power of PVMs in software protection. The key advantage of PVMs is that they support efficient run-time transformations of the code to be protected. When used in combination with other software protection techniques such as white-box cryptography, guard networks, opaque predicates, etc., PVMs force the attacker to perform resource-intensive dynamic analysis. The tutorial discusses how efficient process-level virtual machines are constructed and how they are used to thwart both static and dynamic analyses by crackers through application of several novel transformations enabled by PVMs.
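A minimal sketch of the virtualization idea follows (a toy, far simpler than any real PVM, which adds instruction-set diversity, run-time mutation, and anti-analysis guards): the protected logic is translated into a custom bytecode that only a bundled interpreter understands, so a static disassembler sees interpreter dispatch instead of the original code.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// A tiny stack-machine bytecode. Real PVMs randomize the encoding
// per protected instance, so no fixed decoder exists for attackers.
enum Op : uint8_t { PUSH, ADD, MUL, HALT };

int run_vm(const std::vector<uint8_t>& code) {
    std::vector<int> stack;
    for (std::size_t pc = 0; pc < code.size();) {
        switch (code[pc++]) {
        case PUSH: stack.push_back(code[pc++]); break;
        case ADD: { int b = stack.back(); stack.pop_back(); stack.back() += b; break; }
        case MUL: { int b = stack.back(); stack.pop_back(); stack.back() *= b; break; }
        case HALT: return stack.back();
        }
    }
    return 0;
}

// "Virtualized" form of f(x) = 3*x + 5 at x = 7:
// PUSH 7, PUSH 3, MUL, PUSH 5, ADD, HALT
const std::vector<uint8_t> virtualized_f7 = {PUSH, 7, PUSH, 3, MUL, PUSH, 5, ADD, HALT};
```

Because the attacker must first reverse engineer the interpreter and its encoding before recovering the logic, static analysis alone no longer suffices, pushing the attack into costly dynamic analysis.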
The second talk will focus on DARPA's Cyber Grand Challenge (CGC), which was announced in November 2013. The challenge was to build an autonomous cyber reasoning system that could analyze software, identify vulnerabilities, formulate patches and deploy them on a network, all in real time. To test these capabilities, the cyber reasoning systems would face off against each other in a cyber version of capture the flag (CTF), with the winning team receiving a $2M cash prize. 104 teams from across the world entered the competition and began designing and building the supercomputers necessary to compete. In this talk I will describe the CGC competition, present some of the unique challenges faced by the teams, and provide an overview of technologies used to address these challenges. I will also present detailed analysis of the telemetry collected during the CGC final event. The talk will conclude by describing a new program spawned by CGC, CHESS (Computers and Humans Exploring Software Security), and some of its unique challenges.
Bjorn De Sutter
Bio
Bjorn De Sutter is an associate professor in the Computer Systems Lab at the Faculty of Engineering and Architecture of Ghent University. He received his MSc and PhD from Ghent University in 1997 and 2002, respectively. Amongst others, he teaches courses on Compilers and on Software Hacking and Protection, which are also his major research interests. The focus of his compiler technology research is on increasing the productivity of programmers by automating necessary tasks for non-functional requirements, including the protection of software against all kinds of attacks. His more than 80 peer-reviewed publications cover protections against fault injection attacks, tampering attacks, reverse engineering attacks, memory exploit attacks, patch-based attacks, and timing side channel attacks. He coordinated the EU FP7 STREP ASPIRE (Advanced Software Protection: Integration, Research, and Exploitation), which was evaluated as excellent by the EC in February 2017, and has participated in numerous other national and international projects. He received the Annual FWO Barco Award in 1998 for his Master's thesis, and is the recipient of a 2013 HiPEAC Technology Transfer Award.
Advanced Man-at-the-end Attacks and Defenses
In the first part of this lecture, we will study some of the most powerful reverse engineering attack methods, including methods deployed today by professional hackers and state-of-the-art academic methods such as fuzzing, pointer chaining, pattern matching, symbolic execution, statistical attacks, and generic de-obfuscation techniques.
In the second part, we will present results from experiments with various kinds of attackers that help us understand how the different techniques are combined into full-fledged attacks.
In the third part, we will discuss defenses against some of the advanced attacks, such as more advanced obfuscations, but also anti-taint, anti-debugging, anti-tracing, and anti-tampering techniques, and code renewability.
Trail of Bits
Bio
Peter Goodman is a senior security engineer at Trail of Bits. He is an expert at designing and implementing binary translation and instrumentation systems. In a past life, he was an academic and before that, a competitive ski racer.
Binary Lifting with McSema
Binary translation, analysis and recompilation with McSema.
Yuan Xiang Gu
Bio
Mr. Gu was a co-founder of Cloakware Corporation, which was acquired by Irdeto in 2007. Since then, as chief architect and a senior research director at Irdeto, Mr. Gu has led the development of next-generation Cloakware technology and research collaborations with communities worldwide. He has visited over 40 universities and research institutes in North America, Europe, and Asia, organizes international security forums (workshops, summer schools, and associations), and is an active speaker at many international conferences and workshops promoting software security and protection. Mr. Gu has also served as a guest professor at Northwest University in China. As a native of China, Mr. Gu often represents Irdeto in business development and research collaboration activities there. Prior to joining Cloakware, Mr. Gu worked as a senior scientist and architect at Nortel Networks.
Previously, Mr. Gu was a visiting professor at the School of Computer Science of McGill University in Montreal, Canada, from 1988 to 1990. Before relocating to Canada, Mr. Gu was a professor in the Computer Science Department at Northwest University in China. Mr. Gu received the First Outstanding Young Scientists Foundation Award from the Chinese Academy of Sciences in 1985, has about four decades of software research and development knowledge and expertise, and has published more than two dozen patents and patent applications.
The Industrial Challenges in Software Security and Protection
While the digital world connects devices, people, and things at a speed beyond human imagination, hostile digital environments have become increasingly pervasive, from consumer devices to home networks, the public Internet, cloud and web services, and the Internet of Things, where traditional security models are inadequate to address emerging threats and attacks. The anything/anytime/anywhere properties of modern connectivity make software security and digital asset protection far more challenging and critical.
University of Maryland
Bio
Michael Hicks is a Professor in the Department of Computer Science at the University of Maryland. His research focuses on using programming languages and automated analyses to improve the security, reliability, and availability of software. He is the current Chair of the ACM Special Interest Group on Programming Languages (SIGPLAN) and blogs for a general computer science audience, both researchers and practitioners, at http://www.pl-enthusiast.net.
Building Security In: Programming Language-based Techniques for Ensuring Software Security
For the first lecture, I will talk about the benefits of type- and spatial-safety as a means of ruling out memory-based attacks. I will talk about ongoing work on the Checked C project that aims to retrofit C programs to be spatially safe, incrementally. I will also talk about my prior work on the Cyclone programming language that aimed to avoid temporal safety violations (e.g., "use after free"). This work is now reflected in many ways in the Rust programming language.
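The spatial-safety idea can be illustrated in plain C++ (this is an analogy, not Checked C syntax; Checked C attaches bounds to pointer types in C itself): every access through a bounds-carrying abstraction is checked, so an out-of-bounds index becomes a clean failure rather than silent memory corruption.

```cpp
#include <cassert>
#include <stdexcept>
#include <vector>

// Bounds-checked access: std::vector::at verifies the index against
// the vector's size, analogous to how a Checked C array pointer
// carries and enforces its bounds.
int checked_read(const std::vector<int>& buf, std::size_t i) {
    return buf.at(i);  // throws std::out_of_range instead of corrupting memory
}

// An out-of-bounds read is detected and contained, not exploited.
bool demo_out_of_bounds() {
    std::vector<int> buf = {1, 2, 3};
    try {
        checked_read(buf, 10);  // far past the end of the buffer
    } catch (const std::out_of_range&) {
        return true;  // spatial-safety violation caught at runtime
    }
    return false;
}
```

The retrofit challenge Checked C tackles is getting this guarantee incrementally in existing C code, where raw pointers carry no bounds at all.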
For my second lecture, I will talk about random (or "fuzz") testing, which aims to find bugs in insecure programs. This is a complementary technology to Checked C, for the parts of the program not protected by Checked C's checks. Fuzzers appear to be very successful at finding bugs, so it's important to understand why they work. Unfortunately, the existing research literature does not provide uniformly good scientific evidence on this front. We surveyed the recent research literature and assessed the experimental evaluations carried out by 32 fuzzing papers. We found problems in every evaluation we considered. We then performed our own extensive experimental evaluation using an existing fuzzer. Our results showed that the general problems we found in existing experimental evaluations can indeed translate to actual wrong or misleading assessments. We developed some guidelines, and open problems, that may address these issues.
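The core random-testing loop that fuzzing evaluations measure can be sketched in a few lines (a deliberately tiny toy; real fuzzers such as AFL or libFuzzer add coverage feedback, corpus mutation, and crash triage, and real targets have input spaces vastly too large for blind sampling):

```cpp
#include <cassert>
#include <cstdint>
#include <random>

// Simulated target: "crashes" on one specific input byte. A 1-byte
// input space is trivially searchable; the point is only the loop shape.
bool buggy_target(uint8_t b) {
    return b == 0x42;  // simulated crash condition
}

// Blind random fuzzing: draw random inputs, run the target, stop on
// the first "crash". Seeded RNG makes runs reproducible, one of the
// evaluation hygiene points the lecture's guidelines emphasize.
bool fuzz(unsigned seed, int iterations, uint8_t* crashing_input) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> byte(0, 255);
    for (int i = 0; i < iterations; ++i) {
        uint8_t in = static_cast<uint8_t>(byte(rng));
        if (buggy_target(in)) { *crashing_input = in; return true; }
    }
    return false;
}
```

Because outcomes depend on random seeds and trial budgets, sound fuzzer evaluations need multiple runs, fixed budgets, and statistical comparison, which is precisely where the surveyed papers fell short.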
Bio
Mathias Payer is a security researcher and an assistant professor in computer science at Purdue University, leading the HexHive group. His research focuses on protecting applications in the presence of vulnerabilities, with a focus on memory corruption. He is interested in system security, binary exploitation, user-space software-based fault isolation, binary translation/recompilation, and (application) virtualization.
Before joining Purdue in 2014, he spent two years as a postdoc in Dawn Song's BitBlaze group at UC Berkeley. He graduated from ETH Zurich with a Dr. sc. ETH in 2012, focusing on low-level binary translation and security. He analyzed different exploit techniques and wondered how we can enforce integrity for a subset of data (e.g., code pointers). All prototype implementations are open-source. In 2014, he founded the b01lers Purdue CTF team.
Type Sanitization and Object Integrity for C++
Applications written in C++ are prone to memory corruption and type confusion vulnerabilities as the C++ programming language enforces neither memory nor type safety. Memory corruption is caused, e.g., by buffer overflows or using already freed memory. Type confusion happens if the runtime type of an object differs from the statically expected type due to an incorrect type cast. Type casting is a core feature of C++ that enables modularity, but for performance reasons the majority of casts are checked only statically at compile time, because the C++ runtime type checking mechanism is expensive.
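The gap between static and dynamic casts is easy to demonstrate (a minimal sketch of the vulnerability class, not taken from any particular CVE):

```cpp
#include <cassert>

// A static_cast downcast is not checked at runtime: the compiler
// trusts the programmer, so code may treat a Base object as a
// Derived one and access fields that do not exist (type confusion).
struct Base { virtual ~Base() = default; int base_field = 1; };
struct Derived : Base { int secret = 42; };

// dynamic_cast consults RTTI and returns nullptr on a bad downcast,
// but it is often avoided because the runtime check is expensive.
bool cast_is_valid(Base* b) {
    return dynamic_cast<Derived*>(b) != nullptr;
}

// Unsafe pattern (shown, not executed):
//   Base base;
//   Derived* d = static_cast<Derived*>(&base);  // compiles fine
//   int leaked = d->secret;                     // undefined behavior
```

An attacker who controls the confused object's adjacent memory can often turn such a bad cast into an information leak or control-flow hijack, which motivates the runtime checks discussed next.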
Due to such low level errors, applications can be vulnerable to exploits which allow an adversary to escalate privileges, execute arbitrary code, leak information, or simply cause denial of service to legitimate requests. The two fundamental approaches to secure software are either finding and fixing the underlying bugs or to mitigate the effects of vulnerabilities.
In the first session, we will focus on bug-finding techniques that use compiler-based instrumentation to discover type safety violations and thereby protect against type confusion vulnerabilities. We introduce HexType, a mechanism that enforces type safety for C++. HexType makes all type casts explicit. Upon allocation, metadata is recorded for each newly created object. Every cast (explicit, implicit, or static) results in a runtime check against the recorded metadata, and type violations result in program termination. Our type safety mechanism integrates into existing testing frameworks and can be combined with a fuzzing framework to discover new vulnerabilities.
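The allocation-time metadata idea can be sketched as follows (a toy illustration, not HexType's actual implementation: a real checker instruments the compiler, handles the class hierarchy so that legal upcasts and downcasts pass, and covers all allocation paths):

```cpp
#include <cassert>
#include <typeinfo>
#include <unordered_map>

// Note: no vtables, so RTTI-based dynamic_cast would not even apply
// here; the external metadata makes the check possible anyway.
struct Base { int x = 0; };
struct Derived : Base { int y = 0; };

// Side table mapping each tracked object to its true allocated type.
std::unordered_map<const void*, const std::type_info*> g_typemap;

// Allocation hook: record the object's type at creation time.
template <typename T> T* tracked_new() {
    T* p = new T();
    g_typemap[p] = &typeid(T);
    return p;
}

// Cast check: compare the requested type against the recorded one.
// (Toy simplification: exact-type match only; a real tool accepts
// any type-compatible cast within the hierarchy.)
template <typename To> bool checked_cast_ok(const void* p) {
    auto it = g_typemap.find(p);
    return it != g_typemap.end() && *it->second == typeid(To);
}
```

On a violation, an instrumented build would report or terminate, which is what lets the mechanism surface type confusion bugs when combined with fuzzing.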
Even with substantial fuzzing efforts, some bugs will remain undiscovered. In the second session, we will discuss compiler-based mitigations. We have developed a mechanism to protect against control-flow hijacking attacks that leverages the unique properties of C++ to augment existing Control-Flow Integrity (CFI) mechanisms. CFI is an emerging mitigation that is already deployed in Google Chrome and Microsoft Edge and is on the verge of wide deployment. CFI computes, for each call site, the set of allowed targets using a static analysis. At runtime, the actual target is compared against the allowed set, and the program is terminated on a mismatch. The static analysis is prone to over-approximation. We propose to make the analysis dynamic by guaranteeing the integrity of object types at runtime, thereby improving the precision of the analysis and protecting against a set of code-reuse attacks that CFI would miss.
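A CFI check at an indirect call site can be modeled like this (a conceptual sketch; production CFI such as Clang's `-fsanitize=cfi` encodes the allowed-target test in constant time rather than a set lookup, and aborts on violation):

```cpp
#include <cassert>
#include <set>

using Handler = int (*)(int);

int on_open(int fd) { return fd + 1; }
int on_close(int fd) { return fd - 1; }
int evil(int) { return 31337; }  // must never be callable from this site

// The compiler's static analysis computes, per call site, the set of
// legitimate targets. Over-approximation means extra entries here,
// which object-type integrity at runtime can prune away.
const std::set<Handler> kAllowedTargets = {on_open, on_close};

int guarded_indirect_call(Handler h, int arg, bool* violation) {
    if (kAllowedTargets.count(h) == 0) {  // the CFI check
        *violation = true;                // real CFI would abort()
        return -1;
    }
    *violation = false;
    return h(arg);  // target validated, proceed with the call
}
```

A hijacked function pointer (e.g., pointing at `evil` after a heap corruption) fails the check, so the attacker cannot divert control flow even though the pointer itself was overwritten.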
National University of Singapore
Bio
Abhik Roychoudhury is a Professor of Computer Science at the National University of Singapore. His research focuses on software testing and analysis, software security, and trustworthy software construction. His research group has built scalable techniques for testing, debugging, and repair of programs using systematic semantic analysis. The capability to automatically repair programs at a large scale contributes to the vision of self-healing software. He has been an ACM Distinguished Speaker (2013–19). He is currently leading the TSUNAMi center, a large five-year targeted research effort on trustworthy software funded by the National Research Foundation. He is also leading the Singapore Cyber-security Consortium, a consortium of over 30 companies in the cyber-security space engaging with academia for research and collaboration.
He has been Program Chair of ACM International Symposium on Software Testing and Analysis (ISSTA) 2016, General Chair of ACM SIGSOFT Symposium on Foundations of Software Engineering (FSE) 2022, and an Editorial Board member of IEEE Transactions on Software Engineering (TSE) since 2014. Abhik received his Ph.D. in Computer Science from the State University of New York at Stony Brook in 2000.
Symbolic Execution for Vulnerability Detection and Repair
Symbolic analysis of programs was first studied for the purpose of program verification. In recent decades, symbolic execution has witnessed renewed interest owing to the maturity of Satisfiability Modulo Theories (SMT) solvers. Powerful dynamic analysis tools leverage these back-end solvers to systematically navigate large search spaces, enabling the application of symbolic analysis to test generation and vulnerability detection. In this tutorial, we will first study the power of state-of-the-art symbolic-execution-based approaches for various security vulnerability analysis tasks, such as finding zero-day vulnerabilities and reproducing crashes. We then compare symbolic analysis approaches with grey-box fuzz testing approaches (which are routinely employed in industrial settings). Through such comparison, we can better understand the state of practice of symbolic execution in terms of its ability to detect software vulnerabilities.
In the later part of the tutorial, we will conceptualize the use of symbolic execution approaches and tools for purposes beyond guiding search. We will sketch the use of symbolic execution to infer specifications of intended program behavior, which is done by analyzing a buggy program against selected tests. Such specification inference capability can be combined with program synthesis techniques to automatically repair programs. Automated program repair via symbolic execution goes beyond search-based approaches, which lift patches from elsewhere in the program. Such an approach can construct "imaginative" patches, serve as a test bed for the grand challenge of automated programming, and contribute to the vision of self-healing software.
Bio
Xiangyu Zhang is a professor and University Scholar at Purdue University. He works on dynamic and static software analysis and their applications in software security, forensic analysis, software debugging, and testing. He has received the 2006 ACM SIGPLAN Distinguished Doctoral Dissertation Award, an NSF CAREER Award, ACM SIGSOFT Distinguished Paper Awards, the Best Student Paper Award at USENIX Security '14, the Best Paper Award at CCS '15, and Distinguished Paper Awards at NDSS '16 and USENIX Security '17. He has also co-supervised a dissertation that won the ACM SIGSAC Doctoral Dissertation Award in 2017.
Forensics of Cyber Attacks
Cyber attacks are becoming increasingly targeted and stealthy. For example, advanced targeted attacks (also called advanced persistent threats or APTs) tend to be stealthy, low-and-slow, and sometimes disguised via psychosocial campaigns. Such trends in cyber attacks call not only for proactive technologies that prevent or deter attacks, but also reactive technologies that detect and investigate ongoing or finished attacks.
In this mini-course, we focus on the investigation, or forensics, of cyber attacks, with the goals of (i) understanding the intent, strategy, and detailed steps and targets of an attack, (ii) collecting various types of evidence for possible legal proceedings, and (iii) revealing hidden impacts of an attack to minimize loss and to prevent similar attacks in the future. We will cover two kinds of cyber forensics techniques: (1) temporal forensics, which analyzes log data to identify events related to an attack and their causal relations; and (2) memory forensics, which analyzes the memory snapshot of a system to extract critical information. Students will get a chance to learn the state of the art and conduct labs on realistic attack forensics.
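The causal-relation idea behind temporal forensics can be sketched with a toy provenance graph (hypothetical event names; real systems must handle timestamps, object versioning, and dependency explosion):

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// Each log event records a causal influence: a process wrote a file,
// a file was read by a process, etc. (src causally affects dst).
struct Event { std::string src, dst; };

// Backward tracing: starting from a suspicious artifact, repeatedly
// pull in everything that causally influenced it, yielding the
// attack's provenance (how the compromise reached this point).
std::set<std::string> backward_trace(const std::vector<Event>& log,
                                     const std::string& suspicious) {
    std::set<std::string> tainted = {suspicious};
    bool changed = true;
    while (changed) {                      // fixpoint over causal edges
        changed = false;
        for (const Event& e : log)
            if (tainted.count(e.dst) && !tainted.count(e.src)) {
                tainted.insert(e.src);
                changed = true;
            }
    }
    return tainted;
}
```

Forward tracing from the initial entry point (the same loop with edges reversed) then reveals the attack's hidden impact, i.e., everything the compromised entities went on to touch.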
Keynote Speakers
We are pleased to announce the following keynote speakers, each of whom will provide an opening address on one of the first three days of the summer school.
Bio
Cristina is the Director of Oracle Labs Australia and an Architect at Oracle. Headquartered in Brisbane, the Lab focuses on program analysis as it applies to finding vulnerabilities in software and enhancing the productivity of developers worldwide.
Prior to founding Oracle Labs Australia, Cristina was the Principal Investigator of the Parfait bug-checking project at Sun Microsystems, later Oracle. Today, Oracle Parfait has become the de facto tool used by thousands of Oracle developers for bug and vulnerability detection in real-world, commercial-scale C/C++/Java applications. The success of the Parfait tool is founded on the pioneering work in advancing static program analysis techniques carried out by Cristina’s team of researchers and engineers at Oracle Labs Australia.
Prior to her work at Oracle and Sun Microsystems, Cristina held teaching posts at major Australian Universities, co-edited Going Digital, a landmark book on cybersecurity, and served on the executive committees of ACM SIGPLAN and IEEE Reverse Engineering.
Oracle Parfait — The Flavour of Real World Vulnerability Detection
The Parfait static code analysis tool focuses on detecting vulnerabilities in code written in the C, C++, Java, and PL/SQL languages. Its focus has been on four key attributes expected of a commercial tool that lives in a commercial organisation:
- precision of results (i.e., high true positive rate),
- scalability (i.e., being able to quickly scan millions of lines of code),
- incremental analysis (i.e., run over deltas of the code quickly), and
- usability (i.e., ease of integration into standard build processes and reporting).
In this presentation, we’ll sample a flavour of Parfait. We explore some real-world challenges faced in the creation of a robust vulnerability detection tool, we look into two examples of vulnerabilities that severely affected the Java platform in 2012-13 and most machines in 2017-18, and we conclude by recounting what matters to developers for integration into today’s continuous integration and continuous delivery (CI/CD) pipelines.
University of New South Wales
Bio
Gernot Heiser is Scientia Professor and John Lions Chair of Operating Systems at UNSW Sydney and Chief Research Scientist at Data61, CSIRO. His research interests are in operating systems, real-time systems, security, and safety. He is the founder and past leader of Data61’s Trustworthy Systems group, which pioneered large-scale formal verification of systems code, specifically the design, implementation, and formal verification of the seL4 microkernel; seL4 is now being designed into real-world security- and safety-critical systems. Heiser's former company Open Kernel Labs, acquired by General Dynamics in 2012, marketed the OKL4 microkernel, which shipped on billions of mobile wireless chips and more recently ships on the secure enclave processor of all iOS devices. Gernot is a Fellow of the ACM, the IEEE, and the Academy of Technology and Engineering (ATSE).
Stop the Leaks! Towards Provable Information Security with seL4
Preventing unauthorised information flow in computer systems is a problem that has been recognised at least since the 1970s, and is still not solved satisfactorily. With the move from time-shared central computers to personal computing platforms, interest in the topic waned, until the shared-platform model re-emerged in the form of public clouds, as well as mobile devices running a large number of apps from untrustworthy sources. This has put information-flow security, and especially covert information flows, back on the agenda.
The last decade saw significant progress in fundamental approaches to computer security, especially with the seL4 microkernel, the first operating system with a formal proof of implementation correctness. Subsequent work proved information-flow properties about seL4, including its ability to prevent information flow through covert storage channels. This leaves timing channels as the major security threat that must be resolved.
In this talk I will present an introduction on seL4 and its main concepts, and give an overview of its verification story and the information security challenges it does and does not address. I will explain some of the challenges presented by the present programming model of computer systems, with the instruction-set architecture (ISA) as the hardware-software contract. I will argue that the ISA abstracts too much for security, and how it could be refined into a contract that makes it possible to reason about the absence of timing channels. I will also talk about work in progress on mechanisms that allow seL4 to prevent information leakage through timing channels, in a way that should be verifiable against a suitable hardware-software contract.
The Australian National University
Bio
Professor Schmidt is the 12th Vice-Chancellor of The Australian National University (ANU). Winner of the 2011 Nobel Prize in Physics, Professor Schmidt was an astrophysicist at the ANU Mount Stromlo Observatory and Research School of Astronomy and Astrophysics before becoming Vice-Chancellor.
Acknowledgement of Country and Opening Remarks
We acknowledge and celebrate the First Australians on whose traditional lands we meet, and pay our respects to the elders past, present and emerging. We especially pay our respects to the elders of the Ngunnawal people whose traditional lands include the ANU.
University of Adelaide
Bio
Yuval Yarom is a senior lecturer in the School of Computer Science at the University of Adelaide. His main research interests are computer security and cryptography, with a current focus on microarchitectural attacks and their mitigation. He received his PhD from the University of Adelaide and an M.Sc. and a B.Sc. from the Hebrew University of Jerusalem.
Speculative Execution Considered Harmful
Speculative execution is a processor design feature that improves performance by prematurely executing instructions based on a prediction that these will be required by the running program. The recently published Spectre and Meltdown attacks show that misprediction of the instructions to be executed can result in devastating information leaks, which can completely erode the confidentiality offered by hardware protection mechanisms. This talk describes some of the published speculative execution attacks, including their cause, exploitation techniques, and countermeasures.
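The shape of the classic Spectre-v1 ("bounds check bypass") gadget is shown below. This sketch only illustrates the vulnerable code pattern; a working attack additionally needs branch-predictor training, control over the out-of-bounds index, and cache-timing measurement of `probe_array`, none of which are shown here.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

uint8_t array1[16] = {1, 2, 3, 4};
std::size_t array1_size = 16;
uint8_t probe_array[256 * 512];  // attacker later times accesses to this

// Architecturally, the bounds check makes out-of-bounds reads
// impossible. Under misprediction, the CPU may speculatively execute
// the body with an out-of-bounds x, loading a secret byte and leaving
// a secret-dependent line of probe_array in the cache; the rollback
// discards the registers but not the cache state.
uint8_t victim(std::size_t x) {
    if (x < array1_size)                       // the bounds check...
        return probe_array[array1[x] * 512];   // ...speculatively bypassed
    return 0;                                  // architectural result for bad x
}
```

Mitigations therefore target exactly this pattern: speculation barriers (`lfence`) after the bounds check, index masking so even speculative accesses stay in bounds, or compiler-inserted speculative load hardening.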