Recently, the synergy between artificial intelligence (AI) and security has gained increasing prominence and significance. This evolution arises naturally from the need to make security mechanisms both more effective and more efficient. Among the many areas of security benefiting from AI's integration, cryptography stands as a notable field. We are already witnessing the application of AI techniques to address several problems in cryptography, such as enhancing defenses against implementation attacks and hardware Trojans, and investigating attacks on Physical Unclonable Functions (PUFs). Beyond AI's contributions to cryptography, the use of cryptography to address security and privacy issues in AI systems is emerging as a pivotal subject in its own right. The mounting frequency of attacks on AI systems urges us to explore potential research avenues involving cryptographic strategies to counteract these threats. Our objective is to convene experts from both academic and industrial backgrounds, each contributing to diverse facets of cryptography and AI, to facilitate knowledge exchange and foster collaborative efforts. Of particular interest is the exploration of the transferability of techniques across different cryptographic applications and the strengthening of AI security mechanisms. Furthermore, we will delve into recent developments, including those stemming from previous AICrypt events, to provide insights into the evolving landscape of this field.
Download the Call for Papers
Authors interested in giving a contributed talk at this workshop are invited to submit an extended abstract of at most 2 pages (excluding references) via EasyChair.
The topics of the workshop encompass all aspects of the intersection of AI and cryptography, including but not limited to:
We encourage researchers working on all aspects of AI and cryptography to take this opportunity to share their work at AICrypt and participate in the discussions. Authors are invited to submit an extended abstract using the EasyChair submission system.
Submitted abstracts for contributed talks will be reviewed by the workshop organizers for suitability and interest to the AICrypt audience. The workshop has no formal proceedings, so authors may submit extended abstracts of work submitted to or recently published at other venues, or of work in progress that they plan to submit elsewhere.
The authors of accepted papers will be invited to submit an extended version of their paper to appear (after a new round of reviewing) in a post-proceedings volume to be published by Springer.
Every accepted submission must have at least one author registered for the workshop. All submitted abstracts must follow the original LNCS format, with a page limit of 2 pages (excluding references). Abstracts should be submitted electronically in PDF format.
EXTENDED submission deadline!
Abstract submission deadline: APR 15, 2024 (previously APR 5, 2024)
Notification to authors: APR 19, 2024
Workshop date: May 26, 2024
Workshop registration goes through the Eurocrypt registration process. Check this page for further information.
COSIC, KU Leuven, Belgium
The breakthroughs in AI have led to the belief that AI will revolutionize society and will result in a different approach towards cybersecurity. However, researchers caution that beyond the hype there are significant privacy risks, potential abuse by malicious actors, and the possibility of incorrect or unfair decisions made by AI systems. Privacy-preserving machine learning uses techniques such as computing on encrypted data to mitigate these privacy risks. Additionally, legal frameworks such as the EU AI Act and the Council of Europe's AI treaty are being developed to address other issues. This talk presents a perspective on these developments.
Prof. Bart Preneel, a full professor at KU Leuven, leads the renowned COSIC research group. His expertise lies in applied cryptography, cybersecurity, and privacy. Prof. Preneel has delivered over 150 invited talks across 50 countries and received the RSA Award for Excellence in Mathematics (2014) and the ESORICS Outstanding Research Award (2017). He served as president of the IACR (International Association for Cryptologic Research) and is also a fellow of the IACR. Prof. Preneel consults for industry and government on cybersecurity and privacy; he founded the mobile authentication startup nextAuth and holds roles in Approach Belgium, Tioga Capital Partners, and Nym Technologies. Actively engaged in cybersecurity policy, he contributes to ENISA as an Advisory Group member.
Weizmann Institute of Science, Rehovot, Israel
In this talk I will describe how to plant novel types of backdoors in any facial recognition model based on the popular architecture of deep Siamese neural networks, by mathematically changing a small fraction of its weights (i.e., without using any additional training or optimization). These backdoors force the system to err only on specific persons who are preselected by the attacker. For example, we show how such a backdoored system can take any two images of a particular person and decide that they represent different persons (an anonymity attack), or take any two images of a particular pair of persons and decide that they represent the same person (a confusion attack), with almost no effect on the correctness of its decisions for other persons. Uniquely, we show that multiple backdoors can be installed independently, with almost no interference, by multiple attackers who may not be aware of each other's existence.
Joint work with Irad Zehavi and Roee Nitzan.
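The flavor of such an attack can be illustrated with a deliberately simplified toy, not the talk's actual construction: a linear "Siamese branch" over random identity directions, with made-up parameters (`d`, `sigma`, `k`, and the 0.5 decision threshold are all illustrative choices). Editing the weight matrix to amplify every direction orthogonal to one person's identity direction makes that person's noise dominate their embedding, so their own image pairs stop matching (an anonymity attack), while other persons' similarity scores are barely affected.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64          # toy embedding dimension
sigma = 0.05    # per-coordinate "image" noise
threshold = 0.5 # cosine similarity above this => "same person"

def unit(v):
    return v / np.linalg.norm(v)

def cosine(a, b):
    return float(np.dot(unit(a), unit(b)))

# Random identity directions for the attacker's target and a bystander.
target = unit(rng.standard_normal(d))
other = unit(rng.standard_normal(d))

def images(person, n=2):
    """n noisy 'images' of the same person."""
    return [person + sigma * rng.standard_normal(d) for _ in range(n)]

# Clean Siamese branch: the identity embedding.
W_clean = np.eye(d)

# Backdoored weights, obtained by editing W directly (no retraining):
# amplify all directions orthogonal to the target's identity direction.
# The target's signal lies along that direction, so noise now dominates
# the target's embeddings; for everyone else, signal and noise scale
# together and cosine similarity is nearly unchanged.
k = 10.0
W_bad = np.eye(d) + k * (np.eye(d) - np.outer(target, target))

def same_person(W, a, b):
    return cosine(W @ a, W @ b) > threshold

t1, t2 = images(target)
o1, o2 = images(other)

print(same_person(W_clean, t1, t2), same_person(W_clean, o1, o2))
print(same_person(W_bad, t1, t2), same_person(W_bad, o1, o2))
```

In this toy run, the clean model matches both persons' image pairs, while the backdoored model rejects only the target's pair: the weight edit is small and targeted, yet it flips the decision for exactly one preselected identity.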
Adi Shamir is an Israeli cryptographer and inventor. He is a co-inventor of the Rivest–Shamir–Adleman (RSA) algorithm (along with Ron Rivest and Len Adleman) and of the Feige–Fiat–Shamir identification scheme (along with Uriel Feige and Amos Fiat), one of the inventors of differential cryptanalysis, and has made numerous contributions to the fields of cryptography and computer science. In 2002, he received the Turing Award, together with Rivest and Adleman, in recognition of his contributions to cryptography. He is now a member of the Faculty of Mathematics and Computer Science at the Weizmann Institute of Science.
Google & Columbia University, New York, USA
AI is based on statistical approximations and attempts to extract a big picture from sampling. Cryptography is based on careful models, exact assumptions, and proofs. The two areas therefore seem far apart. In this talk, I will show that there are enough common issues that need attention and further research.
Moti Yung is a computer scientist, cryptographer, and information security researcher. Yung's areas of expertise include cryptovirology and kleptography. He is a Security and Privacy Research Scientist with Google and an Adjunct Research Faculty member at the Computer Science Department at Columbia University, where he earned his Ph.D.
Yung's contributions to research and development treat science and technology holistically: from the theoretical mathematical foundations, via conceptual mechanisms which typify computer science, to participation in the design and development of industrial products. His industry experience includes IBM Research, Certco/Bankers Trust, RSA Laboratories (EMC), and Snap.
His published work (articles, patents, a book, and edited books) includes collaborations with more than 300 co-authors. Yung's work has anticipated the future needs of secure systems and analyzed emerging threats, leading to basic theoretical and applied notions such as ransomware attacks, cryptosystem subversion, concurrent sessions in authentication protocols, strong (chosen-ciphertext) secure encryption, and digital signatures from simplified cryptography.
The program starts at 09:00 am, CEST time (UTC + 2).
| TIME CEST (UTC+2) | SESSION/TITLE |
|---|---|
| 09:00 - 10:40 | Session 1: Side-Channel Analysis |
| 09:00 - 10:00 | Opening and Keynote Talk: AI: The Good, the Bad and the Ugly (Bart Preneel) |
| 10:00 - 10:20 | The more, the merrier? A step-by-step inter-device analysis for transfer learning side-channel attacks (Lizzy Grootjen, Zhuoran Liu and Ileana Buhan) |
| 10:20 - 10:40 | Exploring DNN Weights Extraction via Deep Learning Physical Side-Channel Analysis (Dirk Lauret and Zhuoran Liu) |
| 10:40 - 11:00 | Coffee Break |
| 11:00 - 12:40 | Session 2: Homomorphic Encryption and Verification of ML |
| 11:00 - 11:20 | Encrypted Image Classification with Low Memory Footprint using Fully Homomorphic Encryption (Lorenzo Rovida and Alberto Leporati) |
| 11:20 - 11:40 | Homomorphic WiSARDs: Efficient Weightless Neural Network training over encrypted data (Leonardo Neumann, Antonio Guimarães, Diego F. Aranha and Edson Borin) |
| 11:40 - 12:00 | PrivaTree: Private Decision Tree Evaluation by means of Homomorphic Encryption (Marina Checri, Aymen Boudguiga, Jean-Paul Bultel, Olive Chakraborty, Pierre-Emmanuel Clet and Renaud Sirdey) |
| 12:00 - 12:20 | Efficient Verification Framework for Large-Scale Machine Learning Models (Artem Grigor, Anton Kravchenko and Georg Wiese) |
| 12:20 - 12:40 | Ensuring Privacy and Robustness in Computation of Machine Learning Algorithms (Chrysa Oikonomou and Katerina Sotiraki) |
| 12:40 - 13:30 | Lunch Break |
| 13:30 - 14:50 | Session 3: Federated Learning |
| 13:30 - 14:30 | Keynote Talk: Facial Misrecognition Systems (Adi Shamir) |
| 14:30 - 14:50 | Non-Interactive Secure Aggregation and its Applications to Federated Learning (Harish Karthikeyan and Antigoni Polychroniadou) |
| 14:50 - 15:20 | Coffee Break |
| 15:20 - 17:00 | Session 4: Neural distinguishers & PUFs |
| 15:20 - 16:20 | Keynote Talk: Touching Points of Cryptography and AI (Moti Yung) |
| 16:20 - 16:40 | 5 Years of Neural Distinguishers (David Gerault and Anna Hambitzer) |
| 16:40 - 17:00 | Provable Learnability Assessment of PUFs in Pre-silicon Phase (Durba Chatterjee) |