202 results sorted by ID

2024/1196 (PDF) Last updated: 2024-07-24
Client-Aided Privacy-Preserving Machine Learning
Peihan Miao, Xinyi Shi, Chao Wu, Ruofan Xu
Cryptographic protocols

Privacy-preserving machine learning (PPML) enables multiple distrusting parties to jointly train ML models on their private data without revealing any information beyond the final trained models. In this work, we study the client-aided two-server setting where two non-colluding servers jointly train an ML model on the data held by a large number of clients. By involving the clients in the training process, we develop efficient protocols for training algorithms including linear regression,...
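
As background for readers new to this setting, the sketch below shows the primitive such two-server protocols build on: 2-out-of-2 additive secret sharing, where each client splits its input so that neither server alone learns anything. This is a minimal plaintext-Python illustration, not the paper's protocol, and the modulus is an arbitrary choice.

    import secrets

    P = 2**61 - 1  # a Mersenne prime field; an assumed parameter for illustration

    def share(x):
        """Split x into two additive shares, one per server."""
        r = secrets.randbelow(P)
        return r, (x - r) % P

    def reconstruct(s0, s1):
        return (s0 + s1) % P

    # Each client secret-shares its input between the two servers; each
    # server locally adds the shares it holds, so together they end up with
    # a sharing of the sum without ever seeing any individual input.
    inputs = [12, 7, 30]
    shares = [share(x) for x in inputs]
    sum0 = sum(s0 for s0, _ in shares) % P
    sum1 = sum(s1 for _, s1 in shares) % P
    assert reconstruct(sum0, sum1) == sum(inputs) % P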

2024/1151 (PDF) Last updated: 2024-07-15
Privacy-Preserving Data Deduplication for Enhancing Federated Learning of Language Models
Aydin Abadi, Vishnu Asutosh Dasu, Sumanta Sarkar
Applications

Deduplication is a vital preprocessing step that enhances machine learning model performance and saves training time and energy. However, enhancing federated learning through deduplication poses challenges, especially regarding scalability and potential privacy violations if deduplication involves sharing all clients’ data. In this paper, we address the problem of deduplication in a federated setup by introducing a pioneering protocol, Efficient Privacy-Preserving Multi-Party Deduplication...

2024/1065 (PDF) Last updated: 2024-06-30
AITIA: Efficient Secure Computation of Bivariate Causal Discovery
Truong Son Nguyen, Lun Wang, Evgenios M. Kornaropoulos, Ni Trieu
Cryptographic protocols

Researchers across various fields seek to understand causal relationships but often find controlled experiments impractical. To address this, statistical tools for causal discovery from naturally observed data have become crucial. Non-linear regression models, such as Gaussian process regression, are commonly used in causal inference but have limitations due to high costs when adapted for secure computation. Support vector regression (SVR) offers an alternative but remains costly in an...

2024/1047 (PDF) Last updated: 2024-07-01
Improved Multi-Party Fixed-Point Multiplication
Saikrishna Badrinarayanan, Eysa Lee, Peihan Miao, Peter Rindal
Cryptographic protocols

Machine learning is widely used for a range of applications and is increasingly offered as a service by major technology companies. However, the required massive data collection raises privacy concerns during both training and inference. Privacy-preserving machine learning aims to solve this problem. In this setting, a collection of servers secret share their data and use secure multi-party computation to train and evaluate models on the joint data. All prior work focused on the scenario...
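
For orientation (this is generic background, not the paper's construction): fixed-point values are encoded as scaled integers, and every multiplication must be followed by a truncation that removes the extra scaling factor; performing that truncation on secret-shared values is precisely the operation such protocols optimize. A plaintext sketch:

    F = 16  # number of fractional bits; an assumed precision

    def encode(x):
        return round(x * (1 << F))

    def decode(v):
        return v / (1 << F)

    a, b = encode(1.5), encode(-2.25)
    prod = (a * b) >> F   # rescale after multiplication; in MPC this
                          # truncation must be done on shares
    print(decode(prod))   # -3.375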

2024/987 (PDF) Last updated: 2024-07-17
CoGNN: Towards Secure and Efficient Collaborative Graph Learning
Zhenhua Zou, Zhuotao Liu, Jinyong Shan, Qi Li, Ke Xu, Mingwei Xu
Applications

Collaborative graph learning represents a learning paradigm where multiple parties jointly train a graph neural network (GNN) using their own proprietary graph data. To honor the data privacy of all parties, existing solutions for collaborative graph learning are either based on federated learning (FL) or secure machine learning (SML). Although promising in terms of efficiency and scalability due to their distributed training scheme, FL-based approaches fall short in providing provable...

2024/942 (PDF) Last updated: 2024-06-12
Let Them Drop: Scalable and Efficient Federated Learning Solutions Agnostic to Client Stragglers
Riccardo Taiello, Melek Önen, Clémentine Gritti, Marco Lorenzi
Applications

Secure Aggregation (SA) stands as a crucial component in modern Federated Learning (FL) systems, facilitating collaborative training of a global machine learning model while protecting the privacy of individual clients' local datasets. Many existing SA protocols described in the FL literature operate synchronously, leading to notable runtime slowdowns due to the presence of stragglers (i.e. late-arriving clients). To address this challenge, one common approach is to consider stragglers as...
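
To see why stragglers are painful, the toy sketch below simulates classic pairwise-mask secure aggregation: the server learns only the sum because each pair's masks cancel, so a client that drops out leaves its masks uncancelled. This illustrates the general SA idea, not the paper's straggler-tolerant protocol.

    import itertools, secrets

    P = 2**31 - 1

    def aggregate(updates):
        n = len(updates)
        # one shared mask per client pair: added by the lower-indexed
        # client, subtracted by the higher-indexed one
        masks = {(i, j): secrets.randbelow(P)
                 for i, j in itertools.combinations(range(n), 2)}
        masked = []
        for i, x in enumerate(updates):
            y = x
            for (a, b), m in masks.items():
                if a == i:
                    y = (y + m) % P
                if b == i:
                    y = (y - m) % P
            masked.append(y)
        # the server only ever sees masked values; the masks cancel in the sum
        return sum(masked) % P

    print(aggregate([3, 5, 9]))  # 17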

2024/862 (PDF) Last updated: 2024-05-31
BackdoorIndicator: Leveraging OOD Data for Proactive Backdoor Detection in Federated Learning
Songze Li, Yanbo Dai
Applications

In a federated learning (FL) system, decentralized data owners (clients) could upload their locally trained models to a central server, to jointly train a global model. Malicious clients may plant backdoors into the global model through uploading poisoned local models, causing misclassification to a target class when encountering attacker-defined triggers. Existing backdoor defenses show inconsistent performance under different system and adversarial settings, especially when the malicious...

2024/852 (PDF) Last updated: 2024-05-30
Breaking Indistinguishability with Transfer Learning: A First Look at SPECK32/64 Lightweight Block Ciphers
Jimmy Dani, Kalyan Nakka, Nitesh Saxena
Attacks and cryptanalysis

In this research, we introduce MIND-Crypt, a novel attack framework that uses deep learning (DL) and transfer learning (TL) to challenge the indistinguishability of block ciphers, specifically the SPECK32/64 encryption algorithm in cipher block chaining (CBC) mode, against known-plaintext attacks (KPA). Our methodology includes training a DL model with ciphertexts of two messages encrypted using the same key. The selected messages have the same byte-length and differ by only one bit at the binary...

2024/849 (PDF) Last updated: 2024-07-09
Fast, Large Scale Dimensionality Reduction Schemes Based on CKKS
Haonan Yuan, Wenyuan Wu, Jingwei Chen
Applications

The proliferation of artificial intelligence and big data has resulted in a surge in data demand and increased data dimensionality. This escalation has consequently heightened the costs associated with storage and processing. Concurrently, the confidential nature of data collected by various institutions, which cannot be disclosed due to personal privacy concerns, has exacerbated the challenges associated with data analysis and machine learning model training. Therefore, designing a secure...

2024/703 (PDF) Last updated: 2024-05-07
An Efficient and Extensible Zero-knowledge Proof Framework for Neural Networks
Tao Lu, Haoyu Wang, Wenjie Qu, Zonghui Wang, Jinye He, Tianyang Tao, Wenzhi Chen, Jiaheng Zhang
Applications

In recent years, cloud vendors have started to supply paid services for data analysis by providing interfaces of their well-trained neural network models. However, customers lack tools to verify whether outcomes supplied by cloud vendors are correct inferences from particular models, in the face of lazy or malicious vendors. The cryptographic primitive called zero-knowledge proof (ZKP) addresses this problem. It enables the outcomes to be verifiable without leaking information about the...

2024/689 (PDF) Last updated: 2024-07-10
Automated Creation of Source Code Variants of a Cryptographic Hash Function Implementation Using Generative Pre-Trained Transformer Models
Elijah Pelofske, Vincent Urias, Lorie M. Liebrock
Implementation

Generative pre-trained transformers (GPTs) are a type of large language model that is unusually adept at producing novel, coherent natural language. Notably, these technologies have also been extended to computer programming languages with great success. However, GPT model outputs in general are stochastic and not always correct. For programming languages, the exact specification of the computer code, syntactically and algorithmically, is strictly required in order to...

2024/659 (PDF) Last updated: 2024-04-29
Secure Latent Dirichlet Allocation
Thijs Veugen, Vincent Dunning, Michiel Marcus, Bart Kamphorst
Applications

Topic modelling refers to a popular set of techniques used to discover hidden topics that occur in a collection of documents. These topics can, for example, be used to categorize documents or label text for further processing. One popular topic modelling technique is Latent Dirichlet Allocation (LDA). In topic modelling scenarios, the documents are often assumed to be in one, centralized dataset. However, sometimes documents are held by different parties, and contain privacy- or...

2024/639 (PDF) Last updated: 2024-04-26
Computational Attestations of Polynomial Integrity Towards Verifiable Machine Learning
Dustin Ray, Caroline El Jazmi
Applications

Machine-learning systems continue to advance at a rapid pace, demonstrating remarkable utility in various fields and disciplines. As these systems continue to grow in size and complexity, a nascent industry is emerging which aims to bring machine-learning-as-a-service (MLaaS) to market. Outsourcing the operation and training of these systems to powerful hardware carries numerous advantages, but challenges arise when needing to ensure privacy and the correctness of work carried out by a...

2024/558 (PDF) Last updated: 2024-04-10
Scoring the predictions: a way to improve profiling side-channel attacks
Damien Robissout, Lilian Bossuet, Amaury Habrard
Attacks and cryptanalysis

Side-channel analysis is an important part of the security evaluations of hardware components and more specifically of those that include cryptographic algorithms. Profiling attacks are among the most powerful attacks as they assume the attacker has access to a clone device of the one under attack. Using the clone device allows the attacker to make a profile of physical leakages linked to the execution of algorithms. This work focuses on the characteristics of this profile and the...

2024/535 (PDF) Last updated: 2024-04-05
NodeGuard: A Highly Efficient Two-Party Computation Framework for Training Large-Scale Gradient Boosting Decision Tree
Tianxiang Dai, Yufan Jiang, Yong Li, Fei Mei
Cryptographic protocols

The Gradient Boosting Decision Tree (GBDT) is a well-known machine learning algorithm, which achieves high performance and outstanding interpretability in real-world scenarios such as fraud detection, online marketing and risk management. Meanwhile, two data owners can jointly train a GBDT model without disclosing their private datasets by executing secure Multi-Party Computation (MPC) protocols. In this work, we propose NodeGuard, a highly efficient two-party computation (2PC) framework for...

2024/529 (PDF) Last updated: 2024-04-05
Fully Homomorphic Training and Inference on Binary Decision Tree and Random Forest
Hojune Shin, Jina Choi, Dain Lee, Kyoungok Kim, Younho Lee

This paper introduces a new method for training decision trees and random forests using CKKS homomorphic encryption (HE) in cloud environments, enhancing data privacy from multiple sources. The innovative Homomorphic Binary Decision Tree (HBDT) method utilizes a modified Gini Impurity index (MGI) for node splitting in encrypted data scenarios. Notably, the proposed training approach operates in a single cloud security domain without the need for decryption, addressing key challenges in...
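
For reference, the plaintext quantity that the paper's modified Gini Impurity index approximates under CKKS looks as follows; this is a generic illustration of Gini-based node splitting, not the MGI itself.

    def gini(labels):
        """Gini impurity of a set of binary labels."""
        n = len(labels)
        if n == 0:
            return 0.0
        p = sum(labels) / n
        return 1.0 - p * p - (1.0 - p) * (1.0 - p)

    def split_impurity(left, right):
        """Size-weighted impurity of a candidate split; lower is better."""
        n = len(left) + len(right)
        return (len(left) * gini(left) + len(right) * gini(right)) / n

    print(split_impurity([0, 0, 1], [1, 1]))  # ~0.267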

2024/506 (PDF) Last updated: 2024-03-29
A Decentralized Federated Learning using Reputation
Olive Chakraborty, Aymen Boudguiga
Applications

Federated learning (FL) is now established as one of the leading techniques for collaborative machine learning. It allows a set of clients to train a common model without disclosing their sensitive and private datasets to a coordination server, which is in charge of model aggregation. However, FL faces some problems regarding the security of updates, the integrity of computation, and the availability of the server. In this paper, we combine some new ideas like clients’ reputation with...

2024/428 (PDF) Last updated: 2024-06-18
SNOW-SCA: ML-assisted Side-Channel Attack on SNOW-V
Harshit Saurabh, Anupam Golder, Samarth Shivakumar Titti, Suparna Kundu, Chaoyun Li, Angshuman Karmakar, Debayan Das
Attacks and cryptanalysis

This paper presents SNOW-SCA, the first power side-channel analysis (SCA) attack of a 5G mobile communication security standard candidate, SNOW-V, running on a 32-bit ARM Cortex-M4 microcontroller. First, we perform a generic known-key correlation (KKC) analysis to identify the leakage points. Next, a correlation power analysis (CPA) attack is performed, which reduces the attack complexity to two key guesses for each key byte. The correct secret key is then uniquely identified utilizing...
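
The CPA step mentioned in the abstract is standard and easy to demonstrate on synthetic data: correlate a hypothetical Hamming-weight leakage with the measured trace for every key-byte guess. The S-box, leakage model, and noise level below are stand-ins, not SNOW-V's.

    import numpy as np

    rng = np.random.default_rng(1)
    SBOX = rng.permutation(256)                    # stand-in S-box
    HW = np.array([bin(v).count("1") for v in range(256)])

    plaintexts = rng.integers(0, 256, 500)
    true_key = 0x3A
    # synthetic single-point "traces": Hamming weight of S-box output + noise
    traces = HW[SBOX[plaintexts ^ true_key]] + rng.normal(0, 2.0, 500)

    corrs = [abs(np.corrcoef(HW[SBOX[plaintexts ^ k]], traces)[0, 1])
             for k in range(256)]
    print(hex(int(np.argmax(corrs))))              # 0x3a with high probability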

2024/297 (PDF) Last updated: 2024-02-21
Accelerating Training and Enhancing Security Through Message Size Optimization in Symmetric Cryptography
ABHISAR, Madhav Yadav, Girish Mishra

This research extends Abadi and Andersen's exploration of neural networks using secret keys for information protection in multiagent systems. Focusing on enhancing confidentiality properties, we employ end-to-end adversarial training with neural networks Alice, Bob, and Eve. Unlike prior work limited to 64-bit messages, our study spans message sizes from 4 to 1024 bits, varying batch sizes and training steps. An innovative aspect involves training model Bob to approach a minimal error value...

2024/194 (PDF) Last updated: 2024-06-18
Helium: Scalable MPC among Lightweight Participants and under Churn
Christian Mouchet, Sylvain Chatel, Apostolos Pyrgelis, Carmela Troncoso
Implementation

We introduce Helium, a novel framework that supports scalable secure multiparty computation (MPC) for lightweight participants and tolerates churn. Helium relies on multiparty homomorphic encryption (MHE) as its core building block. While MHE schemes have been well studied in theory, prior works fall short of addressing critical considerations paramount for adoption such as supporting resource-constrained and unstably connected participants. In this work, we systematize the requirements of...

2024/170 (PDF) Last updated: 2024-02-05
Train Wisely: Multifidelity Bayesian Optimization Hyperparameter Tuning in Side-Channel Analysis
Trevor Yap Hong Eng, Shivam Bhasin, Léo Weissbart
Implementation

Side-Channel Analysis (SCA) is critical in evaluating the security of cryptographic implementations. The search for hyperparameters poses a significant challenge, especially when resources are limited. In this work, we explore the efficacy of a multifidelity optimization technique known as BOHB in SCA. In addition, we propose a new objective function called $ge_{+ntge}$, which can be incorporated into any Bayesian Optimization used in SCA. We show the capabilities of both BOHB and...

2024/167 (PDF) Last updated: 2024-02-05
Creating from Noise: Trace Generations Using Diffusion Model for Side-Channel Attack
Trevor Yap, Dirmanto Jap
Implementation

In side-channel analysis (SCA), the success of an attack is largely dependent on the dataset sizes and the number of instances in each class. The generation of synthetic traces can help to improve attacks like profiling attacks. However, manually creating synthetic traces from actual traces is arduous. Therefore, automating this process of creating artificial traces is much needed. Recently, diffusion models have gained much recognition after beating another generative model known as...

2024/162 (PDF) Last updated: 2024-07-22
Zero-Knowledge Proofs of Training for Deep Neural Networks
Kasra Abbaszadeh, Christodoulos Pappas, Jonathan Katz, Dimitrios Papadopoulos
Cryptographic protocols

A zero-knowledge proof of training (zkPoT) enables a party to prove that they have correctly trained a committed model based on a committed dataset without revealing any additional information about the model or the dataset. An ideal zkPoT should offer provable security and privacy guarantees, succinct proof size and verifier runtime, and practical prover efficiency. In this work, we present a zkPoT targeted for deep neural networks (DNNs) that achieves all these goals at once. Our...

2024/150 (PDF) Last updated: 2024-02-02
SALSA FRESCA: Angular Embeddings and Pre-Training for ML Attacks on Learning With Errors
Samuel Stevens, Emily Wenger, Cathy Yuanchen Li, Niklas Nolte, Eshika Saxena, Francois Charton, Kristin Lauter
Attacks and cryptanalysis

Learning with Errors (LWE) is a hard math problem underlying post-quantum cryptography (PQC) systems for key exchange and digital signatures, recently standardized by NIST. Prior work [Wenger et al., 2022; Li et al., 2023a;b] proposed new machine learning (ML)-based attacks on LWE problems with small, sparse secrets, but these attacks require millions of LWE samples to train on and take days to recover secrets. We propose three key methods—better pre-processing, angular embeddings and model...
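
For concreteness, the kind of data such ML attacks train on, LWE samples with a small, sparse secret, can be generated as below; the parameters here are tiny and purely illustrative, far below cryptographic size.

    import numpy as np

    n, q, h, m = 32, 3329, 4, 100   # toy dimension, modulus, secret weight, samples
    rng = np.random.default_rng(0)
    s = np.zeros(n, dtype=np.int64)
    s[rng.choice(n, h, replace=False)] = 1      # sparse binary secret
    A = rng.integers(0, q, (m, n))
    e = rng.integers(-2, 3, m)                  # small error
    b = (A @ s + e) % q                         # the (A_i, b_i) pairs are the training data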

2024/124 (PDF) Last updated: 2024-07-23
Perceived Information Revisited II: Information-Theoretical Analysis of Deep-Learning Based Side-Channel Attacks
Akira Ito, Rei Ueno, Naofumi Homma
Attacks and cryptanalysis

Previous studies on deep-learning-based side-channel attacks (DL-SCAs) have shown that traditional performance evaluation metrics commonly used in DL, like accuracy and F1 score, are not effective in evaluating DL-SCA performance. Therefore, some previous studies have proposed new alternative metrics for evaluating the performance of DL-SCAs. Notably, perceived information (PI) and effective perceived information (EPI) are major metrics based on information theory. While it has been...

2024/090 (PDF) Last updated: 2024-01-22
Starlit: Privacy-Preserving Federated Learning to Enhance Financial Fraud Detection
Aydin Abadi, Bradley Doyle, Francesco Gini, Kieron Guinamard, Sasi Kumar Murakonda, Jack Liddell, Paul Mellor, Steven J. Murdoch, Mohammad Naseri, Hector Page, George Theodorakopoulos, Suzanne Weller
Applications

Federated Learning (FL) is a data-minimization approach enabling collaborative model training across diverse clients with local data, avoiding direct data exchange. However, state-of-the-art FL solutions to identify fraudulent financial transactions exhibit a subset of the following limitations. They (1) lack a formal security definition and proof, (2) assume prior freezing of suspicious customers’ accounts by financial institutions (limiting the solutions’ adoption), (3) scale poorly,...

2024/081 (PDF) Last updated: 2024-01-18
SuperFL: Privacy-Preserving Federated Learning with Efficiency and Robustness
Yulin Zhao, Hualin Zhou, Zhiguo Wan
Applications

Federated Learning (FL) accomplishes collaborative model training without the need to share local training data. However, existing FL aggregation approaches suffer from inefficiency, privacy vulnerabilities, and neglect of poisoning attacks, severely impacting the overall performance and reliability of model training. In order to address these challenges, we propose SuperFL, an efficient two-server aggregation scheme that is both privacy preserving and secure against poisoning attacks. The...

2024/071 (PDF) Last updated: 2024-01-17
Too Hot To Be True: Temperature Calibration for Higher Confidence in NN-assisted Side-channel Analysis
Seyedmohammad Nouraniboosjin, Fatemeh Ganji
Attacks and cryptanalysis

The past years have witnessed a considerable increase in research efforts put into neural network-assisted profiled side-channel analysis (SCA). Studies have also identified challenges, e.g., closing the gap between metrics for machine learning (ML) classification and side-channel attack evaluation. In fact, in the context of NN-assisted SCA, the NN’s output distribution forms the basis for successful key recovery. In this respect, related work has covered various aspects of integrating...
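
The calibration idea referenced in the title is temperature scaling: divide the network's logits by a scalar T before the softmax, softening overconfident output distributions. A minimal sketch (the logit values are arbitrary):

    import numpy as np

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    logits = np.array([4.0, 1.0, 0.5])
    for T in (1.0, 2.0, 5.0):       # larger T softens an overconfident NN
        print(T, softmax(logits / T).round(3))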

2024/049 (PDF) Last updated: 2024-01-15
CL-SCA: Leveraging Contrastive Learning for Profiled Side-Channel Analysis
Annv Liu, An Wang, Shaofei Sun, Congming Wei, Yaoling Ding, Yongjuan Wang, Liehuang Zhu
Attacks and cryptanalysis

Side-channel analysis based on machine learning, especially neural networks, has gained significant attention in recent years. However, many existing methods still suffer from certain limitations. Despite the inherent capability of neural networks to extract features, there remains a risk of extracting irrelevant information. The heavy reliance on profiled traces makes it challenging to adapt to remote attack scenarios with limited profiled traces. Besides, attack traces also contain...

2023/1917 (PDF) Last updated: 2023-12-19
Regularized PolyKervNets: Optimizing Expressiveness and Efficiency for Private Inference in Deep Neural Networks
Toluwani Aremu
Applications

Private computation of nonlinear functions, such as Rectified Linear Units (ReLUs) and max-pooling operations, in deep neural networks (DNNs) poses significant challenges in terms of storage, bandwidth, and time consumption. To address these challenges, there has been a growing interest in utilizing privacy-preserving techniques that leverage polynomial activation functions and kernelized convolutions as alternatives to traditional ReLUs. However, these alternative approaches often suffer...

2023/1729 (PDF) Last updated: 2023-11-08
CompactTag: Minimizing Computation Overheads in Actively-Secure MPC for Deep Neural Networks
Yongqin Wang, Pratik Sarkar, Nishat Koti, Arpita Patra, Murali Annavaram
Cryptographic protocols

Secure Multiparty Computation (MPC) protocols enable secure evaluation of a circuit by several parties, even in the presence of an adversary who maliciously corrupts all but one of the parties. These MPC protocols are constructed using the well-known secret-sharing-based paradigm (SPDZ and SPD$\mathbb{Z}_{2^k}$), where the protocols ensure security against a malicious adversary by computing Message Authentication Code (MAC) tags on the input shares and then evaluating the circuit with these...

2023/1681 (PDF) Last updated: 2023-10-30
The Need for MORE: Unsupervised Side-channel Analysis with Single Network Training and Multi-output Regression
Ioana Savu, Marina Krček, Guilherme Perin, Lichao Wu, Stjepan Picek
Attacks and cryptanalysis

Deep learning-based profiling side-channel analysis has gained widespread adoption in academia and industry due to its ability to uncover secrets protected by countermeasures. However, to exploit this capability, an adversary must have access to a clone of the targeted device to obtain profiling measurements and know secret information to label these measurements. Non-profiling attacks avoid these constraints by not relying on secret information for labeled data. Instead, they attempt all...

2023/1665 (PDF) Last updated: 2023-10-27
Model Stealing Attacks On FHE-based Privacy-Preserving Machine Learning through Adversarial Examples
Bhuvnesh Chaturvedi, Anirban Chakraborty, Ayantika Chatterjee, Debdeep Mukhopadhyay
Attacks and cryptanalysis

Classic MLaaS solutions suffer from privacy-related risks since the user is required to send unencrypted data to the server hosting the MLaaS. To alleviate this problem, a thriving line of research has emerged called Privacy-Preserving Machine Learning (PPML) or secure MLaaS solutions that use cryptographic techniques to preserve the privacy of both the input of the client and the output of the server. However, these implementations do not take into consideration the possibility of...

2023/1644 (PDF) Last updated: 2023-10-26
An End-to-End Framework for Private DGA Detection as a Service
Ricardo Jose Menezes Maia, Dustin Ray, Sikha Pentyala, Rafael Dowsley, Martine De Cock, Anderson C. A. Nascimento, Ricardo Jacobi
Applications

Domain Generation Algorithms (DGAs) are used by malware to generate pseudorandom domain names to establish communication between infected bots and Command and Control servers. While DGAs can be detected by machine learning (ML) models with great accuracy, offering DGA detection as a service raises privacy concerns when requiring network administrators to disclose their DNS traffic to the service provider. We propose the first end-to-end framework for privacy-preserving classification as a...

2023/1561 (PDF) Last updated: 2023-10-10
LLM for SoC Security: A Paradigm Shift
Dipayan Saha, Shams Tarek, Katayoon Yahyaei, Sujan Kumar Saha, Jingbo Zhou, Mark Tehranipoor, Farimah Farahmandi
Applications

As the ubiquity and complexity of system-on-chip (SoC) designs increase across electronic devices, the task of incorporating security into an SoC design flow poses significant challenges. Existing security solutions are inadequate to provide effective verification of modern SoC designs due to their limitations in scalability, comprehensiveness, and adaptability. On the other hand, Large Language Models (LLMs) are celebrated for their remarkable success in natural language understanding,...

2023/1551 (PDF) Last updated: 2023-10-09
Evaluating GPT-4’s Proficiency in Addressing Cryptography Examinations
Vasily Mikhalev, Nils Kopal, Bernhard Esslinger
Applications

In the rapidly advancing domain of artificial intelligence, ChatGPT, powered by the GPT-4 model, has emerged as a state-of-the-art interactive agent, exhibiting substantial capabilities across various domains. This paper aims to assess the efficacy of GPT-4 in addressing and solving problems found within cryptographic examinations. We devised a multi-faceted methodology, presenting the model with a series of cryptographic questions of varying complexities derived from real academic...

2023/1546 (PDF) Last updated: 2023-10-09
Performance Evaluation of Machine Learning Algorithms for Intrusion Detection System
Sudhanshu Sekhar Tripathy, Bichitrananda Behera
Implementation

Escalating safety hazards and the hijacking of digital networks are among the most pressing challenges of the present day. Numerous security procedures have been set up to track and recognize illicit activity on the network's infrastructure, and IDS are the best way to resist and recognize intrusions on internet connections and digital technologies. To classify network traffic as normal or anomalous, Machine Learning (ML) classifiers are increasingly utilized. An...

2023/1526 (PDF) Last updated: 2024-06-11
Polynomial Time Cryptanalytic Extraction of Neural Network Models
Isaac A. Canales-Martínez, Jorge Chavez-Saab, Anna Hambitzer, Francisco Rodríguez-Henríquez, Nitin Satpute, Adi Shamir
Attacks and cryptanalysis

Billions of dollars and countless GPU hours are currently spent on training Deep Neural Networks (DNNs) for a variety of tasks. Thus, it is essential to determine the difficulty of extracting all the parameters of such neural networks when given access to their black-box implementations. Many versions of this problem have been studied over the last 30 years, and the best current attack on ReLU-based deep neural networks was presented at Crypto’20 by Carlini, Jagielski, and Mironov. It...

2023/1345 (PDF) Last updated: 2023-09-08
Experimenting with Zero-Knowledge Proofs of Training
Sanjam Garg, Aarushi Goel, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Guru-Vamsi Policharla, Mingyuan Wang
Cryptographic protocols

How can a model owner prove they trained their model according to the correct specification? More importantly, how can they do so while preserving the privacy of the underlying dataset and the final model? We study this problem and formulate the notion of zero-knowledge proof of training (zkPoT), which formalizes rigorous security guarantees that should be achieved by a privacy-preserving proof of training. While it is theoretically possible to design zkPoT for any model using generic...

2023/1307 (PDF) Last updated: 2023-09-01
Constant-Round Private Decision Tree Evaluation for Secret Shared Data
Nan Cheng, Naman Gupta, Aikaterini Mitrokotsa, Hiraku Morita, Kazunari Tozawa
Cryptographic protocols

Decision tree evaluation is extensively used in machine learning to construct accurate classification models. Often, in the cloud-assisted communication paradigm, cloud servers execute remote evaluations of classification models using clients’ data. In this setting, the need for private decision tree evaluation (PDTE) has emerged to guarantee that no information leaks about either the client’s input or the service provider’s trained model, i.e., the decision tree. In this paper, we propose a private...

2023/1269 (PDF) Last updated: 2024-07-15
SIGMA: Secure GPT Inference with Function Secret Sharing
Kanav Gupta, Neha Jawalkar, Ananta Mukherjee, Nishanth Chandran, Divya Gupta, Ashish Panwar, Rahul Sharma
Cryptographic protocols

Secure 2-party computation (2PC) enables secure inference that offers protection for both proprietary machine learning (ML) models and sensitive inputs to them. However, the existing secure inference solutions suffer from high latency and communication overheads, particularly for transformers. Function secret sharing (FSS) is a recent paradigm for obtaining efficient 2PC protocols with a preprocessing phase. We provide SIGMA, the first end-to-end system for secure transformer inference...

2023/1174 (PDF) Last updated: 2023-12-08
zkDL: Efficient Zero-Knowledge Proofs of Deep Learning Training
Haochen Sun, Tonghe Bai, Jason Li, Hongyang Zhang
Applications

The recent advancements in deep learning have brought about significant changes in various aspects of people's lives. Meanwhile, these rapid developments have raised concerns about the legitimacy of the training process of deep neural networks. To protect the intellectual properties of AI developers, directly examining the training process by accessing the model parameters and training data is often prohibited for verifiers. In response to this challenge, we present zero-knowledge deep...

2023/1109 (PDF) Last updated: 2023-07-16
An End-to-end Plaintext-based Side-channel Collision Attack without Trace Segmentation
Lichao Wu, Sébastien Tiran, Guilherme Perin, Stjepan Picek
Attacks and cryptanalysis

Side-channel Collision Attacks (SCCA) constitute a subset of non-profiling attacks that exploit information dependency leaked during cryptographic operations. Unlike traditional collision attacks, which seek instances where two different inputs to a cryptographic algorithm yield identical outputs, SCCAs specifically target the internal state, where identical outputs are more likely. In CHES 2023, Staib et al. presented a Deep Learning-based SCCA (DL-SCCA), which enhanced the attack...

2023/1108 (PDF) Last updated: 2023-07-16
It's a Kind of Magic: A Novel Conditional GAN Framework for Efficient Profiling Side-channel Analysis
Sengim Karayalcin, Marina Krcek, Lichao Wu, Stjepan Picek, Guilherme Perin
Attacks and cryptanalysis

Profiling side-channel analysis is an essential technique to assess the security of protected cryptographic implementations by subjecting them to the worst-case security analysis. This approach assumes the presence of a highly capable adversary with knowledge of countermeasures and randomness employed by the target device. However, black-box profiling attacks are commonly employed when aiming to emulate real-world scenarios. These attacks leverage deep learning as a prominent alternative...

2023/1059 (PDF) Last updated: 2023-07-06
Provably Secure Blockchain Protocols from Distributed Proof-of-Deep-Learning
Xiangyu Su, Mario Larangeira, Keisuke Tanaka
Cryptographic protocols

Proof-of-useful-work (PoUW), an alternative to the widely used proof-of-work (PoW), aims to re-purpose the network's computing power. Namely, users evaluate meaningful computational problems, e.g., solving optimization problems, instead of computing numerous hash function values as in PoW. A recent approach utilizes the training process of deep learning as "useful work". However, these works lack security analysis when deploying them with blockchain-based protocols, let alone the informal...

2023/1010 (PDF) Last updated: 2023-07-04
End-to-end Privacy Preserving Training and Inference for Air Pollution Forecasting with Data from Rival Fleets
Gauri Gupta, Krithika Ramesh, Anwesh Bhattacharya, Divya Gupta, Rahul Sharma, Nishanth Chandran, Rijurekha Sen
Applications

Privacy-preserving machine learning (PPML) promises to train machine learning (ML) models by combining data spread across multiple data silos. Theoretically, secure multiparty computation (MPC) allows multiple data owners to train models on their joint data without revealing the data to each other. However, the prior implementations of this secure training using MPC have three limitations: they have only been evaluated on CNNs, and LSTMs have been ignored; fixed point approximations...

2023/804 (PDF) Last updated: 2023-06-01
Falkor: Federated Learning Secure Aggregation Powered by AES-CTR GPU Implementation
Mariya Georgieva Belorgey, Sofia Dandjee, Nicolas Gama, Dimitar Jetchev, Dmitry Mikushin
Cryptographic protocols

We propose a novel protocol, Falkor, for secure aggregation for Federated Learning in the multi-server scenario based on masking of local models via a stream cipher based on AES in counter mode and accelerated by GPUs running on the aggregating servers. The protocol is resilient to client dropout and has reduced clients/servers communication cost by a factor equal to the number of aggregating servers (compared to the naïve baseline method). It scales simultaneously in the two major...
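
The masking idea can be illustrated in a few lines: a client adds a counter-mode keystream to its model update, and only a holder of the shared seed can remove it. In the sketch below, SHA-256 in counter mode stands in for AES-CTR so the example stays dependency-free; this is an illustration, not the Falkor protocol itself.

    import hashlib
    import numpy as np

    P = 2**31 - 1

    def keystream(seed: bytes, n: int):
        """Counter-mode keystream; SHA-256 stands in for AES-CTR here."""
        words = [int.from_bytes(hashlib.sha256(seed + i.to_bytes(8, "big"))
                                .digest()[:4], "big") % P
                 for i in range(n)]
        return np.array(words, dtype=np.int64)

    update = np.array([5, 17, 2], dtype=np.int64)   # a client's model update
    mask = keystream(b"per-round shared seed", len(update))
    masked = (update + mask) % P                    # what the aggregator receives
    assert ((masked - mask) % P == update).all()    # a seed holder can unmask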

2023/647 (PDF) Last updated: 2023-05-08
Efficient FHE-based Privacy-Enhanced Neural Network for AI-as-a-Service
Kwok-Yan Lam, Xianhui Lu, Linru Zhang, Xiangning Wang, Huaxiong Wang, Si Qi Goh
Applications

AI-as-a-Service has emerged as an important trend for supporting the growth of the digital economy. Digital service providers make use of their vast amount of user data to train AI models (such as image recognition, financial modelling, and pandemic modelling) and offer them as a service on the cloud. While there are convincing advantages for using such third-party models, the fact that users need to upload their data to the cloud is bound to raise serious privacy concerns,...

2023/611 (PDF) Last updated: 2023-10-05
A Comparison of Multi-task learning and Single-task learning Approaches
Thomas Marquet, Elisabeth Oswald
Attacks and cryptanalysis

In this paper, we provide experimental evidence for the benefits of multi-task learning in the context of masked AES implementations (via the ASCADv1-r and ASCADv2 databases). We develop an approach for comparing single-task and multi-task approaches rather than comparing specific resulting models: we do this by training many models with random hyperparameters (instead of comparing a few highly tuned models). We find that multi-task learning has significant practical advantages that make it...

2023/597 (PDF) Last updated: 2023-04-26
FedVS: Straggler-Resilient and Privacy-Preserving Vertical Federated Learning for Split Models
Songze Li, Duanyi Yao, Jin Liu
Cryptographic protocols

In a vertical federated learning (VFL) system consisting of a central server and many distributed clients, the training data are vertically partitioned such that different features are privately stored on different clients. The problem of split VFL is to train a model split between the server and the clients. This paper aims to address two major challenges in split VFL: 1) performance degradation due to straggling clients during training; and 2) data and model privacy leakage from clients’...

2023/593 (PDF) Last updated: 2023-05-01
Implementing and Optimizing Matrix Triples with Homomorphic Encryption
Johannes Mono, Tim Güneysu
Implementation

In today’s interconnected world, data has become a valuable asset, leading to a growing interest in protecting it through techniques such as privacy-preserving computation. Two well-known approaches are multi-party computation and homomorphic encryption, with use cases such as privacy-preserving machine learning for evaluating or training neural networks. For multi-party computation, one of the fundamental arithmetic operations is the secure multiplication in the malicious security model and by...

2023/592 (PDF) Last updated: 2023-04-29
Blockchain Large Language Models
Yu Gai, Liyi Zhou, Kaihua Qin, Dawn Song, Arthur Gervais
Applications

This paper presents a dynamic, real-time approach to detecting anomalous blockchain transactions. The proposed tool, BlockGPT, generates tracing representations of blockchain activity and trains from scratch a large language model to act as a real-time Intrusion Detection System. Unlike traditional methods, BlockGPT is designed to offer an unrestricted search space and does not rely on predefined rules or patterns, enabling it to detect a broader range of anomalies. We demonstrate the...

2023/555 (PDF) Last updated: 2023-04-19
SAFEFL: MPC-friendly Framework for Private and Robust Federated Learning
Till Gehlhar, Felix Marx, Thomas Schneider, Ajith Suresh, Tobias Wehrle, Hossein Yalame
Implementation

Federated learning (FL) has gained widespread popularity in a variety of industries due to its ability to locally train models on devices while preserving privacy. However, FL systems are susceptible to i) privacy inference attacks and ii) poisoning attacks, through which corrupt actors can compromise the system. Despite a significant amount of work being done to tackle these attacks individually, the combination of these two attacks has received limited attention in the research community. To...

2023/486 (PDF) Last updated: 2024-05-18
Flamingo: Multi-Round Single-Server Secure Aggregation with Applications to Private Federated Learning
Yiping Ma, Jess Woods, Sebastian Angel, Antigoni Polychroniadou, Tal Rabin
Applications

This paper introduces Flamingo, a system for secure aggregation of data across a large set of clients. In secure aggregation, a server sums up the private inputs of clients and obtains the result without learning anything about the individual inputs beyond what is implied by the final sum. Flamingo focuses on the multi-round setting found in federated learning in which many consecutive summations (averages) of model weights are performed to derive a good model. Previous protocols, such as...

2023/368 (PDF) Last updated: 2023-03-14
AI Attacks AI: Recovering Neural Network architecture from NVDLA using AI-assisted Side Channel Attack
Naina Gupta, Arpan Jati, Anupam Chattopadhyay
Attacks and cryptanalysis

During the last decade, there has been stunning progress in the domain of AI, with adoption in both safety-critical and security-critical applications. A key requirement for this is highly trained Machine Learning (ML) models, which are valuable Intellectual Property (IP) of the respective organizations. Naturally, these models have become targets for model recovery attacks through side-channel leakage. However, the majority of the attacks reported in the literature are either on simple embedded...

2023/340 (PDF) Last updated: 2023-10-31
SALSA PICANTE: a machine learning attack on LWE with binary secrets
Cathy Li, Jana Sotáková, Emily Wenger, Mohamed Malhou, Evrard Garcelon, Francois Charton, Kristin Lauter
Attacks and cryptanalysis

Learning with Errors (LWE) is a hard math problem underpinning many proposed post-quantum cryptographic (PQC) systems. The only PQC Key Exchange Mechanism (KEM) standardized by NIST is based on module LWE, and current publicly available PQ Homomorphic Encryption (HE) libraries are based on ring LWE. The security of LWE-based PQ cryptosystems is critical, but certain implementation choices could weaken them. One such choice is sparse binary secrets, desirable for PQ HE schemes for efficiency...

2023/206 (PDF) Last updated: 2024-05-10
Orca: FSS-based Secure Training and Inference with GPUs
Neha Jawalkar, Kanav Gupta, Arkaprava Basu, Nishanth Chandran, Divya Gupta, Rahul Sharma
Cryptographic protocols

Secure Two-party Computation (2PC) allows two parties to compute any function on their private inputs without revealing their inputs to each other. In the offline/online model for 2PC, correlated randomness that is independent of all inputs to the computation, is generated in a preprocessing (offline) phase and this randomness is then utilized in the online phase once the inputs to the parties become available. Most 2PC works focus on optimizing the online time as this overhead lies on the...
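
A classic example of such offline/online correlated randomness is Beaver-triple multiplication: a preprocessed sharing of c = a*b lets the parties multiply secret-shared inputs in the online phase by opening only masked differences. The sketch below simulates both parties in plaintext Python and is illustrative, not Orca's protocol.

    import secrets

    P = 2**61 - 1

    def share(x):
        r = secrets.randbelow(P)
        return r, (x - r) % P

    # offline phase: an input-independent triple c = a*b, secret-shared
    a, b = secrets.randbelow(P), secrets.randbelow(P)
    a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(a * b % P)

    # online phase: to multiply shared x and y, the parties open only
    # d = x - a and e = y - b, which reveal nothing about x and y
    x0, x1 = share(7); y0, y1 = share(9)
    d = (x0 - a0 + x1 - a1) % P
    e = (y0 - b0 + y1 - b1) % P
    z0 = (c0 + d * b0 + e * a0 + d * e) % P
    z1 = (c1 + d * b1 + e * a1) % P
    assert (z0 + z1) % P == 7 * 9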

2023/073 (PDF) Last updated: 2023-10-12
FssNN: Communication-Efficient Secure Neural Network Training via Function Secret Sharing
Peng Yang, Zoe L. Jiang, Shiqi Gao, Jiehang Zhuang, Hongxiao Wang, Junbin Fang, Siuming Yiu, Yulin Wu, Xuan Wang
Cryptographic protocols

Privacy-preserving neural network enables multiple parties to jointly train neural network models without revealing sensitive data. However, its practicality is greatly limited due to the low efficiency caused by massive communication costs and a deep dependence on a trusted dealer. In this paper, we propose a communication-efficient secure two-party neural network framework, FssNN, to enable practical secure neural network training and inference. In FssNN, we reduce communication costs of...

2023/046 (PDF) Last updated: 2023-01-15
Cognitive Cryptography using behavioral features from linguistic-biometric data
Jose Contreras
Implementation

This study presents a proof-of-concept for a cognitive-based authentication system that uses an individual's writing style as a unique identifier to grant access to a system. An SVM machine learning model was trained on stylometric features to distinguish between texts generated by each user. The stylometric feature vector was then used as an input to a key derivation function to generate a unique key for each user. The experiment results showed that the developed system achieved up to 87.42%...

2023/019 (PDF) Last updated: 2023-07-20
Autoencoder-enabled Model Portability for Reducing Hyperparameter Tuning Efforts in Side-channel Analysis
Marina Krček, Guilherme Perin
Attacks and cryptanalysis

Hyperparameter tuning represents one of the main challenges in deep learning-based profiling side-channel analysis. For each different side-channel dataset, the typical procedure to find a profiling model is applying hyperparameter tuning from scratch. The main reason is that side-channel measurements from various targets contain different underlying leakage distributions. Consequently, the same profiling model hyperparameters are usually not equally efficient for other targets. This paper...

2023/006 (PDF) Last updated: 2024-02-06
Exploring multi-task learning in the context of masked AES implementations
Thomas Marquet, Elisabeth Oswald
Attacks and cryptanalysis

Deep learning is very efficient at breaking masked implementations even when the attacker does not assume knowledge of the masks. However, recent works pointed out a significant challenge: overcoming the initial learning plateau. This paper discusses the advantages of multi-task learning to break through the initial plateau consistently. We investigate different ways of applying multi-task learning against masked AES implementations (via the ASCAD-r, ASCAD-v2, and CHESCTF-2023 datasets)...

2022/1737 (PDF) Last updated: 2023-09-26
Regularizers to the Rescue: Fighting Overfitting in Deep Learning-based Side-channel Analysis
Azade Rezaeezade, Lejla Batina
Attacks and cryptanalysis

Despite considerable achievements of deep learning-based side-channel analysis, overfitting represents a significant obstacle in finding optimized neural network models. This issue is not unique to the side-channel domain. Regularization techniques are popular solutions to overfitting and have long been used in various domains. At the same time, the works in the side-channel domain show sporadic utilization of regularization techniques. What is more, no systematic study investigates these...
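
As a reminder of what a regularizer is in this context, the sketch below adds an L2 (weight-decay) penalty to a plain least-squares loss; it is a generic illustration, not one of the techniques or models evaluated in the paper.

    import numpy as np

    def ridge_loss(w, X, y, lam=1e-2):
        """MSE plus an L2 penalty; the lam * ||w||^2 term is the regularizer."""
        resid = X @ w - y
        return (resid @ resid) / len(y) + lam * (w @ w)

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(50, 3)), rng.normal(size=50)
    w = rng.normal(size=3)
    print(ridge_loss(w, X, y))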

2022/1713 (PDF) Last updated: 2022-12-10
Breaking a Fifth-Order Masked Implementation of CRYSTALS-Kyber by Copy-Paste
Elena Dubrova, Kalle Ngo, Joel Gärtner
Public-key cryptography

CRYSTALS-Kyber has been selected by NIST as a public-key encryption and key encapsulation mechanism to be standardized. It is also included in the NSA's suite of cryptographic algorithms recommended for national security systems. This makes it important to evaluate the resistance of CRYSTALS-Kyber's implementations to side-channel attacks. The unprotected and first-order masked software implementations have already been analysed. In this paper, we present deep learning-based message...

2022/1695 (PDF) Last updated: 2022-12-07
ELSA: Secure Aggregation for Federated Learning with Malicious Actors
Mayank Rathee, Conghao Shen, Sameer Wagh, Raluca Ada Popa
Cryptographic protocols

Federated learning (FL) is an increasingly popular approach for machine learning (ML) in cases where the training dataset is highly distributed. Clients perform local training on their datasets and the updates are then aggregated into the global model. Existing protocols for aggregation are either inefficient, or don’t consider the case of malicious actors in the system. This is a major barrier to making FL an ideal solution for privacy-sensitive ML applications. We present ELSA,...

2022/1573 (PDF) Last updated: 2022-11-15
Solving Small Exponential ECDLP in EC-based Additively Homomorphic Encryption and Applications
Fei Tang, Guowei Ling, Chaochao Cai, Jinyong Shan, Xuanqi Liu, Peng Tang, Weidong Qiu
Applications

Additively Homomorphic Encryption (AHE) has been widely used in various applications, such as federated learning, blockchain, and online auctions. Elliptic Curve (EC) based AHE has the advantages of efficient encryption, homomorphic addition, scalar multiplication algorithms, and short ciphertext length. However, EC-based AHE schemes require solving a small exponential Elliptic Curve Discrete Logarithm Problem (ECDLP) when running the decryption algorithm, i.e., recovering the plaintext...

2022/1476 (PDF) Last updated: 2022-10-27
The EVIL Machine: Encode, Visualize and Interpret the Leakage
Valence Cristiani, Maxime Lecomte, Philippe Maurine
Attacks and cryptanalysis

Unsupervised side-channel attacks allow extracting secret keys manipulated by cryptographic primitives through leakages of their physical implementations. As opposed to supervised attacks, they do not require a preliminary profiling of the target, constituting a broader threat since they imply weaker assumptions on the adversary model. Their downside is their requirement for some a priori knowledge on the leakage model of the device. On one hand, stochastic attacks such as the Linear...

2022/1279 Last updated: 2023-04-17
Improved Neural Distinguishers with Multi-Round and Multi-Splicing Construction
Jiashuo Liu, Jiongjiong Ren, Shaozhen Chen, ManMan Li
Attacks and cryptanalysis

In CRYPTO 2019, Gohr successfully applied deep learning to differential cryptanalysis against the NSA block cipher Speck32/64, achieving higher accuracy than traditional differential distinguishers. Since then, improving neural differential distinguishers has been a mainstream research direction in neural-aided cryptanalysis. However, the current development of training data formats for neural distinguishers creates barriers: (1) The source of data features is limited to linear combinations of...

2022/1069 (PDF) Last updated: 2022-11-30
A Theoretical Framework for the Analysis of Physical Unclonable Function Interfaces and its Relation to the Random Oracle Model
Marten van Dijk, Chenglu Jin
Foundations

Analysis of advanced Physical Unclonable Function (PUF) applications and protocols relies on assuming that a PUF behaves like a random oracle, that is, upon receiving a challenge, a uniform random response with replacement is selected, measurement noise is added, and the resulting response is returned. In order to justify such an assumption, we need to rely on digital interface computation that to some extent remains confidential -- otherwise, information about PUF challenge response pairs...

2022/935 (PDF) Last updated: 2023-04-21
SALSA: Attacking Lattice Cryptography with Transformers
Emily Wenger, Mingjie Chen, Francois Charton, Kristin Lauter
Attacks and cryptanalysis

Currently deployed public-key cryptosystems will be vulnerable to attacks by full-scale quantum computers. Consequently, quantum resistant cryptosystems are in high demand, and lattice-based cryptosystems, based on a hard problem known as Learning With Errors (LWE), have emerged as strong contenders for standardization. In this work, we train transformers to perform modular arithmetic and combine half-trained models with statistical cryptanalysis techniques to propose SALSA: a machine...

2022/933 (PDF) Last updated: 2022-07-18
Secure Quantized Training for Deep Learning
Marcel Keller, Ke Sun
Implementation

We implement training of neural networks in secure multi-party computation (MPC) using quantization commonly used in said setting. We are the first to present an MNIST classifier purely trained in MPC that comes within 0.2 percent of the accuracy of the same convolutional neural network trained via plaintext computation. More concretely, we have trained a network with two convolutional and two dense layers to 99.2% accuracy in 3.5 hours (under one hour for 99% accuracy). We have also...

2022/892 (PDF) Last updated: 2022-08-26
Piranha: A GPU Platform for Secure Computation
Jean-Luc Watson, Sameer Wagh, Raluca Ada Popa
Implementation

Secure multi-party computation (MPC) is an essential tool for privacy-preserving machine learning (ML). However, secure training of large-scale ML models currently requires a prohibitively long time to complete. Given that large ML inference and training tasks in the plaintext setting are significantly accelerated by Graphical Processing Units (GPUs), this raises the natural question: can secure MPC leverage GPU acceleration? A few recent works have studied this question in the context of...

2022/890 (PDF) Last updated: 2022-07-07
One Network to rule them all. An autoencoder approach to encode datasets
Cristian-Alexandru Botocan
Attacks and cryptanalysis

Side-channel attacks are powerful non-invasive attacks on cryptographic algorithms. Among such attacks, profiling attacks have a prominent place as they assume an attacker with access to a copy of the device under attack. The attacker uses the device's copy to learn as much as possible about the device and then mount the attack on the target device. In the last few years, Machine Learning has been successfully used in profiling attacks, as such techniques proved to be capable of breaking...

2022/866 (PDF) Last updated: 2024-05-13
Communication-Efficient Secure Logistic Regression
Amit Agarwal, Stanislav Peceny, Mariana Raykova, Phillipp Schoppmann, Karn Seth
Cryptographic protocols

We present a novel construction that enables two parties to securely train a logistic regression model on private secret-shared data. Our goal is to minimize online communication and round complexity, while still allowing for an efficient offline phase. As part of our construction, we develop many building blocks of independent interest. These include a new approximation technique for the sigmoid function that results in a secure protocol with better communication, protocols for secure...
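
To illustrate why sigmoid approximation matters (the paper's own approximation is different and better), the degree-3 Taylor polynomial below tracks the sigmoid well near zero but degrades quickly outside a small interval, which is what improved MPC-friendly approximations must fix:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sigmoid_taylor(x):
        # degree-3 Taylor expansion of sigmoid around 0, clipped to [0, 1]
        return np.clip(0.5 + x / 4.0 - x**3 / 48.0, 0.0, 1.0)

    xs = np.linspace(-2.0, 2.0, 9)
    print(np.abs(sigmoid(xs) - sigmoid_taylor(xs)).max())  # worst error on the grid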

2022/865 (PDF) Last updated: 2023-11-14
Linked Fault Analysis
Ali Asghar Beigizad, Hadi Soleimany, Sara Zarei, Hamed Ramzanipour
Attacks and cryptanalysis

Numerous fault models have been developed, each with distinct characteristics and effects. These models should be evaluated in light of their costs, repeatability, and practicability. Moreover, there must be effective ways to use the injected fault to retrieve the secret key, especially if there are some countermeasures in the implementation. In this paper, we introduce a new fault analysis technique called "linked fault analysis" (LFA), which can be viewed as a more powerful version of...

2022/852 (PDF) Last updated: 2022-06-28
Making Biased DL Models Work: Message and Key Recovery Attacks on Saber Using Amplitude-Modulated EM Emanations
Ruize Wang, Kalle Ngo, Elena Dubrova
Attacks and cryptanalysis

Creating a good deep learning (DL) model is an art which requires expertise in DL and a large set of labeled data for training neural networks. Neither is readily available. In this paper, we introduce a method which enables us to achieve good results with bad DL models. We use simple multilayer perceptron (MLP) networks, trained on a small dataset, which make strongly biased predictions if used without the proposed method. The core idea is to extend the attack dataset so that at least one...

2022/784 (PDF) Last updated: 2022-06-18
Fully Privacy-Preserving Federated Representation Learning via Secure Embedding Aggregation
Jiaxiang Tang, Jinbao Zhu, Songze Li, Kai Zhang, Lichao Sun
Cryptographic protocols

We consider a federated representation learning framework, where with the assistance of a central server, a group of $N$ distributed clients train collaboratively over their private data, for the representations (or embeddings) of a set of entities (e.g., users in a social network). Under this framework, for the key step of aggregating local embeddings trained at the clients in a private manner, we develop a secure embedding aggregation protocol named SecEA, which provides...

2022/663 (PDF) Last updated: 2022-09-08
SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning
Harsh Chaudhari, Matthew Jagielski, Alina Oprea
Cryptographic protocols

Secure multiparty computation (MPC) has been proposed to allow multiple mutually distrustful data owners to jointly train machine learning (ML) models on their combined data. However, by design, MPC protocols faithfully compute the training functionality, which the adversarial ML community has shown to leak private information and can be tampered with in poisoning attacks. In this work, we argue that model ensembles, implemented in our framework called SafeNet, are a highly MPC-amenable way...

2022/493 (PDF) Last updated: 2022-10-11
Don’t Learn What You Already Know: Scheme-Aware Modeling for Profiling Side-Channel Analysis against Masking
Loïc Masure, Valence Cristiani, Maxime Lecomte, François-Xavier Standaert
Applications

Over the past few years, deep-learning-based attacks have emerged as a de facto standard, thanks to their ability to break implementations of cryptographic primitives without pre-processing, even against widely used countermeasures such as hiding and masking. However, the recent works of Bronchain and Standaert at TCHES 2020 questioned the soundness of such tools if used in an uninformed setting to evaluate implementations protected with higher-order masking. In contrast, worst-case...

2022/490 (PDF) Last updated: 2023-04-14
Information Bounds and Convergence Rates for Side-Channel Security Evaluators
Loïc Masure, Gaëtan Cassiers, Julien Hendrickx, François-Xavier Standaert
Implementation

Current side-channel evaluation methodologies exhibit a gap between inefficient tools offering strong theoretical guarantees and efficient tools only offering heuristic (sometimes case-specific) guarantees. Profiled attacks based on the empirical leakage distribution correspond to the first category. Bronchain et al. showed at Crypto 2019 that they allow bounding the worst-case security level of an implementation, but the bounds become loose as the leakage dimensionality increases. Template...

2022/396 Last updated: 2022-11-19
Side-channel attacks based on power trace decomposition
Fanliang Hu, Huanyu Wang, Junnian Wang
Applications

Side Channel Attacks (SCAs), which exploit the physical information generated when an encryption algorithm is executed on a device in order to recover the key, have become one of the key threats to the security of encrypted devices. Recently, with the development of deep learning, deep learning techniques have been applied to side channel attacks, with good results on publicly available datasets. In this paper, we propose a power trace decomposition method that divides the...

2022/360 (PDF) Last updated: 2022-03-30
Privacy-Preserving Contrastive Explanations with Local Foil Trees
Thijs Veugen, Bart Kamphorst, Michiel Marcus
Applications

We present the first algorithm that combines privacy-preserving technologies and state-of-the-art explainable AI to enable privacy-friendly explanations of black-box AI models. We provide a secure algorithm for contrastive explanations of black-box machine learning models that securely trains and uses local foil trees. Our work shows that the quality of these explanations can be upheld whilst ensuring the privacy of both the training data, and the model itself.

2022/335 (PDF) Last updated: 2022-03-14
Evaluation of Machine Learning Algorithms in Network-Based Intrusion Detection System
Tuan-Hong Chua, Iftekhar Salam
Implementation

Cybersecurity has become a central concern for organisations. The number of cyberattacks keeps increasing as Internet usage continues to grow. An intrusion detection system (IDS) is an alarm system that helps to detect cyberattacks. As new types of cyberattacks continue to emerge, researchers focus on developing machine learning (ML)-based IDS to detect zero-day attacks. Researchers usually remove some or all attack samples from the training dataset and only include them in the testing...

2022/183 (PDF) Last updated: 2024-01-07
Improving Differential-Neural Cryptanalysis
Liu Zhang, Zilong Wang, Baocang Wang
Attacks and cryptanalysis

In CRYPTO'19, Gohr introduced a novel cryptanalysis method by developing a differential-neural distinguisher using neural networks as the underlying distinguisher. He effectively integrated this distinguisher with classical differentials, facilitating a 12-round key recovery attack on Speck32/64 (from a total of 22 rounds). Bao et al. refined the concept of neutral bits, enabling key recovery attacks up to 13 rounds for Speck32/64 and 16 rounds (from a total of 32) for Simon32/64. Our...
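
For readers unfamiliar with the setup, a minimal sketch of the data generation behind such differential-neural distinguishers (Speck32/64 with Gohr's input difference 0x0040/0000; the network itself is omitted, and labeling details vary between papers):

    import random

    MASK = 0xffff                 # Speck32/64: 16-bit words, rotations 7 and 2
    rol = lambda x, r: ((x << r) | (x >> (16 - r))) & MASK
    ror = lambda x, r: ((x >> r) | (x << (16 - r))) & MASK

    def expand_key(key, rounds):  # key = (l2, l1, l0, k0) per the Speck spec
        l, k = [key[2], key[1], key[0]], [key[3]]
        for i in range(rounds - 1):
            l.append(((k[i] + ror(l[i], 7)) & MASK) ^ i)
            k.append(rol(k[i], 2) ^ l[-1])
        return k

    def encrypt(pt, ks):
        x, y = pt
        for rk in ks:
            x = ((ror(x, 7) + y) & MASK) ^ rk
            y = rol(y, 2) ^ x
        return x, y

    # Test vector from the Speck authors' paper (full 22 rounds).
    assert encrypt((0x6574, 0x694c),
                   expand_key((0x1918, 0x1110, 0x0908, 0x0100), 22)) == (0xa868, 0x42f2)

    def make_samples(n, rounds=5, delta=(0x0040, 0x0000)):
        # Label 1: ciphertext pair whose plaintexts differ by `delta`.
        # Label 0: ciphertext pair from two unrelated random plaintexts.
        out = []
        for _ in range(n):
            ks = expand_key(tuple(random.getrandbits(16) for _ in range(4)), rounds)
            p0 = (random.getrandbits(16), random.getrandbits(16))
            label = random.getrandbits(1)
            p1 = (p0[0] ^ delta[0], p0[1] ^ delta[1]) if label else \
                 (random.getrandbits(16), random.getrandbits(16))
            out.append((encrypt(p0, ks), encrypt(p1, ks), label))
        return out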

2022/146 (PDF) Last updated: 2022-02-12
Training Differentially Private Models with Secure Multiparty Computation
Sikha Pentyala, Davis Railsback, Ricardo Maia, Rafael Dowsley, David Melanson, Anderson Nascimento, Martine De Cock
Cryptographic protocols

We address the problem of learning a machine learning model from training data that originates at multiple data owners, while providing formal privacy guarantees regarding the protection of each owner's data. Existing solutions based on Differential Privacy (DP) achieve this at the cost of a drop in accuracy. Solutions based on Secure Multiparty Computation (MPC) do not incur such accuracy loss but leak information when the trained model is made publicly available. We propose an MPC solution...
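
As a plaintext illustration of the DP ingredient (in the paper the corresponding computation runs inside MPC, so no single party ever sees an individual gradient or the noise), a sketch of per-owner clipping followed by the Gaussian mechanism; the parameter names are ours:

    import numpy as np

    def dp_aggregate(grads, clip=1.0, sigma=1.0, rng=None):
        # Clip each owner's gradient to L2 norm `clip`, sum, then add
        # Gaussian noise scaled to the clipping bound (Gaussian mechanism).
        rng = rng or np.random.default_rng()
        clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12)) for g in grads]
        return np.sum(clipped, axis=0) + rng.normal(0.0, sigma * clip, grads[0].shape)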

2022/014 (PDF) Last updated: 2022-01-08
Transformer encoder-based Crypto-Ransomware Detection for Low-Power Embedded Processors
Hyunji Kim, Sejin Lim, Yeajun Kang, Wonwoong Kim, Hwajeong Seo
Applications

Crypto-ransomware encrypts a victim's files and then demands payment for the key needed to decrypt them. In this paper, we present new approaches to prevent crypto-ransomware by detecting block cipher algorithms for Internet of Things (IoT) platforms. Generic software from the AVR package and the lightweight block cipher library (FELICS), written in C, was used to train the neural network, and we then evaluated the result. Unlike...

2021/1636 (PDF) Last updated: 2021-12-17
Does Fully Homomorphic Encryption Need Compute Acceleration?
Leo de Castro, Rashmi Agrawal, Rabia Yazicigil, Anantha Chandrakasan, Vinod Vaikuntanathan, Chiraag Juvekar, Ajay Joshi

The emergence of cloud-computing has raised important privacy questions about the data that users share with remote servers. While data in transit is protected using standard techniques like Transport Layer Security (TLS), most cloud providers have unrestricted plaintext access to user data at the endpoint. Fully Homomorphic Encryption (FHE) offers one solution to this problem by allowing for arbitrarily complex computations on encrypted data without ever needing to decrypt it....

2021/1614 Last updated: 2021-12-15
PEPFL: A Framework for a Practical and Efficient Privacy-Preserving Federated Learning
Yange Chen, Baocang Wang, Hang Jiang, Pu Duan, Benyu Zhang, Chengdong Liu, Zhiyong Hong, Yupu Hua
Applications

As an emerging joint learning model, federated deep learning is a promising way to combine the model parameters of different users for training and inference without collecting the users' original data. However, previous work has not established a practical and efficient solution, owing to the absence of efficient matrix computation and cryptographic schemes in the privacy-preserving federated learning model, especially in partially homomorphic cryptosystems. In this paper, we propose a practical...
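
One concrete partially homomorphic building block such frameworks typically rely on is Paillier encryption, whose ciphertexts multiply to add the underlying plaintexts; a toy sketch (demo-sized primes and our own naming, not the paper's actual construction):

    import random
    from math import gcd

    p, q = 104723, 104729      # demo primes; real keys need >= 2048-bit moduli
    n, n2 = p * q, (p * q) ** 2
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    g = n + 1
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

    def enc(m):
        r = random.randrange(2, n)
        while gcd(r, n) != 1:
            r = random.randrange(2, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def dec(c):
        return ((pow(c, lam, n2) - 1) // n) * mu % n

    # Clients encrypt (quantized) model updates; the aggregator multiplies
    # ciphertexts, which adds plaintexts without ever decrypting them.
    updates = [17, 25, 8]
    agg = 1
    for u in updates:
        agg = agg * enc(u) % n2
    assert dec(agg) == sum(updates)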

2021/1592 (PDF) Last updated: 2022-07-28
The Need for Speed: A Fast Guessing Entropy Calculation for Deep Learning-based SCA
Guilherme Perin, Lichao Wu, Stjepan Picek
Attacks and cryptanalysis

The adoption of deep neural networks for profiling side-channel attacks (SCA) opened new perspectives for leakage detection. Recent publications showed that cryptographic implementations featuring different countermeasures could be broken without feature selection or trace preprocessing. This success comes with a high price: extensive hyperparameter search to find optimal deep learning models. As deep learning models usually suffer from overfitting due to their high fitting capacity, it is...
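
For context, the quantity being accelerated is usually estimated empirically: per-trace key scores are accumulated, the rank of the true key is read off, and ranks are averaged over many shuffled attack runs. A sketch of this standard baseline estimate (not the paper's fast method):

    import numpy as np

    def guessing_entropy(log_probs, true_key, n_attacks=100, seed=0):
        # log_probs: (n_traces, n_keys) model scores per trace and key guess.
        rng = np.random.default_rng(seed)
        ranks = np.zeros(log_probs.shape[0])
        for _ in range(n_attacks):
            cum = np.cumsum(log_probs[rng.permutation(log_probs.shape[0])], axis=0)
            ranks += (cum > cum[:, [true_key]]).sum(axis=1)  # rank after t traces
        return ranks / n_attacks    # GE as a function of the number of traces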

2021/1517 (PDF) Last updated: 2023-03-06
HOLMES: Efficient Distribution Testing for Secure Collaborative Learning
Ian Chang, Katerina Sotiraki, Weikeng Chen, Murat Kantarcioglu, Raluca Ada Popa
Applications

Using secure multiparty computation (MPC), organizations which own sensitive data (e.g., in healthcare, finance or law enforcement) can train machine learning models over their joint dataset without revealing their data to each other. At the same time, secure computation restricts operations on the joint dataset, which impedes the computations needed to assess its quality. Without such an assessment, deploying a jointly trained model is potentially illegal. Regulations, such as the European Union's...
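
One classical test such a quality assessment could instantiate (HOLMES performs checks of this kind inside secure computation; shown here in the clear) is a two-sample chi-squared homogeneity test over histograms:

    import numpy as np

    def chi2_two_sample(counts_a, counts_b):
        # Pearson chi-squared statistic for "do two owners' histograms
        # come from the same distribution?"; degrees of freedom = bins - 1.
        a, b = np.asarray(counts_a, float), np.asarray(counts_b, float)
        na, nb, total = a.sum(), b.sum(), a + b
        exp_a, exp_b = total * na / (na + nb), total * nb / (na + nb)
        return ((a - exp_a) ** 2 / exp_a).sum() + ((b - exp_b) ** 2 / exp_b).sum()

    # Compare the statistic against a chi-squared critical value,
    # e.g. 16.92 for 9 degrees of freedom at the 5% level.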

2021/1464 (PDF) Last updated: 2021-11-06
Polynomial-time targeted attacks on coin tossing for any number of corruptions
Omid Etesami, Ji Gao, Saeed Mahloujifar, Mohammad Mahmoody
Foundations

Consider an $n$-message coin-tossing protocol between $n$ parties $P_1,\dots,P_n$, in which $P_i$ broadcasts a single message $w_i$ in round $i$ (possibly based on the previously shared messages) and at the end they agree on a bit $b$. A $k$-replacing adversary $A_k$ can change up to $k$ of the messages as follows: in every round $i$, the adversary, who knows all the messages broadcast so far as well as the message $w_i$ that $P_i$ has prepared to send, can choose to replace the prepared...
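
To make the model concrete, here is a toy instance, assuming the simplest protocol where each $P_i$ broadcasts one uniform bit and the output is the majority bit; the greedy $k$-replacing strategy below already biases the outcome by roughly $k/\sqrt{n}$ (the paper treats general protocols and stronger attacks):

    import random

    def majority_toss(n, k=0, target=1):
        # Each party broadcasts one uniform bit; output = majority.
        # A k-replacing adversary inspects each prepared bit before it is
        # sent and may replace it; this attacker spends its budget eagerly.
        budget, total = k, 0
        for _ in range(n):
            w = random.getrandbits(1)
            if budget and w != target:
                w, budget = target, budget - 1
            total += w
        return int(2 * total > n)

    n, k, trials = 101, 5, 20000
    honest = sum(majority_toss(n) for _ in range(trials)) / trials       # ~0.5
    attacked = sum(majority_toss(n, k) for _ in range(trials)) / trials  # well above 0.5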

2021/1437 (PDF) Last updated: 2021-10-26
ModuloNET: Neural Networks Meet Modular Arithmetic for Efficient Hardware Masking
Anuj Dubey, Afzal Ahmad, Muhammad Adeel Pasha, Rosario Cammarota, Aydin Aysu
Implementation

Intellectual Property (IP) theft of trained machine learning (ML) models through side-channel attacks on inference engines is becoming a major threat. Indeed, several recent works have shown reverse engineering of the model internals using such attacks, but research on building defenses remains largely unexplored. There is a critical need to efficiently and securely transform defenses from cryptography, such as masking, to ML frameworks. Existing works, however, revealed that a...
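
The core compatibility issue can be seen in miniature: additive (arithmetic) masking commutes with linear layers when everything is computed modulo a power of two, which is why moving networks to modular arithmetic helps. A first-order sketch (nonlinear activations, the hard part the paper addresses, are omitted):

    import random

    Q = 2 ** 16   # ModuloNET-style networks compute modulo a power of two

    def mask(w):
        # First-order arithmetic masking: split w into two random shares.
        r = random.randrange(Q)
        return (w - r) % Q, r

    def masked_linear(x, w_shares, bias=0):
        # A linear layer commutes with additive masking: evaluate it on
        # each share vector separately and recombine only at the end.
        s0 = sum(xi * wi for xi, wi in zip(x, w_shares[0])) % Q
        s1 = sum(xi * wi for xi, wi in zip(x, w_shares[1])) % Q
        return (s0 + s1 + bias) % Q

    x, w = [3, 1, 4], [2, 7, 1]
    shares = list(zip(*[mask(wi) for wi in w]))   # (first shares, second shares)
    assert masked_linear(x, shares) == sum(a * b for a, b in zip(x, w)) % Q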

2021/1328 (PDF) Last updated: 2022-04-15
Cross Subkey Side Channel Analysis Based on Small Samples
Fanliang Hu, Huanyu Wang, Junnian Wang
Foundations

The majority of recently demonstrated Deep-Learning Side-Channel Analysis (DLSCA) attacks use neural networks trained on a segment of traces containing operations only related to the target subkey. However, when the size of the training set is limited, as in this paper with only 5K power traces, the deep learning (DL) model cannot effectively learn the internal features of the data due to insufficient training data. In this paper, we propose a cross-subkey training approach that acts as a trace...
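
Our reading of the augmentation idea, as a sketch with a hypothetical trace layout (segment offsets and widths are illustrative): segments belonging to different subkeys are pooled into a single training set, multiplying the effective number of examples drawn from the same traces:

    import numpy as np

    def cross_subkey_dataset(traces, byte_labels, offsets, width):
        # Cut out the segment belonging to each subkey and label it with
        # that byte's own label (e.g., its S-box output), then pool all
        # segments into one training set.
        X = [traces[:, off:off + width] for off in offsets]
        y = [byte_labels[:, j] for j in range(len(offsets))]
        return np.concatenate(X), np.concatenate(y)

    # Hypothetical layout: 5K traces, 16 AES subkeys, 200-sample segments.
    traces = np.random.randn(5000, 3200)
    byte_labels = np.random.randint(0, 256, (5000, 16))
    X, y = cross_subkey_dataset(traces, byte_labels,
                                [200 * j for j in range(16)], 200)
    # X has shape (80000, 200): 16x more examples from the same 5K traces.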

2021/1107 (PDF) Last updated: 2022-03-06
Multi-Leak Deep-Learning Side-Channel Analysis
Fanliang Hu, Huanyu Wang, Junnian Wang
Foundations

Deep Learning Side-Channel Attacks (DLSCAs) have become a realistic threat to implementations of cryptographic algorithms, such as the Advanced Encryption Standard (AES). By utilizing deep-learning models to analyze side-channel measurements, the attacker is able to derive the secret key of the cryptographic algorithm. However, when traces have multiple leakage intervals for a specific attack point, the majority of existing works train neural networks on these traces directly, without a...

2021/1100 (PDF) Last updated: 2022-10-25
REDsec: Running Encrypted Discretized Neural Networks in Seconds
Lars Folkerts, Charles Gouert, Nektarios Georgios Tsoutsos
Applications

Machine learning as a service (MLaaS) has risen to become a prominent technology due to the large development time, amount of data, hardware costs, and level of expertise required to develop a machine learning model. However, privacy concerns prevent the adoption of MLaaS for applications with sensitive data. A promising privacy preserving solution is to use fully homomorphic encryption (FHE) to perform the ML computations. Recent advancements have lowered computational costs by several...

2021/1080 (PDF) Last updated: 2021-08-23
SplitGuard: Detecting and Mitigating Training-Hijacking Attacks in Split Learning
Ege Erdogan, Alptekin Kupcu, A. Ercument Cicek
Applications

Distributed deep learning frameworks, such as split learning, have recently been proposed to enable a group of participants to collaboratively train a deep neural network without sharing their raw data. Split learning in particular achieves this goal by dividing a neural network between a client and a server so that the client computes the initial set of layers, and the server computes the rest. However, this method introduces a unique attack vector for a malicious server attempting to steal...
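
For readers new to the setting, a minimal sketch of a split forward pass (illustrative dimensions and random weights; in real split learning each side trains its own half, and the cut-layer "smashed" data plus the gradients the server returns are exactly the attack surface discussed here):

    import numpy as np

    rng = np.random.default_rng(0)
    relu = lambda a: np.maximum(a, 0.0)

    # The client holds the first layers, the server holds the rest.
    W_client = rng.normal(size=(784, 128))
    W_server = rng.normal(size=(128, 10))

    def client_forward(x):
        # The client sends only these cut-layer ("smashed") activations.
        return relu(x @ W_client)

    def server_forward(smashed):
        # The server finishes the forward pass and computes logits.
        return smashed @ W_server

    x = rng.normal(size=(32, 784))    # raw batch: never leaves the client
    logits = server_forward(client_forward(x))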

2021/1074 (PDF) Last updated: 2021-08-23
UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning
Ege Erdogan, Alptekin Kupcu, A. Ercument Cicek
Applications

Training deep neural networks requires large-scale data, which often forces users to work in a distributed or outsourced setting, accompanied by privacy concerns. The split learning framework aims to address this concern by splitting up the model between the client and the server. The idea is that since the server does not have access to the client's part of the model, the scheme supposedly provides privacy. We show that this is not true via two novel attacks. (1) We show that an honest-but-curious...

2021/1017 (PDF) Last updated: 2021-08-06
Improve Neural Distinguisher for Cryptanalysis
Zezhou Hou, Jiongjiong Ren, Shaozhen Chen

At CRYPTO'19, Gohr built a bridge between deep learning and cryptanalysis. Based on deep neural networks, he trained neural distinguishers for Speck32/64 using a plaintext difference and single ciphertext pairs. Compared with purely differential distinguishers, neural distinguishers successfully use features of the ciphertext pairs. Besides, with the help of neural distinguishers, he attacked 11-round Speck32/64 using Bayesian optimization. At EUROCRYPT'21, Benamira et al. proposed a detailed...

2021/981 (PDF) Last updated: 2021-07-23
Deep Learning-based Side-channel Analysis against AES Inner Rounds
Sudharshan Swaminathan, Lukasz Chmielewski, Guilherme Perin, Stjepan Picek
Implementation

Side-channel attacks (SCA) focus on vulnerabilities caused by insecure implementations and exploit them to deduce useful information about the data being processed or the data itself through leakages obtained from the device. There have been many studies exploiting these side-channel leakages, and most of the state-of-the-art attacks have been shown to work on systems implementing AES. The methodology is usually based on exploiting leakages for the outer rounds, i.e., the first and the last...

2021/968 (PDF) Last updated: 2023-07-20
Quantum-Resistance Meets White-Box Cryptography: How to Implement Hash-Based Signatures against White-Box Attackers?
Kemal Bicakci, Kemal Ulker, Yusuf Uzunay, Halis Taha Şahin, Muhammed Said Gündoğan
Implementation

White-box cryptography challenges the assumption that the endpoints are trusted and aims at providing protection against an adversary more powerful than the one in the traditional black-box cryptographic model. Motivated by the fact that most existing white-box implementations focus on symmetric encryption, we present implementations of hash-based signatures so that security against white-box attackers (who have read-only access to data with a size bounded by a space-hardness parameter...

2021/939 (PDF) Last updated: 2021-09-24
OmniLytics: A Blockchain-based Secure Data Market for Decentralized Machine Learning
Jiacheng Liang, Songze Li, Wensi Jiang, Bochuan Cao, Chaoyang He
Applications

We propose OmniLytics, a blockchain-based secure data trading marketplace for machine learning applications. Utilizing OmniLytics, many distributed data owners can contribute their private data to collectively train an ML model requested by some model owners, and receive compensation for data contribution. OmniLytics enables such model training while simultaneously providing 1) model security against curious data owners; 2) data security against the curious model and data owners; 3)...
