Search Results
-
Watermarking PRFs and PKE Against Quantum Adversaries
We initiate the study of software watermarking against quantum adversaries. A quantum adversary generates a quantum state as pirate software that...
-
On Time-Space Tradeoffs for Bounded-Length Collisions in Merkle-Damgård Hashing
We study the power of preprocessing adversaries in finding bounded-length collisions in the widely used Merkle-Damgård (MD) hashing in the random...
-
On Differential Privacy and Adaptive Data Analysis with Bounded Space
We study the space complexity of the two related fields of differential privacy and adaptive data analysis. Specifically,...
-
Verification of randomized consensus algorithms under round-rigid adversaries
Randomized fault-tolerant distributed algorithms pose a number of challenges for automated verification: (i) parameterization in the number of...
-
Reducing classifier overconfidence against adversaries through graph algorithms
In this work we show that deep learning classifiers tend to become overconfident in their answers under adversarial attacks, even when the classifier...
-
The Relationship Between Idealized Models Under Computationally Bounded Adversaries
The random oracle, generic group, and generic bilinear map models (ROM, GGM, GBM, respectively) are fundamental heuristics used to justify new...
-
Pruning in the Face of Adversaries
The vulnerability of deep neural networks against adversarial examples – inputs with small imperceptible perturbations – has gained a lot of...
-
Verifiable Capacity-Bound Functions: A New Primitive from Kolmogorov Complexity
We initiate the study of verifiable capacity-bound functions (VCBF). The main VCBF property imposes a strict lower bound on the number of bits read...
-
Valency-Based Consensus Under Message Adversaries Without Limit-Closure
We introduce a novel two-step approach for developing a distributed consensus algorithm, which does not require the designer to identify and exploit...
-
Mal2GCN: a robust malware detection approach using deep graph convolutional networks with non-negative weights
With the growing use of Deep Learning (DL) to tackle various problems, securing these models against adversaries has become a primary concern for...
-
Adversarial Deep Learning
Deep learning is not provably secure. Deep neural networks are vulnerable to security attacks from malicious adversaries, which is an ongoing and...
-
Packet Forwarding with Swaps
We consider packet forwarding in the adversarial queueing theory (AQT) model introduced by Borodin et al. In this context, a series of recent works...
-
Actively Secure Garbled Circuits with Constant Communication Overhead in the Plain Model
We consider the problem of constant-round secure two-party computation in the presence of active (malicious) adversaries. We present the first...
-
Quantum Query Lower Bounds for Key Recovery Attacks on the Even-Mansour Cipher
The Even-Mansour (EM) cipher is one of the famous constructions for a block cipher. Kuwakado and Morii demonstrated that a quantum adversary can...
-
A deep reinforcement learning approach for multi-agent mobile robot patrolling
Patrolling strategies primarily deal with minimising the time taken to visit specific locations and cover an area. The use of intelligent agents in...
-
Nearly Optimal Robust Secret Sharing Against Rushing Adversaries
Robust secret sharing is a strengthening of standard secret sharing that allows the shared secret to be recovered even if some of the shares being...
-
On Non-uniform Security for Black-Box Non-interactive CCA Commitments
We obtain a black-box construction of non-interactive CCA commitments against non-uniform adversaries. This makes black-box use of an appropriate...
-
Towards Defending Multiple \(\ell _p\)-Norm Bounded Adversarial Perturbations via Gated Batch Normalization
There has been extensive evidence demonstrating that deep neural networks are vulnerable to adversarial examples, which motivates the development of...
-
BVDFed: Byzantine-resilient and verifiable aggregation for differentially private federated learning
Federated Learning (FL) has emerged as a powerful technology designed for collaborative training between multiple clients and a server while...