-
Chapter
Feature Transfer-Based Stealthy Poisoning Attack for DNNs
Intentionally polluting training data with specific triggers can lead to poisoning attacks on deep neural networks. Defense algorithms can easily detect these poisoning samples, as existing approaches mainly foc...
-
Chapter
Attention Mechanism-Based Adversarial Attack Against DRL
Deep Reinforcement Learning (DRL) seeks to optimize long-term future returns by learning policies based on deep learning models for achieving specific targets. However, current research has discovered t...
-
Chapter
Detecting Adversarial Examples via Local Gradient Checking
Deep neural networks (DNNs) are vulnerable to adversarial examples, which may lead to catastrophe in security-critical domains. Numerous detection methods are proposed to characterize the feature uniqueness of...
-
Chapter and Conference Paper
IKE: Threshold Key Escrow Service with Intermediary Encryption
Blockchain has gained significant attention for its potential to revolutionize various fields. The security of the blockchain system heavily relies on private key management, and traditional key management sch...
-
Chapter
A Novel Adversarial Defense by Refocusing on Critical Areas and Strengthening Object Contours
The success of deep learning is largely attributed to its representational capabilities, especially in computer vision tasks. However, recent research has shown that deep neural networks (DNNs) are always v...
-
Chapter
A Novel DNN Object Contour Attack on Image Recognition
Deep neural networks (DNNs) have diverse applications due to their ability to learn features. However, recent studies have revealed that DNNs are susceptible to adversarial examples. Currently, the primary foc...
-
Chapter
Adaptive Channel Transformation-Based Detector for Adversarial Attacks
As deep neural networks (DNNs) are extensively used in computer vision tasks, the vulnerability of such systems to well-designed adversarial examples has received increasing attention. While various adversaria...
-
Chapter
Targeted Label Adversarial Attack on Graph Embedding
Graph embedding is a popular technique used in various real-world applications to learn low-dimensional representations for nodes or edges in a graph. The increasing interest in graph mining has led to the dev...
-
Chapter
An Effective Model Copyright Protection for Federated Learning
Federated learning (FL), an efficient distributed machine learning framework, carries out model training while safeguarding local data privacy. Due to its excellent performance and significant profits, it has ...
-
Chapter
Using Adversarial Examples to against Backdoor Attack in Federated Learning
As a distributed learning paradigm, Federated Learning (FL) has achieved great success in aggregating information from different clients to train a shared global model. Unfortunately, by uploading a carefully craf...
-
Chapter
A Deep Learning Framework for Dynamic Network Link Prediction
Link prediction, which involves predicting potential relations between nodes in networks, has long been a challenge in network science. Most studies have focused on link prediction of static networks, while re...
-
Chapter
Certifiable Prioritization for Deep Neural Networks via Movement Cost in Feature Space
Although deep neural networks (DNNs) have shown superior performance in different software systems, they can also malfunction and even lead to irreversible catastrophes. Hence, it is significant to ...
-
Chapter
Perturbation-Optimized Black-Box Adversarial Attacks via Genetic Algorithm
Deep learning models often exhibit vulnerabilities to adversarial attacks, which has led to the development of various attack methods to evaluate model robustness and devise defense strategies. Currently, adve...
-
Chapter
Adversarial Attacks on GNN-Based Vertical Federated Learning
Graph Neural Network (GNN) has emerged as a powerful technique for graph representation learning. However, when faced with large-scale private data collected from users, GNN may struggle to deliver optimal per...
-
Chapter
Neuron-Level Inverse Perturbation Against Adversarial Attacks
Although deep learning models have achieved unprecedented success, their vulnerability to adversarial attacks has attracted increasing attention, especially when they are deployed in security-critical domains. ...
-
Chapter
Query-Efficient Adversarial Attack Against Vertical Federated Graph Learning
Graph neural networks (GNNs) are becoming more widely used due to their capacity to understand graph structures and learn graph representations of data. However, the performance of GNN is limited by distributi...
-
Chapter
Defense Against Free-Rider Attack from the Weight Evolving Frequency
Federated learning (FL) is a distributed machine learning method in which multiple clients collaborate to train a federated model without exchanging their individual data. Although federated learning has gain...
-
Chapter
Backdoor Attack on Dynamic Link Prediction
Dynamic Link Prediction (DLP) predicts future links in a graph based on historical information. The quality of the training data plays a crucial role, as it greatly impacts the prediction performance of most D...
-
Chapter
Guard the Vertical Federated Graph Learning from Property Inference Attack
Graph Neural Networks (GNNs) have been widely applied due to their powerful feature extraction capability on graph-structured data. In practice, they typically suffer from the large-scale data collection chall...