  1. Chapter

    Feature Transfer-Based Stealthy Poisoning Attack for DNNs

    Intentionally polluting training data with specific triggers can lead to poisoning attacks on deep neural networks. Defense algorithms can easily detect these poisoning samples, as existing episodes mainly foc...

    Jinyin Chen, Ximin Zhang, Haibin Zheng in Attacks, Defenses and Testing for Deep Learning (2024)

  2. Chapter

    Attention Mechanism-Based Adversarial Attack Against DRL

    Deep Reinforcement Learning (DRL) seeks to optimize long-term future returns through learning policies based on deep learning models to achieve specific targets. However, current research has discovered t...

    Jinyin Chen, Ximin Zhang, Haibin Zheng in Attacks, Defenses and Testing for Deep Learning (2024)

  3. Chapter

    Detecting Adversarial Examples via Local Gradient Checking

    Deep neural networks (DNNs) are vulnerable to adversarial examples, which may lead to catastrophe in security-critical domains. Numerous detection methods are proposed to characterize the feature uniqueness of...

    Jinyin Chen, Ximin Zhang, Haibin Zheng in Attacks, Defenses and Testing for Deep Learning (2024)

  4. Book

  5. Chapter and Conference Paper

    IKE: Threshold Key Escrow Service with Intermediary Encryption

    Blockchain has gained significant attention for its potential to revolutionize various fields. The security of the blockchain system heavily relies on private key management, and traditional key management sch...

    Yang Yang, Bingyu Li, Shihong **ong, Bo Qin in Algorithms and Architectures for Parallel … (2024)

  6. Chapter

    A Novel Adversarial Defense by Refocusing on Critical Areas and Strengthening Object Contours

    The success of deep learning is largely attributed to its representational capabilities, especially in computer vision tasks. However, recent studies have shown that deep neural networks (DNNs) are always v...

    Jinyin Chen, Ximin Zhang, Haibin Zheng in Attacks, Defenses and Testing for Deep Learning (2024)

  7. Chapter

    A Novel DNN Object Contour Attack on Image Recognition

    Deep neural networks (DNNs) have diverse applications due to their ability to learn features. However, recent studies have revealed that DNNs are susceptible to adversarial examples. Currently, the primary foc...

    Jinyin Chen, Ximin Zhang, Haibin Zheng in Attacks, Defenses and Testing for Deep Learning (2024)

  8. Chapter

    Adaptive Channel Transformation-Based Detector for Adversarial Attacks

    As deep neural networks (DNNs) are extensively used in computer vision tasks, the vulnerability of such systems to well-designed adversarial examples has received increasing attention. While various adversaria...

    Jinyin Chen, Ximin Zhang, Haibin Zheng in Attacks, Defenses and Testing for Deep Learning (2024)

  9. Chapter

    Targeted Label Adversarial Attack on Graph Embedding

    Graph embedding is a popular technique used in various real-world applications to learn low-dimensional representations for nodes or edges in a graph. The increasing interest in graph mining has led to the dev...

    Jinyin Chen, Ximin Zhang, Haibin Zheng in Attacks, Defenses and Testing for Deep Learning (2024)

  10. Chapter

    An Effective Model Copyright Protection for Federated Learning

    Federated learning (FL), an efficient distributed machine learning framework, carries out model training while safeguarding local data privacy. Due to its excellent performance and significant profits, it has ...

    Jinyin Chen, Ximin Zhang, Haibin Zheng in Attacks, Defenses and Testing for Deep Learning (2024)

  11. Chapter

    Using Adversarial Examples to against Backdoor Attack in Federated Learning

    As a distributed learning paradigm, Federated Learning (FL) has achieved great success in aggregating information from different clients to train a shared global model. Unfortunately, by uploading a carefully craf...

    Jinyin Chen, Ximin Zhang, Haibin Zheng in Attacks, Defenses and Testing for Deep Learning (2024)

  12. Chapter

    A Deep Learning Framework for Dynamic Network Link Prediction

    Link prediction, which involves predicting potential relations between nodes in networks, has long been a challenge in network science. Most studies have focused on link prediction of static networks, while re...

    Jinyin Chen, Ximin Zhang, Haibin Zheng in Attacks, Defenses and Testing for Deep Learning (2024)

  13. Chapter

    Certifiable Prioritization for Deep Neural Networks via Movement Cost in Feature Space

    Although deep neural networks (DNNs) have shown superior performance in different software systems, they also display malfunctioning and can even lead to irreversible catastrophes. Hence, it is significant to ...

    Jinyin Chen, Ximin Zhang, Haibin Zheng in Attacks, Defenses and Testing for Deep Learning (2024)

  14. Chapter

    Perturbation-Optimized Black-Box Adversarial Attacks via Genetic Algorithm

    Deep learning models often exhibit vulnerabilities to adversarial attacks, which has led to the development of various attack methods to evaluate model robustness and devise defense strategies. Currently, adve...

    Jinyin Chen, Ximin Zhang, Haibin Zheng in Attacks, Defenses and Testing for Deep Learning (2024)

  15. Chapter

    Adversarial Attacks on GNN-Based Vertical Federated Learning

    Graph Neural Network (GNN) has emerged as a powerful technique for graph representation learning. However, when faced with large-scale private data collected from users, GNN may struggle to deliver optimal per...

    Jinyin Chen, Ximin Zhang, Haibin Zheng in Attacks, Defenses and Testing for Deep Learning (2024)

  16. Chapter

    Neuron-Level Inverse Perturbation Against Adversarial Attacks

    Although deep learning models have achieved unprecedented success, their vulnerabilities towards adversarial attacks have attracted increasing attention, especially when deployed in security-critical domains. ...

    Jinyin Chen, Ximin Zhang, Haibin Zheng in Attacks, Defenses and Testing for Deep Learning (2024)

  17. Chapter

    Query-Efficient Adversarial Attack Against Vertical Federated Graph Learning

    Graph neural networks (GNNs) are becoming more widely used due to their capacity to understand graph structures and learn graph representations of data. However, the performance of GNN is limited by distributi...

    Jinyin Chen, Ximin Zhang, Haibin Zheng in Attacks, Defenses and Testing for Deep Learning (2024)

  18. Chapter

    Defense Against Free-Rider Attack from the Weight Evolving Frequency

    Federated learning (FL) is a distributed machine learning method in which multiple clients collaborate to train a federated model without exchanging their individual data. Although federated learning has gain...

    Jinyin Chen, Ximin Zhang, Haibin Zheng in Attacks, Defenses and Testing for Deep Learning (2024)

  19. Chapter

    Backdoor Attack on Dynamic Link Prediction

    Dynamic Link Prediction (DLP) performs graph prediction based on historical information. The quality of the training data plays a crucial role as it greatly impacts the prediction performance of most D...

    Jinyin Chen, Ximin Zhang, Haibin Zheng in Attacks, Defenses and Testing for Deep Learning (2024)

  20. Chapter

    Guard the Vertical Federated Graph Learning from Property Inference Attack

    Graph Neural Networks (GNNs) have been widely applied due to their powerful feature extraction capability on graph-structured data. In practice, they typically suffer from the large-scale data collection chall...

    Jinyin Chen, Ximin Zhang, Haibin Zheng in Attacks, Defenses and Testing for Deep Learning (2024)
