
 Adversarial Examples Identification in an End-to-end System with Image Transformation and Filters
Author(s): Dang Duy Thang and Toshihiro Matsui
Published in: IEEE ACCESS (SCIE, Q1, IF: 4.098); Issue: 1, Volume 8, ISSN: 2169-3536; Pages: 44426-44442; Year: 2020
Field: Science and technology; Type: Scientific article; Scope: International
ABSTRACT
Deep learning has received great attention in recent years because of its impressive performance on many tasks. However, the widespread adoption of deep learning has also become a major security risk for those systems, as recent research has pointed out the vulnerabilities of deep learning models. One of these security issues is adversarial examples: instances with very small, intentional feature perturbations that cause a machine learning model to make a wrong prediction. Many defensive methods have been proposed to combat or detect adversarial examples, but they are still imperfect and require extensive fine-tuning when deployed in security systems. In this work, we introduce a completely automated method for identifying adversarial examples by using image transformation and filter techniques in an end-to-end system. By exploiting the sensitivity of adversarial features to geometry and frequency, we integrate geometric transformations and frequency-domain denoising to identify adversarial examples. Our proposed detection system is evaluated on popular data sets such as ImageNet and MNIST and achieves accuracy of up to 99.9% with many optimizations.
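The core idea described above — that adversarial perturbations tend to be fragile under geometric transformation and frequency-domain filtering, while a clean image's label is stable — can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's actual pipeline: the `predict` function stands in for a hypothetical classifier, the 90-degree rotation and circular FFT low-pass mask are assumed example transforms, and the real system combines and tunes such steps end-to-end.

```python
import numpy as np

def lowpass_filter(image, cutoff=0.25):
    """Suppress high-frequency components with a circular low-pass mask in the 2-D FFT domain."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    cy, cx = h // 2, w // 2
    radius = cutoff * min(h, w)
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

def rotate90(image):
    """A simple geometric transformation (90-degree rotation)."""
    return np.rot90(image)

def detect_adversarial(image, predict):
    """Flag the input as adversarial if the model's label changes
    under denoising or geometric transformation."""
    original_label = predict(image)
    if predict(lowpass_filter(image)) != original_label:
        return True
    if predict(rotate90(image)) != original_label:
        return True
    return False
```

A clean image whose prediction survives both transforms passes through; an adversarial example whose carefully crafted perturbation is destroyed by either transform changes its label and is flagged.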