ADVM '21: Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia

SESSION: Oral Session

Comparative Study of Adversarial Training Methods for Long-tailed Classification

  • Xiangxian Li
  • Haokai Ma
  • Lei Meng
  • Xiangxu Meng

Adversarial training originated in image classification as a means of addressing adversarial
attacks, where an imperceptible perturbation in an image leads to a significant change in the
model's decision. It has recently been observed to be effective in alleviating the long-tailed
classification problem, where imbalanced class sizes cause the model to perform much worse on
the small classes. However, existing methods typically focus on how perturbations are generated
for the data, while the contributions of different perturbations to long-tailed classification
have not been well analyzed. To this end, this paper investigates the perturbation generation
and incorporation components of existing adversarial training methods and proposes a taxonomy
that defines these methods with three levels of components, in terms of information,
methodology, and optimization. This taxonomy may serve as a design paradigm in which an
adversarial training algorithm is created by combining different components in the taxonomy.
A comparative study is conducted to verify the influence of each component on long-tailed
classification. Experimental results on two benchmark datasets show that a combination of
statistical perturbations and hybrid optimization achieves promising performance, and that the
gradient-based method typically improves the performance of both the head and tail classes.
More importantly, it is verified that a reasonable combination of the components in our
taxonomy may create an algorithm that outperforms the state-of-the-art.
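
As an illustration of how such components compose, the sketch below shows a standard
gradient-based adversarial training step with a hybrid (clean plus adversarial) objective in
PyTorch. The FGSM-style generation step, the epsilon budget, and the mixing weight alpha are
illustrative assumptions, not the paper's specific configurations.

    import torch
    import torch.nn.functional as F

    def adversarial_training_step(model, optimizer, x, y, eps=8 / 255, alpha=0.5):
        # Perturbation generation: one gradient (FGSM-style) step on the input.
        x_adv = x.clone().detach().requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()

        # Incorporation and hybrid optimization: weight the clean and
        # adversarial losses with alpha (an assumed choice).
        optimizer.zero_grad()
        loss = alpha * F.cross_entropy(model(x), y) \
            + (1 - alpha) * F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()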

Imperceptible Adversarial Examples by Spatial Chroma-Shift

  • Ayberk Aydin
  • Deniz Sen
  • Berat Tuna Karli
  • Oguz Hanoglu
  • Alptekin Temizel

Deep neural networks have been shown to be vulnerable to various kinds of adversarial
perturbations. In addition to the widely studied additive-noise perturbations, adversarial
examples can also be created by applying a per-pixel spatial drift to input images. While
spatial-transformation-based adversarial examples look more natural to human observers due to
the absence of additive noise, they still exhibit visible distortions caused by the spatial
transformations. Since human vision is more sensitive to distortions in the luminance channel
than in the chrominance channels, an observation that underlies lossy visual multimedia
compression standards, we propose a spatial-transformation-based perturbation method that
creates adversarial examples by modifying only the color components of an input image. While
achieving competitive fooling rates on the CIFAR-10 and NIPS 2017 Adversarial Learning
Challenge datasets, examples created with the proposed method score better on various
perceptual quality metrics. Human visual perception studies validate that the examples are
more natural looking and often indistinguishable from their original counterparts.
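
A minimal PyTorch sketch of the core idea: apply a spatial flow only to the chrominance
channels of a YCbCr decomposition while leaving luminance untouched. The BT.601 conversion,
the flow parameterization, and the sampling options are assumptions, not the authors'
implementation; in an attack, the flow field would be optimized (e.g., by gradient descent)
to maximize the classifier's loss.

    import torch
    import torch.nn.functional as F

    def rgb_to_ycbcr(x):
        r, g, b = x[:, 0:1], x[:, 1:2], x[:, 2:3]
        y  =  0.299 * r + 0.587 * g + 0.114 * b
        cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
        cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
        return y, torch.cat([cb, cr], dim=1)

    def ycbcr_to_rgb(y, c):
        cb, cr = c[:, 0:1] - 0.5, c[:, 1:2] - 0.5
        rgb = torch.cat([y + 1.402 * cr,
                         y - 0.344136 * cb - 0.714136 * cr,
                         y + 1.772 * cb], dim=1)
        return rgb.clamp(0, 1)

    def chroma_shift(x, flow):
        """Warp only the Cb/Cr channels of x (N, 3, H, W) with a per-pixel
        flow field (N, H, W, 2) given as normalized coordinate offsets."""
        n, _, h, w = x.shape
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h, device=x.device),
                                torch.linspace(-1, 1, w, device=x.device),
                                indexing="ij")
        grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
        y, c = rgb_to_ycbcr(x)
        c_warped = F.grid_sample(c, grid + flow, mode="bilinear",
                                 padding_mode="border", align_corners=True)
        return ycbcr_to_rgb(y, c_warped)  # luminance is left untouched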

Generating Adversarial Remote Sensing Images via Pan-Sharpening Technique

  • Maoxun Yuan
  • Xingxing Wei

Pan-sharpening is one of the most commonly used techniques in remote sensing; it fuses
panchromatic (PAN) and multispectral (MS) images to obtain images with both high spectral and
high spatial resolution. Because of these advantages, researchers usually apply object
detectors to pan-sharpened images to achieve reliable detection results. However, recent
studies have shown that deep-learning-based object detection methods are vulnerable to
adversarial examples, i.e., adding imperceptible noise to clean images can fool well-trained
deep neural networks. It is therefore natural to combine the pan-sharpening technique with
adversarial examples to attack object detectors in remote sensing. In this paper, we propose
a method to generate adversarial pan-sharpened images. We utilize a generative network to
produce the pan-sharpened images and then propose a shape loss and a label loss to perform the
attack. To guarantee the quality of the pan-sharpened images, a perceptual loss is used to
balance spectral preservation against attack performance. The proposed method is applied to
attack two object detectors: Faster R-CNN and Feature Pyramid Networks (FPN). Experimental
results on GaoFen-1 satellite images demonstrate that the proposed method can generate
effective adversarial images: the mAP of Faster R-CNN with VGG16 drops significantly, from
0.870 to 0.014.
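
The sketch below illustrates one plausible way to combine the three objectives when training
such a generator; the loss callables and the weights are placeholders, not the authors'
definitions.

    def pan_sharpen_attack_loss(fused, reference, detections,
                                perceptual_fn, shape_fn, label_fn,
                                w_perc=1.0, w_shape=0.1, w_label=0.1):
        """fused: generated pan-sharpened image; reference: spectral reference;
        detections: detector outputs on `fused`. The *_fn callables and the
        weights stand in for the perceptual, shape, and label loss terms."""
        l_perc = perceptual_fn(fused, reference)  # preserve spectral/visual quality
        l_shape = shape_fn(detections)            # distort predicted box shapes
        l_label = label_fn(detections)            # suppress correct class labels
        return w_perc * l_perc + w_shape * l_shape + w_label * l_label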

Improving Generalization of Deepfake Detection with Domain Adaptive Batch Normalization

  • Zixin Yin
  • Jiakai Wang
  • Yifu Ding
  • Yisong Xiao
  • Jun Guo
  • Renshuai Tao
  • Haotong Qin

Deepfake, a well-known face forgery technique, has raised serious concerns about personal
privacy and social media security. Consequently, many deepfake detection methods have emerged
and achieve outstanding performance in the single-dataset case. However, current deepfake
detection methods generalize poorly in the cross-dataset case due to the domain gap. To tackle
this issue, we propose a Domain Adaptive Batch Normalization (DABN) strategy to mitigate the
domain distribution gap across datasets. Specifically, DABN utilizes the distribution
statistics of the testing dataset in place of the original counterparts so as to avoid
distribution mismatch and restore the effectiveness of the BN layers. Equipped with DABN, a
detection method becomes more robust when generalized to broader usage. Note that our method
is flexible and can be applied to most existing deepfake detection methods at test time, which
gives it great practical value. Extensive experiments on multiple datasets and models
demonstrate the effectiveness of DABN. The proposed method improves the average accuracy of
existing strategies by nearly 20% on the Celeb-DF dataset under black-box settings, indicating
a strong enhancement of the generalization ability of deepfake detection models.
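
A minimal sketch of the underlying idea in PyTorch, re-estimating BatchNorm statistics from
unlabeled test data before evaluation; the cumulative-averaging choice and the loop structure
are assumptions rather than the exact DABN procedure.

    import torch

    @torch.no_grad()
    def adapt_bn_to_test(model, test_loader, device="cuda"):
        # Reset running statistics so they are re-estimated purely from the
        # test batches; momentum=None switches to a cumulative moving average.
        for m in model.modules():
            if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
                m.reset_running_stats()
                m.momentum = None
        model.train()  # BN layers only update running stats in train mode
        for images, _ in test_loader:
            model(images.to(device))
        model.eval()   # evaluate with the adapted statistics
        return model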

SESSION: Poster Session

Comparative Study of Adversarial Training Methods for Cold-Start Recommendation

  • Haokai Ma
  • Xiangxian Li
  • Lei Meng
  • Xiangxu Meng

Adversarial training in recommendation originated as a way to improve the robustness of
recommenders to attack signals, and it has recently shown promising results in alleviating
cold-start recommendation. However, existing methods usually have to trade off model
robustness against performance, and the underlying reasons why training with adversarial
samples works have not been sufficiently verified. To address this issue, this paper
identifies the key components of existing adversarial training methods and presents a taxonomy
that defines these methods using three levels of components for perturbation generation,
perturbation incorporation, and model optimization. Based on this taxonomy, different variants
of existing methods are created, and a comparative study is conducted to verify the influence
of each component on cold-start recommendation. Experimental results on two benchmark datasets
show that existing state-of-the-art algorithms can be further improved by a proper pairing of
the key components listed in the taxonomy. Moreover, using case studies and visualization, the
influence of item content information on cold-start recommendation is analyzed, and
explanations for the working mechanisms of the different components in the taxonomy are
offered. These results verify the effectiveness of the proposed taxonomy as a design paradigm
for adversarial training.
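
For a concrete picture of how such components can be paired in a recommender, the sketch below
shows an APR-style variant with gradient-based perturbations on item embeddings and a BPR
objective; the normalization, epsilon, and weighting are assumed choices, not one of the
paper's evaluated configurations.

    import torch
    import torch.nn.functional as F

    def adversarial_bpr_step(user_emb, pos_emb, neg_emb, optimizer,
                             eps=0.05, reg_adv=1.0):
        def bpr(u, p, n):  # Bayesian Personalized Ranking loss
            return -F.logsigmoid((u * p).sum(-1) - (u * n).sum(-1)).mean()

        # Perturbation generation: gradient direction on the item embeddings.
        pos_adv = pos_emb.clone().detach().requires_grad_(True)
        neg_adv = neg_emb.clone().detach().requires_grad_(True)
        grad_p, grad_n = torch.autograd.grad(
            bpr(user_emb.detach(), pos_adv, neg_adv), [pos_adv, neg_adv])
        delta_p = eps * F.normalize(grad_p, dim=-1)
        delta_n = eps * F.normalize(grad_n, dim=-1)

        # Optimization: clean BPR loss plus the adversarially perturbed loss.
        loss = bpr(user_emb, pos_emb, neg_emb) \
            + reg_adv * bpr(user_emb, pos_emb + delta_p, neg_emb + delta_n)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()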

Detecting Adversarial Patch Attacks through Global-local Consistency

  • Bo Li
  • Jianghe Xu
  • Shuang Wu
  • Shouhong Ding
  • Jilin Li
  • Feiyue Huang

Recent works have clearly demonstrated the threat of adversarial patch attacks to real-world
vision media systems. By arbitrarily modifying pixels within a small restricted area of an
image, adversarial patches can mislead neural-network-based image classifiers. In this paper,
we propose a simple but very effective approach to detecting adversarial patches based on an
interesting observation we call global-local consistency. We verify this insight and propose
a Random-Local-Ensemble (RLE) strategy to further enhance it in the detection. The proposed
method is trivial to implement and can be applied to protect any image classification model.
Experiments on two popular datasets show that our algorithm can accurately detect adversarial
patches while maintaining high clean accuracy. Moreover, unlike prior detection approaches,
which can easily be broken by adaptive attacks, our method proves highly robust when facing
adaptive attacks.
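
A minimal sketch of the global-local consistency idea: flag an input when predictions on
random local crops disagree with the full-image prediction. The crop scheme, resizing, and
scoring below are assumptions rather than the exact RLE strategy.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def global_local_inconsistency(model, x, num_crops=8, crop_frac=0.5):
        """x: a (1, C, H, W) image tensor; returns a score in [0, 1] where
        larger values suggest the presence of an adversarial patch."""
        _, _, h, w = x.shape
        ch, cw = int(h * crop_frac), int(w * crop_frac)
        global_pred = model(x).argmax(dim=1)
        disagreements = 0
        for _ in range(num_crops):
            top = torch.randint(0, h - ch + 1, (1,)).item()
            left = torch.randint(0, w - cw + 1, (1,)).item()
            crop = x[:, :, top:top + ch, left:left + cw]
            crop = F.interpolate(crop, size=(h, w), mode="bilinear",
                                 align_corners=False)
            disagreements += int(model(crop).argmax(dim=1) != global_pred)
        return disagreements / num_crops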

Real World Robustness from Systematic Noise

  • Yan Wang
  • Yuhang Li
  • Ruihao Gong
  • Tianzi Xiao
  • Fengwei Yu

Systematic error, which is not determined by chance, often refers to inaccuracy (in either the
observation or the measurement process) inherent to a system. In this paper, we exhibit some
long-neglected but frequently occurring adversarial examples caused by systematic error. More
specifically, we find that a trained neural network classifier can be fooled by inconsistent
implementations of image decoding and resizing. The tiny differences between these
implementations often cause an accuracy drop from training to deployment. To benchmark these
real-world adversarial examples, we propose the ImageNet-S dataset, which enables researchers
to measure a classifier's robustness to systematic error. For example, we find that a standard
ResNet-50 trained on ImageNet can show a 1%-5% accuracy difference due to systematic error.
Together, our evaluation and dataset may aid future work toward real-world robustness and
practical generalization.
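
The phenomenon is easy to reproduce: decoding and resizing the same JPEG with PIL versus
OpenCV yields slightly different tensors, which can flip a classifier's prediction on a subset
of images. The sketch below illustrates this; the file name and target size are placeholders,
and this is not the ImageNet-S construction pipeline.

    import numpy as np
    import cv2
    from PIL import Image

    path = "example.jpg"  # placeholder input file

    # Pipeline A: PIL decode + bilinear resize.
    pil = Image.open(path).convert("RGB").resize((224, 224), Image.BILINEAR)
    a = np.asarray(pil, dtype=np.float32) / 255.0

    # Pipeline B: OpenCV decode + bilinear resize (note the BGR -> RGB flip).
    bgr = cv2.imread(path, cv2.IMREAD_COLOR)
    b = cv2.resize(bgr, (224, 224), interpolation=cv2.INTER_LINEAR)[:, :, ::-1]
    b = b.astype(np.float32) / 255.0

    # The per-pixel discrepancy is small but nonzero; feeding `a` and `b` to
    # the same classifier can yield different predictions on some images.
    print("max abs difference:", np.abs(a - b).max())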

Enhancing Adversarial Examples Transferability via Ensemble Feature Manifolds

  • Dongdong Yang
  • Wenjie Li
  • Rongrong Ni
  • Yao Zhao

An adversarial attack causes intended misclassification by adding imperceptible perturbations
to benign inputs, and it provides a way to evaluate the robustness of models. Many existing
adversarial attacks achieve good performance in white-box settings. However, the adversarial
examples generated by these attacks typically overfit the particular architecture of the
source model, resulting in low transferability in black-box scenarios. In this work, we
propose a novel feature attack method called Features-Ensemble Generative Adversarial Network
(FEGAN), which ensembles multiple feature manifolds to capture the intrinsic adversarial
information that is most likely to cause misclassification across many models, thereby
improving the transferability of adversarial examples. Accordingly, a generator trained on
various latent feature vectors of benign inputs can produce adversarial examples containing
this adversarial information. Extensive experiments on the MNIST and CIFAR10 datasets
demonstrate that the proposed method improves the transferability of adversarial examples
while maintaining the attack success rate in the white-box scenario. In addition, the
generated adversarial examples are more realistic, with a distribution close to that of the
real data.
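
As a rough illustration of ensembling feature manifolds, the sketch below defines a generic
objective that pushes a generated example's intermediate features away from the benign ones
across several source models; the callables and the specific loss terms are assumptions, not
the FEGAN architecture.

    import torch.nn.functional as F

    def ensemble_feature_loss(x_adv, x, feature_extractors, classifiers, y):
        """feature_extractors / classifiers: per-source-model callables
        (assumed interfaces). Minimizing this loss drives the adversarial
        features away from the benign ones and induces misclassification
        across the whole ensemble."""
        loss = 0.0
        for feat, clf in zip(feature_extractors, classifiers):
            f_adv, f_clean = feat(x_adv), feat(x.detach())
            loss = loss - F.mse_loss(f_adv, f_clean)      # separate the manifolds
            loss = loss - F.cross_entropy(clf(f_adv), y)  # and flip the labels
        return loss / len(feature_extractors)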

An Investigation on Sparsity of CapsNets for Adversarial Robustness

  • Lei Zhao
  • Lei Huang

The routing-by-agreement mechanism in capsule networks (CapsNets) builds visual hierarchical
relationships, with the characteristic of assigning parts to wholes. The connections between
capsules in different layers become sparser as more routing iterations are performed. This
paper proposes techniques for measuring, controlling, and visualizing the sparsity of
CapsNets. One essential observation of this paper is that sparser CapsNets are possibly more
robust to adversarial attacks. We believe this observation will provide insights into
designing more robust models.
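
One simple way to quantify routing sparsity, offered here as an assumed illustration rather
than the paper's metric, is the normalized entropy of each lower-level capsule's coupling
coefficients: the more a part commits to a single whole, the lower the entropy.

    import torch

    def routing_sparsity(coupling):
        """coupling: (num_lower, num_upper) coefficients; each row sums to 1.
        Returns a score in [0, 1]: 1 = each part routes to a single whole,
        0 = uniform (dense) routing."""
        entropy = -(coupling * torch.log(coupling + 1e-12)).sum(dim=1)
        max_entropy = torch.log(torch.tensor(float(coupling.shape[1])))
        return 1.0 - (entropy / max_entropy).mean()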

Frequency Centric Defense Mechanisms against Adversarial Examples

  • Sanket B. Shah
  • Param Raval
  • Harin Khakhi
  • Mehul S. Raval

An adversarial example (AE) aims to fool a convolutional neural network by introducing small
perturbations into the input image. The proposed work uses the magnitude and phase of the
Fourier spectrum and the entropy of the image to defend against AEs. We demonstrate the
defense in two ways: by training an adversarial detector and by denoising the adversarial
effect. Experiments were conducted on the low-resolution CIFAR-10 and high-resolution ImageNet
datasets. The adversarial detector achieves 99% accuracy for FGSM and PGD attacks on the
CIFAR-10 dataset. However, the detection accuracy falls to 50% for the more sophisticated
DeepFool and Carlini & Wagner attacks on ImageNet. We overcome this limitation by using an
autoencoder and show that 70% of AEs are correctly classified after denoising.
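
A minimal sketch of the feature extraction such a detector might use, combining Fourier
magnitude/phase statistics with image entropy; the specific statistics are assumed choices,
not the paper's exact pipeline. A lightweight classifier trained on these features from clean
and adversarial images would serve as the detector.

    import numpy as np

    def frequency_features(img):
        """img: a 2-D grayscale array in [0, 1]; returns a small feature vector."""
        spectrum = np.fft.fftshift(np.fft.fft2(img))
        log_mag = np.log1p(np.abs(spectrum))
        phase = np.angle(spectrum)
        # Shannon entropy of the pixel-intensity histogram.
        counts, _ = np.histogram(img, bins=256, range=(0.0, 1.0))
        p = counts[counts > 0] / counts.sum()
        entropy = -np.sum(p * np.log2(p))
        return np.array([log_mag.mean(),  # overall spectral energy
                         log_mag.std(),   # spread of spectral energy
                         phase.std(),     # phase variability
                         entropy])        # image entropy (bits)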