Trustworthy AI'21: Proceedings of the 1st International Workshop on Trustworthy AI for Multimedia Computing

SESSION: Accepted Papers

An Empirical Study of Uncertainty Gap for Disentangling Factors

  • Jiantao Wu
  • Shentong Mo
  • Lin Wang

Disentangling factors has proven crucial for building interpretable AI systems: disentangled generative models expose explanatory input variables that increase trustworthiness and robustness. Previous works apply a progressive disentanglement learning regime in which the ground-truth factors are disentangled in a fixed order, but they do not explain why this order matters. In this work, we propose a novel metric, the Uncertainty Gap, to evaluate how the uncertainty of a generative model changes given its input variables. We generalize the Uncertainty Gap to image reconstruction tasks using BCE and MSE losses. Extensive experiments on three commonly used benchmarks demonstrate the effectiveness of the Uncertainty Gap in evaluating both the informativeness and the redundancy of given variables. We empirically find that the significant factor with the largest Uncertainty Gap should be disentangled before insignificant factors, indicating that a suitable order of disentangling factors facilitates performance.
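The core idea of the metric can be read as the increase in expected reconstruction error (MSE or BCE) when one latent variable is withheld. The following is an editor's sketch of that reading, not the authors' implementation; the decoder, the permutation-based marginalization, and all names are illustrative assumptions.

```python
import numpy as np

def reconstruction_error(decoder, z, x, loss="mse"):
    """Per-sample reconstruction error of x from latent codes z."""
    x_hat = decoder(z)
    axes = tuple(range(1, x.ndim))
    if loss == "mse":
        return np.mean((x - x_hat) ** 2, axis=axes)
    # binary cross-entropy, assuming x and x_hat lie in (0, 1)
    eps = 1e-7
    x_hat = np.clip(x_hat, eps, 1 - eps)
    return -np.mean(x * np.log(x_hat) + (1 - x) * np.log(1 - x_hat),
                    axis=axes)

def uncertainty_gap(decoder, z, x, dim, loss="mse"):
    """Gap in mean reconstruction error when latent dimension `dim`
    is withheld (here: destroyed by permuting it across the batch,
    which preserves its marginal) versus provided."""
    rng = np.random.default_rng(0)
    z_masked = z.copy()
    z_masked[:, dim] = rng.permutation(z[:, dim])
    err_without = reconstruction_error(decoder, z_masked, x, loss).mean()
    err_with = reconstruction_error(decoder, z, x, loss).mean()
    return err_without - err_with
```

Under this reading, an informative variable yields a large positive gap (withholding it hurts reconstruction), while a redundant one yields a gap near zero.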

Patch Replacement: A Transformation-based Method to Improve Robustness against Adversarial Attacks

  • Hanwei Zhang
  • Yannis Avrithis
  • Teddy Furon
  • Laurent Amsaleg

Deep Neural Networks (DNNs) are robust to intra-class variability of images, pose variations, and random noise, but vulnerable to imperceptible adversarial perturbations crafted precisely to mislead them. While random noise even of relatively large magnitude hardly affects predictions, adversarial perturbations of very small magnitude can make a classifier fail completely. To enhance robustness, we introduce a new adversarial defense called patch replacement, which transforms both the input images and their intermediate features at early layers so that adversarial perturbations behave similarly to random noise. We decompose images/features into small patches and quantize them according to a codebook learned from legitimate training images. This preserves the semantic information of legitimate images while removing, as much as possible, the effect of adversarial perturbations. Experiments show that patch replacement improves robustness against both white-box and gray-box attacks compared with other transformation-based defenses. It has a low computational cost since it does not require training or fine-tuning the network. Importantly, in the white-box scenario, it increases robustness, while other transformation-based defenses do not.
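The decompose-and-quantize step can be sketched as follows. This is an editor's minimal illustration of the general technique (non-overlapping patches, a k-means-style codebook over clean patches, nearest-codeword replacement), not the paper's implementation; patch size, codebook size, and the simple deterministic initialization are all assumptions.

```python
import numpy as np

def extract_patches(img, p):
    """Split an H x W array into non-overlapping p x p patches."""
    h, w = img.shape
    patches = img.reshape(h // p, p, w // p, p).swapaxes(1, 2)
    return patches.reshape(-1, p * p)

def learn_codebook(clean_imgs, p, k, iters=20):
    """k-means-style codebook over patches of legitimate images."""
    data = np.concatenate([extract_patches(im, p) for im in clean_imgs])
    # simple deterministic init (points spread over the data);
    # k-means++ would be a better choice in practice
    codebook = data[np.linspace(0, len(data) - 1, k).astype(int)].copy()
    for _ in range(iters):
        dist = ((data[:, None, :] - codebook[None]) ** 2).sum(-1)
        assign = dist.argmin(1)
        for j in range(k):
            members = data[assign == j]
            if len(members):
                codebook[j] = members.mean(0)
    return codebook

def patch_replace(img, codebook, p):
    """Defense step: replace each patch by its nearest codeword."""
    h, w = img.shape
    patches = extract_patches(img, p)
    dist = ((patches[:, None, :] - codebook[None]) ** 2).sum(-1)
    quantized = codebook[dist.argmin(1)]
    return (quantized.reshape(h // p, w // p, p, p)
                     .swapaxes(1, 2).reshape(h, w))
```

Because every output patch is a codeword learned from clean data, small adversarial perturbations are snapped back onto the legitimate patch manifold, much as random noise would be.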

Dataset Diversity: Measuring and Mitigating Geographical Bias in Image Search and Retrieval

  • Abhishek Mandal
  • Susan Leavy
  • Suzanne Little

Many popular visual datasets used to train deep neural networks for computer vision applications, especially for facial analytics, are created by retrieving images from the internet, often via search engines. However, due to localisation and personalisation of search results by the search engines, along with the image indexing methods these engines use, the resultant images overrepresent the demographics of the region from which they were queried. As most visual datasets are created in western countries, they tend to have a western-centric bias, and deep neural networks trained on these datasets tend to inherit it. Researchers studying bias in visual datasets have focused on its racial aspects; we approach it from a geographical perspective. In this paper, we 1) study how linguistic variations in search queries and geographical variations in the querying region affect the social and cultural aspects of retrieved images, focusing on facial analytics, 2) explore how geographical bias in image search and retrieval can cause racial, cultural, and stereotypical bias in visual datasets, and 3) propose methods to mitigate such biases.

Hierarchical Semantic Enhanced Directional Graph Network for Visual Commonsense Reasoning

  • Mingyan Wu
  • Shuhan Qi
  • Jun Rao
  • Jiajia Zhang
  • Qing Liao
  • Xuan Wang
  • Xinxin Liao

The visual commonsense reasoning (VCR) task aims to advance research on cognition-level correlation reasoning. It requires not only a thorough understanding of the correlated details of a scene but also the ability to infer correlations using related commonsense knowledge. Existing approaches consider region-word affinity to perform semantic alignment between the vision and language domains, but neglect the implicit correspondences (e.g., word-scene, region-phrase, and phrase-scene) among visual concepts and linguistic words. Although previous work has delivered promising results, these methods are still challenged when it comes to interpretable reasoning. To this end, we present a novel hierarchical semantic enhanced directional graph network. More specifically, we design a Modality Interaction Unit (MIU) module, which captures high-order cross-modal alignment by aggregating hierarchical vision-language relationships. We then propose a direction clue-aware graph reasoning (DCGR) module, in which valuable entities are dynamically selected at each reasoning step according to their importance, leading to a more interpretable reasoning procedure. Finally, heterogeneous graph attention is introduced to filter out parts irrelevant to the final answers. Extensive experiments on the VCR benchmark dataset demonstrate that our method achieves competitive results and better interpretability compared with several state-of-the-art baselines.