MUSA2 '17: Proceedings of the Workshop on Multimodal Understanding of Social, Affective and Subjective Attributes
SESSION: Keynote Address

  • Session Chair: Shih-Fu Chang

Mixed Methods and the Future of Multi-Modal Media

  • Saeideh Bakhshi
  • David A. Shamma

SESSION: Mining

  • Session Chair: Shih-Fu Chang

User Group based Viewpoint Recommendation using User Attributes for Multiview Videos

  • Xueting Wang
  • Yu Enokibori
  • Takatsugu Hirayama
  • Kensho Hara
  • Kenji Mase

Beyond Concept Detection: The Potential of User Intent for Image Retrieval

  • Bo Wang
  • Martha Larson

SESSION: Spotlights

  • Session Chair: Shih-Fu Chang

Image Captioning in the Wild: How People Caption Images on Flickr

  • Philipp Blandfort
  • Tushar Karayil
  • Damian Borth
  • Andreas Dengel

A Deep Multi-Modal Fusion Approach for Semantic Place Prediction in Social Media

  • Kaidi Meng
  • Haojie Li
  • Zhihui Wang
  • Xin Fan
  • Fuming Sun
  • Zhongxuan Luo

Movie Genre Classification based on Poster Images with Deep Neural Networks

  • Wei-Ta Chu
  • Hung-Jui Guo

Robust Multi-Modal Cues for Dyadic Human Interaction Recognition

  • Rim Trabelsi
  • Jagannadan Varadarajan
  • Yong Pei
  • Le Zhang
  • Issam Jabri
  • Ammar Bouallegue
  • Pierre Moulin

SESSION: Modeling

  • Session Chair: Mohammad Soleymani

Head Pose Recommendation for Taking Good Selfies

  • Yi-Tsung Hsieh
  • Mei-Chen Yeh

More Cat than Cute?: Interpretable Prediction of Adjective-Noun Pairs

  • Delia Fernandez
  • Alejandro Woodward
  • Victor Campos
  • Xavier Giro-i-Nieto
  • Brendan Jou
  • Shih-Fu Chang