Program

The CIVR 2009 conference will include sessions for presenting high-quality research papers and for sharing practitioner experience. To this end, the program features keynote speeches, oral and poster sessions, and a demo session showcasing existing commercial products, systems, and prototypes.

The program will also include the practitioner day and the VideOlympics showcase.

During the welcome reception, the SEMEDIA project will demonstrate the suite of search tools it has developed for video environments such as audiovisual production, film industry postproduction and online communications media. SEMEDIA is co-funded by the European Union.




Invited Speakers

Prof. Luis von Ahn (CMU) and Prof. Luc Van Gool (ETHZ and Univ. Leuven) are confirmed as invited speakers.




Day 1: July 8

[09.00-10.00] Keynote 1 - Chair: Yiannis Kompatsiaris

"Human Computation", Prof. Luis von Ahn 

[10.00-10.20] Coffee break

[10.20-12.00] Oral session 1 - Chair: Marcel Worring

Interactive systems: Retrieval and Browsing

  • Peter Wilkins, Raphael Troncy, Martin Halvey, Daragh Byrne, Alia Amin, P Punitha, Alan Smeaton and Robert Villa, User Variance and its Impact on Video Retrieval Benchmarking
  • Grant Strong and Minglun Gong, Organizing and Browsing Photos using Different Feature Vectors and Their Evaluations
  • Qizhen He, Zhiwu Lu and Horace Ip, View Topics: Automatically Generated Characteristic View for Content-Based 3D Object Retrieval
  • Edward Lo, Mark Pickering, Michael Frater and John Arnold, Query by Example using Invariant Features from the Double Dyadic Dual-Tree Complex Wavelet Transform

[12.00-13.30] Lunch break

[13.30-15.10] Oral session 2 - Chair: Nicu Sebe

Best paper candidates

  • Jasper Uijlings, Arnold Smeulders and Remko Scha, Real-time Bag of Words, Approximately
  • Lyndsey Pickup and Andrew Zisserman, Automatic retrieval of visual continuity errors in movies
  • Martin Halvey and Joemon M. Jose, Role of Expertise in Aiding Video Search
  • Rainer Lienhart, Stefan Romberg and Eva Hörster, Multilayer pLSA for Multimodal Image Retrieval

[15.40-18.00] Poster session - Chair: Vasileios Mezaris

  • Costantino Grana, Daniele Borghesani and Rita Cucchiara. Picture Extraction from Digitized Historical Manuscripts
  • Motoaki Kawanabe, Shinichi Nakajima and Alexander Binder. A Procedure of Adaptive Kernel Combination with Kernel-Target Alignment for Object Classification
  • Mihir Jain, Sreekanth Vempati, Chandrika Pulla and Jawahar C. V.. Example Based Video Filters
  • Georgios Goudelis, Anastasios Tefas and Ioannis Pitas. Using Mutual Information to Indicate Facial Poses in Video Sequences
  • Takahiko Furuya and Ryutarou Ohbuchi. Dense Sampling and Fast Encoding for 3D Model Retrieval Using Bag-of-Visual Features
  • Joao Magalhaes, José Iria and Fabio Ciravegna. Web News Categorization using a Cross-Media Document Graph
  • Rainer Lienhart and Ina Döhring. Mining TV Broadcasts for Recurring Video Sequences
  • Ioannis Arapakis, Yashar Moshfeghi, Hideo Joho, Reede Ren, David Hannah and Joemon M. Jose. Enriching User Profiling with Affective Features for the Improvement of a Multimodal Recommender System
  • Zhiwu Lu, Horace Ip and Qizhen He. Context-Based Multi-Label Image Annotation
  • Hideo Joho, Joemon Jose, Roberto Valenti and Nicu Sebe. Exploiting Facial Expressions for Affective Video Summarisation
  • Philip DeCamp and Deb Roy. A Human-Machine Collaborative Approach to Tracking Human Movement in Multi-Camera Video
  • Philip Kelly, Ciaran O Conaire and Noel O'Connor. Exploiting Contextual Data for Event Retrieval in Surveillance Video
  • Ceyhun Burak Akgul, Devrim Unay and Ahmet Ekin. Automated Diagnosis of Alzheimer’s Disease using Image Similarity and User Feedback
  • Vasileios Chasanis, Argyris Kalogeratos and Aristidis Likas. Movie Segmentation into Scenes and Chapters Using Locally Weighted Bag of Visual Words
  • Jianping Fan. Integrating Visual and Semantic Contexts for Topic Network Generation and Word Sense Disambiguation
  • Ville Viitaniemi and Jorma Laaksonen. Spatial Extensions to Bag of Visual Words
  • Stefanie Tellex and Deb Roy. Towards Surveillance Video Search by Natural Language Query
  • Thomas Deselaers, Tobias Gass, Philippe Dreuw and Hermann Ney. Jointly Optimising Relevance and Diversity in Image Retrieval
  • Thi Lan Le, Monique Thonnat, Alain Boucher and Francois Bremond. Appearance based retrieval for tracked objects in surveillance videos
  • Manni Duan, Adrian Ulges, Thomas Breuel and Xiu-qing Wu. Style Modeling for Tagging Personal Photo Collections
  • Songhua Xu, Hao Jiang and Francis C.M. Lau. Learning to rank videos personally using multiple clues
  • Alexandre Hervieu, Patrick Bouthemy and Jean-Pierre Le Cadre. Trajectory-based handball video understanding
  • Robin Aly, Djoerd Hiemstra and Arjen de Vries. Reusing Annotation Labor for Concept Selection
  • Mei-Chen Yeh and Kwang-Ting Cheng. Video Copy Detection by Fast Sequence Matching
  • Hongtao Xu, Xiangdong Zhou, Mei Wang, Yu Xiang and Baile Shi. Exploring Flickr's Related Tags for Semantic Annotation of Web Images
  • Thierry Urruty, Frank Hopfgartner, David Hannah, Desmond Elliott and Joemon M. Jose. Supporting Aspect-Based Video Browsing – Analysis of a User Study
  • Tat-Seng Chua, Jinhui Tang, Richang Hong, Haojie Li, Zhiping Luo and Yantao Zheng. NUS-WIDE: A Real-World Web Image Database from National University of Singapore

[19:00 - 21:00] SEMEDIA Welcome Reception

 


Day 2: July 9

[09.00-10.00] Keynote 2 - Chair: Stephane Marchand-Maillet

"Mining from large image sets", Prof. Luc Van Gool 

[10.00-10.20] Coffee break

[10.20-12.25] Oral session 3 - Chair: Georges Quénot

Geo-tagging and high-level semantic annotation

  • Adrian Popescu and Pierre-Alain Moëllic, MonuAnno: Automatic Annotation of Georeferenced Landmarks Images
  • Jim Kleban, Emily Moxley, Jiejun Xu and B.S. Manjunath, Global Annotation on Georeferenced Photographs
  • Keiji Yanai, Hidetoshi Kawakubo and Bingyu Qiu, A Visual Analysis of the Relationship between Word Concepts and Geographical Locations
  • Theodora Tsikrika, Christos Diou, Arjen de Vries and Anastasios Delopoulos, Image annotation using clickthrough data
  • Xiao-Yong Wei, Yu-Gang Jiang and Chong-Wah Ngo, Exploring Inter-Concept Relationship with Context Space for Semantic Video Indexing

[12.25-14.00] Lunch break

[14.00-16.05] Oral session 4 - Chair: Cees Snoek

Image and Video Processing

  • Ruixuan Wang, Stephen J. McKenna and Junwei Han, High-Entropy Layouts for Content-based Browsing and Retrieval
  • Hideki Nakayama, Tatsuya Harada and Yasuo Kuniyoshi, Dense Sampling Low-Level Statistics of Local Features
  • Wei-Ta Chu, Che-Cheng Lin and Jen-Yu Yu, Using Cross-Media Correlation for Scene Detection in Travel Videos
  • Matthijs Douze, Hervé Jégou, Harsimrat Singh, Laurent Amsaleg and Cordelia Schmid, Evaluation of GIST descriptors for web-scale image search
  • Mahmudur Rahman and Prabir Bhattacharya, Image Retrieval with Automatic Query Expansion Based on Local Analysis in a Semantical Concept Feature Space

[16.05-16.30] Coffee break

[16.30-18.30] VideOlympics - Chair: Alan Smeaton

  • Colum Foley, Peter Wilkins and Alan F. Smeaton, DCU Collaborative Video Search System
  • Ork de Rooij, Cees G.M. Snoek, and Marcel Worring, MediaMill: Guiding the User to Results using the ForkBrowser
  • Stefanos Vrochidis, Paul King, Lambros Makris, Anastasia Moumtzidou, Spiros Nikolopoulos, Anastasios Dimou, Vasileios Mezaris and Ioannis Kompatsiaris, MKLab Interactive Video Retrieval System
  • Jianmin Li, Zhikun Wang, Bo Zhang, The Interactive Video Retrieval System in SMARTV 2009
  • Stéphane Ayache, Georges Quénot, Laurent Besacier, The LIG Multi-Criteria System for Video Retrieval
  • Juan Cao, Yong-Dong Zhang, Jun-Bo Guo, Lei Bao, Jin-Tao Li, VideoMap: An Interactive Video Retrieval System of MCG-ICT-CAS
  • Yan-Tao Zheng, Shi-Yong Neo, Xiangyu Chen and Tat-Seng Chua, VisionGo: Towards True Interactivity

[19:30-22:30] Social event - Dinner at Skala Restaurant, Oia

 


Day 3: July 10

Practitioner day




VideOlympics Showcase

The VideOlympics Showcase is a real-time demo session of video retrieval systems. Its main aim is to promote video retrieval research; a further goal is to give the audience a good perspective on the possibilities and limitations of current state-of-the-art systems. Where traditional evaluation campaigns such as TRECVID focus primarily on the effectiveness of the collected retrieval results, the VideOlympics also takes into account the influence of interaction mechanisms and advanced visualizations in the interface. The VideOlympics is thus a showcase that goes beyond the regular demo session: it should be fun for the participants to take part in and fun for the conference audience to watch. For all these reasons, the VideOlympics has only winners. As in previous years, a number of TRECVID participants will simultaneously perform an interactive search task during the showcase event. For the first time, the 2009 edition of the VideOlympics will include a round with novice users in addition to the round with expert users.