LSC '21: Proceedings of the 4th Annual Lifelog Search Challenge
SESSION: Keynote Talk
Lifelogging as a Memory Prosthetic
- Alan F. Smeaton
Since computers were first used to address the challenge of managing information rather
than merely computing arithmetic values, or even before that, since
the time that MEMEX was designed by Vannevar Bush in the 1940s, we have been building
systems that help people like us to find information accurately and quickly. These
systems have grown to be technological marvels, discovering and indexing information
almost as soon as it appears online and making it available to billions
of people for searching and delivery within fractions of a second, and across a range
of devices. Yet it is well known that half the time people are actually searching
for things that they once knew but have since forgotten, or can't remember where they
found that information the first time around, and need to re-find it. As our science of
information seeking and information discovery has progressed, we rarely ask why people
forgot those things in the first place. If we were allowed to jump back in time, say
50 years, and to re-start the development of information retrieval as a technology,
then perhaps we would build systems that help us to remember and to learn, rather
than trying to plug the gap and find information for us when we forget. In separate
but parallel and sometimes overlapping developments, the analysis and indexing of
visual information -- images and video -- has also made spectacular progress mostly
within the last decade. Using automated processes we can detect and track objects,
we can describe visual content as tags or even as text captions, we can now generate
realistic high quality visual content using machine learning and we can compute high-level
abstract features of visual content like salience, aesthetics, and even memorability.
One of the areas where information management/retrieval with its 50 years of technological
progress meets computer vision with its recent decade of spectacular development is
in lifelogging. At this intersection we can apply computer vision techniques to analyse
and index visual lifelogs generated from wearable cameras, for example, in order to
support lifelog search and browsing tasks. But we should ask ourselves whether this
really is the right way for us to use our lifelogs. Memory is one of the core features
that make us what we are, yet it is fragile and only partly understood. We have no
real control over what we remember and what we forget and when we really do need to
remember something that could be important, we make ham-fisted efforts to consciously
override our natural tendency to forget. We do this, for example, by rehearsing and
replaying information, building on the Ebbinghaus principle of repeated conscious
reviewing to overcome transience, which is the general deterioration of memory over
time. In this presentation I will probe deeper into memory, recall, recognition, memorability
and memory triggers, and how our lifelogs could really act as memory prosthetics, visual
triggers for our own natural memory. This will allow us to ask whether the lifelog
challenges that we build and run in events such as this Annual Lifelog Search Challenge
meeting are appropriately framed and whether they are taking us in the direction where
lifelogs are genuinely useful to a wide population rather than to a niche set of people.
Finally, I will address the frightening scenario in which everything about us is potentially
remembered, and ask whether or not we actually want that to happen.
SESSION: Oral Paper Session
Exquisitor at the Lifelog Search Challenge 2021: Relationships Between Semantic Classifiers
- Omar Shahbaz Khan
- Aaron Duane
- Björn Þór Jónsson
- Jan Zahálka
- Stevan Rudinac
- Marcel Worring
Exquisitor is a scalable media exploration system based on interactive learning. To
satisfy a user's information need, the system asks the user for feedback on media
items and uses that feedback to interactively construct a classifier, which is in turn
used to identify the next potentially relevant set of media items. To facilitate effective
exploration of a collection, the system offers filters to narrow the scope of exploration,
search functionality for finding good examples for the classifier, and support for
timeline browsing of videos or image sequences. For this year's Lifelog Search Challenge,
we have enhanced Exquisitor to better support tasks with a temporal component, by
adding features that allow the user to build multiple classifiers and merge the classifier
results, using both traditional set operators and advanced temporal operators.
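To illustrate the idea of merging classifier results, here is a minimal Python sketch; it is not Exquisitor's actual code, and the item IDs, timestamps, and the one-hour "followed_by" window are our own illustrative assumptions.

    from datetime import datetime, timedelta

    # Hypothetical top-ranked items of two interactively trained classifiers,
    # mapped to their capture timestamps.
    results_a = {"img_001": datetime(2021, 3, 1, 9, 0),
                 "img_017": datetime(2021, 3, 1, 12, 30)}
    results_b = {"img_017": datetime(2021, 3, 1, 12, 30),
                 "img_042": datetime(2021, 3, 1, 12, 45)}

    # Traditional set operator: items deemed relevant by both classifiers.
    intersection = results_a.keys() & results_b.keys()

    def followed_by(first, second, window=timedelta(hours=1)):
        """Temporal operator: pairs where a hit from `first` precedes a hit
        from `second` within the given time window."""
        return [(a, b)
                for a, ta in first.items()
                for b, tb in second.items()
                if timedelta(0) < tb - ta <= window]

    print(intersection)                       # {'img_017'}
    print(followed_by(results_a, results_b))  # [('img_017', 'img_042')]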
Exploring Graph-querying approaches in LifeGraph
- Luca Rossetto
- Matthias Baumgartner
- Ralph Gasser
- Lucien Heitz
- Ruijie Wang
- Abraham Bernstein
The multi-modal and interrelated nature of lifelog data makes it well suited for graph-based
representations. In this paper, we present the second iteration of LifeGraph, a Knowledge
Graph for Lifelog Data, initially introduced during the 3rd Lifelog Search Challenge
in 2020. This second iteration incorporates several lessons learned from the previous
version. While the actual graph has undergone only small changes, the mechanisms by
which it is traversed during querying, as well as the underlying storage system that
performs the traversal, have been changed. The means for query formulation have also
been slightly extended in capability and made more efficient and intuitive. All these
changes have the aim of improving result quality and reducing query time.
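As a toy illustration of graph-based querying over lifelog data, the following Python sketch answers a conjunctive concept query over a handful of triples; it is not LifeGraph's implementation, and the graph contents and edge labels are our own assumptions.

    from collections import defaultdict

    # Triples of the form (subject, predicate, object).
    triples = [
        ("img_101", "depicts", "coffee"),
        ("img_101", "takenAt", "kitchen"),
        ("img_205", "depicts", "coffee"),
        ("img_205", "takenAt", "office"),
    ]

    # Index: object node -> set of lifelog images connected to it.
    index = defaultdict(set)
    for s, p, o in triples:
        index[o].add(s)

    def query(*concepts):
        """Return images reachable from every queried concept node."""
        sets = [index[c] for c in concepts]
        return set.intersection(*sets) if sets else set()

    print(query("coffee", "kitchen"))  # {'img_101'}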
Myscéal 2.0: A Revised Experimental Interactive Lifelog Retrieval System for LSC'21
- Ly-Duyen Tran
- Manh-Duy Nguyen
- Nguyen Thanh Binh
- Hyowon Lee
- Cathal Gurrin
Building an interactive retrieval system for lifelogging poses many challenges, owing
to the massive volume of multi-modal personal data as well as the requirement for accuracy
and rapid response in such a tool. The Lifelog Search Challenge (LSC) is an international
lifelog retrieval competition that inspires researchers to develop systems to cope with
these challenges and evaluates the effectiveness of their solutions. In this paper, we
upgrade our previous Myscéal and present the Myscéal 2.0 system for LSC'21, with improved
features inspired by experiments with novice users. The experiments showed that a novice
user achieved more than half of the expert score on average. To narrow this gap, some
potential enhancements were identified and integrated into the enhanced version.
Exploring Intuitive Lifelog Retrieval and Interaction Modes in Virtual Reality with
vitrivr-VR
- Florian Spiess
- Ralph Gasser
- Silvan Heller
- Luca Rossetto
- Loris Sauter
- Milan van Zanten
- Heiko Schuldt
The multimodal nature of lifelog data collections poses unique challenges for multimedia
management and retrieval systems. The Lifelog Search Challenge (LSC) offers an annual
evaluation platform for such interactive retrieval systems, which compete against one
another in finding items of interest within a set time frame.
In this paper, we present the multimedia retrieval system vitrivr-VR, the latest addition
to the vitrivr stack, which has participated in the LSC in recent years. vitrivr-VR leverages
the 3D space in virtual reality (VR) to offer novel retrieval and user interaction
models, which we describe with a special focus on design decisions taken for the participation
in the LSC.
lifeXplore at the Lifelog Search Challenge 2021
- Andreas Leibetseder
- Klaus Schoeffmann
Since its first iteration in 2018, the Lifelog Search Challenge (LSC) continues to
rise in popularity as an interactive lifelog data retrieval competition, co-located
with the ACM International Conference on Multimedia Retrieval (ICMR). The goal of this
annual live event is to search a large corpus of lifelogging data for specifically
announced memories using a purposefully developed tool within a limited amount of
time. As long-standing participants, we present our improved lifeXplore -- a retrieval
system combining chronological day summary browsing with interactive, combinable concept
filtering. Compared to previous versions, the tool incorporates temporal queries and
advanced day summary features, as well as usability improvements.
ViRMA: Virtual Reality Multimedia Analytics at LSC 2021
- Aaron Duane
- Björn Þór Jónsson
In this paper we describe the first iteration of the ViRMA prototype system, a novel
approach to multimedia analysis in virtual reality, inspired by the M3 data model.
We intend to evaluate our approach via the Lifelog Search Challenge (LSC), which serves
as a benchmark for comparison with other multimedia analytics systems.
Interactive Multimodal Lifelog Retrieval with vitrivr at LSC 2021
- Silvan Heller
- Ralph Gasser
- Mahnaz Parian-Scherb
- Sanja Popovic
- Luca Rossetto
- Loris Sauter
- Florian Spiess
- Heiko Schuldt
The Lifelog Search Challenge (LSC) is an annual benchmarking competition for interactive
multimedia retrieval systems, where participating systems compete in finding events
based on textual descriptions containing hints about structured, semi-structured,
and/or unstructured data. In this paper, we present the multimedia retrieval system
vitrivr, a long-time participant in the LSC, with a focus on new functionality. Specifically,
we introduce the image stabilisation module, which is applied prior to feature extraction
to reduce the image degradation caused by lifelogger movements, and discuss how geodata
is used during query formulation, query execution, and result presentation.
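As a rough illustration of frame-to-frame stabilisation, here is a minimal sketch assuming OpenCV; it is not vitrivr's stabilisation module, and the file names are placeholders.

    import cv2

    prev = cv2.imread("frame_000.jpg")  # placeholder paths
    curr = cv2.imread("frame_001.jpg")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)

    # Track corner features from the previous to the current frame.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=30)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good_prev = pts_prev[status.flatten() == 1]
    good_curr = pts_curr[status.flatten() == 1]

    # Estimate the rigid motion caused by camera movement and invert it
    # by warping the current frame back onto the previous one.
    matrix, _ = cv2.estimateAffinePartial2D(good_curr, good_prev)
    h, w = curr.shape[:2]
    stabilised = cv2.warpAffine(curr, matrix, (w, h))
    cv2.imwrite("frame_001_stabilised.jpg", stabilised)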
LifeSeeker 3.0: An Interactive Lifelog Search Engine for LSC'21
- Thao-Nhu Nguyen
- Tu-Khiem Le
- Van-Tu Ninh
- Minh-Triet Tran
- Nguyen Thanh Binh
- Graham Healy
- Annalina Caputo
- Cathal Gurrin
In this paper, we present the interactive lifelog retrieval engine developed for the
LSC'21 comparative benchmarking challenge. The LifeSeeker 3.0 interactive lifelog
retrieval engine is an enhanced version of our previous system participating in LSC'20
- LifeSeeker 2.0. The system is developed by both Dublin City University and the Ho
Chi Minh City University of Science. The implementation of LifeSeeker 3.0 focuses
on searching and filtering by text query using a weighted Bag-of-Words model with
visual concept augmentation and three weighted vocabularies. Visual similarity
search is improved using a bag of local convolutional features, while the previous
version's performance is enhanced through faster query processing, better result display,
and improved browsing support.
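As a simplified illustration of vocabulary-weighted text scoring (not LifeSeeker's actual model), the sketch below scores two hypothetical lifelog moments against a query; the vocabularies and weights are our own assumptions.

    from collections import Counter

    # Per-moment term bags drawn from three hypothetical vocabularies.
    moments = {
        "moment_1": {"concepts": ["car", "road"], "places": ["street"],
                     "ocr": []},
        "moment_2": {"concepts": ["coffee", "cup"], "places": ["cafe"],
                     "ocr": ["menu"]},
    }
    # Each vocabulary contributes to the score with a different weight.
    weights = {"concepts": 1.0, "places": 1.5, "ocr": 2.0}

    def score(query_terms, moment):
        """Sum vocabulary-weighted matches of query terms in one moment."""
        total = 0.0
        for vocab, terms in moment.items():
            counts = Counter(terms)
            total += weights[vocab] * sum(counts[t] for t in query_terms)
        return total

    query = ["coffee", "cafe"]
    ranked = sorted(moments, key=lambda m: score(query, moments[m]),
                    reverse=True)
    print(ranked)  # ['moment_2', 'moment_1']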
LifeConcept: An Interactive Approach for Multimodal Lifelog Retrieval through Concept
Recommendation
- Wei-Hong Ang
- An-Zi Yen
- Tai-Te Chu
- Hen-Hsen Huang
- Hsin-Hsi Chen
The major challenge in visual lifelog retrieval is the semantic gap between textual
queries and visual concepts. This paper presents our work on the Lifelog Search Challenge
2021 (LSC'21), an annual comparative benchmarking activity for comparing approaches
to interactive retrieval from multimodal lifelogs. We propose LifeConcept, an interactive
lifelog search system that is aimed at accelerating the retrieval process and retrieving
more precise results. In this work, we introduce several new features, such as the
number of people, location clusters, and objects with color. Moreover, we obtain visual
concepts from the images with computer vision models and propose a concept recommendation
method to reduce the semantic gap. In this way, users can efficiently set up the
conditions matching their requirements and search for the desired images with appropriate
query terms based on these suggestions.
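To illustrate one plausible form of concept recommendation, the sketch below suggests related concepts from co-occurrence counts; this is our own simplification, not LifeConcept's method, and the per-image annotations are invented.

    from collections import Counter
    from itertools import combinations

    # Hypothetical per-image concept annotations from a vision model.
    annotations = [
        {"laptop", "desk", "monitor"},
        {"laptop", "coffee", "desk"},
        {"coffee", "cup", "cafe"},
    ]

    # Count how often each pair of concepts appears in the same image.
    cooccur = Counter()
    for concepts in annotations:
        for a, b in combinations(sorted(concepts), 2):
            cooccur[(a, b)] += 1

    def recommend(concept, k=3):
        """Suggest the concepts most often co-occurring with `concept`."""
        related = Counter()
        for (a, b), n in cooccur.items():
            if a == concept:
                related[b] += n
            elif b == concept:
                related[a] += n
        return [c for c, _ in related.most_common(k)]

    print(recommend("laptop"))  # e.g. ['desk', 'monitor', 'coffee']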
Memento: A Prototype Lifelog Search Engine for LSC'21
- Naushad Alam
- Yvette Graham
- Cathal Gurrin
In this paper, we introduce a new lifelog retrieval system called Memento that leverages
semantic representations of images and textual queries projected into a common latent
space to facilitate effective retrieval. It bridges the semantic gap between complex
visual scenes/events and user information needs expressed as textual and faceted queries.
The system, developed for the 2021 Lifelog Search Challenge, also has a minimalist
user interface that includes primary search, temporal search, and visual data filtering
components.
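As a minimal sketch of retrieval in a common latent space, the following ranks images by cosine similarity to a query vector; the random vectors stand in for the outputs of real text and image encoders, which the abstract does not specify.

    import numpy as np

    rng = np.random.default_rng(0)
    image_embeddings = rng.normal(size=(5, 512))   # 5 images, 512-d space
    query_embedding = rng.normal(size=512)         # encoded textual query

    def cosine_rank(query, images):
        """Rank image indices by cosine similarity to the query vector."""
        q = query / np.linalg.norm(query)
        imgs = images / np.linalg.norm(images, axis=1, keepdims=True)
        return np.argsort(imgs @ q)[::-1]

    print(cosine_rank(query_embedding, image_embeddings))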
PhotoCube at the Lifelog Search Challenge 2021
- Jihye Shin
- Alexandra Waldau
- Aaron Duane
- Björn Þór Jónsson
The Lifelog Search Challenge (LSC) is a venue where retrieval system researchers compete
in solving tasks to retrieve the correct image from a lifelog collection. At LSC 2021,
we introduce the PhotoCube system as a new competitor. PhotoCube is an interactive
media retrieval system that considers media items to exist in a hypercube in multidimensional
metadata space. To solve tasks, users explore the contents of the hypercube by dynamically
(a) applying a variety of filters and (b) projecting the hypercube to a three-dimensional
cube that is visualised on screen.
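The following Python sketch illustrates the general idea of filtering items and projecting them onto three metadata dimensions; it is our own simplification of the concept, not PhotoCube's data model, and the metadata values are invented.

    from collections import defaultdict

    items = [
        {"id": "img_1", "year": 2020, "location": "home",
         "activity": "cooking", "weekday": "Mon"},
        {"id": "img_2", "year": 2020, "location": "office",
         "activity": "work", "weekday": "Mon"},
        {"id": "img_3", "year": 2021, "location": "home",
         "activity": "cooking", "weekday": "Tue"},
    ]

    def project(items, dims, filters=None):
        """Group items into cells keyed by three metadata dimensions,
        after optionally applying attribute filters."""
        filters = filters or {}
        cube = defaultdict(list)
        for item in items:
            if all(item.get(k) == v for k, v in filters.items()):
                cube[tuple(item[d] for d in dims)].append(item["id"])
        return dict(cube)

    print(project(items, ("year", "location", "activity"),
                  filters={"weekday": "Mon"}))
    # {(2020, 'home', 'cooking'): ['img_1'],
    #  (2020, 'office', 'work'): ['img_2']}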
Voxento 2.0: A Prototype Voice-controlled Interactive Search Engine for Lifelogs
- Ahmed Alateeq
- Mark Roantree
- Cathal Gurrin
In this paper, we describe an extended version of Voxento, an interactive
voice-based retrieval system for lifelogs that has been developed to participate in
the fourth Lifelog Search Challenge (LSC'21) at ACM ICMR'21. Voxento provides a spoken
interface to the lifelog dataset, which enables a novice user to interact with
a personal lifelog using a range of vocal commands and interactions. For the version
presented here, Voxento has been enhanced with new retrieval features and better user
interaction support. In this paper, we introduce these new features, which include
dynamic result filtering, predefined interactive responses and the development of
a new retrieval API. Although Voxento was proposed for wearable technologies such
as Google Glass or interactive devices like smart TVs, the version of Voxento presented
here uses a desktop computer in order to participate in the LSC'21 competition. In
the current Voxento iteration, the user has the option to enable voice interaction
or use standard text-based retrieval.
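As a minimal illustration of voice-command dispatch (not Voxento's implementation), the sketch below routes an already-transcribed phrase to hypothetical retrieval handlers; in practice a speech-to-text service would supply the transcript.

    def search(terms):
        print(f"searching lifelog for: {' '.join(terms)}")

    def filter_results(terms):
        print(f"filtering current results by: {' '.join(terms)}")

    # Hypothetical command vocabulary mapping spoken verbs to handlers.
    COMMANDS = {"find": search, "filter": filter_results}

    def dispatch(transcript):
        """Route a spoken phrase like 'find red car' to a handler."""
        verb, *args = transcript.lower().split()
        handler = COMMANDS.get(verb)
        if handler:
            handler(args)
        else:
            print(f"unknown command: {verb}")

    dispatch("find red car in a car park")
    # searching lifelog for: red car in a car park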
Enhanced SOMHunter for Known-item Search in Lifelog Data
- Jakub Lokoč
- František Mejzlik
- Patrik Veselý
- Tomáš Souček
SOMHunter is a modern lightweight framework for known-item search in datasets
of visual data such as images or videos. The framework combines an effective W2VV++ text-to-image
search approach, a traditional Bayesian-like model for the maintenance of relevance scores
influenced by positive examples, and several types of exploration and exploitation
displays. With this initial setting in 2020, the first prototype of the system
already proved highly competitive in comparison with other state-of-the-art systems
at the Video Browser Showdown and Lifelog Search Challenge competitions. In this paper,
we present a new version of the system further extending the list of visual data search
capabilities. The new version combines localized text queries with collage queries
tested at VBS 2021 in two separate systems by our team. Furthermore, the new version
of SOMHunter will also integrate the new CLIP text search model recently released
by OpenAI. We believe that all these extensions will improve the chances of effectively
initializing a search that can then continue with the already supported browsing capabilities.
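The sketch below loosely illustrates a multiplicative, Bayesian-like relevance update driven by a positive example; it is our own simplification, not SOMHunter's exact scoring model, and the feature vectors are random stand-ins.

    import numpy as np

    rng = np.random.default_rng(1)
    features = rng.normal(size=(6, 32))   # 6 items, 32-d visual features
    scores = np.full(6, 1.0 / 6)          # uniform prior relevance

    def update(scores, features, positive_idx, temperature=1.0):
        """Multiply scores by a similarity likelihood to the clicked
        example, then renormalise so scores remain a distribution."""
        dists = np.linalg.norm(features - features[positive_idx], axis=1)
        likelihood = np.exp(-dists / temperature)
        scores = scores * likelihood
        return scores / scores.sum()

    scores = update(scores, features, positive_idx=2)
    print(np.argsort(scores)[::-1])  # item 2 and its neighbours rank first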
LifeMon: A MongoDB-Based Lifelog Retrieval Prototype
- Alexander Christian Faisst
- Björn Þór Jónsson
We present LifeMon, a new lifelog retrieval prototype targeting the LSC. LifeMon is based
on the MongoDB document store, one of a host of scalable NoSQL systems
developed over the last two decades, whose semi-structured data model seems
well matched to lifelog requirements. Preliminary results indicate that the system
is efficient and that novice users can successfully use it to solve some LSC tasks.
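As an illustration of how a semi-structured lifelog query might look in MongoDB, here is a minimal pymongo sketch; the lifelog.images collection and its document schema (tags, location, timestamp) are our own assumptions, not LifeMon's actual schema.

    from datetime import datetime
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    collection = client["lifelog"]["images"]

    # Semi-structured query: images tagged 'coffee', taken at a cafe,
    # during March 2021.
    cursor = collection.find({
        "tags": "coffee",                  # matches array elements
        "location.category": "cafe",
        "timestamp": {"$gte": datetime(2021, 3, 1),
                      "$lt": datetime(2021, 4, 1)},
    }).sort("timestamp", 1).limit(20)

    for doc in cursor:
        print(doc["_id"], doc["timestamp"])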
Flexible Interactive Retrieval SysTem 2.0 for Visual Lifelog Exploration at LSC 2021
- Hoang-Phuc Trang-Trung
- Thanh-Cong Le
- Mai-Khiem Tran
- Van-Tu Ninh
- Tu-Khiem Le
- Cathal Gurrin
- Minh-Triet Tran
With a huge collection of photos and video clips, it is essential to provide an efficient
and easy-to-use system for users to retrieve moments of interest with a wide variation
of query types. This motivates us to develop and upgrade our flexible interactive
retrieval system for visual lifelog exploration. In this paper, we briefly introduce
version 2 of our system with the following main features. Our system supports multiple
modalities for interaction and query processing, including visual query by meta-data,
text query and visual information matching based on a joint embedding model, scene
clustering based on visual and location information, flexible temporal event navigation,
and query expansion with visual examples. With its flexible system architecture,
we expect our system to easily integrate new modules that enhance its functionality.
XQC at the Lifelog Search Challenge 2021: Interactive Learning on a Mobile Device
- Emil Knudsen
- Thomas Holstein Qvortrup
- Omar Shahbaz Khan
- Björn Þór Jónsson
In a society dominated by mobile phones and ever-growing media collections, Interactive
Learning is slowly becoming the favored paradigm for managing these collections. As yet,
however, no scalable Interactive Learning system exists on a mobile phone. In this
paper, we present XQC, an Interactive Learning platform with a user interface that
fits most modern smartphones and scales to large media collections.