CEA '21: Proceedings of the 13th International Workshop on Multimedia for Cooking and Eating Activities

SESSION: Session 1: Long Oral Session

IYASHI Recipe: Cooking Recipe Recommendation for Healing based on Physical Conditions
and Human Relations

  • Takuya Yonezawa
  • Shion Yamaguchi
  • Yuanyuan Wang
  • Kazutoshi Sumiya
  • Yukiko Kawai

In this paper, we propose a recipe recommendation method that can heal both the provider
and the recipient of food. In particular, we focus on the motivation of recipe contributors
and propose a recommendation method that considers not only the cooking procedure but
also the relationship between the provider and the recipient and their health conditions.
Specifically, we extract feature words by learning from the recipe-data items that contain
information beyond the cooking procedure, such as the "title," "introduction of the
recipe," "one-point information," and "trigger." In addition, each item of the recipe is
analyzed for sentiment and ranked based on the extracted feature words and the sentiment
values for physical condition and relationship. We validate the usefulness of the proposed
method by evaluating the relationship and physical-condition feature words it extracts
and by evaluating the recipe ranking using Rakuten's recipe data.

Increasing Diversity through Dynamic Critique in Conversational Recipe Recommendations

  • Fakhri Abbas
  • Nadia Najjar
  • David Wilson

Conversational recommender systems help to guide users to discover items of interest
while exploring the search space. During the exploration process, the user provides
feedback on recommended items to refine subsequent recommendations. On the one hand, critiquing
as a form of feedback has proven effective for conversational interactions. On the
other hand, diversifying the recommended items during exploration can help increase
user understanding of the search space, which critiquing alone may not achieve. Both
aspects are important elements for recommender applications in the food domain. Conversational
exploration can help to introduce new food items, and diversity in diet has been shown
to predict nutritional health. This paper introduces a novel approach that combines
critique and diversity to support conversational recommendation in the recipe domain.
Our initial evaluation in comparison to a baseline similarity-based recommender shows
that the proposed approach increases diversity during the exploration process.

Region-Based Food Calorie Estimation for Multiple-Dish Meals

  • Kaimu Okamoto
  • Kento Adachi
  • Keiji Yanai

One of the major tasks in food computing is vision-based food calorie estimation.
Unfortunately, however, food image datasets annotated with calorie amounts are very
hard to obtain; as far as we know, no large-scale food dataset annotated with calorie
amounts exists. However, some Web sites provide photos of food set menus with only
total calorie values. In this work, we therefore crawl such data from the Web and use
them as training data for a vision-based food calorie estimation model. In general,
estimating the calorie amount of each food item requires the per-item calorie values
in meal photos, but these are not available in this setting. We therefore propose a
model employing food segmentation that can estimate the calorie amount of each food
item from only the total calorie values of set-meal photos. The experimental results
showed that our region-segmentation-based calorie estimation model was able to roughly
estimate the calorie amounts of individual food items.

SESSION: Session 2: Short Oral Session

Few-Shot and Zero-Shot Semantic Segmentation for Food Images

  • Yuma Honbu
  • Keiji Yanai

With the popularity of health management applications, awareness of dietary management
is increasing. When calculating the number of calories in a dish, discriminating between
food regions is an important factor. However, when using deep learning, a large amount
of data is required for training, and it is impractical to collect data for countless
food categories. In recent years, a method called few-shot segmentation has been studied
to learn a semantic segmentation model using a small amount of training data. In this
study, we propose a few-shot and zero-shot segmentation model which targets food images
to overcome the insufficient amount of food training data and show the effectiveness
of the proposed model on a semantic segmentation task for new food classes. In the
proposed model, we employed the word embedding pretrained with a large-scale recipe
text dataset, which results in better accuracy than the previous methods.

Boosting Personalized Food Image Classifier by Sharing Food Records

  • Seum Kim
  • Yoko Yamakata
  • Kiyoharu Aizawa

Food image recognition tasks are generally addressed using a closed dataset. In
a real-world setting, however, the dataset is updated as new classes of food appear,
and it is impossible to train a model in advance that distinguishes all kinds of food.
Inter-class similarity and intra-class diversity also make food image recognition
more challenging. Previous works have shown that a personalized classifier using
an individual user's food records can address some of these problems. On top of
that, we propose a personalized classifier that learns not only from the accumulating
food records of the individual user but also from the growing records of all users.
To conduct a realistic experiment, we build a new dataset of daily food images using
a food-logging application called FoodLog Athl. As a result, our proposed method significantly
outperforms prior personalized classification methods for food image recognition in
a realistic setting.

World Food Atlas Project

  • Ali Rostami
  • Zhouhang Xie
  • Akihisa Ishino
  • Yoko Yamakata
  • Kiyoharu Aizawa
  • Ramesh Jain

The coronavirus pandemic has forced people all over the world to stay "at home."
Living a life of hardly ever going out, we have come to realize how the food we eat
affects our bodies. What can we do to understand our food better and control it
better? To provide a clue, we are trying to build a World Food Atlas (WFA) that
collects knowledge about food from all over the world. In this paper, we present
two of our trials. The first is the Food Knowledge Graph (FKG), a graphical
representation of knowledge about food and ingredient relationships derived from
recipes and food nutrition data. The second is FoodLog Athl and RecipeLog,
applications for collecting people's detailed records of their food habits. We also
discuss several problems that we try to solve to build the WFA by integrating these
two ideas.