
Tutorials

State-of-the-Art Tutorials

Tutorials will address state-of-the-art developments in all aspects of multimedia and will be of interest to the entire multimedia community, from novices in the world of multimedia to the most seasoned researchers, and from people working in academia to industry professionals.

Call for Tutorials (Expired)

ACM MM2009 Tutorials Call for proposals can be found HERE.

Camera-ready Submission Instructions (Expired)

All ACM-sponsored proceedings will be included in the ACM Digital Library as well as prepared for printed publication. All papers for the conference must be submitted in an electronic format that conforms to ACM specifications. Your electronic submission is due on or before the morning of July 27th (11:00 AM EST, NY time). Please submit as early as possible if you are able to do so. For more details about the camera-ready paper instructions, please refer to the following site: http://www.sheridanprinting.com/typedept/mm.htm

Tutorial Co-Chairs

Svetha Venkatesh (Curtin University of Technology, Australia)
Shin'ichi Satoh (National Institute of Informatics, Japan)

Tutorial Program

Time: Monday, October 19, 2009

8:30 - 12:00 

Hall: Opal Room @ Raffles

Content-based and Concept-based Analysis for Large-Scale Image/Video Retrieval
Rong Yan (IBM T.J. Watson Research Center, USA)
Winston H. Hsu (National Taiwan University, Taiwan)

14:00 - 17:30

Hall: Opal Room @ Raffles

Music Information Retrieval: Theory and Applications
George Tzanetakis (University of Victoria, Canada)

8:30 - 13:00

Hall: Sapphire Room @ Raffles

Parallel Algorithms for Mining Large-scale Multimedia Datasets
Edward Y. Chang (Google Research, China)
Kaihua Zhu (Google Research, China)
Hongjie Bai (Google Research, China)

14:00 - 17:30

Hall: Sapphire Room @ Raffles

The Future Internet and its Prospects for Distributed Multimedia Systems and Applications
Thomas Plagemann (University of Oslo, Norway)
Vera Goebel (University of Oslo, Norway)

8:30 - 12:00

Hall: Amethyst Room @ Raffles

Ambient Media, Ambient Media Computation, and Media Technology Beyond the Current State
Artur Lugmayr (Tampere University of Technology, Finland)

14:00 - 17:30

Hall: Amethyst Room @ Raffles

Multimedia Aspects in Health Care
B. Prabhakaran (University of Texas at Dallas, USA)

Tutorial Events

  • Tutorial 1: Content-based and Concept-based Retrieval for Large-Scale Image/Video Collections
    Presenters: Rong Yan, and Winston H. Hsu
    Abstract: This tutorial aims to provide participants with broad and comprehensive coverage of the foundations and recent developments of content-based and concept-based image and video retrieval for large-scale image/video collections. We will present a balanced review of the area by covering topics of both practical and theoretical interest. In addition, we will discuss open research issues and emerging opportunities for large-scale multimedia collections, such as those from fast-growing social media. As an extension of last year's tutorial, this tutorial incorporates additional topics on the latest developments in visual word generation, efficient feature indexing, scalable concept detection, distributed computing platforms, and web-scale multimedia corpora. Finally, we will project these techniques onto promising applications and open problems such as image search, duplicate detection, distributed concept detection, photo-based question answering, annotation by search, 3D photo tourism, and video advertisements.
    Link: More details about the tutorial and the biographies of presenters can be found HERE.

  • Tutorial 2: Music Information Retrieval: Theory and Applications
    Presenters: George Tzanetakis
    Abstract: Music Information Retrieval (MIR) is an emerging research discipline that deals with all aspects of organizing and extracting information from music. The goal of this tutorial is to provide a thorough theoretical overview of the state-of-the-art in Music Information Retrieval combined with a practical hands-on demonstration of several existing tools and resources that can be used for research in this area. Specific emphasis will be given to how MIR techniques relate to other fields of current multimedia research. MIR is an inherently interdisciplinary area touching on several research areas such as digital signal processing, machine learning, perception, visualization, human-computer interaction, content-based retrieval, and digital libraries. The target audience is multimedia researchers interested in expanding their research to this new area, as well as anyone interested in acquiring a high-level view of the state-of-the-art in this field in terms of problems, tools, and datasets.
    Link: More details about the tutorial and the biographies of presenters can be found HERE.

  • Tutorial 3: Parallel Algorithms for Mining Large-scale Multimedia Datasets

    Presenters: Edward Y. Chang, Kaihua Zhu, and Hongjie Bai, Google Research
    Abstract: Thanks to the explosive growth of photo/video capture devices, the number of online photos and videos is now in the tens of billions. Specifically, YouTube attracts more than 10 hours of video per minute, and photo sites such as Flickr and PicasaWeb receive millions of uploads per week. To organize, index, and retrieve these large-scale multimedia data, a system must employ scalable algorithms. Therefore, at the forefront, the research community ought to consider solving real, large-scale problems rather than dealing with small toy datasets, whose success does not translate to real-world, large datasets.
    Link: More details about the tutorial and the biographies of presenters can be found HERE.

  • Tutorial 4: The Future Internet and its Prospects for Distributed Multimedia Systems and Applications
    Presenters: Thomas Plagemann, Vera Goebel
    Abstract: Today's Internet has evolved from an early experimental small-scale network into a world-wide infrastructure that is used on a day-to-day basis by companies, public bodies, governments, individuals, etc. Multimedia systems and applications are to a large majority distributed and as such depend on the Internet. All major architectural concepts of the Internet, like IP, DNS, and BGP, have not changed since their introduction. The increasing number of Internet users, growing traffic, and requirements like security, privacy, and reliability that have become more and more important not only stress the Internet but also pinpoint many of its shortcomings. As a recent consequence, many "Future Internet" projects have been launched. While the terminology used in the ongoing projects differs strongly, there is a certain consensus among several advanced projects on the core concepts that need to be revised for the Future Internet, including naming and addressing, routing and forwarding, and the structuring of networks. Given the importance of the Internet for distributed multimedia systems and applications, it should be of great interest for researchers and developers of multimedia systems and applications to understand these concepts and how they can be used. It is interesting to note that the major Future Internet projects have a network-centric view, but the concepts they provide can be used to easily develop, for example, content-centric systems.
    Link: More details about the tutorial and the biographies of presenters can be found HERE.
  • Tutorial 5: Ambient Media, Ambient Media Computation, and Media Technology Beyond Ambient Media

    Presenters: Artur Lugmayr
    Abstract: McLuhan's statement, "the medium is the message," is ever-present in discussions within the media community. But what happens when ubiquitous and pervasive computation places the medium 'inside' the natural environment of humans? How do location-based services, context awareness, emotionally responsive interfaces, touch- and gesture-based interfaces, haptic devices, biometrics, sensor data fusion, mobile embedded systems, distributed networks, and smart data mining change the way media are presented, distributed, and consumed? These technical enablers go far beyond well-known computer screen concepts or simple keyboard-and-mouse interaction methods. Within the scope of this tutorial, several aspects of ambient media are presented: the media viewpoint, the computational/technological viewpoint, and the HCI aspects. The tutorial presents case studies and the latest research methods in the field of ambient media. Examples of existing ambient media services, such as ambient assisted living, user experience design, sensor networks, distributed systems, and mobile location-based services, are explained in further detail. The tutorial intends to train participants in the principles of ambient media and its concepts, content creation techniques, and methods. The tutorial rounds off with a more visionary viewpoint on future media technology: the use of biological metaphors in presenting media, called 'biomedia' for short.
    Link: More details about the tutorial and the biographies of presenters can be found HERE.

  • Tutorial 6: Multimedia Aspects in Health Care

    Presenters: B. Prabhakaran
    Abstract: This tutorial describes the technologies behind BSNs, both in terms of the hardware infrastructure and the basic software. First, we outline the BSN hardware features and the related requirements. We then discuss the energy and communication choices for BSNs. Next, we discuss approaches for classification, data mining, visualization, and securing these data. We also show several demonstrations of body sensor networks as well as the software that aids in analyzing the data.
    Link: More details about the tutorial and the biographies of presenters can be found HERE.

Content Access

Tutorial 1: Content-based and Concept-based Retrieval for Large-Scale Image/Video Collections

This tutorial aims to provide participants with broad and comprehensive coverage of the foundations and recent developments of content-based and concept-based image and video retrieval for large-scale image/video collections. We will present a balanced review of the area by covering topics of both practical and theoretical interest. In addition, we will discuss open research issues and emerging opportunities for large-scale multimedia collections, such as those from fast-growing social media. As an extension of last year's tutorial, this tutorial incorporates additional topics on the latest developments in visual word generation, efficient feature indexing, scalable concept detection, distributed computing platforms, and web-scale multimedia corpora. Finally, we will project these techniques onto promising applications and open problems such as image search, duplicate detection, distributed concept detection, photo-based question answering, annotation by search, 3D photo tourism, and video advertisements.
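As a minimal illustration of the visual-word idea listed among the new topics, the following sketch quantizes stand-in local descriptors against a k-means codebook and represents an "image" as a normalized word histogram. The descriptor dimensionality, vocabulary size, and data are arbitrary choices for this example, not values from the tutorial.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for SIFT-like local descriptors pooled from a training corpus.
descriptors = rng.normal(size=(500, 32))

# Learn a small visual vocabulary (codebook) via k-means.
k = 10
codebook = KMeans(n_clusters=k, n_init=10, random_state=0).fit(descriptors)

# Represent one "image" (a bag of 40 descriptors) as a visual-word histogram.
image_desc = rng.normal(size=(40, 32))
words = codebook.predict(image_desc)
histogram = np.bincount(words, minlength=k).astype(float)
histogram /= histogram.sum()  # normalized bag-of-visual-words vector
```

Such histograms can then be indexed and compared like text documents, which is what makes the representation attractive at web scale.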

 

Presenter's biography: Dr. Rong Yan is a Research Staff Member in the Intelligent Information Management Department at the IBM T. J. Watson Research Center. Dr. Yan received his M.Sc. (2004) and Ph.D. (2006) degrees from Carnegie Mellon University's School of Computer Science and joined IBM Research in 2006. His research interests include (multimedia) information retrieval, video content analysis, large-scale machine learning, data mining, and computer vision. Dr. Yan received the Best Paper Runner-Up awards at ACM MM 2004 and ACM CIVR 2007. He was the leading designer of the automatic video retrieval systems that achieved the best performance in the world-wide TRECVID evaluations in 2003/2005, and of the best interactive search system in 2007. He received the IBM Research External Recognition Award in 2007. Dr. Yan has authored or co-authored 5 book chapters and more than 60 international conference and journal papers. Dr. Yan has served or is serving on the Program Committees of more than 20 ACM/IEEE conferences, and as co-chair of 5 conferences/workshops. He is a reviewer for more than 10 international journals.

 

Presenter's biography: Dr. Winston Hsu is an Assistant Professor in the Graduate Institute of Networking and Multimedia, National Taiwan University, and the founder of the MiRA (Multimedia indexing, Retrieval, and Analysis) Research Group. He received his Ph.D. (2006) from Columbia University, New York. Before that, he worked at a multimedia software company, serving as Engineer, Project Leader, and R&D Manager. Dr. Hsu's current research interests are in enabling "Next-Generation Multimedia Retrieval" and generally include content analysis, mining, retrieval, and machine learning over large-scale multimedia databases. Dr. Hsu's research work in video analysis and retrieval has produced some of the best systems in the TRECVID benchmarks since 2003. He received the Best Paper Runner-Up award at ACM Multimedia 2006 and was named in the "Watson Emerging Leaders in Multimedia Workshop 2006" by IBM. Dr. Hsu is a frequent reviewer for major international journals. He is a member of IEEE and ACM.

Tutorial 2: Music Information Retrieval: Theory and Applications

Music has always been profoundly transformed by advances in technology. Examples of such transformations include the use of music notation, the invention of recording, and, more recently, digital music storage and distribution. Today, portable digital music players such as the iPod can store thousands of songs, and online music sales have been steadily increasing. It is likely that in the near future anyone will be able to access all of recorded music in human history digitally. In order to interact efficiently with these large collections of music, it is necessary to develop tools that have some understanding of the actual musical content. Music Information Retrieval (MIR) is an emerging research discipline that deals with all aspects of organizing and extracting information from music. Interest in MIR has been steadily increasing, as evidenced by the number of MIR-related papers at ICASSP, ACM Multimedia, ICME, and other conferences, as well as by the ninth year of ISMIR, a conference solely focused on MIR.

MIR is a rapidly growing research area with increasing commercial potential. The field has matured, with a large number of tools and datasets available for experimentation and evaluation, and a diverse set of challenging and fascinating problems has been proposed. There are many interesting ideas from the ACM Multimedia community that could be used in a music-related context. The goal of this tutorial is to provide a thorough theoretical overview of the state-of-the-art in Music Information Retrieval combined with a practical hands-on demonstration of several existing tools and resources that can be used for research in this area. Specific emphasis will be given to how MIR techniques relate to other fields of current multimedia research. MIR is an inherently interdisciplinary area touching on several research areas such as digital signal processing, machine learning, perception, visualization, human-computer interaction, content-based retrieval, and digital libraries. The target audience is multimedia researchers interested in expanding their research to this new area and anyone interested in acquiring a high-level view of the state-of-the-art in this field in terms of problems, tools, and datasets.
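To give a concrete taste of the signal-processing side of MIR mentioned above, the following sketch computes one canonical audio feature, the spectral centroid (the "center of mass" of the spectrum), on a synthetic tone with NumPy. The sample rate and tone are invented for this example; real toolkits such as Marsyas compute many such features per analysis frame.

```python
import numpy as np

fs = 22050                           # sample rate in Hz (assumed for this sketch)
t = np.arange(0, 0.1, 1 / fs)        # 100 ms analysis frame
tone = np.sin(2 * np.pi * 440 * t)   # pure 440 Hz sine (concert A)

# Magnitude spectrum of the frame and the frequency of each bin.
spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(tone.size, 1 / fs)

# Spectral centroid: amplitude-weighted mean frequency.
centroid = (freqs * spectrum).sum() / spectrum.sum()
```

For a pure tone the centroid sits at the tone's frequency; for real music it tracks perceived "brightness," which is why it is a common input to genre and timbre classifiers.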

Presenter's biography: George Tzanetakis is an Assistant Professor of Computer Science at the University of Victoria. He frequently teaches courses in Music Information Retrieval and Multimedia Processing. He received his PhD degree in Computer Science from Princeton University in May 2002 and was a Postdoctoral Fellow at Carnegie Mellon University, working on query-by-humming systems with Prof. Dannenberg and on video and audio retrieval with the Informedia group. In addition, he was chief designer of the patented audio fingerprinting technology of Moodlogic Inc. He has consulted extensively on audio- and music-related topics for companies including Nuance, Teligence, and IVL. He is also the main designer and developer of Marsyas (http://marsyas.sness.net), a well-known open-source software framework for audio processing with specific emphasis on music information retrieval.

His research deals with all stages of audio content analysis, such as feature extraction, segmentation, and classification, with a specific focus on Music Information Retrieval (MIR). His work on musical genre classification received an IEEE Signal Processing Young Author award in 2004 and is frequently cited. He has presented tutorials on MIR and audio feature extraction at several international conferences. He is an assistant editor of Computer Music Journal and was the chair of the Int. Conf. on Music Information Retrieval (ISMIR) in 2006. He is also an active musician and has studied saxophone performance, music theory, and composition. More information can be found at: http://www.cs.uvic.ca/~gtzan.

Tutorial 3: Parallel Algorithms for Mining Large-scale Multimedia Datasets

Thanks to the explosive growth of photo/video capture devices, the number of online photos and videos is now in the tens of billions. Specifically, YouTube attracts more than 10 hours of video per minute, and photo sites such as Flickr and PicasaWeb receive millions of uploads per week. To organize, index, and retrieve these large-scale multimedia data, a system must employ scalable algorithms. Therefore, at the forefront, the research community ought to consider solving real, large-scale problems rather than dealing with small toy datasets, whose success does not translate to real-world, large datasets.

In this tutorial, we will present key models and parallel algorithms for dealing with data at the giga-scale. We will also provide participants with a huge annotated dataset (at least two million photos) to conduct future research. Our four-hour tutorial is outlined as follows:

Hour 1: Introduce multimedia machine learning modeling and large-scale parallel algorithms, including Parallel Spectral Clustering, Parallel Frequent Itemset Mining, Parallel Support Vector Machines (PSVM), and Parallel Latent Dirichlet Allocation (PLDA).
Hour 2: Drill down into the details of PSVM and PLDA.
Hour 3: Conduct a code lab on the open-source PSVM, giving participants experience in running a large-scale training exercise.
Hour 4: Conduct a code lab on the open-source PLDA, giving participants hands-on experience in running PLDA.
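The code labs use the open-source PSVM and PLDA packages. As a rough single-machine analogue of the two models involved, the following sketch trains an SVM classifier and fits an LDA topic model with scikit-learn on toy data; all shapes, parameters, and data here are invented for illustration and are not the tutorial's distributed code.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Toy two-class data standing in for image feature vectors (cf. PSVM).
X = rng.normal(size=(200, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
svm = SVC(kernel="rbf").fit(X, y)
train_acc = svm.score(X, y)

# Toy document-term counts standing in for visual-word histograms (cf. PLDA).
counts = rng.integers(0, 5, size=(100, 50))
lda = LatentDirichletAllocation(n_components=5, random_state=0)
doc_topics = lda.fit_transform(counts)  # each row is a topic distribution
```

The parallel versions distribute exactly these computations (kernel evaluations for the SVM, Gibbs/variational updates for LDA) across many machines so that corpora of millions of items become tractable.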

What a participant will be able to access during and after the course:

  • PSVM,
  • PSVD,
  • A 2-million image dataset for conducting experiments, and
  • Invaluable experience using these tools and the dataset.

Presenter's biography: Edward Chang joined the Department of Electrical & Computer Engineering at the University of California, Santa Barbara, in September 1999. Ed received his tenure in March 2003 and was promoted to full professor of Electrical Engineering in 2006. His recent research activities are in the areas of distributed data mining and its applications to rich-media data management and social-network collaborative filtering. His research group (which consists of members from Google, UC, MIT, Tsinghua, PKU, and Zheda) recently parallelized SVMs (NIPS 07), PLSA (KDD 08), Association Mining (ACM RS 08), Spectral Clustering (ECML 08), and LDA (WWW 09) (see the MMDS/CIVR keynote slides for details) to run on thousands of machines for mining large-scale datasets. Ed has served on ACM (SIGMOD, KDD, MM, CIKM), VLDB, IEEE, WWW, and SIAM conference program committees, and has co-chaired several conferences including MMM, ACM MM, ICDE, and WWW. Ed is a recipient of the IBM Faculty Partnership Award and the NSF Career Award. He has headed Google Research in China since March 2006. He received his M.S. in IEOR and M.S. in Computer Science from UC Berkeley and Stanford, respectively, and received his PhD in Electrical Engineering from Stanford University in 1999.

Presenter's biography: Hongjie Bai received the B.S. degree in Computer Science from Peking University in 2003 and the M.S. degree from the EDA lab of Tsinghua University in 2007. He joined Google China Research in July 2007, and his interest is in large-scale machine learning algorithms. He worked on Parallel SVM (NIPS 07), Parallel SVD, Parallel Spectral Clustering (ECML 08), and Parallel LDA (WWW 09, AAIM 09), making them practically runnable on thousands of machines for web-scale datasets.

 

Presenter's biography: Kaihua Zhu graduated from Shanghai JiaoTong University and joined Google in 2006. He is the tech lead of the web-spam-fighting project in China and also participates in parallel machine learning algorithm research.

 

 

Tutorial 4: The Future Internet and its Prospects for Distributed Multimedia Systems and Applications

Today's Internet has evolved from an early experimental small-scale network into a world-wide infrastructure that is used on a day-to-day basis by companies, public bodies, governments, individuals, etc. Multimedia systems and applications are to a large majority distributed and as such depend on the Internet. All major architectural concepts of the Internet, like IP, DNS, and BGP, have not changed since their introduction. The increasing number of Internet users, growing traffic, and requirements like security, privacy, and reliability that have become more and more important not only stress the Internet but also pinpoint many of its shortcomings. As a recent consequence, many "Future Internet" projects have been launched. While the terminology used in the ongoing projects differs strongly, there is a certain consensus among several advanced projects on the core concepts that need to be revised for the Future Internet, including naming and addressing, routing and forwarding, and the structuring of networks. Given the importance of the Internet for distributed multimedia systems and applications, it should be of great interest for researchers and developers of multimedia systems and applications to understand these concepts and how they can be used. It is interesting to note that the major Future Internet projects have a network-centric view, but the concepts they provide can be used to easily develop, for example, content-centric systems.

Presenter's biography: Thomas Plagemann has been a Professor at the University of Oslo since 1996. Currently, he leads the research group in Distributed Multimedia Systems at the Department of Informatics. He received a Dr.Sc. degree from the Swiss Federal Institute of Technology (ETH) in 1994 and in 1995 received the Medal of ETH Zurich for his excellent doctoral thesis. He has successfully managed national and international projects, like INSTANCE, Ad-Hoc InfoWare, and Midas. He is currently involved in two large Future Internet projects: in the ANA project (a European project with nine partners) he is actively performing research, and he serves on the scientific advisory board of SpoVNet (a German project with five academic partners). He has published over 100 papers in peer-reviewed journals, conferences, and workshops in his field. He serves as Associate Editor for ACM Transactions on Multimedia Computing, Communications and Applications and as Editor-in-Chief of the Springer Multimedia Systems Journal. He has successfully given tutorials at IDMS 1999, ACM Multimedia 2001, PROMS 2001, DAIS 2002, MIPS 2004, and ConTel 2005.

Presenter's biography: Vera Goebel is a Professor in the Distributed Multimedia Systems group at the Department of Informatics of the University of Oslo, Norway. She obtained a PhD degree from the University of Zurich, Switzerland, in 1994 and an MSc from the University of Erlangen-Nuremberg, Germany, in 1989. Her research interests are Distributed Systems, Database Systems, Middleware, Operating Systems, and the Future Internet. Currently, she is responsible for the ANA project at the University of Oslo.

 

Tutorial 5: Ambient Media, Ambient Media Computation, and Media Technology Beyond Ambient Media

McLuhan's statement, "the medium is the message," is ever-present in discussions within the media community. But what happens when ubiquitous and pervasive computation places the medium 'inside' the natural environment of humans? How do location-based services, context awareness, emotionally responsive interfaces, touch- and gesture-based interfaces, haptic devices, biometrics, sensor data fusion, mobile embedded systems, distributed networks, and smart data mining change the way media are presented, distributed, and consumed? These technical enablers go far beyond well-known computer screen concepts or simple keyboard-and-mouse interaction methods. Within the scope of this tutorial, several aspects of ambient media are presented: the media viewpoint, the computational/technological viewpoint, and the HCI aspects. The tutorial presents case studies and the latest research methods in the field of ambient media. Examples of existing ambient media services, such as ambient assisted living, user experience design, sensor networks, distributed systems, and mobile location-based services, are explained in further detail. The tutorial intends to train participants in the principles of ambient media and its concepts, content creation techniques, and methods. The tutorial rounds off with a more visionary viewpoint on future media technology: the use of biological metaphors in presenting media, called 'biomedia' for short.

1. Objectives and Schedule
The goal of the tutorial is to train participants in the basics of ambient media, especially viewing ambient media from the media, human-computer interaction, and technical viewpoints. The tutorial is designed for a general audience with an interest in a newly emerging media environment and its possibilities.

Part 1: Introduction, Concepts Overview, Media Viewpoint (1.5 hours)
Part 2: HCI Aspects, Technical Components, Outlook in the Future (1.5 hours)

Please visit http://www.cs.tut.fi/~lartur for further material and information.

2. Focus Points of the Tutorial
The tutorial covers the following topics in further depth:

  • case-studies of existing ambient media services
  • basic concepts and technologies of ambient media
  • location based services, mobile interaction, and smart environments
  • user experience and interaction design guidelines
  • ambient content production and creation
  • natural and intuitive interaction methods
  • context awareness and intelligent behavior modeling
  • proactive and emotional responsive system designs
  • ambient services and business models
  • ambient social networks

 

Presenter's biography: Adj.-Prof. Dr.-Techn. Artur Lugmayr describes himself as a creative thinker, and his scientific work is situated between art and science. His vision is to create media experiences and evaluate business opportunities on future emerging media technology platforms. He begins his full professorship at the Faculty of Business and Technology Management, Department of Business Information Management and Logistics, on the topic of Entertainment and Media Production Management (EMMI), in fall 2009. Currently, he is the head and founder of the New AMbient MUltimedia (NAMU) research group at the Tampere University of Technology (Finland), which is part of the Finnish Academy Centre of Excellence in Signal Processing from 2006 to 2011 (http://namu.cs.tut.fi). He holds a Dr.-Techn. degree from the Tampere University of Technology (TUT, Finland) and is currently engaged in Dr.-Arts studies at the School of Motion Pictures, TV and Production Design (UIAH, Helsinki). He chaired the ISO/IEC ad-hoc group "MPEG-21 in broadcasting"; won the 2003 NOKIA Award with the textbook "Digital Interactive TV and Metadata," published by Springer-Verlag in 2004; is a representative of the Swan Lake Moving Image & Music Award (http://www.swan-lake-award.org/); is a board member of MindTrek (http://www.mindtrek.org); is an EU project proposal reviewer; has been an invited keynote speaker, organizer, and reviewer for several conferences; and has contributed one book chapter and written over 25 scientific publications. His passion in private life is digital film-making. He is the founder of the production company LugYmedia Inc. (http://www.lugy-media.tv).

Tutorial 6: Multimedia Aspects in Health Care

Recently, Body Sensor Networks (BSNs) have been deployed for monitoring and managing medical conditions as well as human performance in sports. These BSNs include various sensors such as accelerometers, gyroscopes, EMG (electromyogram), EKG (electrocardiogram), and other sensors, depending on the needs of the medical condition. Data from these sensors are typically time series, and the data from multiple sensors form multiple, multidimensional time series. Analyzing data from such multiple medical sensors poses several challenges: different sensors have different characteristics, different people generate different patterns through these sensors, and even for the same person the data can vary widely depending on time and environment.

Body Sensor Network (BSN) data has several similarities to other multimedia data. BSN data may have both discrete and continuous components, with or without real-time requirements. The data can be voluminous. Continuous BSN data may need signal processing techniques for recognition and interpretation.
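To make the time-series nature of BSN data concrete, here is a small, hypothetical sketch (not from the tutorial) of the windowed feature extraction that classification and mining pipelines of this kind typically start from, applied to a synthetic accelerometer-like trace. The sampling rate, window length, and signal are invented for illustration.

```python
import numpy as np

fs = 50                       # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)  # 10 seconds of data

# Synthetic 1-D accelerometer trace: a 2 Hz "gait" component plus noise.
signal = np.sin(2 * np.pi * 2 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

win = 2 * fs                  # non-overlapping 2-second windows
features = []
for start in range(0, signal.size - win + 1, win):
    w = signal[start:start + win]
    spectrum = np.abs(np.fft.rfft(w))
    spectrum[0] = 0.0         # ignore the DC component
    dom_freq = np.fft.rfftfreq(win, 1 / fs)[np.argmax(spectrum)]
    features.append((w.mean(), w.std(), dom_freq))

features = np.array(features)  # one row of (mean, std, dominant frequency) per window
```

Feature rows like these, pooled across several body-worn sensors, are what a downstream classifier would consume to recognize activities or medical events.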

This tutorial describes the technologies behind BSNs, both in terms of the hardware infrastructure and the basic software. First, we outline the BSN hardware features and the related requirements. We then discuss the energy and communication choices for BSNs. Next, we discuss approaches for classification, data mining, visualization, and securing these data. We also show several demonstrations of body sensor networks as well as the software that aids in analyzing the data.

Tutorial Outline

The following topics will be discussed during the tutorial.

  1. Introduction: (30 minutes)
    Discussion of the hardware components that go into BSNs, TinyOS, and the wireless communication standards for BSNs.
  2. Operating System and Wireless Communication: (60 minutes)
    Presentation on the IEEE 802.15 series of standards for BSNs and the TinyOS architecture.
  3. Demonstration: (30 minutes)
    Hands-on demo of a few BSN configurations and personal trials for participants (participants will wear the BSNs and try them out).
  4. Data Characteristics of BSNs: (30 minutes)
    Outline of the characteristics of data from different body sensors and how these characteristics influence classification and mining.
  5. Strategies for Classification, Data Mining, and Visualization: (30 minutes)
    Discussion of the techniques that have been developed and their performance considerations.
  6. Demonstration of Mining & Classification Software: (30 minutes)

 

Presenter's biography: Dr. B. Prabhakaran is an Associate Professor in the Computer Science Department, University of Texas at Dallas. He has been working in the area of multimedia systems: animation and multimedia databases, authoring and presentation, resource management, and scalable web-based multimedia presentation servers. Dr. Prabhakaran received the prestigious National Science Foundation (NSF) CAREER Award in 2003 for his proposal on animation databases. He is also the Principal Investigator for a US Army Research Office (ARO) grant on 3D data storage, retrieval, and delivery. He has published several research papers in refereed conferences and journals in this area.

He has served as an Associate Chair of the ACM Multimedia Conferences in 2006 (Santa Barbara, CA), 2003 (Berkeley, CA), 2000 (Los Angeles, CA), and 1999 (Orlando, FL). He has served as guest editor (special issue on Multimedia Authoring and Presentation) for the ACM Multimedia Systems journal. He also serves on the editorial board of the Multimedia Tools and Applications journal, Springer Publishers. He has served as a program committee member for several multimedia conferences and workshops, and has presented tutorials at ACM Multimedia and other multimedia conferences.

Dr. Prabhakaran has served as a visiting research faculty member with the Department of Computer Science, University of Maryland, College Park. He has also served as a faculty member in the Department of Computer Science, National University of Singapore, as well as at the Indian Institute of Technology, Madras, India.

© ACM Multimedia 2009
Technical Support: acmmm09@jdl.ac.cn