Informed Feature Representations for Music and Motion
Meinard Müller (Saarland University and MPI Informatik)

PRESENTATION SLIDES: S2-Mueller.pdf

SPEAKER BIO:

Meinard Müller received the Diplom degree in mathematics, the Ph.D. degree in computer science, and the Habilitation degree in the field of multimedia retrieval from Bonn University, Bonn, Germany. In 2002/2003, he conducted postdoctoral research in combinatorics in the Department of Mathematics at Keio University, Tokyo, Japan. In 2007, he completed his Habilitation at Bonn University in the field of multimedia retrieval, writing the book "Information Retrieval for Music and Motion," which appeared as a Springer monograph. Currently, he is a member of Saarland University and the Max-Planck-Institut für Informatik, Saarbrücken, Germany, where he leads the research group "Multimedia Information Retrieval and Music Processing" within the Cluster of Excellence on Multimodal Computing and Interaction. His recent research interests include content-based multimedia retrieval, audio signal processing, music processing, music information retrieval, and motion processing.


ABSTRACT:

Modern information society is experiencing an explosion of digital content, comprising text, audio, video, and graphics. The challenge is to organize, understand, and search this multimodal information in a robust, efficient, and intelligent manner. One particular difficulty arises from the fact that multimedia objects, even when they are similar from a structural or semantic viewpoint, often exhibit significant spatial or temporal differences. In this talk, I will discuss various strategies for handling such variations by means of informed feature representations that exploit specific properties of the underlying data. In particular, dealing with two different time-dependent multimedia domains, music audio data and human motion data, I will give numerous examples of how to design robust features and how to apply them to support content-based music retrieval and motion reconstruction.
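
To make the idea of an informed feature representation concrete: a canonical example from the music audio domain is the chroma feature, which folds the spectral energy of each analysis frame onto the twelve pitch classes of the equal-tempered scale, yielding a representation that is largely robust to changes in timbre and dynamics. The following minimal NumPy sketch is illustrative only and not taken from the talk; the function name chroma_features and parameters such as frame_len and hop are assumptions made here for the example.

import numpy as np

def chroma_features(x, sr, frame_len=4096, hop=2048):
    """Fold the spectral energy of each STFT frame onto the 12 pitch classes.

    Minimal illustrative sketch; practical systems add tuning estimation,
    logarithmic compression, and temporal smoothing.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    valid = freqs > 0  # skip the DC bin, which has no pitch
    # Map every frequency bin to the pitch class of its nearest MIDI pitch.
    midi = np.full(len(freqs), -1)
    midi[valid] = np.round(69 + 12 * np.log2(freqs[valid] / 440.0)).astype(int)
    pitch_class = midi % 12
    chroma = np.zeros((12, n_frames))
    for t in range(n_frames):
        frame = x[t * hop : t * hop + frame_len] * window
        power = np.abs(np.fft.rfft(frame)) ** 2
        for pc in range(12):
            chroma[pc, t] = power[valid & (pitch_class == pc)].sum()
    # Normalize each frame so the feature is invariant to overall dynamics.
    return chroma / np.maximum(chroma.sum(axis=0), 1e-12)

# Example: one second of a 440 Hz tone (the pitch A4, pitch class 9).
sr = 22050
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
c = chroma_features(tone, sr)
print(c[:, 0].round(2))  # energy concentrates in pitch class 9 (A)

Because each frame is normalized, two renditions of the same piece played with different instrumentation or loudness produce similar chroma sequences, which is precisely the kind of robustness to spatial and temporal variation that informed feature design aims for.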