Innovation Forums: Video for VR/AR

Moderator: Haricharan Lakshman
Dolby Labs
Daniel Kopeinigg
CTO, JauntXR
Sam Tsai
Applied Research Scientist, Facebook

Shlomo Dubnov
Professor, UCSD
Talk Title: Improvised Video - Machine Learning for Interactive Procedural Content Generation

Machine learning holds the promise of producing cheaper and more realistic content for VR/AR. The main challenges for such applications are understanding the environment so that synthetic content integrates seamlessly with the real world, understanding user actions in order to trigger appropriate machine responses, and finally acting as the content generator itself. Recently, generative deep learning methods have shown very promising results for image and video synthesis. Moreover, sequence models can capture the dynamics of movement in synthetic agents. In this talk, I will compare generative methods with query-based image concatenative methods inspired by speech and music synthesis. We will discuss issues of quality and the problems of producing credible autonomous responses for improvisational video generation.
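To make the contrast concrete, a minimal sketch of the query-based concatenative idea the abstract mentions is given below. It assumes a corpus of precomputed frame-level feature vectors and selects, for each query frame, a corpus frame balancing similarity to the query against continuity with the previous pick; the function name, costs, and weighting are illustrative assumptions, not the speaker's actual method.

```python
import numpy as np

def concatenative_select(query, corpus, continuity_weight=0.5):
    """Illustrative query-based concatenative selection (assumed sketch).

    For each query feature vector, pick the corpus frame minimizing a
    target cost (distance to the query) plus a concatenation cost that
    favors the frame following the previous pick, which encourages
    smooth, contiguous playback of corpus material.
    """
    selected = []
    prev = None
    for q in query:
        # Target cost: how well each corpus frame matches the query.
        cost = np.linalg.norm(corpus - q, axis=1)
        if prev is not None and prev + 1 < len(corpus):
            # Concatenation cost: prefer frames close in feature space
            # to the one right after the previous pick, preserving the
            # corpus's natural dynamics.
            cost = cost + continuity_weight * np.linalg.norm(
                corpus - corpus[prev + 1], axis=1)
        prev = int(np.argmin(cost))
        selected.append(prev)
    return selected
```

Unlike a generative model, this selector can only replay material already in the corpus, which is the trade-off between fidelity and flexibility the talk compares.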

Shlomo Dubnov is a Professor of Music and of Computer Science and Engineering and a founding member of the Halıcıoğlu Data Science Institute at UCSD. He is a graduate of the prestigious Israel Defense Forces (IDF) Talpiot program and of the Rubin Music Academy in Jerusalem in composition, and holds a doctorate in computer science from the Hebrew University of Jerusalem. Prior to joining UCSD, he served as a researcher at the world-renowned Institute for Research and Coordination in Acoustics/Music (IRCAM) in Paris and headed the multimedia track of the Department of Communication Systems Engineering at Ben-Gurion University in Israel. He has been a visiting Professor at Keio University in Japan and the University of Bordeaux in France. Currently, he serves as director of the Center for Research in Entertainment and Learning at UCSD's Qualcomm Institute and teaches in the Interdisciplinary Computing in the Arts program.

Phil Chou