For AR/VR experiences to be immersive, the requirements on image quality are demanding: high resolution, wide field of view, high frame rate, 6-DoF user perspective, and so on. While the lack of some of these features leads to a loss of immersion, the lack of others can cause discomfort. Recent developments in image capture and processing enable us to create media-based immersive experiences involving real-world scenes. However, building practical systems requires many engineering decisions on content creation, representation, interaction, and rendering, and may require a fundamental redesign of the media pipeline in the context of AR/VR. In this panel, we will discuss some of the challenges and trade-offs in building AR/VR media systems.
Hari Lakshman is currently a Research Scientist at Facebook. Prior to this, he was a Research Architect at Dolby Labs, a Visiting Assistant Professor at Stanford University, and a Researcher at Fraunhofer HHI, Germany. His interests are in image processing, video coding, and AR/VR, with a focus on creating immersive visual experiences. A central theme of his work in recent years has been establishing a media pipeline from novel capture devices that model the plenoptic function to display devices such as VR and AR head-mounted displays.
Sam Tsai is an Applied Research Scientist at Facebook, developing machine learning-based computer vision solutions to enable intelligent cameras. Prior to Facebook, he worked at Amazon’s subsidiary A9, developing visual search solutions, and at Realtek, developing embedded multimedia system solutions.
Shlomo Dubnov is a Professor of Music and of Computer Science and Engineering and a founding member of the Halıcıoğlu Data Science Institute at UCSD. He is a graduate of the prestigious Israel Defence Forces (IDF) Talpiot program, studied composition at the Rubin Music Academy in Jerusalem, and holds a doctorate in computer science from the Hebrew University of Jerusalem. Prior to joining UCSD, he was a researcher at the world-renowned Institute for Research and Coordination in Acoustics/Music (IRCAM) in Paris and headed the multimedia track in the Department of Communication Systems Engineering at Ben-Gurion University in Israel. He was a visiting professor at KEIO University in Japan and the University of Bordeaux, France. He currently serves as director of the Center for Research in Entertainment and Learning at UCSD’s Qualcomm Institute and teaches in the Interdisciplinary Computing in the Arts program.
Philip A. Chou is currently a senior staff research scientist in the Google Daydream group. Prior to Google, he was head of compression at 8i.com, partner research manager at Microsoft Research, compression group manager at VXtreme, member of research staff at Xerox PARC, and member of technical staff at AT&T Bell Laboratories. He has also been on the affiliate faculty at Stanford University, the University of Washington, and the Chinese University of Hong Kong. He has been an active member of MPEG. Dr. Chou has been responsible for seminal work in rate-distortion optimization, multiple-reference-frame video coding, streaming video on demand over the Internet, decision tree and DAG design, practical network coding with random linear codes, wireless network coding, and a host of other work, including point cloud compression. He has over 250 publications, 120 patents (some pending), and 85 standards contributions (including instigation of the MP4 file format). He is an IEEE Fellow.
With a background in software engineering for 3D animation and more than 10 years of industry experience at technology companies that produce creative media, Adam has enjoyed creating software products that empower the creation and distribution of creative content. He studied CS and Electronic Media at Rensselaer Polytechnic Institute, worked at PDI/DreamWorks Animation producing in-house tools and plugins, and most recently worked with Jaunt on software for generating and distributing VR/AR video.