Innovation Forums: Next-Generation Video & Display Technology

  • The forum will be held from 10:00 to 12:00 on Friday, March 29


    Scott Daly (Moderator)
    Dolby Labs
    Talk Title: How does AI-based image synthesis change the role of vision science in video and display engineering?

    Abstract:
    There has been increasing activity in using GANs for image synthesis in a variety of applications, and AI in general for video processing for TVs and other high-end displays. Simpler perceptual distortions such as blur, wrong colors, and noise, commonly characterized with engineering metrics such as MTF, ΔE, and noise power spectra, respectively, are no longer sufficient to describe the newer interactions between the technology and perception. Our current understanding of visually lossless quality must be refined, for example, to quantify plausibly lossless quality. Another example is that AI-based spatial upscaling of HD content taps into different visual system mechanisms than upscaling from 4K input to 8K displays. Understanding more advanced concepts in vision science can aid in delineating applications where image quality goals can vary widely. These include the well-known concept of signal-known-exactly (SKE) as well as newer practical extensions of that concept, such as noise-known-exactly for certain texture classes and signal-expressed-exactly for applications where maintaining the creative intent is a goal. More in-depth visual models can also provide a better understanding of viewer variability, as well as of how to use physiological measurements (of both subjects in psychophysical studies and consumers engaged with media devices). Both have become critical to the newer media goals of personalization and immersion.

    Mylene Farias
    Associate Professor at the University of Brasilia
    Talk Title: Audio-Visual Multimedia Quality Assessment

    Abstract:
    The great progress achieved by communications in the last twenty years is attested by the number of audio-visual multimedia services available nowadays, such as digital television and IP-based video transmission. The success of these kinds of services relies on their trustworthiness and the delivered quality of experience. Therefore, the development of efficient real-time quality monitoring tools that can quantify the audio-visual experience (as perceived by the end user) is key to the success of any multimedia service or application. Although research in audio and video quality assessment (tackled as individual modalities) is mature, several issues still need to be addressed in the area of audio-visual quality. For instance, modelling how humans perceive audio and video signals together is a challenging task, especially when we consider the interaction in the perception of audio and video signals. This task becomes even harder because little is known about the cognitive processing humans use to interpret the interaction of these types of stimuli. In this talk, I will discuss state-of-the-art research on audio-visual quality metrics, going from combination-type, hybrid, signal-based approaches to current machine-learning-based ones.
    Biography:
    Mylene C.Q. Farias received her B.Sc. degree in electrical engineering from the Federal University of Pernambuco (UFPE), Brazil, in 1995 and her M.Sc. degree in electrical engineering from the State University of Campinas (UNICAMP), Brazil, in 1998. She received her Ph.D. in electrical and computer engineering from the University of California, Santa Barbara (UCSB), USA, in 2004 for work on no-reference video quality metrics. Dr. Farias has worked at CPqD (Brazil) and for Intel Corporation (Phoenix, USA). Currently, she is an Associate Professor in the Department of Electrical Engineering at the University of Brasilia (UnB). Her current interests include video and image quality, no-reference quality metrics, quality of experience, video processing, and visual attention. Dr. Farias is a member of IEEE, the IEEE Signal Processing Society, ACM, and IST.



    Marina Zannoli
    Facebook Reality Labs
    Talk Title: Perception-centered development of AR/VR displays

    Abstract:
    Mixed reality technologies have transformed the way content creators build experiences for their users. Pictures and movies are created from the point of view of the artist, and the viewer is a passive observer. In contrast, creating compelling experiences in AR/VR requires us to better understand what it means to be an active observer in a complex environment, in order to develop novel image quality metrics that better predict subjective interactive experience. In this talk, I will present a theoretical framework that describes how AR/VR technologies interface with our sensorimotor system. I will then focus on how, at Facebook Reality Labs, we develop image quality metrics, testbeds, and prototypes to define requirements for future AR/VR displays.
    Biography:
    Marina is a vision scientist in the Display Systems Research group at Facebook Reality Labs. Prior to joining Facebook, she spent a few years in Berkeley studying how the optics of our eyes determine how we recover the three-dimensional structure of our visual environment. Now, she applies her knowledge of the human visual system to optimize the design of mixed reality technology.



    Aldo Badano
    Senior Biomedical Researcher at the Food and Drug Administration (FDA)
    Talk Title: An update on the medical display landscape

    Abstract:
    I will describe recent topics in medical displays including the use of handheld and mixed-reality approaches for better integration of imaging into medical practice. I will also highlight emerging trends and challenges including convergence and interoperability towards more efficient, more consistent, and more user-friendly visualizations, particularly in the area of diagnostic imaging.
    Biography:
    Aldo Badano holds a Senior Biomedical Researcher Service appointment at the FDA and currently serves as Deputy Director of the Division of Imaging, Diagnostics, and Software Reliability, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. FDA. He received an M.Eng. in Radiological Health Engineering and a Ph.D. in Nuclear Engineering from the University of Michigan in 1995 and 1999, respectively, after obtaining a degree in Chemical Engineering from the Universidad de la República, Montevideo, Uruguay, in 1992. His primary interests are in the characterization and modeling of medical imaging acquisition and visualization systems. Aldo has published over 300 publications, including a tutorial book on medical displays. In addition, he leads several international consensus development efforts through standards and professional organizations. He received CDRH's Excellence in Supervisory Award (2018) and Mentoring Award (2013), and FDA's Excellence in Laboratory Science Award (2003). Dr. Badano is also an affiliate faculty member in Bioengineering at the University of Maryland, College Park, and has graduated four doctoral students. He is a member of the AAPM and a fellow of the SPIE.



    Ronan Boitard
    Imaging Scientist at MTT Innovation
    Talk Title: Light Steering - HDR Projection for Cinema

    Abstract:
    The best image belongs on the big screen, and with improved colors and contrast, cinema can once again be the undisputed go-to venue for immersive storytelling and blockbuster releases. Despite active efforts and the early rollout of display solutions for cinema that focus on enhanced image quality, such as high-contrast projectors, IMAX GT projectors, and Dolby Cinema projectors, there is clearly an appetite to improve image quality even further. Amidst recent demonstrations of High Dynamic Range (HDR) solutions, such as the Sony C-LED and Samsung Onyx systems, Barco is developing a new projection concept, called Light Steering, capable of delivering a full HDR experience using a projection display. In this talk, we will discuss the challenges projection displays face in increasing peak luminance and contrast at giant screen sizes and describe how light steering technology overcomes them.
    Biography:
    R. Boitard earned his M.Eng. degree in electrical engineering from the Institut National des Sciences Appliquées (INSA) in 2009. In 2014, he received his Ph.D. degree from the University of Rennes 1 while working for Technicolor and IRISA (Institut de recherche en informatique et systèmes aléatoires) on High Dynamic Range (HDR) technology, specifically on video tone mapping and temporal coherency. He is presently an HDR Imaging Scientist at Barco/MTT Innovation in Vancouver. He has been an active member of MPEG (Moving Picture Experts Group) and SMPTE (Society of Motion Picture and Television Engineers).


    Tao Chen
    Dolby
    Talk Title: Representation and compression of high dynamic range and wide color gamut visual data

    Abstract:
    The emerging technologies in HDR and WCG provide the capability of capturing and representing video with an extensive dynamic range and color gamut. On the other hand, the increased amount of detail in both luminance and chrominance poses challenges to the performance of established video processing and encoding algorithms. This talk presents recent advances in delivering a faithful representation of HDR/WCG video and in efficiently compressing such signals.
    Biography:
    Tao Chen is currently Sr. Director of Applied Research at Dolby Labs. Prior to joining Dolby in 2011, he held technical and managerial positions at Sarnoff Corporation and Panasonic Hollywood Lab. Over the years, he and his teams have developed video and imaging technologies that were adopted into various standards and ultimately deployed in consumer products. These accomplishments include video encoding systems for Blu-ray and 3D authoring, used to create the first AVC Blu-ray titles and the first MVC 3D titles, and, more recently, the core technologies in Dolby Vision, which is widely deployed in TVs, laptops, and mobile phones. In 2002, he received the Most Outstanding Ph.D. Thesis Award from the Australian Computer Society and a Mollie Holman Doctoral Medal from Monash University. He was a recipient of an Emmy Engineering Award in 2008.