Morning – Block A – HDR Fundamentals

11:00 - 11:25 Eastern Time
3D LUT Design and Implementation for Single-Stream Live HDR/SDR Production
Nick Shaw, Consultant

In collaboration with Pablo Garcia Soriano of Cromorama, the authors created the Emmy-nominated LUTs used for single-stream High Dynamic Range (HDR) and Standard Dynamic Range (SDR) production on broadcasts of Notre Dame football and UEFA soccer, among others. These LUTs permit a mix of HDR and SDR cameras to be used on the same production, up-mapping specialist SDR slow-motion and miniature cameras, as well as SDR graphics, so they can be intercut seamlessly with the primary HDR cameras. The HDR program output can then be down-mapped to provide a simulcast stream for legacy SDR transmission. The LUTs are being made freely available by NBCUI to broadcast partners to enable consistent results across a range of hardware setups. This session will describe the rationale behind a LUT-based approach and the design requirements for the transforms embodied in the LUTs. It will also detail the metrics used to validate the results. Selection of appropriate hardware to apply the LUTs will be discussed, along with differences between various interpolation methods and signal-range considerations.
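As background for the discussion of interpolation methods, the sketch below shows trilinear interpolation of a 3D LUT. This is an illustrative toy only, not the authors' actual LUTs or hardware behavior; many LUT boxes instead use tetrahedral interpolation, which is cheaper and often tracks hues more accurately.

```python
import numpy as np

def apply_3d_lut(rgb, lut):
    """Apply a 3D LUT to one RGB value in [0, 1] via trilinear interpolation.

    rgb: array-like of 3 floats in [0, 1]
    lut: (N, N, N, 3) array mapping input RGB grid points to output RGB
    Illustrative sketch only; real hardware may use tetrahedral interpolation.
    """
    n = lut.shape[0]
    pos = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * (n - 1)
    lo = np.floor(pos).astype(int)           # lower grid corner of the cell
    hi = np.minimum(lo + 1, n - 1)           # upper grid corner (clamped)
    f = pos - lo                             # fractional position inside the cell

    out = np.zeros(3)
    # Sum the 8 cell corners, each weighted by its trilinear coefficient
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((f[0] if dr else 1 - f[0]) *
                     (f[1] if dg else 1 - f[1]) *
                     (f[2] if db else 1 - f[2]))
                idx = (hi[0] if dr else lo[0],
                       hi[1] if dg else lo[1],
                       hi[2] if db else lo[2])
                out += w * lut[idx]
    return out
```

Because trilinear interpolation reproduces any linear mapping exactly, an identity LUT returns its input unchanged; differences between interpolation methods only appear where the transform curves between grid points.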

 

11:25 - 11:50 Eastern Time
Visual Quality Impairments in High Dynamic Range and Wide-Color-Gamut Videos
Dr. Hojatollah Yeganeh, SSIMWAVE

Content producers have made great progress in the past few years adopting advanced technologies to create video content of ultra-high definition (UHD), high dynamic range (HDR), and wide color gamut (WCG). Meanwhile, there has been accelerated growth of UHD/HDR/WCG content being delivered to consumers’ home TVs and mobile devices. While consumers are enjoying the improved resolution/contrast/brightness/bit depth and enhanced richness of colors, maintaining quality in video production and distribution becomes an even greater challenge for IPTV and OTT service providers. The challenge is twofold: much larger bandwidth is required to guarantee seamless UHD/HDR/WCG video transmission, and visual quality is difficult to preserve during video production, post-production and delivery. HDR/WCG video content is much more prone to quality issues than Standard Dynamic Range (SDR) content with a smaller color gamut, not only because of higher user expectations, but also because any loss of detail or color shift can be far more apparent on HDR/WCG displays and viewing settings. Take banding, an annoying artifact frequently occurring in HDR/WCG content, as an example: its visibility can vary drastically across scenes and across viewing devices. In this presentation we will discuss the common quality issues in HDR/WCG video production and distribution. We will showcase how such quality issues may be diagnosed by objective quality assessment methods that can help identify, localize and assess the visual quality impairments in HDR/WCG videos.
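A small worked example of why banding visibility varies so drastically in HDR: under the PQ transfer function (SMPTE ST 2084), one code-value step corresponds to a far larger luminance jump near the top of the range than near the bottom, so the same quantization can be invisible in one scene and glaring in another. The sketch below is a minimal illustration, not SSIMWAVE's assessment method.

```python
def pq_eotf(e):
    """SMPTE ST 2084 (PQ) EOTF: nonlinear signal in [0, 1] -> luminance in nits."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    p = max(e, 0.0) ** (1 / m2)
    return 10000.0 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

def code_step_nits(code, bits=10):
    """Luminance difference (nits) between adjacent full-range code values.

    A rough proxy for the worst-case banding step at that signal level;
    actual visibility also depends on display, surround and scene content.
    """
    n = 2 ** bits - 1
    return pq_eotf((code + 1) / n) - pq_eotf(code / n)
```

For instance, a one-code step high up the 10-bit PQ range spans several nits, while the same step in the shadows spans a tiny fraction of a nit, which is why a gradient that looks clean on an SDR monitor can band visibly on a bright HDR display.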

 

11:50 - 12:15 Eastern Time
Cutting-Edge Measurement Tools for HDR Workflows
Lakshmanan Gopishankar (Gopi), Principal Engineer, Tek Video Business Unit, Telestream

HDR/WCG technologies have added complexity to production workflows, from camera setup all the way to post production. For post production, HDR workflows create unique challenges: how do I transfer HDR artistic intent? How can I ensure that my HDR content fits within the end user's HDR display capabilities? We will consider a new set of HDR measurements that relate to picture screen area and aim to help answer these questions. For camera setup, each camera manufacturer has devised its own way to compress and represent HDR content, leading to difficulties both in camera setup and in visualizing changes in light level (due to the non-linearity of the camera gammas). The talk will introduce a new type of waveform trace, called the "stop" waveform, that aims to address these challenges.
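To illustrate the idea behind a "stop"-scaled trace (this is the general photographic concept, not necessarily the exact trace the talk describes): if scene-linear light is plotted as stops relative to mid gray, a one-stop exposure change moves a signal the same vertical distance anywhere on the trace, regardless of which log/gamma curve the camera used to encode it.

```python
import math

def to_stops(linear_light, mid_gray=0.18):
    """Express scene-linear light as photographic stops relative to mid gray.

    One stop is a doubling of light, so equal spacing on a stop scale means
    equal exposure changes, independent of the camera's encoding curve.
    mid_gray=0.18 (18% reflectance) is a conventional reference, assumed here.
    """
    if linear_light <= 0:
        return float("-inf")  # black clips to the bottom of the scale
    return math.log2(linear_light / mid_gray)
```

So 18% gray sits at 0 stops, 36% at +1 and 9% at -1; an operator comparing two cameras on such a trace sees matched exposure as matched trace height, even if one camera is S-Log3 and the other HLG.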

 

Afternoon – Block B – HDR Workflows

1:25 - 1:50 Eastern Time
Getting the Most Out of Your HDR Production
Tim Walker, Senior Product Manager, AJA Video Systems

Large investments are made in production tools and technology to create compelling HDR content, only to have a small but growing audience view it. Native HDR cameras, SDR cameras, SDR/HDR replay machines, SDR graphics, and inserts all have to be processed together in a single HDR production format, or success is elusive. HLG BT.2020 is the common production format today, but distribution formats are often PQ BT.2020 and SDR BT.709. This means that from production to distribution, high-quality color-gamut and dynamic-range converters are an absolute requirement.

Learn about new HDR conversion techniques that allow you to get more into and more out of your HDR production.
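To make the conversion requirement concrete, here is a deliberately simplified sketch of one step in an SDR-to-PQ up-mapping: decode an SDR gray level with a BT.1886-style power gamma, place SDR reference white at 203 nits (the ITU-R BT.2408 convention), and re-encode with PQ. This is a toy direct mapping, not AJA's conversion technique; production converters perform full color-volume mapping, not a single-channel tone map.

```python
def pq_inverse_eotf(y_nits):
    """Luminance in nits -> PQ nonlinear signal in [0, 1] (SMPTE ST 2084)."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    y = max(y_nits, 0.0) / 10000.0
    return ((c1 + c2 * y ** m1) / (1 + c3 * y ** m1)) ** m2

def sdr_to_pq(sdr_signal, sdr_white_nits=203.0, gamma=2.4):
    """Toy up-conversion of one SDR gray level to a PQ signal value.

    Assumes a pure power display gamma (BT.1886-style) and SDR reference
    white at 203 nits per BT.2408; both are stated assumptions, and real
    converters do far more (gamut mapping, highlight handling, etc.).
    """
    linear = max(sdr_signal, 0.0) ** gamma        # display light, 0..1
    return pq_inverse_eotf(linear * sdr_white_nits)
```

With these assumptions, full SDR white lands at roughly 58% of the PQ signal range, which is why SDR sources and graphics dropped into an HDR program without proper mapping look dim or, if naively scaled, blown out.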

 

1:50 - 2:15 Eastern Time
Securing SMPTE ST 2110 Systems
Leigh Whitcomb, Systems Engineer, Imagine Communications

Securing SMPTE ST 2110 systems is becoming an important issue, since moving to IP adds new ways that your facility can be attacked. For example, an attacker could disable your SMPTE ST 2059/Precision Time Protocol (PTP) infrastructure, crippling your facility. Tackling security may seem like a daunting challenge, and many users and equipment vendors do not know where to start. To assist them, standards and related organizations such as SMPTE, the Joint Taskforce on Networked Media (JT-NM), the Advanced Media Workflow Association (AMWA), and the European Broadcasting Union (EBU) are working on specific parts of the security challenge and developing practical and actionable solutions. While these solutions do not address all aspects of security, they are a good starting point to get the industry moving in the right direction. This paper summarizes the ongoing work in different standards and related organizations and provides practical and actionable solutions for securing SMPTE ST 2110 systems. Because of its criticality, specific focus will be placed on the SMPTE ST 2059/PTP infrastructure.
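One minimal defensive measure against the PTP attack scenario above is simply watching the slave clock's reported offset for implausible excursions. The helper below is a hypothetical sketch for illustration only, not a solution from the standards work the paper surveys; a real deployment would also monitor grandmaster identity, announce messages and clockClass, and restrict which ports may source PTP at all.

```python
def ptp_offset_alarms(offsets_ns, max_offset_ns=1000):
    """Return indices of PTP offset samples that exceed a plausible bound.

    offsets_ns: sequence of slave-to-grandmaster offsets in nanoseconds,
    e.g. polled from a PTP daemon. A locked broadcast clock should stay
    well inside max_offset_ns; a sudden large excursion is a crude hint
    that PTP is being disrupted (rogue grandmaster, delay attack, outage).
    Hypothetical helper; thresholds and polling are deployment-specific.
    """
    return [i for i, off in enumerate(offsets_ns)
            if abs(off) > max_offset_ns]
```

Such a watchdog cannot distinguish an attack from a network fault, but either condition is worth an operator alarm in an ST 2110 plant, where every essence depends on ST 2059 timing.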

 

2:15 - 2:40 Eastern Time
The Requirement for Multicast Flow Orchestration in Media Workflows
Ryan Morris, Systems Engineer, Arista Networks

Broadcast facilities are converting to IP workflows more swiftly as the quality of these systems continues to increase. A COTS network backbone adds significant flexibility and scalability to a media workflow design, especially considering the size of uncompressed video streams. Dense UHD systems, where SMPTE ST 2110 essence bandwidths range from a few kbps up to greater than 10 Gbps, are becoming more common. The physical layout of the facility and the wide range of media essences mean that critical decisions must be made very early in the design process: what network topology is required, and, based on that topology, how should the essences, which are most likely multicast, be distributed?

In some models, such as a single monolithic switch, traditional multicast protocols (IGMP and PIM) allow for proper multicast distribution; however, when the topology is a distributed one (spine-leaf), those protocols may not suffice. Flows should be load-balanced over multiple physical bearers and/or across multiple spine devices in environments with large flow sizes and high flow density. These types of environments generally require some form of flow orchestration.

A broadcast controller that works with the network controller (or acts as the network controller itself) to manage how each multicast route is built, based on factors such as the sender's bandwidth and the bandwidth available on any given link, allows for a safer distribution model that removes the risk of link oversubscription.
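The core of that bandwidth-aware placement decision can be sketched in a few lines. This is a toy model for illustration, not any vendor's controller logic: unlike hash-based distribution under plain IGMP/PIM, the controller checks real headroom on each candidate spine uplink before committing a flow to it.

```python
def place_flow(flow_bw_bps, links):
    """Pick the uplink with the most headroom that can still carry the flow.

    flow_bw_bps: bandwidth of the multicast flow in bits per second
    links: dict of link name -> (capacity_bps, reserved_bps)
    Returns the chosen link name, or None if no link has enough headroom
    (the safe outcome: refuse the route rather than oversubscribe a bearer).
    Toy sketch of bandwidth-aware flow orchestration, names hypothetical.
    """
    best_link, best_free = None, -1.0
    for name, (capacity, reserved) in links.items():
        free = capacity - reserved
        if free >= flow_bw_bps and free > best_free:
            best_link, best_free = name, free
    return best_link
```

A real orchestrator would additionally reserve the bandwidth on the chosen link, release it on teardown, and account for the full sender-to-receiver path rather than a single hop, but the admission test above is the step that negates oversubscription.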