Grand Challenges

January 9, 2017

Segment-based Rate Control of Video Encoder for Live ABR Streaming

Organizer: Twitch Interactive Inc.

OTT live streaming services (e.g., Twitch, YouTube Live, Douyu, Huya) have become increasingly popular among media consumers aged 18 to 35. The encoder's rate control plays a critical role in determining the user experience of ABR playback algorithms on any client platform (web, set-top box, or mobile device). Twitch invites researchers around the world to propose new rate control algorithms designed for segment-based HTTP streaming.
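As a rough illustration of the problem space, the sketch below shows a simple per-segment feedback rule that nudges the next segment's target bitrate to compensate for the previous segment's overshoot or undershoot, so that segment sizes stay predictable for ABR clients. All names, numbers, and the rule itself are hypothetical; they are not part of Twitch's challenge specification or of any provided API.

```python
# Minimal sketch of a segment-level rate control adjustment (illustrative only).
# The interface and constants are hypothetical, not from the challenge material.

def next_segment_bitrate(target_bps, prev_target_bps, prev_actual_bps,
                         min_bps=300_000, max_bps=6_000_000, gain=0.5):
    """Adjust the encoder's per-segment target to compensate for the
    overshoot or undershoot of the previous segment."""
    error = prev_target_bps - prev_actual_bps   # positive => previous segment undershot
    adjusted = target_bps + gain * error        # feed part of the error back
    return max(min_bps, min(max_bps, adjusted))

# Example: the previous segment came out at 3.4 Mbps against a 3.0 Mbps target,
# so the next segment's target is lowered to keep ladder sizes stable.
print(next_segment_bitrate(3_000_000, 3_000_000, 3_400_000))  # -> 2800000.0
```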

For more technical details, please read the attached PDF – Twitch’s ICME 2017 Grand Challenge: Segment-based Rate Control of Video Encoder for Live ABR Streaming

 

About Twitch:
Founded in June 2011, Twitch is the world’s leading social video platform and community for gamers, video game culture, and the creative arts. Each day, close to 10 million visitors gather to watch and talk about video games with more than 2 million streamers. Join the millions of people who come to Twitch to stream, view, and interact around these shared passions together.

For more information, please visit:

http://engineering.twitch.tv/

https://blog.twitch.tv/tagged/engineering

Important Dates

Submission deadline: June 4, 2017 (8am PDT)
Notification of acceptance: June 18, 2017 (8am PDT)

Salient360!: Visual attention modeling for 360° Images Grand Challenge

Organizer: University of Nantes, Technicolor

Understanding how users watch a 360° image, and analyzing how they scan through the content with a combination of head and eye movements, is necessary to develop appropriate rendering devices and to create good VR/AR content for consumers. Good visual attention modelling is a key factor in enhancing the overall Quality of Experience (QoE). Although a large number of algorithms have been developed in recent years to gauge visual attention in flat 2D images and videos, together with benchmarking platforms where users can submit and assess their results, attention studies for 360° scenarios are still missing. The goal of this challenge is therefore two-fold:

  • to produce a dataset that ensures easy and precise reproducibility of results for future saliency / scan-path computational models, in line with the IEEE principles of Reproducible and Sustainable research.
  • to set a first baseline for the taxonomy of several types of visual attention models (saliency models, importance models, saccadic models) and for the correct methodology and ground-truth data to test each of them.

What we provide

A test dataset, parsing tools, and evaluation tools will be made available to the participants. Specifically, we provide a dataset of sixty 360° images along with ground-truth heat maps and scan-paths, together with two pieces of software: a data parser module (for reading the ground-truth heat maps and scan-paths) and a benchmarking tool (for evaluating the performance of your model against the ground truth).
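For orientation only, the sketch below compares a predicted saliency heat map against a ground-truth heat map using the Pearson correlation coefficient (CC), a metric commonly used in saliency benchmarking. The actual metrics, file formats, and scoring are defined by the organizers' parser and benchmarking tools, not by this example.

```python
# Generic saliency-map comparison sketch (not the official benchmarking tool).
import numpy as np

def correlation_coefficient(pred, gt):
    """Pearson CC between two heat maps of identical shape."""
    p = (pred - pred.mean()) / (pred.std() + 1e-12)
    g = (gt - gt.mean()) / (gt.std() + 1e-12)
    return float((p * g).mean())

# Example with random maps at an equirectangular resolution (height x width).
pred = np.random.rand(512, 1024)
gt = np.random.rand(512, 1024)
print(correlation_coefficient(pred, gt))
```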

What is expected from the participants

Participants are free to submit computational models in the following categories (each submission will be evaluated by the organizers against the corresponding ground truth):

  1. Head-motion-based saliency model (Model type 1): these models are expected to predict the Ground Truth Heat Map (GTHM) derived from the movement of the head only.
  2. (Head+eye)-motion-based saliency model (Model type 2): these models are expected to predict the Ground Truth Heat Map (GTHM) derived from the movement of the head as well as the movement of the eyes within the viewport.
  3. Scan-paths of observers in the entire 360° panorama (Model type 3): these models are expected to predict the Ground Truth Scan-Paths (GTSP) obtained from the head- and eye-movement data of several observers.

Participants are free to use the test dataset (forty images for model types 2 and 3, twenty for model type 1) to train and tune their algorithms as necessary, and may also compute the benchmark scores as a reference for themselves. Once participants decide to go ahead with a submission, they need to send their binaries to the organizers so that the performance of their algorithms can be evaluated.
The winners and top-performing algorithms will be contacted separately by the organizers and encouraged to give a demo during the ICME conference. In addition, participants will have a chance to submit a paper to the ICME 2017 conference. The relevant deadlines are listed below.

Important Dates
Release of test dataset and parsing tools: February 18, 2017
Release of evaluation tools: February 20, 2017
Model submission deadline: May 31, 2017
Performance evaluation (feedback from organizers to participants): June 7, 2017
Paper submission deadline (for interested participants): June 11, 2017

News
FAQs on the challenge and a detailed annex are available here.

 

DASH-IF Grand Challenge: Dynamic Adaptive Streaming over HTTP

Organizer: DASH-IF

Real-time entertainment services such as streaming video and audio currently account for more than 70% of Internet traffic during peak hours. Interestingly, these services are all delivered over-the-top (OTT) of the existing networking infrastructure using the Hypertext Transfer Protocol (HTTP). The MPEG Dynamic Adaptive Streaming over HTTP (DASH) standard enables smooth multimedia streaming towards heterogeneous devices. The aim of this grand challenge is to solicit contributions addressing end-to-end delivery aspects utilizing MPEG-DASH that help improve the QoE while optimally using network resources at an acceptable cost. Such aspects include, but are not limited to, content preparation for adaptive streaming, delivery over the Internet, and streaming client implementations. A special focus of the 2017 grand challenge will be on virtual reality applications and services, including 360° video. Detailed information about this grand challenge is available at http://dashif.org/icme2017grandchallenge/
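To make the adaptation idea concrete, the sketch below shows a throughput-based selection rule of the kind a DASH client might use to pick the next segment's representation. The bitrate ladder and safety factor are illustrative assumptions, not taken from any specific player or from DASH-IF guidelines.

```python
# Minimal sketch of client-side throughput-based representation selection.
# Ladder values and the safety margin are hypothetical.

LADDER_BPS = [500_000, 1_200_000, 2_500_000, 5_000_000]  # available representations

def pick_representation(measured_throughput_bps, safety=0.8):
    """Choose the highest representation whose bitrate fits within a safety
    margin of the recently measured download throughput."""
    budget = measured_throughput_bps * safety
    candidates = [b for b in LADDER_BPS if b <= budget]
    return candidates[-1] if candidates else LADDER_BPS[0]

# Example: with ~3.5 Mbps measured throughput (2.8 Mbps budget after the safety
# margin), the client requests the 2.5 Mbps rendition for the next segment.
print(pick_representation(3_500_000))  # -> 2500000
```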

About DASH-IF

In April 2012, ISO, the international standards body which had already given us the core media foundations of MPEG-2, MP3, and MP4, ratified the first version of its next-generation adaptive streaming standard: MPEG-DASH. In an industry besieged by three comparable (but incompatible) segmented formats, many asked: why another? The companies participating in the MPEG-DASH standardization (including Microsoft, Apple, Netflix, Qualcomm, Ericsson, Samsung, and many others) saw a vision of the interoperability and convergence required for large-scale market growth that trumped the proprietary and competing solutions. They replaced multiple corporation-controlled solutions with a single industry-defined open standard. Further information about DASH-IF is available at http://dashif.org/about/
