Grand Challenges

January 9, 2017

Segment-based Rate Control of Video Encoder for Live ABR Streaming

Organizer: Twitch Interactive Inc.

OTT live streaming services (e.g., Twitch, YouTube Live, Douyu, Huya) have become increasingly popular among media consumers aged between 18 and 35. The encoder's rate control plays a critical role in determining the user experience of ABR playback algorithms on any client platform (web, set-top box, or mobile device). Twitch invites researchers around the world to propose new rate control algorithms designed for segment-based HTTP streaming.
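To illustrate the kind of problem the challenge targets, the following is a minimal sketch of a per-segment rate control feedback loop that adjusts the quantization parameter (QP) so each segment lands near a per-segment bit budget. All names, thresholds, and values here are illustrative assumptions, not part of Twitch's challenge specification or any particular encoder's API.

```python
# Minimal sketch of per-segment rate control for live ABR encoding.
# The interface is hypothetical; a real submission would drive an actual
# encoder and feed back the size of each encoded segment.

def next_qp(prev_qp: int, actual_bits: float, target_bits: float) -> int:
    """Nudge the quantization parameter so the next segment lands closer
    to the per-segment bit budget implied by the ABR ladder's bitrate."""
    ratio = actual_bits / max(target_bits, 1.0)
    if ratio > 1.10:                 # overshot the budget by >10%: quantize more coarsely
        return min(prev_qp + 2, 51)
    if ratio < 0.90:                 # undershot: spend the spare bits on quality
        return max(prev_qp - 1, 0)
    return prev_qp

# Example feedback loop (segment sizes in bits are assumed, not measured):
target_bits = 3_000_000 * 2.0        # 3 Mbit/s target, 2-second segments
qp, observed_sizes = 26, [7_200_000, 6_400_000, 5_900_000]
for size in observed_sizes:
    qp = next_qp(qp, size, target_bits)
print(qp)                            # QP rises after the first segment overshoots
```

A real proposal would of course account for encoder look-ahead, buffer constraints, and scene complexity; the sketch only shows the segment-level feedback structure.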

For more technical details, please read the attached PDF – Twitch’s ICME 2017 Grand Challenge: Segment-based Rate Control of Video Encoder for Live ABR Streaming

 

About Twitch:
Founded in June 2011, Twitch is the world’s leading social video platform and community for gamers, video game culture, and the creative arts. Each day, close to 10 million visitors gather to watch and talk about video games with more than 2 million streamers. Join the millions of people who come to Twitch to stream, view, and interact around these shared passions together.

For more information, please visit:

http://engineering.twitch.tv/

https://blog.twitch.tv/tagged/engineering

Important Dates

Submission deadline: June 4, 2017 (8am PDT)
Notification of acceptance: June 18, 2017 (8am PDT)

Salient360!: Visual attention modeling for 360° Images Grand Challenge

Organizer: University of Nantes, Technicolor

Understanding how users watch a 360° image, and analyzing how they scan through the content with a combination of head and eye movements, is necessary to develop appropriate rendering devices and to create good VR/AR content for consumers. Good visual attention modeling is a key factor in that respect, helping to enhance the overall Quality of Experience (QoE). Although a large number of algorithms have been developed in recent years to gauge visual attention in flat 2D images and videos, and a benchmarking platform (saliency.mit.edu) exists where users can submit and assess their results, attention studies in 360° scenarios are absent. The goal of this challenge is therefore two-fold:

  • to produce a dataset to ensure easy and precise reproducibility of results for future saliency / scan-path computational models in line with the principles of Reproducible and Sustainable research from IEEE.
  • to set a first baseline for the taxonomy of several types of visual attention models (saliency models, importance models, saccadic models) and the correct methodology and ground-truth data to test each of them.

What we provide

In the first stage, we present a dataset of sixty 360° images along with the associated head- and eye-tracking data. An additional 20 images will be provided without any tracking data. As all images are released under Creative Commons licenses, you are free to reuse and redistribute the content for research purposes, along with relevant citations and links to our hosting website and paper, which provides appropriate credit to the photographers. We additionally provide three software tools for use by the participants: a VR content playback module, a benchmarking tool, and a saliency/scan-path generator.
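The exact metrics used by the benchmarking tool are defined in the challenge materials; as a minimal sketch of the kind of comparison involved, the function below computes the Pearson linear correlation coefficient (CC), a standard saliency-benchmark metric, between a predicted and a ground-truth saliency map. The function name and interface are assumptions for illustration only.

```python
import numpy as np

def correlation_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Pearson linear correlation (CC) between a predicted and a ground-truth
    saliency map of equal size; higher is better, 1.0 is a perfect match."""
    pred = (pred - pred.mean()) / (pred.std() + 1e-8)   # z-score the prediction
    gt = (gt - gt.mean()) / (gt.std() + 1e-8)           # z-score the ground truth
    return float((pred * gt).mean())                    # mean of the product = Pearson r
```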

What is expected from the participants

Proponents/participants are expected to develop computational models that detect the visually salient regions in the given 360° images under a task-independent, free-viewing condition, encompassing possible user head and eye motion.
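As a point of comparison for such models, a naive baseline analogous to the center bias often used for flat 2D saliency is an "equator bias" over an equirectangular 360° image. The sketch below is an illustrative assumption, not the challenge's reference model or the provided saliency generator.

```python
import numpy as np

def equator_bias_saliency(height: int, width: int, sigma_deg: float = 25.0) -> np.ndarray:
    """Naive 360° saliency prior for an equirectangular image: a Gaussian over
    latitude centered on the equator, constant over longitude."""
    latitudes = np.linspace(90.0, -90.0, height)                 # degrees, top row = +90°
    weights = np.exp(-(latitudes ** 2) / (2 * sigma_deg ** 2))   # peak at the equator
    sal = np.tile(weights[:, None], (1, width))
    return sal / sal.sum()                                       # normalize to a distribution
```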

The participants are free to use the 60-image+tracking dataset to train and tune their algorithms as necessary, and may also compute the benchmark scores as a reference for themselves. Once participants decide to go ahead with an algorithm submission, they need to submit the binaries to the organizers so that we can evaluate the performance of their algorithms on the 20-image dataset (whose images, but not tracking data, have been publicly available to the participants). An automated e-mail confirms the status of your submission. Winners and top-performing algorithms will be contacted separately by the organizers and encouraged to give a demo during the ICME conference.

PDF version of the Grand Challenge description (includes deadlines)

 

DASH-IF Grand Challenge: Dynamic Adaptive Streaming over HTTP

Organizer: DASH-IF

Real-time entertainment services such as streaming video and audio currently account for more than 70% of Internet traffic during peak hours. Interestingly, these services are all delivered over-the-top (OTT) of the existing networking infrastructure using the Hypertext Transfer Protocol (HTTP). The MPEG Dynamic Adaptive Streaming over HTTP (DASH) standard enables smooth multimedia streaming to heterogeneous devices. The aim of this grand challenge is to solicit contributions addressing end-to-end delivery aspects utilizing MPEG-DASH that help improve the QoE while optimally using the network resources at an acceptable cost. Such aspects include, but are not limited to, content preparation for adaptive streaming, delivery over the Internet, and streaming client implementations. A special focus of the 2017 grand challenge is virtual reality applications and services, including 360° videos. Detailed information about this grand challenge is available at http://dashif.org/icme2017grandchallenge/
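For readers unfamiliar with the client side of DASH, the following is a minimal sketch of the classic throughput-based adaptation a streaming client performs: pick the highest representation from the MPD's bitrate ladder that fits under a safety margin of the measured throughput. The function name, ladder values, and safety factor are illustrative assumptions, not part of the MPEG-DASH specification.

```python
def select_representation(available_bitrates_bps, measured_throughput_bps, safety_factor=0.8):
    """Throughput-based adaptation: choose the highest representation whose
    bitrate fits under a safety margin of the measured network throughput."""
    budget = measured_throughput_bps * safety_factor
    candidates = [b for b in sorted(available_bitrates_bps) if b <= budget]
    return candidates[-1] if candidates else min(available_bitrates_bps)

# Example: a hypothetical bitrate ladder, with throughput estimated at 4 Mbit/s
ladder = [500_000, 1_200_000, 2_500_000, 5_000_000]
print(select_representation(ladder, 4_000_000))   # -> 2_500_000
```

Contributions to the challenge may of course replace this simple heuristic with buffer-based, hybrid, or learning-based adaptation logic.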

About DASH-IF

In April 2012, ISO, the international standards body which had already given us the core media foundations of MPEG-2, MP3, and MP4, ratified its next-generation adaptive streaming standard: MPEG-DASH. In an industry besieged by three comparable (but incompatible) segmented formats, many asked: why another? The companies participating in the MPEG-DASH standardization (including Microsoft, Apple, Netflix, Qualcomm, Ericsson, Samsung, and many others) saw a vision of interoperability and convergence required for large-scale market growth that trumped the proprietary and competing solutions. They replaced multiple corporation-controlled solutions with a single industry-defined open standard. Further information about DASH-IF is available at http://dashif.org/about/
