Keynotes

January 25, 2017

ICME 2017 will feature excellent keynote speakers.

Tactile Internet with Humans in the Loop

Frank Fitzek (TU Dresden, Germany)

Currently, the fifth generation (5G) communication system is being prominently discussed in both research and industry. Unlike earlier generations, which were more or less evolutions towards higher data rates, 5G can be considered a revolution that will change the game for operators and manufacturers completely. First, there are new technical requirements such as latency, resilience, and security on top of data rate. Second, 5G will be realized as a holistic approach encompassing the mobile device, air interface, cloud, and network. The fusion of network and cloud is achieved through softwarization, which allows for more flexible network and cloud solutions such as multipath communication, network slicing, or mobile edge clouds. This will lead to a new kind of network referred to as the Tactile Internet. In this talk, the Tactile Internet with humans in the loop is discussed for new use cases such as tele-surgery, robot co-habitation, gaming, and education.

Biography

Frank H. P. Fitzek is a Professor and head of the Deutsche Telekom Chair of Communication Networks at the Technical University Dresden, Germany, coordinating the 5G Lab Germany. He received his diploma (Dipl.-Ing.) degree in electrical engineering from the Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen, Germany, in 1997 and his Ph.D. (Dr.-Ing.) in electrical engineering from the Technical University Berlin, Germany, in 2002, and became Adjunct Professor at the University of Ferrara, Italy, in the same year. In 2003 he joined Aalborg University as Associate Professor and later became Professor. Dr. Fitzek co-founded several start-up companies, starting with acticom GmbH in Berlin in 1999. He was selected to receive the NOKIA Champion Award each year from 2007 to 2011. In 2008 he was awarded the Nokia Achievement Award for his work on cooperative networks. In 2011 he received the SAPERE AUDE research grant from the Danish government, and in 2012 he received the Vodafone Innovation Prize. In 2015 he was awarded the honorary degree “Doctor Honoris Causa” from the Budapest University of Technology and Economics (BUTE). His current research interests are in the areas of wireless and mobile 5G communication networks, mobile phone programming, network coding, cross-layer and energy-efficient protocol design, and cooperative networking.


Visual comfort in the 3D display experience drives the need for novel optical architectures in Mixed Reality headsets

Bernard Kress (Microsoft Corporation, USA)

Abstract
The past decade has seen a blossoming of heterogeneous Near-To-Eye (NTE) display systems addressing different product category requirements, ranging from monocular smart glasses to binocular stereo video players, and from wide-FOV Virtual Reality (VR) and Augmented Reality (AR) Head Mounted Displays (HMDs) to more elaborate Mixed Reality (MR) headsets.

Recently, market expectations, driven by technology analysts as well as customer pull, have risen faster than actual hardware development. Next-generation HMDs require a more immersive experience, a better 3D display experience, and better wearable and visual comfort.

Such requirements are pushing against the limits and constraints of current optical and display technologies. As a result of this push, start-ups as well as established companies are today investing heavily in the development of new optical and display technologies that might be able to address such requirements (improved imaging and combiner optics, display and sensor technologies).

However, such technology developments are only part of the solution: a better understanding of the specifics and limitations of the human visual system, and more generally of the human perception system, is required (as in optical foveation, peripheral displays, hard-edge occlusion, high dynamic range, oculo-motor depth cues, …).

Biography
Over the past two decades, Bernard has made significant scientific contributions as an engineer, researcher, associate professor, consultant, instructor, and author.
He has been instrumental in developing numerous optical sub-systems for both consumer electronics and industrial equipment, generating IP, teaching and transferring technological solutions from academia to industry. Application sectors include laser materials processing, optical anti-counterfeiting, biotech sensors, optical telecom devices, optical data storage, optical computing, optical motion sensors, digital image projection, 3D displays, holographic imaging, depth map sensing, and more recently Head-Up and Head Mounted Displays.
He is more specifically involved in the fields of wafer scale micro-optics, holography and nano-photonics.
Bernard has published numerous books and book chapters on these topics and holds more than 30 patents granted worldwide. He is a short course instructor for SPIE and acts as chair and co-chair of various SPIE conferences. He has been an SPIE Fellow since 2013 and was elected in 2016 to the Board of Directors of SPIE.
Bernard joined Google [X] Labs in 2011 as the Principal Optical Architect on the Google Glass project, and since 2015 has been the Partner Optical Architect at Microsoft Corp. on the HoloLens project.


From Image to Video: The Connection between Vision and Language

Tat-Seng Chua (National University of Singapore, Singapore)

Abstract
We are experiencing an unprecedented evolution of deep learning techniques. As a holy grail of Artificial Intelligence, the task of connecting vision and language has perhaps gained the most appreciable benefits in recent years. For example, today’s machines are able to outperform humans in a number of large-scale visual recognition tasks and to describe or answer questions about an image or video in natural language, all of which were unthinkable just a decade ago. In this talk, I will first provide a brief retrospective of the progress on this connection in the pre-deep-learning and current deep learning eras. I will then introduce our research on image concept annotation and deeper scene understanding such as object relation inference. I will then move on to our current research on video, in particular on venue category estimation in micro-videos by leveraging multimodal visual, text, and audio features, as well as other video-related research such as the extraction of object tracks, their recognition, and relation inference. I will conclude the talk by outlining several interesting future research directions and applications in social media analytics.

Biography
Dr Chua is the KITHCT Chair Professor at the School of Computing, National University of Singapore. He was the Acting and Founding Dean of the School from 1998 to 2000. Dr Chua’s main research interest is in multimedia information retrieval and social media analytics. In particular, his research focuses on the extraction, retrieval, and question-answering (QA) of text and rich media arising from the Web and multiple social networks. He is the co-Director of NExT, a joint Center between NUS and Tsinghua University on Extreme Search.

Dr Chua is the recipient of the 2015 ACM SIGMM Award for Outstanding Technical Contributions to Multimedia Computing, Communications and Applications. He is the Chair of the steering committees of the ACM International Conference on Multimedia Retrieval (ICMR) and the Multimedia Modeling (MMM) conference series. Dr Chua was also the General Co-Chair of ACM Multimedia 2005, ACM CIVR (now ACM ICMR) 2005, ACM SIGIR 2008, and ACM Web Science 2015. He serves on the editorial boards of four international journals. Dr Chua is the co-Founder of two technology startup companies in Singapore. He holds a PhD from the University of Leeds, UK.
