Keynote Talk

Keynote Talk - Day 1

Prof. Kosin Chamnongthai
(King Mongkut’s University of Technology Thonburi)

Title:
Eye-Gaze-Based Human Intention Detection

Recently, many robots have been working alongside humans in places such as manufacturing factories, hospitals, department stores, airports, and construction sites. Accidents sometimes occur when robots and humans collide, because robots have no function to sense human intention, whereas a human can estimate another person's intention by observing their eye gaze. Eye-gaze detection is therefore an important basic function, one that can be extended to intention detection and other applications such as intention reading for stroke patients, customer intention estimation, worker intention estimation for collaborative robots, and game entertainment.

This talk introduces three methods of 3D POI (Point of Intention) estimation: using eye gaze alone, using multimodal fusion of hand pointing and eye gaze, and using multimodal fusion of hand pointing, eye gaze, and depth information for collaborative robots. Based on the observation that the human head is assumed to be always moving, the first method finds a 3D POI as the crossing point between a pair of consecutive eye-gaze rays detected by an eye tracker. The second method estimates the 3D POI as the crossing point, within a space of interest (SOI), between the eye-gaze ray and the hand-pointing ray, where eye gaze and hand pointing are detected by an eye tracker and a Leap Motion sensor, respectively. The third method assumes a working situation shared by human workers and collaborative robots, in which a workpiece in the working site may become an obstacle and interfere with 3D POI estimation. To solve this problem, a depth sensor system is mathematically designed and added to the 3D POI estimation system so that all needed views are sensed, enabling the system to reconstruct the 3D shape of all objects in the working site. The 3D POI is then determined from a volume of interest (VOI), the 3D crossing space created by the eye-gaze ray, the hand-pointing ray, and the 3D object reconstructed from depth information. The pros and cons of these three 3D POI estimation methods across various applications will be discussed in the talk.
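As a geometric aside (an editorial sketch, not the speaker's implementation): the "crossing point" of two 3D rays underpins the first two methods, but measured gaze and pointing rays are almost never exactly intersecting. A common estimate is the midpoint of the shortest segment between the two rays. The sketch below illustrates this; the ray origins and directions (e.g., from an eye tracker or a Leap Motion sensor) are assumed inputs.

```python
import numpy as np

def ray_crossing_point(p1, d1, p2, d2, eps=1e-9):
    """Approximate 'crossing point' of two 3D rays as the midpoint of
    the shortest segment between them (a common choice when the rays
    are skew, as measured gaze/pointing rays usually are)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2   # a == c == 1 after normalisation
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if denom < eps:                        # near-parallel rays: no stable crossing
        return None
    t = (b * e - c * d) / denom            # parameter along ray 1
    s = (a * e - b * d) / denom            # parameter along ray 2
    q1, q2 = p1 + t * d1, p2 + s * d2      # closest points on each ray
    return (q1 + q2) / 2.0                 # midpoint as the 3D POI estimate

# Hypothetical example: two gaze rays converging near (0, 0, 1)
p1, d1 = np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 1.0])
p2, d2 = np.array([0.03, 0.0, 0.0]), np.array([-0.03, 0.0, 1.0])
print(ray_crossing_point(p1, d1, p2, d2))  # ~ [0. 0. 1.]
```

In practice the distance between the two closest points also serves as a confidence measure: if the rays pass far from each other, the POI estimate is unreliable and can be rejected.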

Biography: Dr. Kosin Chamnongthai is a Professor at King Mongkut's University of Technology Thonburi (KMUTT), Bangkok, Thailand. He received a B.Eng. in Applied Electronics from the University of Electro-Communications in 1985, an M.Eng. in Electrical Engineering from the Nippon Institute of Technology in 1987, and a Ph.D. in Electrical Engineering from Keio University in 1991.
He then joined King Mongkut's University of Technology Thonburi in 1991, where he has remained since, serving as head of the Electronic & Telecommunication Engineering Department, KMUTT (1999-2002), and as dean of the Engineering Faculty at the University of the Thai Chamber of Commerce (2002-2004). He was promoted to Professor at KMUTT in 2013.
Dr. Kosin is a senior member of IEEE and a member of APSIPA, ECTI, AIAT, TRS, TESA, and EEAAT. His research interests include computer vision, image processing, signal processing, and pattern recognition. In these areas, he has published over 60 refereed international journal papers, over 200 international conference papers, and three books. He holds three patents related to fruit inspection.
He was a founding member of the ECTI Association and served as editor of the ECTI E-Magazine (2011-2015) and associate editor of ECTI-CIT (2011-2016) and ECTI-EEC (2003-2010). He also served the ECTI Association as a committee member (2002-2015), vice president (2016-2017), and president (2018-2019).

Keynote Talk - Day 2

Prof. Midori Sugaya
(Shibaura Institute of Technology)

Title:
Edge AI technology and its application to robotics to realise Society 5.0
—Towards a problem-solving society through technology—

In Japan today, there is a need to realise the problem-solving services of Society 5.0 through information technology. In this context, services that combine advanced AI processing with various edge devices, such as robots that accompany people in their daily lives and support their bodies and minds, are expected. The technology supporting these services must provide a platform that can process large amounts of data, deliver services at the edge with high responsiveness, and meet the requirements of advanced AI processing. Conventional cloud-based approaches are not sufficient, owing to issues such as inadequate responsiveness and processing delays.

To address these challenges, we propose an Edge AI system platform. We discuss how this platform offloads heavy processing from edge devices, how it performs advanced AI processing while maintaining high responsiveness, how the edge is built, and how power-saving FPGA devices and high-performance hardware accelerators are managed and utilised in a unified manner. In this way, the embedded systems of the future will be able to realise advanced AI processing and large-scale, distributed, high-throughput processing as lightweight edges, facilitating the use of Society 5.0 applications.
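As a loose illustration of the offloading idea (the actual platform API is not described here, so every name and number below is an assumption), one recurring pattern is a latency-budget decision between running a task locally on the edge CPU and offloading it to an attached accelerator:

```python
# Hypothetical per-task latency estimates in seconds; a real platform
# would measure these at runtime rather than hard-code them.
LOCAL_CPU_LATENCY = {"pose": 0.120, "emotion": 0.450}
FPGA_OFFLOAD_LATENCY = {"pose": 0.015, "emotion": 0.040}
NETWORK_RTT = 0.010  # assumed edge-to-accelerator round trip

def choose_executor(task: str, deadline_s: float) -> str:
    """Pick the cheapest executor that still meets the task's deadline.
    Prefers the local CPU (no network hop, lower power) when it is
    fast enough; otherwise offloads to the FPGA accelerator."""
    local = LOCAL_CPU_LATENCY[task]
    offload = FPGA_OFFLOAD_LATENCY[task] + NETWORK_RTT
    if local <= deadline_s:
        return "local-cpu"
    if offload <= deadline_s:
        return "fpga-accelerator"
    return "reject"  # neither meets the responsiveness requirement

print(choose_executor("pose", deadline_s=0.200))     # local-cpu
print(choose_executor("emotion", deadline_s=0.100))  # fpga-accelerator
```

The unified management of CPUs and accelerators mentioned above would sit behind such a decision function, hiding device-specific details from the application.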

In this lecture, we will introduce innovative problem-solving technologies and their applications that can be realised using this platform, such as a robot that can read 'human feelings', a voice-guiding robot, and a nursing home robot, together with their social implementation in the future society. In addition to conventional research, the lecture also includes new content. We hope you enjoy it.