Oral Session

Information for Oral Presenters

The presentation time depends on the paper type, as follows:
1. Regular paper: 15-minute presentation + 5-minute Q&A
2. WiP paper: 12-minute presentation + 3-minute Q&A

For on-site presenters
1. Presentations will be given from each presenter's own computer. Please prepare your presentation materials before the day of your talk.
2. On-site presentations are projected or displayed via HDMI. A USB Type-C to HDMI adapter will be provided; if your PC requires a different type of adapter, please bring your own.
For online presenters
1. Please access the online presentation link 10 minutes before the session starts to check your screen sharing. (The link will be announced shortly.)
2. Presentations will use Zoom's screen-sharing feature. Please join from a location with a stable internet connection.

Oral Session A: Embedded System & Edge Computing
Nov. 1st 11:00

A1. Using Low Power Coprocessors in an FRP Language for Embedded Systems
    Go Suzuki, Akihiko Yokoyama, Sosuke Moriguchi and Takuo Watanabe (Regular paper)
A2. Exploring the Training Performance of Deep Neural Networks on Embedded Many-core Processors
    Masahiro Hasumi and Takuya Azumi (Regular paper)
A3. Offloading Image Recognition Processing to FPGA Using Resource Manager for Multi-access Edge Computing
    Hayato Mori, Eisuke Okazaki, Gai Nagahashi, Mikiko Sato, Takeshi Ohkawa and Midori Sugaya (Regular paper)
A4. Proposal of inter-node communication method using P2P communication in multi-node computing platform
    Jun Liang, Yanzhi Li and Midori Sugaya (WiP)

Oral Session B: IoT and Robotics
Nov. 1st 14:30

B1. Decentralized Network Booting for Geographical-distributed IoT Device Clusters
    Taito Morikawa, Katsuya Matsubara and Shun Kumagai (Regular paper)
B2. Design of Necessary Safety Functions and Fault Detection in Electric Vehicle Control Unit based on Software-in-the-Loop
    Kiet Vo, Danai Phaoharuhansa and Chadchai Srisurangkul (Regular paper)
B3. Decentralized Systems to Improve Inter-Robot Communication in Hazardous Chemical Incidents
    Liangwen Wang, Yanzi Li and Midori Sugaya (WiP)
B4. Rethinking Human-Robot Emotional Interaction: The Case of Socially Assistive Robots
    Manishk Gawande and Gabriele Trovato (WiP)
B5. ToDo: Plant Robot to Support Habit Formation and Task Management
    Mehriell Eliana Siajuat Ang and Gabriele Trovato (WiP)

Oral Session C: Society5.0/Beyond 5G
Nov. 2nd 10:45

C1. PyHARK: A HARK-based python package for robot audition and computational auditory scene analysis
    Kazuhiro Nakadai, Masayuki Takigahira and Katsutoshi Itoyama (WiP)
C2. PyHARK Acceleration: A GPU-based Approach
    Zirui Lin, Katsutoshi Itoyama, Kazuhiro Nakadai, Masayuki Takigahira, Haris Gulzar, Takeharu Eda, Monikka Roslianna Busto and Hideharu Amano (WiP)
C3. Sound event localization and detection utilizing overlapping end-to-end learning
    Yanke Long, Riku Yasuda, Yui Sudo, Katsutoshi Itoyama, Kazuhiro Nakadai, Hideharu Amano and Kenji Nishida (WiP)
C4. Evaluation of the Second FPGA-IP Prototype for MEC Devices and its Improvements
    Morihiro Kuga, Masahiro Sumita, Sen Gen and Masahiro Iida (WiP)
C5. The information transfer and sharing platform for remote control over B5G infrastructure
    Kazuhiro Kosaka, Tetsuya Yokotani and Koichi Ishibashi (WiP)
C6. Automatic Generation of communication paths between multiple FPGA boards
    Hiroaki Suzuki, Kensuke Iizuka, Hideharu Amano, Wataru Takahashi and Kazutoshi Wakabayashi (WiP)

Oral Session D: Innovations in Affective Computing
Nov. 2nd 15:15

D1. Feature Selection of EEG and Heart Rate Variability Indexes for Estimation of Cognitive Function in the Elderly
    Kentarou Kanai, Yuri Nakagawa and Midori Sugaya (Regular paper)
D2. Swin Transformer Based Depression Detection Model Learning Only Single Channel EEG Signal
    Kei Suzuki and Midori Sugaya (WiP)
D3. Investigating Effects of Stimuli on Attentional Focus during Cognitive Test using EEG
    Felipe Yudi Fulini, Chen Feng, Keiji Tatani, Masashi Sakagami and Midori Sugaya (WiP)
D4. Analysis of Driver’s Emotion and Driving Performance during Music Listening using Physiological Indexes
    Narumon Jadram and Midori Sugaya (WiP)
D5. A Proposal of Emotion Estimation Method in Social Robots Using Facial Expression Recognition Model with Graph-based Techniques
    Nopphakorn Subsa-Ard, Felipe Yudi Fulini, Tipporn Laohakangvalvit, Kaoru Suzuki and Midori Sugaya (WiP)
D6. Automated Review Tool for Educational Models Utilizing Generative AI
    Daisuke Saito, Taiyou Minagawa and Kenji Hisazumi (WiP)
D7. Cuff-Less Blood Pressure Classification from ECG and PPG Signals using Deep Learning
    Rakkrit Duangsoithong, Kulika Pahukarn and Dujdow Buranapanichkit (WiP)