Keynote Speaker 1

Professor Chong Nak-Young

  • Professor of Computer Science, School of Information Science, Japan Advanced Institute of Science and Technology (JAIST)

Specialization: Human-Friendly Robots, Teleoperation, Intelligent Mechanical Systems, Knowledge Networking Robot Control

Biography: Nak Young Chong received the B.S., M.S., and Ph.D. degrees in mechanical engineering from Hanyang University, Seoul, Korea, in 1987, 1989, and 1994, respectively. From 1994 to 2007, he was a member of the research staff at Daewoo Heavy Industries and KIST in Korea, and at MEL and AIST in Japan. In 2003, he joined the faculty of the Japan Advanced Institute of Science and Technology (JAIST), where he is currently a Professor of Information Science. He has also served as Vice Dean for Research and Director of the Center for Intelligent Robotics at JAIST. He was a Visiting Scholar at Northwestern University, the Georgia Institute of Technology, the University of Genoa, and Carnegie Mellon University, and also served as an Associate Graduate Faculty member at the University of Nevada, Las Vegas, an International Scholar at Kyung Hee University, and a Distinguished Invited Research Professor at Hanyang University. He serves as Senior Editor of the IEEE Robotics and Automation Letters, the Journal of Intelligent Service Robotics, and the International Journal of Advanced Robotic Systems, serves or has served as Senior Editor on the Conference Editorial Boards of UR 2018, IEEE ICRA, IEEE Ro-Man, and IEEE CASE, and has served as an Associate Editor of the IEEE Transactions on Robotics. He served as Program Chair/Co-Chair for JCK Robotics 2009, ICAM 2010, IEEE Ro-Man 2011, IEEE CASE 2012, IEEE Ro-Man 2013, URAI 2012/2013, and DARS 2014, and was a General Co-Chair of URAI 2017. He also served as Co-Chair of the IEEE-RAS Networked Robots Technical Committee from 2004 to 2006 and of the Fujitsu Scientific System Working Group from 2004 to 2008.

Title: Culture-Aware Elderly Care Robots in a Smart ICT Environment

Abstract: Rapid demographic change constitutes an unprecedented societal challenge for Japan. I will shed light on the issues of Japan’s super-aging society and introduce the human-robot interaction work package of our ongoing EC Horizon 2020 project “CARESSES”, jointly commissioned by the Ministry of Internal Affairs and Communications of Japan, which aims at developing culturally competent elderly care robots. We envision a future where care robots are able to interact with elderly people of different cultural backgrounds and personality traits through personalized emotion generation and facial, vocal, and body expression. I will share some of our preliminary results on multi-modal human-robot interaction and explore opportunities for future collaboration with universities and research institutes in Malaysia. Furthermore, I will introduce iHouse, a smart ICT environment testbed, together with a speech-activated user interface that enables care robots to gain control over iHouse devices and functions and to provide data through verbal interaction with the user. I hope to discuss the technical feasibility of a robotic smart care home interface for supporting independent living of the elderly.
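
As a rough illustration of the kind of speech-activated interface described in the abstract, the sketch below maps a recognized voice command to a smart-home device action. The device names, command vocabulary, and handler functions are hypothetical placeholders chosen for illustration only; they are not the actual iHouse or CARESSES software.

    # Illustrative sketch of a speech-activated smart-home interface.
    # Device names and the command vocabulary are hypothetical placeholders.

    import re

    # Hypothetical device registry: device name -> callable performing the action.
    DEVICES = {
        "light": lambda on: f"lights turned {'on' if on else 'off'}",
        "air conditioner": lambda on: f"air conditioner turned {'on' if on else 'off'}",
        "curtain": lambda on: f"curtains {'opened' if on else 'closed'}",
    }

    def handle_utterance(text: str) -> str:
        """Map a recognized speech transcript to a device action and a spoken reply."""
        text = text.lower()
        turn_on = bool(re.search(r"\b(on|open)\b", text))
        turn_off = bool(re.search(r"\b(off|close)\b", text))
        for name, action in DEVICES.items():
            if name in text:
                if turn_on or turn_off:
                    return action(turn_on)
                return f"sorry, I did not catch what to do with the {name}"
        return "sorry, I could not find a matching device"

    if __name__ == "__main__":
        print(handle_utterance("Please turn on the light in the living room"))
        print(handle_utterance("Close the curtain"))

A real deployment would sit behind a speech recognizer and the home's actual device-control interface rather than simple keyword matching, but the command-to-action dispatch pattern is the same.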

Keynote Speaker 2

Professor Kenji Suzuki

  • Professor of Computer Science, World Research Hub Initiative (WRHI) & Laboratory for Future Interdisciplinary Research of Science and Technology (FIRST), Institute of Innovative Research (IIR), Tokyo Institute of Technology

Specialization: Machine learning, deep learning, computer-aided diagnosis, medical imaging, artificial intelligence

Biography: Kenji Suzuki, Ph.D. (by Published Work; Nagoya University), worked at Hitachi Medical Corp., Japan, at Aichi Prefectural University, Japan, as a faculty member, and in the Department of Radiology, University of Chicago, as an Assistant Professor. In 2014, he joined the Department of Electrical and Computer Engineering and the Medical Imaging Research Center, Illinois Institute of Technology, as a tenured Associate Professor. Since 2017, he has been jointly appointed to the World Research Hub Initiative, Tokyo Institute of Technology, Japan, as a Full Professor. He has published more than 320 papers (including 110 peer-reviewed journal papers) and has been actively studying deep learning in medical imaging and computer-aided diagnosis for the past 20 years. He is an inventor on 30 patents (including some of the earliest deep-learning patents), which have been licensed to several companies and commercialized. He has published 10 books and 22 book chapters and edited 13 journal special issues. He has been awarded more than 25 grants as PI, including NIH R01 and ACS grants. He has served as an editor of a number of leading international journals, including Pattern Recognition and Medical Physics, as a referee for 83 international journals, an organizer of 35 international conferences, and a program committee member of 150 international conferences. He has received 26 awards, including the Springer-Nature EANM Most Cited Journal Paper Award 2016 and the 2017 Albert Nelson Marquis Lifetime Achievement Award.

Title: Deep Learning in Medical Image Processing and Diagnosis

Abstract: Machine learning (ML) in artificial intelligence has become one of the most active areas of research in the biomedical imaging field, including medical image analysis and computer-aided diagnosis (CAD), because “learning from examples or data” is crucial to handling the large amounts of data (“big data”) coming from medical imaging informatics systems. Recently, as the available computational power has increased dramatically, image-based ML, or “deep learning”, has emerged. Deep learning, including our original massive-training artificial neural networks (MTANNs) and the most popular convolutional neural networks (CNNs), is an end-to-end ML model that enables a direct mapping from the raw input data to the desired outputs, eliminating the need for handcrafted features in feature-based ML. Deep learning (or image-based ML) is a versatile, powerful framework that can acquire image-processing and analysis functions through training with image examples. In this talk, deep learning in medical imaging is overviewed to make clear a) what has changed in machine learning after the introduction of deep learning, b) its differences from and advantages over conventional feature-based ML, and c) its applications to 1) separation of bones from soft tissue in chest radiographs, 2) CAD for lung nodule detection in chest radiography and thoracic CT, 3) distinction between benign and malignant nodules in CT, 4) polyp detection and classification in CT colonography, and 5) radiation dose reduction in CT and mammography.
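
As a rough illustration of the end-to-end mapping described in the abstract, the sketch below (assuming PyTorch) defines a small fully convolutional network that maps a raw input image directly to a desired output image (for example, a bone-suppressed chest radiograph), trained with a pixel-wise loss on image pairs and no handcrafted features. The architecture, loss, and training loop are generic placeholders, not the speaker's MTANN or any specific published model.

    # Minimal end-to-end image-to-image learning sketch (assumes PyTorch).
    # A generic illustrative model, not the speaker's MTANN or any published network.

    import torch
    import torch.nn as nn

    class ImageToImageCNN(nn.Module):
        """Small fully convolutional network: raw image in, target image out."""
        def __init__(self, channels: int = 1, width: int = 32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(channels, width, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(width, width, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(width, channels, kernel_size=3, padding=1),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Direct mapping from raw pixels to the desired output image.
            return self.net(x)

    if __name__ == "__main__":
        model = ImageToImageCNN()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        # Random tensors stand in for (input image, target image) training pairs.
        x = torch.rand(4, 1, 64, 64)   # e.g., raw chest radiograph patches
        y = torch.rand(4, 1, 64, 64)   # e.g., corresponding soft-tissue targets

        for step in range(3):          # a few dummy training steps
            pred = model(x)
            loss = loss_fn(pred, y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            print(f"step {step}: loss = {loss.item():.4f}")

In practice the dummy tensors would be replaced by clinical image pairs and the network by the models discussed in the talk; the point of the sketch is only the end-to-end mapping from raw input to desired output, with features learned rather than handcrafted.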