Ubiquitous Robots 2023

Photo Gallery

Original image downloads:
  • 0. Registration: 0.Registration.zip
  • 1. Special & Plenary Sessions: 1.Special&Plenary Sessions.zip
  • 2. Keynote Sessions: 2.Keynote Sessions.zip
  • 3. Award Candidates Session: 3.Award Candidates Session.zip
  • 4. ISR (JIST) Journal Session: 4.ISR(JIST) Journal Session.zip
  • 5. Oral Sessions: 5.Oral Sessions.zip
  • 6. Poster Sessions: 6.Poster Sessions.zip
  • 7. Lunches: 7.Lunches.zip
  • 8. Banquet: 8.Banquet.zip
© 2023 Ubiquitous Robots 2023
Korea Robotics Society (KROS) Privacy Policy
Business Registration Certificate: 214-82-07990 / President: Nak Young Chong
Address: #506, The Korea Science and Technology Center, (635-4, Yeoksam-dong) 22, 7Gil, Teheran-ro, Gangnam-gu, Seoul, Korea
SECRETARIAT: TEL +82-2-783-0306, FAX +82-2-783-0307, E-MAIL ur@kros.org
Elliott J Rouse
University of Michigan
 
Auctions, preferences, and wearable robots: the development of meaningful exoskeletons and robotic prostheses
Abstract
Lower-limb wearable robots, such as exoskeletons and robotic prostheses, have struggled to achieve the societal impact expected of these exciting technologies. In part, these challenges stem from fundamental gaps in our understanding of how and why these systems should assist their wearer during use. Wearable robots are typically designed to meet a single, specific objective (e.g., reduction of metabolic rate); in reality, however, assistive technologies affect many aspects of gait and user experience. In this talk, I will discuss our recent work leveraging user preference as a ‘meta-criterion’ in design and control, through which the wearer internally balances the quantitative and qualitative tradeoffs that accompany the use of wearable robots, including stability, comfort, exertion, and speed. I will highlight our work on user-preferred assistance settings in a variable-stiffness prosthesis and bilateral ankle exoskeletons, demonstrating that user-preferred settings are reliable yet diverse and can be obtained in less than two minutes. In addition, I will discuss how user-preferred assistance can be optimized automatically with human-in-the-loop methods, which converge on user-preferred settings with an accuracy of roughly 90%. Finally, I will introduce a new approach to understanding the success of assistive technologies using tools from behavioral economics. I will describe and quantify the economic value provided by ankle exoskeletons, including the cost incurred from wearing the added mass as well as the value added by the assistance alone. Together, this talk will underscore the role of the user in the development of wearable robots and advocate a shift away from the conventional, single-objective assessment of these technologies.
 

Biography
Elliott Rouse is an Associate Professor in the Robotics and Mechanical Engineering Departments at the University of Michigan (U-M). He directs the Neurobionics Lab, whose vision is to reverse engineer how the nervous system regulates the mechanics of locomotion, and to use this information to develop meaningful assistive technologies that leverage this perspective. To this end, his group studies the design, control, and evaluation of lower-limb exoskeletons and robotic prostheses. He has launched the careers of four doctoral students, two of whom are faculty at Research 1 institutions. He is the recipient of the NSF CAREER Award and is a member of the IEEE EMBS Technical Committee on BioRobotics. In addition, he is on the Editorial Boards of IEEE Robotics and Automation Letters, IEEE Transactions on Biomedical Engineering, and Wearable Technologies. Elliott received the BS degree in mechanical engineering from The Ohio State University and the PhD degree in biomedical engineering from Northwestern University. Afterwards, he joined the Massachusetts Institute of Technology as a Postdoctoral Fellow in the MIT Media Lab. Prior to joining U-M, Elliott was faculty in the Schools of Medicine and Engineering at Northwestern University and worked in professional autoracing. In 2019 – 2020, he was a visiting faculty member at (Google) X, where he maintains an appointment.

Jinoh Lee
DLR
 
Extension of Operational Space Control Enhancing Fault-Tolerance in Actuation Failure
Abstract
Actuation failure and fault-tolerant control have drawn increasing attention in line with the recent growing demand for reliable robot control in long-term and remote operations. The loss of control torque, i.e., the free-swinging failure, is particularly challenging when the robot performs dynamic operational space tasks, due to complexities stemming from redundancies in the kinematic structure as well as dynamical disturbances in the under-actuated multi-body system. In this talk, a dynamic analysis and control method will be introduced, following the operational space formulation, to address such under-actuated systems without constrained conditions — a problem that has been overlooked yet encompasses a broader range of applications involving free-floating robots and manipulators with passive joints.
 

Biography
Jinoh Lee is currently a Research Scientist with the Institute of Robotics and Mechatronics, German Aerospace Center (DLR), Wessling, Germany, and an Adjunct Professor with the Mechanical Engineering Department, Korea Advanced Institute of Science and Technology (KAIST). His professional expertise is in robotics and control engineering, including the manipulation of highly redundant robots such as dual-arm systems and humanoids, robust control of nonlinear systems, and compliant robotic system control for safe human-robot interaction. He received the B.S. degree in mechanical engineering from Hanyang University, Seoul, South Korea, in 2003 (awarded Summa Cum Laude), and the M.Sc. and Ph.D. degrees in mechanical engineering from KAIST, Daejeon, South Korea, in 2012. He held a Postdoctoral position from 2012 to 2017 and a Research Scientist position from 2017 to 2020 at the Department of Advanced Robotics, Istituto Italiano di Tecnologia (IIT), Genoa, Italy.
He is a Senior Member of IEEE (SM’17) and a Senior Editor on the Conference Paper Review Board (CPRB) of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). He has also served as an Associate Editor (AE) of the IEEE International Conference on Robotics and Automation (ICRA) since 2017 and of IEEE Ubiquitous Robots (UR) since 2019; as a member of the International Federation of Automatic Control (IFAC) Technical Committee 4.3 (Robotics) since 2014; and on the Program Committee of Robotics: Science and Systems (RSS) since 2017, among other robotics conferences.

Chung Hyuk Park
George Washington University
 
HRI and AI for Assistive and Healthcare Robotics
Abstract
Assistive robotics is an expanding field of research that holds potential for impacting human health and enhancing quality of life. In this presentation, I will discuss my research activities focused on three main themes. First, I will explore the concept of social robots as embodied agents with multi-modal intelligent perception. Second, I will delve into the realm of robotic learning, specifically targeting interactive learning and socio-emotional and physical interactions for autistic individuals. Lastly, I will examine contextual and mutual learning for personalized interaction, with the ultimate goal of facilitating long-term human-robot interaction. Alongside my research endeavors, I will also share the experiences and knowledge gained from cross-disciplinary studies and translational research aimed at advancing assistive robotics for healthcare and personalized interventions.
 

Biography
Dr. Chung Hyuk Park is an Associate Professor in the Department of Biomedical Engineering, School of Engineering and Applied Science at the George Washington University. Dr. Park directs the Assistive Robotics and Tele-Medicine (ART-Med) Lab. His current research interests are: 1) multimodal human-robot interaction and robotic assistance for individuals with disabilities or special needs, 2) robotic learning and humanized intelligence, and 3) tele-medical robotic assistance and AI-based reasoning for medical perception and decision-making. He is a recipient of an NSF CAREER grant and carried out a National Robotics Initiative (NRI) NIH R01 project as lead PI. His work has been featured in diverse media and events, including Voice of America, Robot Trends, USA Today, Arirang TV, and TEDx Pearl Street. He received his PhD degree in Electrical and Computer Engineering from the Georgia Institute of Technology (2012), and his MS degree in Electrical Engineering and Computer Science and BS degree in Electrical and Computer Engineering from Seoul National University (2002 and 2000, respectively).

Dongkyu Choi
A*STAR
 
Cognitive Architecture for Collaborative Robots
Abstract
Cognitive architectures provide infrastructure for modelling general intelligence. They make commitments to specific representations of knowledge, the organization of memory, and the processes that work over these structures. When integrated with facilities for embodiment, including sensory perception and physical manipulation, such architectures can serve as an excellent framework for robotic agents. We take this approach to develop robot software based on the cognitive architecture ICARUS that enables natural collaboration between humans and robots by integrating multi-modal perception, dialogue capabilities, data-driven action model learning, and common-sense reasoning. In this talk, we will present an overview of this software framework and show some motivating examples of its use in realistic scenarios. We will also discuss directions for further study in this context to facilitate future adoption of this framework in industrial applications.
 

Biography
Dongkyu Choi is a senior scientist at Singapore’s Agency for Science, Technology and Research (A*STAR) leading the Cognitive AI group. His research expertise is in cognitive systems, developing a cognitive architecture that provides a computational framework for modeling general intelligence. He has a strong interest in enabling robots for collaborative interactions with humans. Dongkyu is experienced in designing and performing externally funded projects from various government agencies including Defense Advanced Research Projects Agency, Office of Naval Research, Naval Research Laboratory, Korea Institute of Science and Technology, and Agency for Science, Technology and Research. He received his M.S. and Ph.D. degrees from Stanford University and his B.S. degree from Seoul National University.

Marco Hutter
ETH Zurich
 
Towards Ubiquitous Legged Robots
Abstract
In recent years, legged and in particular quadrupedal robots have become ubiquitous in both academia and industry. Not only robotics researchers but also experts in computer vision and machine learning adopt quadrupedal robots as demonstrators to study and deploy new methods to make these systems more agile and autonomous.
In this presentation, I will provide insights into some of our newest research in control, perception, and autonomy of quadrupedal robots. I will show how reinforcement learning has revolutionized locomotion performance, and present new ways of including perception to enable versatile navigation. Moreover, we will look at different application examples from academia and industry to provide inspiration for the future of mobile robotics.

 

Biography
Marco is a professor for robotic systems and director of the center for robotics at ETH Zurich. His research interests are in the development of novel machines and their intelligence to operate in rough and challenging environments. He is part of the National Centre of Competence in Research (NCCR) Robotics and NCCR Digital Fabrication, and a PI in various international projects (e.g. EU Thing, NI) and challenges. Moreover, Marco is a co-founder of several ETH startups, such as ANYbotics AG and Gravis Robotics AG, targeting the commercialization of legged robots and autonomous construction equipment.

Yunkyung Kim
Amazon
 
I See You: Humans with Robots in the Field
Abstract
As robots are adopted across industries for benefits such as increasing production speed, reducing human error, and avoiding accidents, the user groups who encounter robots are also becoming more diverse. Some robots are used only by a select, highly trained group of people, while others are used by a wide range of people. In this talk, different approaches to validating user experience with robots, depending on the characteristics of the user group, will be introduced.
 

Biography
Dr. Yunkyung Kim is currently a Senior UX Designer at Amazon Robotics. She received her B.S. and Ph.D. degrees in Industrial Design from the Korea Advanced Institute of Science and Technology (KAIST). She was a Principal UX Designer at iRobot, a Human-Robot Interaction Designer at NASA, and a Senior UX Designer at Samsung Electronics. Her primary design focus is physical interaction between humans and robots through light, sound, movement, and more. Her design process spans observing how people live with smart, intelligent, and robotic products in the real world, uncovering needs that users are not even aware of, and translating those needs into design languages. She received an iF Design Award (Gold) in 2015 and was a winner in 2016 for mobile UX design with advanced display technology. As a Human-Robot Interaction Designer at NASA, she launched physical interaction on Astrobee, a free-flying robot aboard the International Space Station, and she designed robot behaviors at iRobot.

Tadayoshi Aoyama
Nagoya University
 
Macro-Micro Interaction Systems for Assisted Reproductive Technology
Abstract
Human augmentation extends human capabilities through technology. The concept began with microscopes and telescopes that augmented human vision, and humans have since extended their own capabilities and activity space through technology. In this talk, I will introduce our work on macro-micro interaction technology, which extends the human activity space into micro-space; I will then describe its prospects as a support system that simplifies Assisted Reproductive Technology.
 

Biography
Tadayoshi Aoyama is currently an Associate Professor in the Department of Micro-Nano Mechanical Science and Engineering at Nagoya University. His research interests include macro-micro interaction, VR/AR and human interfaces, AI-based assistive technology, micro-manipulation, and medical robotics. He received the B.E. degree in mechanical engineering, the M.E. degree in mechanical science engineering, and the Ph.D. degree in micro-nano systems engineering from Nagoya University, Nagoya, Japan, in 2007, 2009, and 2012, respectively. He was an Assistant Professor at Hiroshima University, Japan, from 2012 to 2017, and at Nagoya University, Japan, from 2017 to 2019. He received the Best Paper Award at the IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM2019) and at the IEEE International Symposium on Micro-NanoMechatronics and Human Science (MHS2017, MHS2019, and MHS2022).

He served as a secretary of the Senior Program Committee (SPC) of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2022) and as the finance chair and secretary of the 2021 IEEE International Conference on Advanced Robotics and its Social Impacts (ARSO2021). He has also served as an Associate Editor (AE) of the IEEE International Conference on Robotics and Automation (ICRA) since 2020 and of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) since 2021, among other robotics conferences.

Junku Yuh
Korea Institute of Robotics and Technology Convergence (KIRO)
 
ROBOTICS: Changing the Business Landscape
Abstract
According to the World Robotics report (2022), robot installations hit a new record level in 2021 (a 31% increase over 2020, to 517,000 units). There are about 2.7 million industrial robots in use across the globe, and roughly 400,000 new robots enter the market every year. More robots can be found in our daily lives, from cleaning robots at home and delivery robots in stores and restaurants to surgical robots in hospitals.
Digital transformation, global technology competition, and social changes, due to demographic shift associated with an aging population and social distancing during the COVID-19 pandemic, have been driving the growing demand for robotics across industries and in all business sectors.

This presentation consists of two parts. The first part will cover the timeline of innovation in the field of robotics highlighting technological milestones. It will also note critical areas for further development, which could help significantly expand the range of robotic applications. The second part will summarize the speaker’s research achievements during his time in Hawaii as well as current R&D activities at Korea Institute of Robotics and Technology Convergence (KIRO).
 

Biography
Dr. Yuh currently serves as the President of the Korea Institute of Robotics and Technology Convergence (KIRO). He also served as the Founding Director of the Robotics and Media Institute, Korea Institute of Science and Technology (KIST), and as the 5th and 6th President of Korea Aerospace University. Prior to arriving in Korea, he worked for the U.S. National Science Foundation (NSF) as the Head of the NSF East Asia and Pacific Regional Office in Tokyo, Japan, and as a Program Director of Information and Intelligent Systems specializing in Robotics and Computer Vision at the NSF Headquarters in Washington, D.C., U.S.A. His program at NSF, along with NASA and NIH, co-sponsored a study on the Assessment of International Research and Development in Robotics in 2004. He also organized the first U.S. Interagency Working Group Meeting in Robotics in 2005, which included representatives from 15 federal government agencies. These efforts helped successfully launch the U.S. National Robotics Initiative (NRI) in 2011. Prior to NSF, he was Professor of Mechanical Engineering as well as Information & Computer Science at the University of Hawaii (UH), Honolulu, Hawaii, U.S.A., and the Founding Director of the Autonomous Systems Laboratory at UH, where highly advanced autonomous underwater robot technology was developed, especially for intervention missions. (https://ieeexplore.ieee.org/document/8574000)

Dr. Yuh is an elected IEEE Fellow and has received various prestigious awards, including the NSF Presidential Young Investigator Award from former President George Bush. He served as the Founding Editor-in-Chief (EIC) of the Journal of Intelligent Service Robotics, an Editorial Board Member of the Journal of Intelligent Automation and Soft Computing, an Associate Editor of the IEEE Transactions on Robotics and Automation, Program Co-Chair of the IEEE 2001 and 2006 International Conference on Robotics and Automation (ICRA), and Program Chair of the IEEE 2003 International Conference on Intelligent Robots and Systems (IROS). He currently serves as a member of the IEEE Fellow (Judge) Committee, an IEEE Distinguished Lecturer, an Advisory Board Member of the Journal of Autonomous Robots, and VP of the Korea Robotics Society. He has published 12 books and over 120 papers in robotics, including Introduction to Autonomous Manipulation (G. Marani and J. Yuh), Springer, 2014.

Luca Carlone
MIT
 
Spatial Perception for Robots and Autonomous Vehicles: Real-time Scene Understanding, Certifiable Robustness, and Self-training
Abstract
Spatial perception, the robot’s ability to sense and understand the surrounding environment, is a key enabler for autonomous systems operating in complex environments, including self-driving cars and unmanned aerial vehicles. Recent advances in perception algorithms and systems have enabled robots to detect objects and create large-scale maps of an unknown environment, which are crucial capabilities for navigation, manipulation, and human-robot interaction. Despite these advances, researchers and practitioners are well aware of the brittleness of existing perception systems, and a large gap still separates robot and human perception. This talk presents our latest results on the design of the next generation of robot perception systems and algorithms. The first part of the talk discusses spatial perception systems and motivates the need for high-level 3D scene understanding for robotics. I introduce early work on metric-semantic mapping (Kimera) and novel hierarchical representations for 3D scene understanding (3D Scene Graphs). Then, I present recent results on the development of Hydra, the first real-time spatial perception system that builds 3D scene graphs of the environment in real time and without human supervision. The second part of the talk focuses on perception algorithms and draws connections between the robustness of robot perception and global optimization. I present an overview of our certifiable perception algorithms, a novel class of algorithms that is robust to extreme amounts of noise and outliers and affords performance guarantees. I showcase applications to object pose and shape estimation and SLAM, and discuss recent results that combine learning and optimization to enable self-supervised object pose estimation.
 

Biography
Luca Carlone is the Leonardo Career Development Associate Professor in the Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology, and a Principal Investigator in the Laboratory for Information & Decision Systems (LIDS). He received his PhD from the Polytechnic University of Turin in 2012. He joined LIDS as a postdoctoral associate (2015) and later as a Research Scientist (2016), after spending two years as a postdoctoral fellow at the Georgia Institute of Technology (2013-2015). His research interests include nonlinear estimation, numerical and distributed optimization, and probabilistic inference, applied to sensing, perception, and decision-making in single and multi-robot systems. His work includes seminal results on certifiably correct algorithms for localization and mapping, as well as approaches for visual-inertial navigation and distributed mapping. He is a recipient of the Best Student Paper Award at IROS 2021, the Best Paper Award in Robot Vision at ICRA 2020, a 2020 Honorable Mention from the IEEE Robotics and Automation Letters, a Track Best Paper award at the 2021 IEEE Aerospace Conference, the 2017 Transactions on Robotics King-Sun Fu Memorial Best Paper Award, the Best Paper Award at WAFR 2016, and the Best Student Paper Award at the 2018 Symposium on VLSI Circuits, and he was a best paper finalist at RSS 2015 and RSS 2021. He is also a recipient of the AIAA Aeronautics and Astronautics Advising Award (2022), the NSF CAREER Award (2021), the RSS Early Career Award (2020), the Google Daydream (2019) and the Amazon Research Award (2020, 2022), and the MIT AeroAstro Vickie Kerrebrock Faculty Award (2020). At MIT, he teaches “Robotics: Science and Systems,” the introduction to robotics for MIT undergraduates, and he created the graduate-level course “Visual Navigation for Autonomous Vehicles,” which covers mathematical foundations and fast C++ implementations of spatial perception algorithms for drones and autonomous vehicles.

Tetsuya Ogata
Waseda University
 
Applications of Deep Predictive Learning for Real-World Robots
Abstract
“Moravec’s Paradox” is one of the greatest remaining challenges in current artificial intelligence technology. For example, it is extremely difficult for robots to perform various tasks using a common hand, even with today’s state-of-the-art technology. Our proposed model of “deep predictive learning” implements the neuroscientific concept of “predictive coding” on robots. In this talk, I will introduce the results of our robotics research using deep predictive learning, along with examples of joint research with multiple companies. I will also present an overview of our Moonshot project on the smart robot “AIREC,” supported by the Japan Cabinet Office.
 

Biography
Tetsuya Ogata received the B.S., M.S., and D.E. degrees in mechanical engineering from Waseda University, Tokyo, Japan, in 1993, 1995, and 2000, respectively. He was a Research Associate with Waseda University from 1999 to 2001. From 2001 to 2003, he was a Research Scientist with the RIKEN Brain Science Institute, Saitama, Japan. From 2003 to 2012, he was an Associate Professor with the Graduate School of Informatics, Kyoto University, Kyoto, Japan. Since 2012, he has been a Professor with the Faculty of Science and Engineering, Waseda University. Since 2017, he has also been a joint-appointed Fellow with the Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology, Tokyo, Japan.