
Innovation in (Soft) Robotics and Control

Robotics, Vision, and Controls Talks

Welcome to the (soft) Robotics, Vision, and Controls Talks series hosted by ETH Zürich. These open talks focus on innovations in (soft) robotics, computer vision, and control systems, and are held in a hybrid format at ETH Zürich’s main campus.


Upcoming Talks

Next Talk

Speaker: Prof. Dr. Hilde Kuehne

Affiliation: University of Tuebingen / MIT-IBM Watson AI Lab

Abstract

“The field of multimodal learning has witnessed significant progress in recent years, mainly enabled by advances in contrastive and autoregressive learning techniques. This talk presents the latest developments in this domain, focusing on the following areas: I will first recap the concept of embedding space learning, which maps multimodal input data, such as images, text, and video, into a shared feature space. Building on that, I will discuss the vision-language capabilities that arise from such embeddings, namely spatial and spatio-temporal grounding, which localizes objects and actions in images and videos. Finally, the talk will close with an outlook on the challenges and future directions in multimodal learning, including the learning of multimodal structures to improve the efficiency and scalability of future systems.”
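As background for the embedding-space recap above, here is a minimal illustrative sketch (not material from the talk) of contrastive multimodal alignment in the CLIP style: paired image and text embeddings are pulled together in a shared space with a symmetric InfoNCE loss. The batch size, embedding dimension, and temperature are arbitrary assumptions.

```python
# Minimal sketch (not the speaker's code): CLIP-style contrastive alignment
# of two modalities into a shared embedding space. Dimensions and the
# temperature value are illustrative assumptions.
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings."""
    # Normalize so similarity reduces to cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Pairwise similarity matrix: logits[i, j] = sim(image_i, text_j).
    logits = image_emb @ text_emb.T / temperature
    # Matching image/text pairs sit on the diagonal.
    targets = torch.arange(logits.size(0))
    # Pull matched pairs together, push mismatched pairs apart, both directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

# Toy usage: a batch of 8 image/text pairs already encoded to 512-d vectors.
loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```

Once trained with such an objective, nearest-neighbor search in the shared space is what enables the grounding abilities the abstract mentions.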

Bio

“Prof. Dr. Hilde Kuehne is Professor at the Tuebingen AI Center at the University of Tuebingen and an affiliated professor at the MIT-IBM Watson AI Lab. Her research focuses on learning without labels and multimodal video understanding. She has created several highly cited datasets and mainly works on analyzing large collections of untrimmed video data and other multimodal data sources. Her experience includes projects with various European and US universities, with a focus on video and image processing. She has published several high-impact works in the field, including HMDB, which was awarded the ICCV 2021 Helmholtz Prize and the PAMI Mark Everingham Prize in 2022. She has organized numerous workshops in the field and currently serves as general chair for ICCV 2025. Beyond her work, she is committed to bringing more diversity to STEM and is a board member of the Women in Computer Vision Initiative.”

Future Talks

Prof. Dr. Xiaolong Wang

Modeling Humans for Humanoid Robots

Speaker: Prof. Dr. Xiaolong Wang

Affiliation: UC San Diego

Date: May 09, 2025

Time & Location: 16:00 CET; ETH HG E 41

More info

Abstract

"Having a humanoid robot operating like a human has been a long-standing goal in robotics. The humanoid robot provides a general-purpose platform to conduct diverse tasks we do in our daily lives. In this talk, I will present a 2-level learning framework designed to equip humanoid robots with robust mobility and manipulation skills, enabling them to generalize across diverse tasks, objects, and environments. The first level focuses on training a Vision-Language-Action (VLA) model with human video data. This VLA can predict “mid-level” actions on precise robot movements and trajectories. The second level involves developing low-level robot manipulation skills through human hand imitation, and low-level humanoid whole-body control skills via human body imitation. By combining human VLA with low-level robot skills, this framework offers a scalable pathway toward realizing general-purpose humanoid robots."

Bio

"Xiaolong Wang is an Assistant Professor in the ECE department at the University of California, San Diego, and a Visiting Professor at NVIDIA Research. He received his Ph.D. in Robotics at Carnegie Mellon University. His postdoctoral training was at the University of California, Berkeley. His research focuses on the intersection between computer vision and robotics. His specific interest lies in representation learning with videos and physical robotic interaction data. These comprehensive representations are utilized to facilitate the learning of human-like robot skills, with the goal of generalizing the robot to interact effectively with a wide range of objects and environments in the real physical world. He is the recipient of the Sloan Research Fellowship, J. K. Aggarwal Prize, NSF CAREER Award, Intel Rising Star Faculty Award, Best Paper Awards at IROS and ICRA, and Research Awards from Sony, Amazon, Adobe, and CISCO."

Prof. Dr. Lingjie Liu

Towards Next-Gen 3D Reconstruction and Generation: From Visual Fidelity to Multimodal and Physical Understanding

Speaker: Prof. Dr. Lingjie Liu

Affiliation: University of Pennsylvania

Date: May 14, 2025

Time & Location: 16:00 CET; ETH HG F 26.5

More info

Abstract

"Recent years have witnessed remarkable progress in 3D reconstruction and generation. However, most existing methods primarily focus on modeling geometry and appearance. I believe the next generation of 3D reconstruction and generation should go further in two key directions. First, it should be well-aligned with other modalities—such as language and images—so that 3D representations can play an important role in the multi-modal era. Second, it should incorporate physical understanding to ensure reconstructions and generations are physically plausible, which will ultimately make them more applicable in robotics. In this talk, I will present our recent efforts toward these goals and discuss the challenges that lie ahead."

Bio

"Lingjie Liu is the Aravind K. Joshi Assistant Professor in the Department of Computer and Information Science at the University of Pennsylvania, where she leads the Penn Computer Graphics Lab. and she is also a member of the General Robotics, Automation, Sensing & Perception (GRASP) Lab. Previously, she was a Lise Meitner Postdoctoral Research Fellow at Max Planck Institute for Informatics. She received her Ph.D. degree at the University of Hong Kong in 2019. Her research interests are at the interface of Computer Graphics, Computer Vision, and AI, with a focus on Neural Scene Representations, Neural Rendering, Human Performance Modeling and Capture, and 3D Reconstruction."

Prof. Dr. Keenan Albee

Autonomy On-Orbit and Beyond: Expanding Mission Capabilities in Extreme Environment Robotics

Speaker: Prof. Dr. Keenan Albee

In Person

Affiliation: JPL / USC

Date: May 22, 2025

Time & Location: 14:15 CET; ETH LEE E 101

More info

Abstract

"Autonomy is essential to making rapid decisions in safety-critical situations and dealing with tasks too complex for a human teleoperator. Within the space robotics community, the confluence of enhanced processing power, algorithmic maturity, and growing acceptance of autonomy in risk-averse domains is leading to a renaissance in its use. This talk explores some of the enduring algorithmic and safety challenges of working with increasing complexity in space and extreme environment robotics autonomy; in particular, the problem of motion planning and control under uncertainty will be explored in the context of providing robot motion that is safe, real-time, and tailored to the needs of real robotic systems. This work is framed in the context of novel planning and control techniques in microgravity close proximity operations and planetary surface robotics, demonstrating, respectively, 1) planning, control, and state estimation for autonomous on-orbit rendezvous with an uncharacterized tumbling target; and 2) highly-constrained model predictive control for roving in unknown environments. Flight demonstrations of these techniques will be discussed for the Astrobee free-flyers aboard the ISS, and the Cooperative Autonomous Distributed Robotic Explorer (CADRE) rovers launching to the Moon."

Bio

"Keenan Albee is a Robotics Technologist in the Maritime and Multi-Agent Autonomy group at the NASA Jet Propulsion Laboratory, California Institute of Technology and an incoming Assistant Professor at the University of Southern California. He received a Ph.D. in Aeronautics and Astronautics (Autonomous Systems) from MIT in 2022 under a NASA Space Technology Research Fellowship. His research focuses on model-aware autonomy for space and extreme environment robotics, leveraging real-time tools to make autonomous robotic operations safer and more efficient. His work includes the first autonomous on-orbit rendezvous with an uncharacterized tumbling target---demonstrated on the Astrobee robots aboard the ISS---and multiple planning and multi-agent decision-making algorithms aboard the fully autonomous CADRE lunar rovers launching to the Moon. His research interests span extreme environment robotics, safe motion planning and control under uncertainty, and novel extreme environment systems development with an “algorithms to deployment” philosophy of real-world field hardware validation."

Prof. Dr. Cynthia Sung

Salp-Inspired Approach to Underwater Multi-Robot Locomotion

Speaker: Prof. Dr. Cynthia Sung

Affiliation: University of Pennsylvania

Date: May 23, 2025

Time & Location: 16:00 CET; ETH HG E 41

More info

Abstract

"Soft robots introduce new opportunities for design and control approaches that take advantage of a robot's internal mechanics to perform a task. Underwater swimming is particularly interesting in that the robot's locomotion performance depends heavily on not only the robot itself by also the complex interactions between the fluid and the robot's body. We are inspired by the biological salp, an underwater jelly that grows in colonies, and how salp colonies can produce higher speed, agility, or cost of transport than a single unit, depending on the jet coordination and physical arrangement of the units. In this talk, I will discuss our recent forays in multi-jet interaction in the SALP robot and our insights into multi-robot coordination for physically connected swimmers."

Bio

"Cynthia Sung is an Associate Professor in the Department of Mechanical Engineering and Applied Mechanics (MEAM) and a member of the General Robotics, Automation, Sensing & Perception (GRASP) lab at the University of Pennsylvania. She received a Ph.D. in Electrical Engineering and Computer Science from MIT in 2016 and a B.S. in Mechanical Engineering from Rice University in 2011. Her research interests are computational methods for robot co-design, with a particular focus on origami-inspired and compliant robots. She is the recipient of a 2024 ARO Early Career Award, 2023 ONR Young Investigator award, and a 2019 NSF CAREER award."

Prof. Dr. Perla Maiolino

Soft and Perceptive Robots

Speaker: Prof. Dr. Perla Maiolino

In Person

Affiliation: University of Oxford

Date: May 28, 2025

Time & Location: 12:00 CET; ETH HG F 26.5

More info

Abstract

"For decades, robotics has been defined by the need to avoid physical contact for safety, leading to rigid machines that interact with their environment in a limited way. However, true intelligence emerges from physical interaction between the environment and an agent’s own body, through which an awareness of both the external environment and the Self is built. In this talk, we explore how soft robotics and tactile sensing are transforming robotic intelligence by making physical interaction a fundamental part of perception. By designing robots with compliant, adaptable bodies and highly integrated sensor technologies, we enable them to perceive the world through touch, much like biological organisms. In this paradigm, the body itself becomes an active component of sensing, shaping the data received and simplifying high-level inference. This requires advances in material and manufacturing methods, embedded sensing, and computational models that extract meaning from complex sensor inputs. We will discuss how these innovations allow robots to develop self-awareness, improve adaptability, and achieve more intelligent behaviour. Embracing contact, rather than avoiding it, opens new pathways for safer, more capable, and more intuitive robotic systems."

Bio

"Perla Maiolino (Member, IEEE) received the B.Eng. degree in software engineering, the M.Eng. degree in robotics and automation, and the Ph.D. degree in robotics from the University of Genoa. She joined the Mechatronic and Control Laboratory (MACLAB), Department of Informatics, Bioengineering, Robotics and System Engineering (DIBRIS), University of Genoa, where, as a Research Fellow, carried out research about new technological solutions for the development and integration of distributed tactile sensors for providing robots with the “sense of touch.” Before joining Oxford Robotics Institutes she worked as a Post-Doctoral Researcher at the Biologically Inspired Robotics Lab (BIRL), University of Cambridge, Cambridge, U.K., where she started to be interested in soft robotics pursuing research in soft robot sensing and perception. She is currently an Associate Professor at the Engineering Science Department and a member of Oxford Robotic Institute, University of Oxford, Oxford, U.K., where she has established the ORI Soft Robotics Laboratory. Her research interests are related to the development of new technological solutions for soft robot sensors and actuators and to investigating the role of “softness” in soft robot perception for achieving autonomy and intelligent behaviors."

Past Talks

Prof. Dr. Pulkit Agrawal

Pathway to Robotic Intelligence

Prof. Dr. Pulkit Agrawal (MIT)
April 2025

Watch Recording

Prof. Dr. Guanya Shi

Building Generalist Robots with Agility via Learning and Control: Humanoids and Beyond

Prof. Dr. Guanya Shi (CMU)
April 2025

Watch Recording

Prof. Dr. Kristen Grauman

Video understanding for skill learning

Prof. Dr. Kristen Grauman (UT Austin)
April 2025

Watch Recording

Prof. Dr. Kaitlyn Becker

Desktop to Deep Sea: Mechanical Programming of Soft Machines

Prof. Dr. Kaitlyn Becker (MIT)
March 2025

Watch Recording

Prof. Dr. Jeffrey Lipton

Robots With a Twist

Prof. Dr. Jeffrey Lipton (Northeastern / BG)
March 2025

Watch Recording

Dr. Oier Mees

Embodied Multimodal Intelligence with Foundation Models

Dr. Oier Mees (UC Berkeley)
March 2025

Watch Recording

Prof. Dr. Mark R. Cutkosky

ReachBot: Locomotion and Manipulation with Exceptional Reach

Prof. Dr. Mark R. Cutkosky (Stanford University)
March 2025

Watch Recording

Prof. Dr. Kevin Chen

Insect-scale aerial robots driven by soft artificial muscles

Prof. Dr. Kevin Chen (MIT)
February 2025

Watch Recording

Prof. Dr. Angela Dai

From Quantity to Quality for 3D Perception

Prof. Dr. Angela Dai (TU Munich)
February 2025

Watch Recording