Talks on innovations in Robotics, Vision, and Control
Welcome to the MaP Robotics, Vision, and Controls Talks series hosted by ETH Zürich. These open talks focus on innovations in robotics, computer vision, and control systems, and are held in a hybrid format at ETH Zürich’s main campus.
Exceptions to the usual day, time, or room are shown in each talk card below.
Speaker: Fatma Güney
Affiliation: Koç University
Date: March 06, 2026
Time & Location: 16:00 CET; ETH HG E 41
In this talk, I’ll discuss long-term point tracking from a robotics perspective, where models have to operate online, in real time, and under strict memory constraints. I’ll begin by briefly showing how visual foundation models improve robustness and viewpoint invariance, giving us strong spatial features even in challenging settings. I’ll then introduce a simple transformer formulation, where each point is treated as a query and video frames are processed sequentially, without access to the future. The core of the talk focuses on temporal propagation in this causal regime: how a small, carefully designed memory can carry just enough information forward to maintain long-term consistency, instead of relying on heavy offline optimization. This leads to Track-On, which demonstrates that state-of-the-art long-term tracking is achievable fully online. I’ll then present our extension, Track-On2, which further improves efficiency and reaches real-time performance with reduced memory usage. In the final part, I’ll turn to a key challenge for real robots: models trained on synthetic data often fail to transfer to the real world. I’ll introduce our recent work on verifier-guided self-training, where a lightweight meta-model selects reliable predictions from multiple trackers to generate better pseudo-labels, enabling data-efficient adaptation to unlabeled real-world videos without sacrificing real-time performance.
Fatma Güney is an Assistant Professor at Koç University. She received her PhD in 2017 from the Max Planck Institute in Tübingen. Her research focuses on autonomous driving and 3D vision, with particular interests in geometry, motion, and uncertainty. Her work has been supported by TÜBİTAK, the European Research Council, and the Royal Society. She regularly serves as a reviewer and area chair at leading computer vision and machine learning conferences, including ICCV, ECCV, CVPR, and NeurIPS.
Speaker: Yulia Sandamirskaya
Affiliation: ZHAW / Auroniq
Date: March 13, 2026
Time & Location: 16:00 CET; ETH HG E 41
I will present the history, basic concepts, and applications of neuromorphic computing technology: the attempt to replicate biological brains and nervous systems not only in algorithms, but also in the structure of computing hardware. Why is this relevant for robotics? Because neuromorphic AI is power-efficient, fast, and supports rapid continual learning, making it a potential alternative to large deep-learning models. Besides, biological solutions to perception, state estimation, motion planning, and control are elegant and inspiring.
Yulia Sandamirskaya is a Full Professor and head of the research center "Cognitive Computing" at ZHAW in Wädenswil. Her Neuromorphic Computing Lab works on novel AI technology for service robots in elderly care. She was a group leader at INI (UZH/ETH) and led the applications research team of the Neuromorphic Computing Lab at Intel. She is a co-founder of Auroniq, a cognitive robotics integrator.
Speaker: Kris Dorsey
Affiliation: Northeastern University
Date: March 20, 2026
Time & Location: 16:00 CET; ETH HG E 41
Speaker: Quentin Böhler
Affiliation: ETH Zurich
Date: March 27, 2026
Time & Location: 16:00 CET; ETH HG E 41
In the last fifteen years, medical and surgical robotics have surged, with thousands of clinical systems installed worldwide, and millions of procedures performed. The emergence of tethered and untethered micro-devices for performing complex surgical tasks and accessing deep regions within the human body created unprecedented opportunities to address unmet clinical needs in minimally invasive interventions and targeted drug delivery. Translating robotics research findings into clinically ready products is challenging and requires sustained collaboration between clinicians, researchers, and engineers, making it a highly interdisciplinary journey. Our work includes the design, localization, and autonomous navigation of continuum robots to increase the safety and dexterity of endoluminal and endovascular interventions, with the potential to improve procedures ranging from neurovascular interventions to fetal surgeries. In this talk, I will share some of the past and ongoing efforts we have been deploying to bring robotics to the bedside, the lessons we learned, and how they shaped our research vision.
Quentin Böhler is an Assistant Professor (Tenure Track) of Robotics at ETH Zurich in the Institute of Robotics and Intelligent Systems, where he leads the Medical Robotics Lab. He received an engineering degree in mechatronics from INSA Strasbourg in 2013, followed by an M.Sc. in 2013 and a Ph.D. in robotics in 2016 from the University of Strasbourg. His doctoral research focused on tensegrity mechanisms and variable-stiffness devices for MR-compatible robotic systems. From 2017 to 2025, he was a postdoctoral associate and later a senior researcher at the Multi-Scale Robotics Lab at ETH Zurich, working on magnetically guided devices and electromagnetic navigation technologies for medical applications. His current research addresses unmet clinical needs in surgical and medical procedures, with an emphasis on the design, control, and simulation of robotic systems for therapeutic and diagnostic interventions. He has co-authored more than 50 scientific publications in leading journals such as Science, Science Robotics, Advanced Science, and Nature Communications. He is actively engaged in the IEEE Robotics and Automation Society and currently serves as an Associate Editor for IEEE Robotics and Automation Letters.
Speaker: Nicolas Heess
Affiliation: Google DeepMind
Date: March 31, 2026
Time & Location: 12:00 CEST; ETH HG D 3.2
Speaker: Moritz Bächer
Affiliation: Disney Research Imagineering
Date: April 17, 2026
Time & Location: 16:00 CEST; ETH HG E 41
Speaker: Anna Rohrbach
Affiliation: TU Darmstadt
Date: April 24, 2026
Time & Location: 16:00 CEST; ETH HG E 41
Speaker: Michael Wray
Affiliation: University of Bristol
Date: May 08, 2026
Time & Location: 16:00 CEST; ETH HG E 41
Michael is a Senior Lecturer (Assistant Professor) in Computer Vision at the School of Computer Science at the University of Bristol. He completed his PhD, titled "Verbs and Me: an Investigation into Verbs as Labels for Action Recognition in Video Understanding", in 2019 under the supervision of Professor Dima Damen. Afterwards, he stayed in the same lab as a postdoc working on vision and language and on the collection of the Ego4D dataset. Michael has led the organisation of the EPIC workshop series since 2021, is an organiser of the Ego4D workshop series, and is an ELLIS member.
Speaker: Patricia Alves-Oliveira
Affiliation: University of Michigan
Date: May 12, 2026
Time & Location: 16:00 CEST; ETH HG G 3 (TBC)
Patricia Alves-Oliveira is an Assistant Professor of Robotics at the University of Michigan, where she leads Robot Studio, a research lab focused on the design, development, and evaluation of social robots. She was a Senior UX Designer at Amazon Lab126, a Postdoctoral Researcher at the University of Washington in Seattle, and a visiting researcher at Cornell University, and she received her Ph.D. from the University Institute of Lisbon, in Portugal. Her research has received multiple Best Paper Awards at the International Conference on Human-Robot Interaction, and she is the recipient of a DARPA Young Faculty Award. Besides her academic appointment, Patricia serves on the Advisory Board for Meta.
Speaker: Antonino Furnari
Affiliation: University of Catania
Date: May 22, 2026
Time & Location: 16:00 CEST; ETH HG E 41
Antonino Furnari is an Associate Professor at the University of Catania, Italy, where he is a member of the Image Processing Laboratory. His research investigates how intelligent systems can perceive, understand, and anticipate human actions and interactions directly from an embodied, egocentric viewpoint, to enable assistive technologies on wearable devices that provide direct support to users. He is part of the EPIC-KITCHENS, Ego4D, and Ego-Exo4D teams.
Speaker: Ankur Mehta
Affiliation: UCLA Electrical and Computer Engineering
Date: May 29, 2026
Time & Location: 16:00 CEST; ETH HG E 41
Prof. Ankur Mehta is an Associate Professor of Electrical and Computer Engineering at UCLA, where he directs the Laboratory for Embedded Machines and Ubiquitous Robots (LEMUR). Pushing toward his vision of a future filled with robots, his research interests include printable robotics, rapid design and fabrication, control systems, and multi-agent networks. He has received the DARPA Young Faculty Award, the NSF CAREER Award, and a Samueli Fellowship, as well as best paper awards from the IEEE Robotics & Automation Magazine and the International Conference on Intelligent Robots and Systems (IROS). Prior to joining the UCLA faculty, Prof. Mehta was a postdoc at MIT's Computer Science and Artificial Intelligence Laboratory investigating design automation for printable robots. Before that, he conducted research as a graduate student at UC Berkeley on wireless sensor networks and systems, small autonomous aerial robots and rockets, control systems, and micro-electro-mechanical systems (MEMS).