Plenary Lectures
Bassam Bamieh (UC Santa Barbara)
Dennice Gayme (Johns Hopkins University)
Suvrit Sra (TU Munich & MIT)
Wendelin Werner (University of Cambridge)
Semi-plenary Lectures
Daniel Alpay (Chapman University)
Ravi Banavar (IIT Bombay)
Silvère Bonnabel (Mines-Paris Tech)
Venkat Chandrasekaran (Caltech)
Jie Chen (City University of Hong Kong)
Yongxin Chen (Georgia Tech)
Jean-Charles Delvenne (UCLouvain)
Giulia Giordano (University of Trento)
Birgit Jacob (University of Wuppertal)
Timothy O’Leary (University of Cambridge)
Amirhossein Taghvaei (University of Washington)
Melanie Zeilinger (ETH Zürich)
ABSTRACTS
Speaker: Bassam Bamieh (UC Santa Barbara)
Title: Architectural Questions in Distributed Systems and Controls
Abstract: In the design and control of large-scale distributed systems, architectural considerations are arguably some of the most important ones. These include local versus global feedback in multi-agent systems, sensor and actuator placement in distributed control of PDEs, how far sensor information should travel in distributed controller architectures, the structure and realizations of distributed controllers, and many other related issues. Many of these questions have significant implications for plant and system co-design, which are perhaps even more important than controller design itself. We survey some of these questions using specific case studies to point out the role of optimal and robust control in determining limits of performance, which in turn inform controller and system architecture motifs. The case studies will evoke more questions than answers, indicating the need for a more developed theory of systems and controls architecture.
Bio: Bassam Bamieh is Professor of Mechanical Engineering at the University of California at Santa Barbara (UCSB). He received his B.Sc. degree in Electrical Engineering and Physics from Valparaiso University (Valparaiso, IN) in 1983, and his M.Sc. and PhD degrees in Electrical and Computer Engineering from Rice University in 1986 and 1992 respectively. Prior to joining UCSB in 1998, he was an Assistant Professor in the Department of Electrical and Computer Engineering and the Coordinated Science Laboratory at the University of Illinois at Urbana-Champaign (1991-98). His research interests are in the fundamentals of Controls and Dynamical Systems, as well as the applications of systems and feedback techniques in several physical and engineering systems. These areas include Robust and Optimal Control, distributed and networked control and dynamical systems, shear flow transition and turbulence, quantum control, and thermoacoustics. His recognitions include the IEEE Control Systems Society G. S. Axelby Outstanding Paper Award (twice), an AACC Hugo Schuck Best Paper Award, and a National Science Foundation CAREER award. He is a Fellow of the International Federation of Automatic Control (IFAC), and a Fellow of the IEEE.
Speaker: Dennice Gayme (Johns Hopkins University)
Title: A network view of wind farm modeling and control
Abstract: Wind farms comprise a network of dynamical systems coupled through interactions with the turbulent atmospheric boundary layer (ABL). The ABL dynamics govern the velocity field that enters the farm as well as the turbulent mixing that regenerates energy for extraction at downstream rows, thereby playing a fundamental role in wind farm power production. Understanding the dynamic interactions between turbines, wind farms, and the ABL is therefore critical for improving wind farm performance. This talk introduces a suite of models that leverage ABL information to provide improved models of both static and dynamic conditions in the wind farm. An example of such a dynamic model in a control setting provides a case study wherein turbines are viewed as actuators that adjust the flow field to collectively produce more efficient power tracking performance.
Bio: Dennice F. Gayme is a Professor in Mechanical Engineering at Johns Hopkins University. She received her B. Eng. & Society in Mechanical Engineering from McMaster University in 1997, an M.S. in Mechanical Engineering from the University of California at Berkeley in 1998, and her Ph.D. in Control and Dynamical Systems from the California Institute of Technology in 2010. Her research interests are in modeling, analysis and control of spatially distributed and large-scale networked systems, such as wind farms, wall-bounded shear flows, and power systems. She was a recipient of a JHU Catalyst Award in 2015, ONR Young Investigator and NSF CAREER awards in 2017, a Whiting School of Engineering Johns Hopkins Alumni Association Excellence in Teaching Award in 2020, and the Turbulence and Shear Flow Phenomena (TSFP12) Nobuhide Kasagi Award in 2022. She is a fellow of the American Physical Society (APS), serves as a Member-At-Large in the Executive Committee of the APS Division of Fluid Dynamics (APS/DFD), and is the Standing Co-Chair of the Women in Control Committee of the Control Systems Society (CSS) of the IEEE.
Speaker: Suvrit Sra (TU Munich & MIT)
Title: AI and optimization through a geometric lens
Abstract: Geometry arises in myriad ways across the sciences, and quite naturally also within AI and optimization. In this talk I wish to share with you examples where geometry helps us understand problems in machine learning, optimization, and sampling. For instance, when sampling from densities supported on a manifold, understanding the geometry and the impact of curvature is crucial; surprisingly, progress on geometric sampling theory helps us understand certain generalization properties of SGD for deep learning! Another fascinating viewpoint afforded by geometry is in non-convex optimization: geometry can help make training algorithms more practical for deep learning, reveal tractability despite non-convexity (e.g., via geodesic convexity), or simply help us understand important ideas better (e.g., eigenvectors, LLM training and inference, etc.).
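For readers unfamiliar with the term, geodesic convexity is the standard generalization of convexity to Riemannian manifolds: a function f : M \to \mathbb{R} is geodesically convex if, for every geodesic \gamma : [0,1] \to M and every t \in [0,1],
f(\gamma(t)) \le (1-t)\, f(\gamma(0)) + t\, f(\gamma(1)),
so that problems which look non-convex in coordinates may still be convex along geodesics and hence tractable.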
My hope is to offer the audience insights into geometric thinking, and to share with them some new tools that can help us make progress on modeling, algorithms, and applications. To make my discussion concrete, I will recall a few foundational results arising from our research, provide several examples, and note some open problems.
Bio: Suvrit Sra is an Alexander von Humboldt Professor of Artificial Intelligence at the Technical University of Munich (TUM), and an Associate Professor in the EECS Department at MIT, where he is also a member of the Laboratory for Information and Decision Systems (LIDS) and the Institute for Data, Systems, and Society (IDSS). He obtained his PhD in Computer Science from the University of Texas at Austin. Before TUM & MIT, he was a Senior Research Scientist at the Max Planck Institute for Intelligent Systems, Tübingen, Germany. He held visiting positions at UC Berkeley (EECS) and Carnegie Mellon University (Machine Learning Department) during 2013-2014. His research bridges mathematical topics such as differential geometry, matrix analysis, convex analysis, probability theory, and optimization with machine learning. He founded the OPT (Optimization for Machine Learning) series of workshops, held at the NeurIPS (formerly NIPS) conference from 2008 to 2017, and co-edited a book of the same name (MIT Press, 2011). He is a co-founder and the Chief Scientist of Pendulum, a global, AI-driven logistics startup.
Speaker: Wendelin Werner (University of Cambridge)
Title: About Probability in the Continuum
Abstract: The usual way to think about randomness is to start from the discrete world, where one tosses a finite collection of random variables. When the randomness is in fact “spread” in a fairly natural way over space, then in some cases, rather complex, interesting, and physically relevant random geometric structures with some surprising features can emerge. The aim of this talk is to illustrate this using some specific examples that arose in recent works.
Bio: Wendelin Werner is Rouse Ball Professor of Mathematics and Royal Society Research Professor at the University of Cambridge. Prior to that, he was Professor at University Paris-Sud from 1997 to 2013 and at ETH Zürich from 2013 to 2023. He has received a number of awards for his research in Probability Theory and related topics, including the Fields Medal in 2006.
Speaker: Daniel Alpay (Chapman University)
Title: Infinite dimensional analysis, Hida white noise space and applications to linear systems with random coefficients
Abstract: Infinite dimensional analysis is at the crossroad of the theory of positive definite kernels, stochastic processes, complex analysis, and topological vector spaces. It has numerous applications, with connections to second quantization, stochastic linear systems, models for stochastic processes and their derivatives. In the talk we will survey some of these connections, focusing on links with the theory of linear systems. A non-commutative version of the theory will also be presented.
Bio: Daniel Alpay was born in Paris, France, and was trained both as an electrical engineer (Telecom Paris) and as a theoretical mathematician (Weizmann Institute, Rehovot, Israel). His research interests are in hypercomplex analysis, operator theory, stochastic processes (in particular in the setting of infinite dimensional analysis) and mathematical physics. He has written a number of research books and more than 320 papers. Building on his research, he wrote two exercise books on complex analysis, and his book entitled “Exercises in applied mathematics, with a view toward information theory, machine learning, wavelets and statistical physics” has just appeared. He was a chaired professor at Ben-Gurion University (Beer-Sheva, Israel) and is now Professor at Chapman University (Orange, California), where he holds the Foster G. and Mary McGaw Professorship in Mathematical Sciences.
Speaker: Ravi Banavar (IIT Bombay)
Title: The Ubiquity and Applications of Lie Groups
Abstract: The comfort zone of most engineers is Euclidean space. Most applications of the past century have relied on tools from the repository of this space – transforms, projections, norms and inner products. However, the intrinsic setting of many problems in engineering is not Euclidean. Therefore, solutions that have been proposed in a Euclidean setting are necessarily “local” and hence termed “coordinate-dependent.” This weakness has beckoned new paradigms. Not stepping too far away from the comfort zone of Euclidean space, we encounter Lie groups. The umbilical connection of a Lie group to a Lie algebra (a vector space) brings in familiarity in terms of many tools commonly employed in Euclidean space. In this talk I shall present preliminary machinery for Lie groups, and four domains of application – multi-agent control, consensus, optimal control and observer design.
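A standard illustration of this group-algebra connection (a textbook example, not drawn from the talk itself) is the rotation group
SO(3) = \{ R \in \mathbb{R}^{3\times 3} : R^\top R = I,\ \det R = 1 \}, \qquad \mathfrak{so}(3) = \{ \Omega \in \mathbb{R}^{3\times 3} : \Omega^\top = -\Omega \},
where the Lie algebra \mathfrak{so}(3) is a three-dimensional vector space, and the exponential map \exp : \mathfrak{so}(3) \to SO(3), \exp(\Omega) = \sum_{k \ge 0} \Omega^k / k!, carries computations performed in the vector space back to the group.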
Bio: Ravi N. Banavar is currently an Institute Chair Professor in Systems and Control Engineering at IIT Bombay. He received his B.Tech. in Mechanical Engineering from IIT Madras (1986), his Master's (Mechanical, 1988) and Ph.D. (Aerospace, 1992) degrees from Clemson University and the University of Texas at Austin, respectively. He had a brief teaching stint at UCLA during 1991–92, soon after which he joined the Systems and Control Engineering group at IIT Bombay in early 1993. From 2009 onwards, during his tenure as the Convener of the group, the strength of the group grew from 5 to 9 members, with academic strengths in nonlinear control, switched systems, optimization, geometric mechanics, formation and cooperative control, robotics and adaptive control. He has spent a few sabbatical breaks at UCLA (Los Angeles), IISc (Bangalore) and LSS (Supélec, France). His research interests are broadly in the field of geometric mechanics, nonlinear and optimal control, with applications to electromechanical and aerospace engineering problems. He is an Associate Editor of the Elsevier journal Systems and Control Letters, a member of the Editorial Advisory Board of the Taylor and Francis publication The International Journal of Control, and a Technical Associate Editor of the IEEE Control Systems Society magazine.
Speaker: Silvère Bonnabel (Mines-Paris Tech)
Title: Enhancing Kalman Filtering with Backpropagation: Theory and Examples
Abstract: The Kalman filter (KF) and its nonlinear extension (EKF) are a cornerstone in estimating the state of a dynamical system from noisy measurements, crucial for many real-world applications. However, the accuracy of these filters, whether used in a probabilistic setting or as observers, hinges on precise tuning of various parameters. The sensitivity equations, developed by Gupta and Mehra in 1974 for maximum likelihood parameter estimation, allow for computation of the effect of small variations in these parameters on the KF estimates and its internal variables. By leveraging the gradient backpropagation method, a key technique in neural networks, we introduce a much more numerically efficient method to analytically compute these derivatives for the KF and the EKF. Their applicability is also extended beyond traditional maximum likelihood estimation.
We illustrate the potential of these novel sensitivity equations through several applications: fault detection, where gradient-based insights detect structured spurious measurements; adaptive filtering, where an EKF combined with a shallow neural network that dynamically adapts its noise parameters allows a car to be localized over hours using only inertial measurements; and active sensing, where gradients of the EKF’s covariance matrix with respect to the control inputs allow for computation of trajectories that optimize the information gathered by sensors, demonstrated in real experiments.
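As a rough sketch of the kind of gradient involved (an illustrative snippet, not the speaker's sensitivity equations or code; the scalar model, the use of JAX, and all variable names are assumptions of this example), one can differentiate a Kalman-filter log-likelihood with respect to its noise parameters by automatic differentiation:

import jax
import jax.numpy as jnp

# Assumed scalar model: x_{k+1} = A x_k + w_k,  y_k = C x_k + v_k,
# with process noise variance q and measurement noise variance r.
A, C = 0.95, 1.0

def neg_log_likelihood(log_params, ys):
    q, r = jnp.exp(log_params)                    # positivity via exponential parametrization
    def step(carry, y):
        x, P = carry
        x_p, P_p = A * x, A * P * A + q           # prediction
        S = C * P_p * C + r                       # innovation variance
        K = P_p * C / S                           # Kalman gain
        innov = y - C * x_p
        x_n = x_p + K * innov                     # updated state estimate
        P_n = (1.0 - K * C) * P_p                 # updated covariance
        nll = 0.5 * (jnp.log(2.0 * jnp.pi * S) + innov ** 2 / S)
        return (x_n, P_n), nll
    _, nlls = jax.lax.scan(step, (jnp.zeros(()), jnp.ones(())), ys)
    return jnp.sum(nlls)

ys = jnp.sin(0.1 * jnp.arange(50))                # placeholder measurement record
# Reverse-mode autodiff gives d(negative log-likelihood) / d(log q, log r)
grads = jax.grad(neg_log_likelihood)(jnp.zeros(2), ys)

The talk's contribution concerns computing such derivatives analytically and efficiently for the KF and EKF via the sensitivity equations; the snippet only shows the quantity being differentiated.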
Bio: Silvère Bonnabel is a professor at Mines Paris PSL, Paris Sciences et Lettres Research University. His research lies at the intersection of control theory, robotics, and learning. He received his doctoral degree in Mathematics and Control from Mines Paris in 2007. He was an Invited Fellow at the University of Cambridge in 2017, and at INRIA Paris in 2022. Dr. Bonnabel was awarded the IEEE – SEE Glavieux prize in 2015, the Automatica Paper Prize in 2020, the European Control Award in 2021, and the Prix IMT Espoir from the French Academy of Sciences in 2022. He currently serves as an Associate Editor for IEEE Control Systems.
Speaker: Venkat Chandrasekaran (Caltech)
Title: Any-dimensional optimization
Abstract: Optimization problems in many applications arise naturally as sequences indexed by dimension. In extremal combinatorics and information theory, sequences of problems are indexed by graph size or number of channel uses, and it is of interest to obtain bounds on the optimal values of all problems in a sequence or on the limiting value. In other domains such as inverse problems and combinatorial optimization, the objective is to design tractable relaxations that are compatible across dimension in various ways depending on the application.
We present a systematic treatment of sequences of convex programs by explicitly considering relations between problem instances of different dimensions. We show that such sequences are often described in a manner that makes their instantiation in every dimension obvious. Such ‘free’ descriptions arise from a recently identified phenomenon in algebraic topology called representation stability.
Our framework yields two types of consequences. First, we show that certain sequences of invariant convex programs that are commonly seen in applications can be solved using a number of operations that is independent of dimension. Second, we present new approaches to design convex relaxations from data – in our setting, input-output training data are provided for problem instances of low dimension and we derive a sequence of convex relaxations that can be employed for problems in any dimension, including those not in the training set.
Bio: Venkat Chandrasekaran is on the faculty at Caltech, where he is Tomiyasu Professor of Computing and Mathematical Sciences and of Electrical Engineering. He received a Ph.D. in Electrical Engineering and Computer Science from MIT (2011) and undergraduate degrees in Mathematics and in Electrical and Computer Engineering from Rice University (2005). He has received several awards including the Jin-Au Kong Prize for his dissertation in Electrical Engineering at MIT (2012), the Sloan Research Fellowship in Mathematics (2016), and the INFORMS Optimization Society Prize for Young Researchers (2016). His research interests lie in optimization and the information sciences.
Speaker: Jie Chen (City University of Hong Kong)
Title: When is a Time-Delay System Stable and Stabilizable? A Third-Eye View
Abstract: Time delays are a prevailing feature of natural and engineered systems. While a recurring subject in classical studies, delays are especially pronounced in modern interconnected networks, which are indeed vulnerable to long and variable delays; systems and networks in this category are many, ranging from communication networks, sensor networks, and cyber-physical systems to biological systems. A time-delay system may or may not be stable for different lengths of delay, and further, may or may not be stabilizable under a conventional feedback mechanism. When, then, will a delay system be stable or unstable, and for what values of delay? When can an unstable delay system be stabilized? What range of delay can a feedback system tolerate while maintaining stability? Fundamental questions of this kind have long eluded engineers and mathematicians alike, yet ceaselessly invite new thoughts and solutions. In this talk I shall present a nontraditional perspective on the stability and stabilization of time-delay systems, in which we attempt to develop tools and techniques that answer the questions alluded to above, seeking to provide exact and efficient computational solutions to stability and stabilization problems of time-delay systems. We develop in full an operator-theoretic approach that departs from both the classical algebraic approaches and the omnipresent LMI solutions, notable for both its conceptual appeal and its computational efficiency. Preceding this development, we shall also lay the necessary mathematical foundation, centered on operator perturbation series, which characterize the analytical and asymptotic properties of eigenvalues of matrix-valued functions or operators. Extensions to contemporary topics such as networked control and multi-agent systems may also be addressed.
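To fix notation (a standard formulation, not taken from the talk), the prototypical linear system with a single constant delay \tau and its characteristic quasi-polynomial are
\dot{x}(t) = A x(t) + A_d\, x(t - \tau), \qquad \det\bigl( sI - A - A_d\, e^{-s\tau} \bigr) = 0,
and the questions above amount to tracking the infinitely many roots of this quasi-polynomial, in particular their crossings of the imaginary axis, as \tau varies.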
Bio: Jie Chen holds the appointment of Chair Professor with the Department of Electrical Engineering, City University of Hong Kong, Hong Kong, China. Prior to joining City University, he was with the University of California, Riverside from 1994 to 2014, where he served as Professor and Chair of the Department of Electrical Engineering. He has also held guest positions and visiting appointments with institutions in Australia, Chile, China, France, Germany, Japan, and Sweden. His main research interests are in the areas of linear multivariable systems theory, system identification, robust control, optimization, time-delay systems, networked control, and multi-agent systems. He presently serves on the editorial boards of the International Journal of Robust and Nonlinear Control and the SIAM Journal on Control and Optimization. He routinely serves on program and organizing committees of international conferences, most recently as the General Chair of the 3rd IEEE Conference on Control Technology and Applications, and the International Program Committee Chair of the 16th IFAC Workshop on Time Delay Systems. He is a Fellow of IEEE, Fellow of AAAS, Fellow of IFAC, Fellow of SIAM and a Yangtze Scholar/Chair Professor of China.
Speaker: Yongxin Chen (Georgia Tech)
Title: Stochastic Diffusions for Control, Learning, and Inference
Abstract: Diffusion processes refer to a class of stochastic processes driven by Brownian motion. They have been widely used in various applications, ranging from engineering to science to finance. In this talk, I will discuss my experiences with diffusion and how this powerful tool has shaped my research programs. I will go over several research projects in the areas of control, inference, and machine learning, where we have extensively utilized tools from diffusion processes. In particular, I will present our research on three topics: i) covariance control, in which we aim to regulate the uncertainties of a dynamic system; ii) diffusion models for generative modeling in machine learning; and iii) Markov chain Monte Carlo sampling for general inference tasks.
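For reference (standard definitions rather than material specific to the talk), a diffusion process solves an Itô stochastic differential equation, and the Langevin diffusion is the particular instance underlying much of sampling and score-based generative modeling:
dX_t = b(X_t)\, dt + \sigma(X_t)\, dW_t, \qquad dX_t = \nabla \log \pi(X_t)\, dt + \sqrt{2}\, dW_t,
where W_t is a standard Brownian motion and the Langevin diffusion admits \pi as its stationary distribution.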
Bio: Yongxin Chen is an Associate Professor in the School of Aerospace Engineering at Georgia Institute of Technology. He served on the faculty at Iowa State University (2017-2018). Prior to that, he spent one year (2016-2017) at the Memorial Sloan Kettering Cancer Center (MSKCC) as a postdoctoral fellow. He received his BSc from Shanghai Jiao Tong University in 2011, and his Ph.D. from the University of Minnesota in 2016. He received the Best Paper Award of the IEEE Transactions on Automatic Control in 2017 and the Best Paper Prize of the SIAM Journal on Control and Optimization in 2023. He also received the NSF Faculty Early Career Development Program (CAREER) Award in 2020, the Simons-Berkeley Research Fellowship in 2021, the A.V. Balakrishnan Award in 2021 and the Donald P. Eckman Award for outstanding young engineer in the field of automatic control in 2022. His current research interests are in the areas of control, machine learning, and robotics. He enjoys developing new algorithms and theoretical frameworks for real world applications.
Speaker: Jean-Charles Delvenne (UCLouvain)
Title: Nanoscale control theory
Abstract: Modelling and design of interconnected nanoscale systems, such as electronic circuits or biochemical networks, call for a rewriting of the rulebook of control theory. In order to agree with fundamental laws of nature, such as microscopic reversibility and ubiquitous thermal noise, one must merge dissipative systems theory on the one hand with stochastic thermodynamics on the other.
An overarching theme is the trade-off between dissipation and various objective functions such as precision or speed. We introduce and illustrate those trade-offs on a range of systems, from molecular motors to ultra-low-power electronic memories.
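One well-known instance of such a trade-off (stated here for reference; the talk treats a broader family of such bounds) is the thermodynamic uncertainty relation: for a steady-state current J accumulated over a time window with total entropy production \Sigma,
\mathrm{Var}(J) / \langle J \rangle^2 \ \ge\ 2 k_B / \Sigma,
so that higher precision, i.e. smaller relative fluctuations, necessarily costs more dissipation.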
Bio: Jean-Charles Delvenne received the M. Eng. degree (2002) and a Ph.D. (2005) in Applied Mathematics from Université catholique de Louvain, Louvain-la-Neuve, Belgium. After positions at the California Institute of Technology, Imperial College London, and the University of Namur, he has been Professor of Applied Mathematics at Université catholique de Louvain since 2010.
His research interests include networked dynamical systems, statistical physics, information theory, complex networks and data science. His works have found applications in neurosciences, biochemistry, social sciences, geography, biophysics and electronics.
Speaker: Giulia Giordano (University of Trento)
Title: Robustness and resilience of dynamical networks in nature
Abstract: Biological, ecological and epidemiological systems can be seen as dynamical networks, namely dynamical systems that are naturally endowed with an underlying network structure, because they are composed of several subsystems that interact according to an interconnection topology. Despite their large scale and complexity, natural systems are often able to preserve fundamental properties and qualitative behaviours even in the presence of huge perturbations and uncertainties, both intrinsic and extrinsic. We look for the source of the extraordinary robustness that often characterises systems in nature, by identifying properties and emerging behaviours that exclusively depend on the system structure (the graph topology along with qualitative information), regardless of parameter values. We focus on the parameter-free assessment of important properties, such as the stability of equilibria and the sign of steady-state input-output influences, thus allowing structural model falsification and structural comparison of alternative mechanisms proposed to explain the same phenomenon. Finally, we discuss the limitations of structural methodologies, which may be overcome by integrating and complementing them with probabilistic approaches, and we propose definitions of resilience that complement the notion of robustness and can be effectively applied to biological models.
Bio: Giulia Giordano leads the Dynamical Networks and Systems Biology group at the Department of Industrial Engineering, University of Trento, Italy. She received the B.Sc. and M.Sc. degrees summa cum laude, and the Ph.D. degree with honours in Systems and Control Theory, from the University of Udine, Italy. She was a Research Fellow at Lund University, Sweden, from 2016 to 2017, and an Assistant Professor at the Delft University of Technology, The Netherlands, from 2017 to 2019. Giulia serves as an Associate Editor for the IEEE Control Systems Letters and for Automatica. She was selected as Outstanding Reviewer by the IEEE Transactions on Automatic Control in 2016 and by the Annals of Internal Medicine in 2020, and as the Outstanding Associate Editor of the IEEE Control Systems Letters in 2021. Giulia received the EECI Ph.D. Award 2016, the NAHS Best Paper Prize 2017, and the SIAM Activity Group on Control and Systems Theory Prize 2021 for “significant contributions to the development of innovative methodologies for the structural analysis of networked control systems and their applications to biological networks”. Her main research interests include the analysis and the control of dynamical networks, with applications especially to biology and epidemiology.
Speaker: Birgit Jacob (University of Wuppertal)
Title: Port-Hamiltonian Systems: Modelling and Analysis
Abstract: The port-Hamiltonian system formulation combines several traditions from mechanics, system modelling and control. One of these is port-based modelling, in which complex systems can be represented by the interconnection of simpler blocks. By using energy as a common language for the connection, this approach enables the modelling of systems belonging to different physical domains (mechanical, electrical, thermal, …). In particular, we model interacting particle systems as port-Hamiltonian systems and investigate analytical properties of a class of port-Hamiltonian systems on a one-dimensional spatial domain.
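In the finite-dimensional case, the standard input-state-output port-Hamiltonian form reads
\dot{x} = \bigl( J(x) - R(x) \bigr) \nabla H(x) + B(x)\, u, \qquad y = B(x)^\top \nabla H(x),
with skew-symmetric interconnection matrix J, positive semidefinite dissipation matrix R and Hamiltonian H representing the stored energy; power-preserving interconnection of two such blocks through their ports (u, y) again yields a port-Hamiltonian system, which is what makes the modular, energy-based modelling described above possible.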
Bio: Birgit Jacob received the M.Sc. degree in mathematics from the University of Dortmund, Germany, in 1992 and the Ph.D. degree in mathematics from the University of Bremen, Germany, in 1995. She held postdoctoral and professorial positions at the Universities of Twente, Leeds and Paderborn, and at TU Berlin and TU Delft. Since 2010, she has been with the University of Wuppertal, Germany, where she is a Full Professor in analysis. From 2012 to 2016, she was Vice Dean of the Faculty of Mathematics and Natural Sciences. She is the coordinator of the EU MSCA Doctoral Network ModConflex. Her current research interests include the area of infinite-dimensional systems and operator theory, particularly well-posed linear systems and port-Hamiltonian systems.
Speaker: Timothy O’Leary (University of Cambridge)
Title: Closed loop neurophysiology
Abstract: The nervous system is the epitome of a dynamic network, operating in tight feedback with the environment. Due to the closed loop relationship between neural activity and behaviour, it is difficult to establish whether a signal in the brain drives a behavioural outcome or merely reports it, and it is even harder to manipulate brain activity in a systematic way. As a result, much of what we know about nervous system function is correlative, and most of the technology we have for manipulating the brain works despite its dynamics rather than in tandem with them.
In this talk I will highlight two aspects of our recent attempts to study nervous systems in closed loop with data-driven models. First, I will describe brain-machine interfaces (BMI) we developed to understand how abstract representations of the environment are used by rodents to navigate virtual environments. I will show how the neural code can be surprisingly simple in some brain areas, enabling robust decoding with no learning required by the animal. I will also show how adaptive properties in other neural circuits can make them extremely sensitive to coupling via brain-machine interfaces, to the extent that the code itself may reconfigure completely during BMI use. In the second part of the talk, I will describe our efforts to use tools from system identification to construct data-driven predictive models of neural activity. We focus on the dynamics of small central pattern generating circuits that control movement. The internal dynamics of these circuits are extremely rich, and this presents an obstacle to building predictive, data-driven models with “the right level” of detail. I will describe work we are doing to build such models and ultimately use them to control the activity of living neural circuits in real time.
Bio: Timothy O’Leary’s research interests lie at the intersection of physiology, neural computation and feedback control. Originally trained as a pure mathematician, he dropped out of a PhD in hyperbolic geometry to retrain as an experimental physiologist. After obtaining his doctorate from the University of Edinburgh in 2009, he pursued a mixture of theoretical and experimental research on homeostasis, neural circuit dynamics and neuromodulation. From 2012 to 2016 he worked with Prof. Eve Marder at Brandeis University, USA, where he developed theory that reconciles nervous system variability with reliable function. For this work he was awarded the Gruber International Prize in 2014. In 2016 he embarked on a mission to try to bring together control engineers and neuroscientists, forming an ERC-funded research group at the University of Cambridge. In 2017 he became a HFSP Young Investigator, leading an international collaboration between his group and experimentalists at the Weizmann Institute of Science and Harvard Medical School to attempt to understand how navigational information is represented in the brain. This has led to recent insights into how memories can be formed and maintained in the absence of stable synaptic connections, how neural circuit size influences learning performance, and how neural signals can be decoded to control movement in real time. He was elected a FENS-Kavli Fellow in 2021 and serves as a reviewing editor for The Journal of Neuroscience.
Speaker: Amirhossein Taghvaei (University of Washington)
Title: Towards data-driven nonlinear filtering algorithms
Abstract: In this talk, I present a new variational formulation of Bayes’ law, which will be used to construct a new family of nonlinear filtering algorithms. The variational formulation is based on optimal transportation (OT) theory, and aims to approximate the Brenier optimal transport map from the prior to the posterior distribution as the solution to a stochastic optimization problem. Unlike sequential importance resampling (SIR) particle filters, the OT formulation does not require the analytical form of the likelihood. Moreover, it allows us to harness the approximation power of neural networks to model complex and multi-modal distributions and to employ stochastic optimization algorithms to enhance scalability. I present error analysis and numerical results that illustrate the performance of the algorithm in comparison with SIR particle filters, and present an extension of the algorithm that is model-free and only requires recorded data from the state and observation processes.
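For reference, the map sought for a fixed observation y is the solution of the Monge/Brenier problem (the variational formulation described above recasts this as a stochastic optimization problem, avoiding the analytical form of the likelihood):
T^\star = \arg\min_{T} \ \mathbb{E}_{X \sim \pi_{\text{prior}}} \| X - T(X) \|^2 \quad \text{subject to} \quad T_{\#}\, \pi_{\text{prior}} = \pi_{\text{post}}(\cdot \mid y),
so that pushing prior samples through T^\star yields (approximate) posterior samples.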
Bio: Amirhossein Taghvaei is an Assistant Professor in the William E. Boeing Department of Aeronautics and Astronautics at the University of Washington, Seattle. Prior to that, he was a Postdoctoral Scholar at the University of California, Irvine in Prof. Tryphon Georgiou’s research lab. He received his Ph.D. degree in Mechanical Science and Engineering and his M.S. degree in Mathematics from the University of Illinois at Urbana-Champaign. His current research involves nonlinear filtering, computational optimal transport, and stochastic thermodynamics.
Speaker: Melanie Zeilinger (ETH Zürich)
Title: Optimization-based Control under Uncertainty: Guarantees, Performance & Computation
Abstract: Increasing requirements on modern complex control systems amplify the challenging tradeoff between performance gains and safety concerns. Optimization-based control has a long history of providing rigorous control-theoretic guarantees, particularly for ensuring the satisfaction of critical safety constraints. Its popularity has recently surged due to its potential as a flexible framework for safe learning-based control. While a variety of methods are available to address model uncertainty, the context of learning and complex systems has emphasized the need for improved uncertainty handling to reduce conservativeness while maintaining strong controller guarantees.
The talk will begin with a holistic discussion of uncertainties arising in an optimization-based controller, ranging from model to numerical uncertainty. I will then discuss both the improved treatment of model uncertainty and concepts to reduce uncertainty through learning and data-driven techniques. The results will be accompanied by efficient computational methods, which are critical to enabling these advancements in practice.
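For reference, the nominal finite-horizon problem that such an optimization-based controller solves at every sampling instant can be written (in generic notation, not tied to a specific method from the talk) as
\min_{u_0, \dots, u_{N-1}} \ \sum_{k=0}^{N-1} \ell(x_k, u_k) + V_f(x_N) \quad \text{s.t.} \quad x_{k+1} = f(x_k, u_k), \quad x_k \in \mathcal{X}, \quad u_k \in \mathcal{U}, \quad x_0 = x(t),
with only the first input u_0 applied before the problem is re-solved; model uncertainty enters through f, and numerical uncertainty through the inexact solution of this problem in real time.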
Bio: Melanie Zeilinger is an Associate Professor at the Department of Mechanical and Process Engineering at ETH Zurich, where she leads the Intelligent Control Systems group. She received the diploma in Engineering Cybernetics from the University of Stuttgart, Germany, in 2006 and the Ph.D. degree in Electrical Engineering from ETH Zurich in 2011. From 2011 to 2012 she was a postdoctoral fellow at the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland. From 2012 to 2015 she was a Postdoctoral Researcher and Marie Curie fellow in a joint program with the University of California at Berkeley, USA, and the Max Planck Institute for Intelligent Systems in Tuebingen, Germany. From 2018 to 2019 she was a professor at the University of Freiburg, Germany. Her awards include the ETH medal for her PhD thesis, an SNF Professorship, the Golden Owl for exceptional teaching at ETH Zurich in 2022, and the European Control Award 2023. Her research interests include learning-based control with applications to robotics and biomedical systems.