This event is held in honour of Dr Martin Clark, who passed away in 2025. Colleagues, friends and former students paid their own tributes to him via A tribute to Martin Clark 1938–2025
Participants are asked to register for catering purposes. There is NO registration fee.
Registration deadline: 12 May 2026
Preliminary Programme
Day 1
• 09:00–09:15 Registration, coffee
• 09:15–09:30 Welcoming remarks
• 09:30–11:00 two talks
• 11:00–11:30 Coffee
• 11:30–13:00 two talks
• 13:00–14:00 Lunch
• 14:00–15:30 two talks
• 15:30–16:00 Coffee
• 16:00–17:30 two talks
• 17:30–18:00 Remembering Martin
Day 2
• 09:00–10:30 two talks
• 10:30–11:00 Coffee
• 11:00–12:30 two talks
• 13:00–14:00 Lunch
List of speakers and talks
Speaker: Richard Vinter
Title: Clark's Shifted Rayleigh Filter and Other Adventures in Applied Stochastic Analysis
Abstract: Martin Clark is best known among researchers for his foundational work on stochastic processes, nonlinear filtering and stochastic control. But his deep knowledge of stochastic analysis was allied with practical interests, leading to important contributions to signal processing, process control and other areas of systems engineering. This talk will give the flavour of this lesser-known aspect of Martin's work. Foremost will be his Shifted Rayleigh Filter (SRF) for estimating the state of a moving target from noisy bearings-only measurements (i.e. the direction cosines of the position of the target relative to the sensor platform). The key idea behind his algorithm is that a small change to the measurement model leads to simple, exact formulae for the first- and second-order moments of the relevant conditional distributions. The SRF is superior, in terms of accuracy and stability, to traditional algorithms based on the Extended Kalman Filter and its refinements. In the most challenging tracking scenarios where the EKF fails altogether, the performance of the SRF is comparable to that of particle filters, while reducing the computational demands by orders of magnitude. Other examples of his work will include his elegant application of the work of Elliott–Kalton, linking risk averse formulations of optimal exit time problems for controlled diffusions and differential games, to practical problems of flow control.
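As background to the comparison above, the EKF baseline that the SRF improves upon can be sketched in a few lines. This is a minimal illustrative sketch, not Clark's Shifted Rayleigh Filter (whose exact-moment formulae are the subject of the talk); it uses a planar bearing angle rather than direction cosines, and the state layout, noise levels and geometry are arbitrary choices for illustration.

```python
import numpy as np

def ekf_bearings_update(x, P, z, R):
    """One EKF measurement update for a planar bearings-only sensor.

    x : state estimate [px, py, vx, vy] (target position/velocity
        relative to the sensor platform -- an illustrative layout)
    P : state covariance; z : measured bearing (radians); R : bearing noise variance.
    """
    px, py = x[0], x[1]
    h = np.arctan2(py, px)                  # predicted bearing
    r2 = px**2 + py**2
    # Jacobian of arctan2(py, px) with respect to the state
    H = np.array([[-py / r2, px / r2, 0.0, 0.0]])
    # innovation, wrapped to (-pi, pi]
    nu = (z - h + np.pi) % (2 * np.pi) - np.pi
    S = H @ P @ H.T + R                     # innovation variance (1x1)
    K = P @ H.T / S                         # Kalman gain (4x1)
    x_new = x + (K * nu).ravel()
    I = np.eye(4)
    # Joseph form keeps the updated covariance symmetric positive definite
    P_new = (I - K @ H) @ P @ (I - K @ H).T + (K * R) @ K.T
    return x_new, P_new

# usage: one update from a noisy bearing of a target near (1000, 500)
x = np.array([900.0, 600.0, 0.0, 0.0])
P = np.diag([1e4, 1e4, 25.0, 25.0])
z = np.arctan2(500.0, 1000.0) + 0.01        # true bearing plus a small error
x1, P1 = ekf_bearings_update(x, P, z, R=1e-4)
```

The linearisation step (the Jacobian `H`) is exactly where the EKF can fail in the hard scenarios the abstract mentions; the SRF avoids it by modifying the measurement model instead.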
Speaker: Terry Lyons
Title: Beyond Diffusion: Clark's 1966 Thesis and the Path to Rough Models in Data Science
Abstract: In his 1966 thesis, Clark approached stochastic systems from the perspective of physically observed signals with short correlation times. He did not begin by postulating Brownian motion. What emerges from his analysis is that convergence of the system dynamics depends not only on the limiting signal, but also on additional second-order structure inherited from the noise. In modern language, he identified the need for more than just the path, although he encoded this structure statistically rather than pathwise. This viewpoint feels very natural today, especially in data-driven settings where signals are observed rather than assumed, and where the higher-order structure captured by rough path theory is increasingly shaping the analysis of sequential data.
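A standard one-dimensional illustration of this second-order structure, stated here as background (it is the content of the Wong–Zakai theorem, closely related to the phenomenon the abstract describes):

```latex
% If $B^n$ are smooth approximations of Brownian motion $B$ (e.g. piecewise-linear
% interpolations), the ODE solutions
%   $dX^n_t = f(X^n_t)\,dt + g(X^n_t)\,dB^n_t$
% converge, as $B^n \to B$, not to the Ito equation with the same coefficients
% but to the Stratonovich one, i.e. an Ito equation with a second-order correction:
\[
  dX_t \;=\; \Big( f(X_t) + \tfrac{1}{2}\, g'(X_t)\, g(X_t) \Big)\, dt
           \;+\; g(X_t)\, dB_t .
\]
% The extra drift $\tfrac12 g' g$ is determined by the quadratic (co)variation of
% the approximating signals, not by their limiting path alone -- the "additional
% second-order structure inherited from the noise".
```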
Speaker: Nigel Newton
Title: Stochastic Calculus and Information Geometry in Nonequilibrium Statistical Mechanics
Abstract: The Itô calculus is an ideal tool for the study of statistical mechanical systems in continuous time. The talk will illustrate its use in the context of an electrical circuit containing Nyquist-Johnson resistors. Temperature differences within the circuit lead to a nonequilibrium stationary state, in which energy flows between two components. This is associated with entropy production and a flow of Shannon information called the directed information. The flows are connected by a mesoscopic variant of the Second Law of Thermodynamics. Regarding the donor of energy as a heat bath, and the recipient as a Maxwellian demon having access to partial observations, the directed information quantifies the maximum rate at which the demon can extract work from the heat bath. This is achieved under constraints on the circuit parameters, including an inequality constraint on temperatures and an equality constraint under which the demon implements a Kalman-Bucy filter. The talk will discuss time-reversal, innovations and the thermodynamic cost of optimal filtering. The rates of entropy production and directed information are naturally expressed in terms of quadratic variation processes on statistical manifolds, as measured in the Fisher-Rao Riemannian metric.
Based on joint work with: Henrik Sandberg (KTH), Jean-Charles Delvenne (Louvain-la-Neuve) and Sanjoy Mitter (MIT).
Speaker: Eugene Wong
Title: Conditional Expectation and Generative AI: Computing the Score Function
Abstract: Conditional expectation and machine learning can be used to deal with the same problem, namely, regression. As computational capability has advanced, machine learning has become the unchallenged champion for solving data-driven regression. Its scalability to large data vectors, ability to accommodate vast training sets, independence from mathematical models, and above all, a remarkable ability for generalization, have made it the foundation for the AI revolution that is sweeping the world economy. However, as the proliferation of data centers attests, machine learning has an insatiable appetite for computation, in both capacity and time.
Generative AI deals with the following problem: Given a collection of objects of the same type (e.g., images, texts), we want to generate new and interesting examples of the same type. The new examples should look like they belong to the initial collection, but not too much like any one of them. A popular approach is the diffusion model: a diffusion equation is used to produce a large set of data samples from the starting examples, and the data samples are then used in a machine learning computation to produce a second diffusion equation, which is used to generate the new examples. In this talk we propose a way to replace the machine learning operation by a direct estimation of conditional expectation. In particular, we try to identify the various features that make machine learning so effective and replicate them in the conditional expectation approach. Our goal is to match or exceed the efficacy of machine learning in generative AI, and with much greater efficiency.
Specifically, we present a formulation of the generative AI problem as one of data-driven estimation of conditional expectation, along with some early evidence of its efficacy and computational efficiency.
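The link between conditional expectation and the score that diffusion models learn can be made concrete. The sketch below is not the speaker's method; it is a minimal toy illustration, assuming Gaussian smoothing x_t = x_0 + σz, where Tweedie's formula gives the score as ∇ log p_t(x) = (E[x_0 | x_t = x] − x)/σ², so a direct nonparametric estimate of the conditional expectation (here Nadaraya–Watson kernel regression on arbitrary toy data) yields a score estimate with no trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

sigma = 1.0                        # noise level of the Gaussian smoothing
x0 = rng.standard_normal(200_000)  # toy "data" samples (here just N(0, 1))
xt = x0 + sigma * rng.standard_normal(x0.size)  # noised samples

def cond_exp(x, h=0.2):
    """Nadaraya-Watson estimate of E[x0 | xt = x] with bandwidth h."""
    w = np.exp(-0.5 * ((xt - x) / h) ** 2)
    return np.sum(w * x0) / np.sum(w)

def score_est(x):
    # Tweedie's formula: grad log p_t(x) = (E[x0 | xt = x] - x) / sigma^2
    return (cond_exp(x) - x) / sigma**2

# For x0 ~ N(0,1) the smoothed density is N(0, 1 + sigma^2), so the exact
# score is -x / (1 + sigma^2); the estimate should track it closely.
approx = score_est(0.5)
exact = -0.5 / (1.0 + sigma**2)
```

In real generative settings `x0` would be high-dimensional data and the kernel regression would need the efficiency tricks the talk alludes to; the toy case only shows that the score is literally a conditional expectation.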
Speaker: Malcolm Smith
Title: A Wiener spring theorem: the story of a collaboration with Martin Clark
Abstract: Numerical investigations carried out by my postdoc Stuart Swift in 2008 pointed towards a remarkable fact: that the mean power dissipated in a vehicle suspension was seemingly independent of suspension configuration and parameters. In due course Stuart and I managed to prove that, for a quarter-car vehicle model with a linear tyre spring, the mean power dissipated in the suspension is equal to kA/2, where k is the tyre spring constant and A is the white noise intensity for the vertical road velocity forcing. The proof made use of Cauchy's residue theorem to evaluate a frequency-domain formula for the mean power. A conversation with Martin at the Cancun CDC in December 2008 led to much rumination on his part, and eventually a beautiful proof of a more general version of the result using Ito calculus. The process that turned Martin's handwritten notes into a joint paper at the 2012 CDC in Maui will form part of the story in this talk and is almost as interesting as the proof itself.
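The kA/2 invariance is easy to check numerically. The sketch below (with arbitrary illustrative quarter-car parameters, not those from the paper) solves the stationary Lyapunov equation for a linear quarter-car model driven by white-noise road velocity and confirms that the mean damper power equals kA/2 regardless of the suspension spring and damper values.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def mean_suspension_power(ms, mu, ks, cs, kt, A):
    """Mean power dissipated in the suspension damper of a linear quarter-car
    model (sprung mass ms, unsprung mass mu, suspension spring ks, damper cs,
    tyre spring kt) driven by white-noise road velocity of intensity A.

    State: [suspension deflection, tyre deflection, sprung vel., unsprung vel.]
    """
    F = np.array([
        [0.0,     0.0,     1.0,    -1.0],
        [0.0,     0.0,     0.0,     1.0],
        [-ks/ms,  0.0,    -cs/ms,   cs/ms],
        [ ks/mu, -kt/mu,   cs/mu,  -cs/mu],
    ])
    B = np.array([[0.0], [-1.0], [0.0], [0.0]])  # road velocity hits tyre deflection
    Q = A * (B @ B.T)
    P = solve_continuous_lyapunov(F, -Q)         # stationary covariance: FP + PF^T = -Q
    # mean damper power = cs * E[(relative suspension velocity)^2]
    return cs * (P[2, 2] - 2.0 * P[2, 3] + P[3, 3])

kt, A = 150_000.0, 0.01
p1 = mean_suspension_power(ms=250.0, mu=40.0, ks=20_000.0, cs=1_500.0, kt=kt, A=A)
p2 = mean_suspension_power(ms=250.0, mu=40.0, ks=30_000.0, cs=4_000.0, kt=kt, A=A)
# both should equal kt * A / 2, independently of ks and cs
```

The identity drops out of the stationary energy balance: the Itô correction to the tyre-spring energy injects power kt·A/2, and in steady state the damper must dissipate exactly that.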
Speaker: Peter E. Caines
Title: Mean Field Control and Games on Large Networks
Abstract: Contemporary technological systems often have a network structure of both great scale and complexity; examples are provided by the internet, electrical power grids and air traffic systems. Furthermore, the natural world reveals a vast array of complex networks, including the human microbiome and the brain. All these networks support dynamical processes, often with feedback loops which are inherent, designed or a combination of both.
Results on Stochastic Control and Mean Field Games for large populations on large networks will be presented in terms of the limits of sparse and dense networks; these results include the existence and uniqueness of optima and Nash equilibria, together with their approximation to the source problems with finite populations on finite networks.
Speaker: Andrew Heunis
Title: Stochastic optimal control in mathematical finance
Abstract: Mathematical finance has, since its very inception, been much concerned with stochastic optimal control, and continues to present challenging control problems. These problems are typically of two varieties, namely utility maximisation and risk minimisation, the latter problem involving in essence the minimisation of a mean-square criterion. Standard methods of optimal control, such as dynamic programming and the maximum principle, do not apply very easily to these problems. On the other hand, the problems enjoy the very nice special properties of being convex and having a single-dimensional state space. These properties are key to the application of a variational method, due to Rockafellar and Moreau, for addressing general mathematical programming problems for the minimisation of a scalar convex function defined on an abstract (typically infinite-dimensional) vector space. We shall briefly outline the variational method, then illustrate application of the method to a succession of risk minimisation problems for which there are (a) no constraints, (b) control constraints only, and (c) a combination of control and almost-sure state constraints. The latter problem (c), which naturally leads to singular Lagrange multipliers, seems to present some particularly interesting challenges.
Speaker: Thomas Cass
Title: Solving Signature Kernels as Two-Parameter Rough Differential Equations
Abstract: Signature kernels provide a principled way to compare sequential or streamed data by embedding paths into a reproducing kernel Hilbert space via their Chen–Fliess series (or signature transform). This embedding enables kernel methods to operate directly on path-valued data, combining the expressive power of signature features with the scalability and universality of kernel methods.
We develop a two-parameter rough path framework for rough differential equations on rectangular and simplicial domains, motivated by signature and Schwinger–Dyson kernel equations. Working in spaces of jointly controlled rough paths, we introduce a robust notion of two-dimensional rough integration. Within this setting, the signature kernel and the Schwinger–Dyson kernel are shown to arise naturally as solutions of two-parameter rough differential equations, yielding well-posedness, stability, and principled numerical schemes with explicit complexity guarantees.
This is joint work with Andrea Iannucci, Dan Crisan and William F. Turner.
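For bounded-variation paths, the signature kernel is known from earlier work (Salvi, Cass, Foster, Lyons and Yang) to solve the Goursat PDE ∂²k/∂s∂t = ⟨ẋ(s), ẏ(t)⟩ k with k ≡ 1 on the axes, a special case of the two-parameter equations above. A minimal finite-difference sketch (an illustrative first-order scheme on toy linear paths, not the schemes from the talk):

```python
import numpy as np

def sig_kernel(dx, dy):
    """First-order finite-difference solve of the signature-kernel Goursat PDE
    d^2 k / ds dt = <x'(s), y'(t)> k,  with k = 1 on the axes.

    dx : (m, d) array of increments of path x;  dy : (n, d) increments of path y.
    """
    m, n = len(dx), len(dy)
    k = np.ones((m + 1, n + 1))
    inner = dx @ dy.T                    # <dx_i, dy_j> for every grid cell
    for i in range(m):
        for j in range(n):
            k[i + 1, j + 1] = (k[i + 1, j] + k[i, j + 1]
                               - k[i, j] * (1.0 - inner[i, j]))
    return k[m, n]

# toy check: for 1-D linear paths x(t) = a t, y(t) = b t on [0, 1], the exact
# signature kernel is sum_n (ab)^n / (n!)^2 = I_0(2 sqrt(ab)), a Bessel value
n = 500
a, b = 1.0, 1.0
dx = np.full((n, 1), a / n)
dy = np.full((n, 1), b / n)
val = sig_kernel(dx, dy)                 # should approach I_0(2) ~ 2.27959
```

The grid recursion is the discrete shadow of the two-parameter integral equation; the rough-path framework in the talk extends this picture beyond bounded variation.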
Speaker: Harry Zheng
Title: Duality Method for Multidimensional Nonsmooth Constrained Linear Convex Stochastic Control
Abstract: We discuss a general multidimensional linear convex stochastic control problem with a nondifferentiable objective function, control constraints, and random coefficients. We formulate an equivalent dual problem, prove the dual stochastic maximum principle and the relation of the optimal control, optimal state, and adjoint processes between primal and dual problems, and illustrate the usefulness of the dual approach with some examples. (Joint work with Engel Dela Vega)
Speaker: Yufei Zhang
Title: Deterministic Policy Gradient Methods in Continuous Time and Space
Abstract: The theory of continuous-time reinforcement learning (RL) has progressed rapidly in recent years. While the ultimate objective of RL is typically to learn deterministic control policies, most existing continuous-time RL methods rely on stochastic policies. Such approaches often require sampling actions at very high frequencies, and involve computationally expensive expectations over continuous action spaces, resulting in high-variance gradient estimates and slow convergence.
In this paper, we introduce deterministic policy gradient methods for continuous-time RL. We derive a continuous-time policy gradient formula expressed as the expected gradient of an advantage rate function and establish a martingale characterization for both the value function and the advantage rate. Building on this foundation, we propose a model-free continuous-time Deep Deterministic Policy Gradient (CT-DDPG) algorithm that enables stable learning for general reinforcement learning problems with continuous time-and-state. Numerical experiments show that CT-DDPG achieves superior stability and faster convergence compared to existing stochastic-policy methods.
Speaker: Mihalis Zervos
Title: A Dynamic Competitive Equilibrium Model of Irreversible Capacity Investment with Stochastic Demand and Heterogeneous Producers
Abstract: We formulate a continuous-time competitive equilibrium model of irreversible capacity investment in which a continuum of heterogeneous producers supplies a single non-durable good subject to exogenous stochastic demand. Each producer optimally adjusts both output and capacity over time in response to endogenous price signals, while investment decisions are irreversible. Market clearing holds continuously, with prices evolving endogenously to balance aggregate supply and demand through a constant-elasticity demand function driven by a stochastic base component. The model admits a mean-field interpretation, as each producer's decisions both influence and are influenced by the aggregate behaviour of all others. We show that the equilibrium price process can be expressed as a nonlinear functional of the exogenous base demand, leading to a three-dimensional singular stochastic control problem for each producer. We derive an explicit solution to the associated Hamilton-Jacobi-Bellman equation, including a closed-form characterisation of the free-boundary surface separating investment and waiting regions.
Speaker: Dan Crisan
Title: The identification of diffusions from imperfect observations: Martin's last big sum
Abstract: I will discuss the identification of a d-dimensional diffusion X when a running function h(X) of it is observed. A point-wise observation of the process (in other words, observing it in isolation) cannot identify X unless the function is injective. However, observing it on a small interval can be enough to determine X exactly. I will present results that expand on this idea; in particular, a property of fine total asymmetry of a twice continuously differentiable function h is introduced, which depends on the fine topology of potential theory and is both necessary and sufficient for X to be adapted to a natural right-continuous filtration generated by the observations. For real-analytic h the property reduces to simple asymmetry. A second result concerns the case where X is adapted to an augmented filtration generated by the observation process.
The talk is based on Martin's last paper:
J. M. C. Clark and D. Crisan, The identification of diffusions from imperfect observations, Probability Theory and Related Fields, 1–29, November 2025.