In mathematical modelling it has long been common practice to construct constitutive laws, e.g., the driving force of a dynamical system

$$\dot{x} = f(x),$$

by choosing a (well-motivated) functional form $f(x) = F(\theta, x)$ with free parameters $\theta$ and fitting these parameters to theoretical predictions and/or experimental observations. With the explosion of machine learning it has now become easy to fit general functions $f$ with $O(10^5)$ or more parameters. A common situation is that $x \in \mathbb{R}^d$ with $d$ large, and that $f$ is fitted to a *high-fidelity model* $\tilde{f}$ which is very costly to evaluate. In this setting, the problem of fitting $f \approx \tilde{f}$ becomes a classical high-dimensional approximation problem. Coarse-graining may simply mean reducing the computational cost, or, more ambitiously, one may seek to reduce the dimensionality of the system.
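To make the classical setting concrete, here is a minimal sketch of parameter fitting for a constitutive law; the basis choice and the toy "high-fidelity" force are illustrative assumptions, not taken from any specific model. The ansatz is linear in the parameters, $F(\theta, x) = \sum_k \theta_k \phi_k(x)$, fitted by least squares to samples of an expensive $\tilde f$:

```python
import numpy as np

# Toy stand-in for a "high-fidelity" driving force f~(x) (in practice,
# each evaluation of f~ would be very expensive).
def f_hifi(x):
    return np.sin(2 * x) - 0.5 * x

# Constitutive ansatz F(theta, x) = sum_k theta_k * phi_k(x),
# with a (hypothetical) polynomial feature basis phi_k(x) = x^k.
def features(x, degree=9):
    return np.vander(x, degree + 1, increasing=True)

# Fit theta by least squares to samples of f~.
x_train = np.linspace(-2.0, 2.0, 200)
theta, *_ = np.linalg.lstsq(features(x_train), f_hifi(x_train), rcond=None)

# The surrogate f(x) = F(theta, x) is now cheap to evaluate.
x_test = np.linspace(-2.0, 2.0, 50)
err = np.max(np.abs(features(x_test) @ theta - f_hifi(x_test)))
print(f"max surrogate error on test grid: {err:.2e}")
```

With $O(10^5)$ parameters the fitting step is the same in spirit, only the basis (or network) and the optimiser become more elaborate.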

The context in which these kinds of questions interest me is the following: we are given an electronic structure model for a potential energy surface (PES),

$$E^{\rm QM}\Big( \{ {\bf r}_j \}_{j = 1}^J \Big) = \mathcal{F}\Big( \{ {\bf r}_{j} \}_{j = 1}^J ; \{ \epsilon_i \}_i, \{ \psi_i \}_i \Big),$$

where the evaluation of $E^{\rm QM}$ involves the solution of a large-scale nonlinear eigenvalue problem for the energy levels $\epsilon_i$ occupied by the electrons $\psi_i$. The cost scales at least as $O(J^3)$, and depending on the level of theory it can be far higher, up to exponential in $J$ for the full Schrödinger equation.
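Where the $O(J^3)$ comes from can be sketched with a deliberately simple tight-binding-style toy energy; this is not the electronic structure model referred to above, and the exponential hopping and half-filling are hypothetical choices made only so the example runs. The cubic cost sits in the dense diagonalisation:

```python
import numpy as np

def toy_E_qm(R, n_occ=None):
    """Toy tight-binding-style energy. R: (J, 3) atomic positions.

    Builds a J x J Hamiltonian from pairwise distances and sums the
    lowest occupied energy levels. The eigensolve is the O(J^3) step.
    """
    J = len(R)
    # Pairwise distance matrix (isometry invariant by construction).
    D = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=-1)
    H = np.exp(-D)                # hypothetical exponentially decaying hopping
    np.fill_diagonal(H, 0.0)     # zero on-site energies
    eps = np.linalg.eigvalsh(H)  # O(J^3) dense symmetric eigensolve
    n_occ = n_occ if n_occ is not None else J // 2   # half filling
    return float(np.sum(eps[:n_occ]))

rng = np.random.default_rng(0)
R = rng.uniform(0.0, 5.0, size=(10, 3))
print("toy energy:", toy_E_qm(R))
```

Since the Hamiltonian depends only on interatomic distances, this toy energy is automatically invariant under translations, rotations, and (simultaneous) permutations of the atoms, which are exactly the symmetries the surrogate models below must reproduce.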

There has been tremendous progress in the computational physics and chemistry community in developing machine learning models for such potential energy surfaces, usually based on Gaussian process (GP) regression or artificial neural networks (ANNs), though even classical polynomials provide excellent models, e.g., moment tensor potentials (MTPs) or the atomic cluster expansion (ACE). The overarching strategy (with some variations) is to first write a model PES as

$$E\Big( \{ {\bf r}_j \}_{j = 1}^J \Big) = \sum_{i = 1}^J V\Big( \{ {\bf r}_j - {\bf r}_i \}_{j \neq i} \Big),$$

where $V = V( \{ {\bf r}_{ij} \} )$ is a site energy potential that must be invariant under isometries and permutations of like atoms. There are several challenges that make the modelling of $V$ a rich mathematical problem, ranging from the analysis of the quantum PES, to the optimal exploitation of symmetries in approximation and regression schemes, and finally to the fact that the data from which the potentials are fitted is always incomplete, which makes this a difficult inverse problem (even in the absence of noise).
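The decomposition and its invariances can be sketched with a deliberately crude choice of $V$, namely a Lennard-Jones-style pair potential (a stand-in only; real machine-learned site energies are many-body). Depending only on interatomic distances gives isometry invariance, and summing over neighbours gives permutation invariance:

```python
import numpy as np

def phi(r):
    # Lennard-Jones-style pair term (illustrative stand-in for a fitted V).
    return r ** -12 - r ** -6

def site_energy(r_ij):
    """Site energy V({r_j - r_i}). r_ij: (J-1, 3) relative neighbour positions."""
    d = np.linalg.norm(r_ij, axis=-1)   # distances only -> isometry invariant
    return 0.5 * np.sum(phi(d))         # sum over neighbours -> permutation
                                        # invariant; 0.5 avoids double counting

def total_energy(R):
    """Model PES E = sum_i V({r_j - r_i}_{j != i}). R: (J, 3) positions."""
    J = len(R)
    return sum(site_energy(np.delete(R, i, axis=0) - R[i]) for i in range(J))
```

For this pair potential the required invariances hold trivially by construction; the mathematical challenge discussed below is to retain them systematically for genuinely many-body forms of $V$.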

Some concrete topics that interest me at the moment:

- Construction of an efficient symmetric polynomial basis (incorporating permutation and isometry invariance), and its sparsification. (Approximation theory, optimisation, HPC) [84], [82]

- Construction and analysis of symmetric feature maps, e.g. for ANNs. [85]

- Approximation theory for high-dimensional symmetric functions. (Numerical analysis, approximation theory) (in preparation)

- Rigorous as well as experimental justification of the spatial decomposition of the PES into site energies. (Applied analysis, electronic structure theory) [80], [56], [58]

- More generally, finding low-rank features in the quantum PES that can be exploited in the approximation theory. (Applied analysis, electronic structure theory)

- Study of the associated inverse problem: the site energy $V$ is not a physical quantity but can only be fitted from global observations $E^{\rm QM} \approx E$. This makes fitting the parameters of $V$ an inverse problem, and it is unclear whether it is well-posed. (Applied analysis, inverse problems)

- Regularisation of the inverse problem, especially when the data does not fully cover the configuration space. (Numerical analysis, inverse problems) [82]

- Error propagation: how do approximation errors in the approximate PES $E$ propagate into errors in material or molecular properties, i.e., errors in quantities of interest? (Numerical analysis) (in prep.)
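As a small illustration of the symmetric feature maps mentioned in the topics above: power sums of transformed neighbour distances are invariant under permutations of the neighbours and under isometries, so they can serve as inputs to an ANN or a linear regression scheme. The specific features below are hypothetical, chosen only to demonstrate the invariances, and are not the bases developed in the cited work:

```python
import numpy as np

def symmetric_features(r_ij, kmax=4):
    """Permutation- and isometry-invariant features of a neighbourhood.

    r_ij: (N, 3) relative neighbour positions. Distances remove the
    isometry degrees of freedom; summing over neighbours (a power sum)
    removes the ordering, so any permutation gives identical features.
    """
    d = np.linalg.norm(r_ij, axis=-1)
    return np.array([np.sum(np.exp(-k * d)) for k in range(1, kmax + 1)])

rng = np.random.default_rng(1)
r = rng.normal(size=(6, 3))
print("features:", symmetric_features(r))
```

A model built on such features inherits the symmetries exactly, rather than having to learn them from data; the approximation-theoretic question is how much expressive power a given invariant feature set retains.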