Smoothed Moreau-Yosida Tensor Train Approximation of State-constrained Optimization Problems under Uncertainty

Akwum Onwunta, Sergey Dolgov, Harbir Antil

Research output: Contribution to journal › Article › peer-review

Abstract

We propose an algorithm to solve optimization problems constrained by ordinary or partial differential equations under uncertainty, with additional almost sure inequality constraints on the state variable. To alleviate the computational burden of high-dimensional random variables, we approximate all random fields by the tensor-train (TT) decomposition. To enable efficient TT approximation of the state constraints, the latter are handled using the Moreau-Yosida penalty, with an additional smoothing of the positive part (plus/ReLU) function by a softplus function. We propose a practical recipe for selecting the smoothing parameter as a function of the penalty parameter, and develop a second-order Newton-type method with a fast matrix-free action of the approximate Hessian to solve the smoothed Moreau-Yosida problem. This algorithm is tested on benchmark elliptic problems with random coefficients, optimization problems constrained by random elliptic variational inequalities, and a real-world epidemiological model with 20 random variables. These examples demonstrate mild (at most polynomial) scaling with respect to the dimension and regularization parameters.
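The core smoothing idea in the abstract can be illustrated with a minimal sketch. Below, the nonsmooth plus/ReLU function is replaced by a softplus function with smoothing parameter `eps`, and a simple discrete Moreau-Yosida penalty is formed for a state constraint of the form u ≤ ψ. The function names (`softplus`, `my_penalty`), the discrete vector form of the penalty, and the parameter choices are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def relu(x):
    """Plus function max(0, x): nonsmooth at x = 0."""
    return np.maximum(0.0, x)

def softplus(x, eps):
    """Smoothed plus function eps * log(1 + exp(x / eps)).
    Converges to relu(x) as eps -> 0; np.logaddexp avoids
    overflow for large x / eps."""
    return eps * np.logaddexp(0.0, x / eps)

def my_penalty(u, psi, gamma, eps):
    """Smoothed Moreau-Yosida penalty (illustrative discrete form)
    (1 / (2 * gamma)) * ||softplus(u - psi, eps)||^2
    for the pointwise state constraint u <= psi."""
    v = softplus(u - psi, eps)
    return 0.5 / gamma * np.sum(v**2)

# The smoothing error is largest at x = 0, where it equals eps * log(2),
# so coupling eps to the penalty parameter gamma (as the paper proposes)
# controls the overall approximation error.
x = np.linspace(-2.0, 2.0, 101)
for eps in (1e-1, 1e-2, 1e-3):
    err = np.max(np.abs(softplus(x, eps) - relu(x)))
    print(f"eps={eps:.0e}  max|softplus - relu| = {err:.2e}")
```

Because softplus is smooth, the penalized objective admits the second-order Newton-type methods mentioned in the abstract, whereas the raw plus function does not.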
Original language: English
Journal: Numerical Linear Algebra with Applications
Early online date: 2 Jul 2025
Publication status: E-pub ahead of print - 2 Jul 2025

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Funding

This work was supported by the Office of Naval Research, Engineering and Physical Sciences Research Council, Air Force Office of Scientific Research, and National Science Foundation.


