Two-layer neural networks with values in a Banach space

Research output: Contribution to journal › Article › peer-review



We study two-layer neural networks whose domain and range are Banach spaces with separable preduals. In addition, we assume that the image space is equipped with a partial order, i.e., it is a Riesz space. As the nonlinearity we choose the lattice operation of taking the positive part; in the case of $\mathbb R^d$-valued neural networks this corresponds to the ReLU activation function. We prove inverse and direct approximation theorems with Monte Carlo rates for a certain class of functions, extending existing results for the finite-dimensional case. In the second part of the paper, we study, from the viewpoint of regularisation theory, the problem of finding optimal representations of such functions via signed measures on a latent space from a finite number of noisy observations. We discuss regularity conditions known as source conditions and obtain convergence rates in a Bregman distance for the representing measure in the regime where the noise level goes to zero and the number of samples goes to infinity at appropriate rates.
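A minimal sketch of the objects described in the abstract, in illustrative notation not taken from the paper (the latent space $\Theta$, the affine maps $A_\theta$, $b_\theta$, and the representing measure $\mu$ are assumptions for exposition):

```latex
% Lattice positive part, used as the nonlinearity; it coincides with
% the componentwise ReLU when the range is \mathbb{R}^d:
\[
  x_+ \;=\; x \vee 0 .
\]
% A two-layer network represented by a signed measure \mu on a latent
% parameter space \Theta (illustrative notation):
\[
  f(x) \;=\; \int_\Theta \bigl(A_\theta x + b_\theta\bigr)_+ \,\mathrm{d}\mu(\theta),
\]
% a finite network with n hidden units corresponds to \mu being a sum
% of n weighted point masses, and Monte Carlo approximation rates in n
% arise from sampling \theta from (a normalisation of) \mu.
```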
Original language: English
Pages (from-to): 6358-6389
Number of pages: 32
Journal: SIAM Journal on Mathematical Analysis
Issue number: 6
Early online date: 8 Dec 2022
Publication status: Published - 31 Dec 2022


  • cs.LG
  • cs.NA
  • math.FA
  • math.NA
  • math.PR
  • 68Q32, 68T07, 46E40, 41A65, 65J22

