Two-layer neural networks with values in a Banach space

Research output: Working paper / Preprint

Abstract

We study two-layer neural networks whose domain and range are Banach spaces with separable preduals. In addition, we assume that the image space is equipped with a partial order, i.e. it is a Riesz space. As the nonlinearity we choose the lattice operation of taking the positive part; in the case of $\mathbb R^d$-valued neural networks this corresponds to the ReLU activation function. We prove inverse and direct approximation theorems with Monte-Carlo rates for a certain class of functions, extending existing results for the finite-dimensional case. In the second part of the paper, we study, from the viewpoint of regularisation theory, the problem of finding optimal representations of such functions via signed measures on a latent space from a finite number of noisy observations. We discuss regularity conditions known as source conditions and obtain convergence rates in a Bregman distance for the representing measure in the regime where both the noise level goes to zero and the number of samples goes to infinity at appropriate rates.
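
For orientation only, and in the finite-dimensional case with standard notation not taken from the paper itself (the symbols $\Theta$, $\mu$, $f_N$, $f_\mu$ below are illustrative): a width-$N$ two-layer ReLU network and its infinite-width representation via a signed measure $\mu$ on a latent parameter space $\Theta$ may be written as
$$
f_N(x) \;=\; \frac{1}{N}\sum_{i=1}^{N} c_i\,\bigl(\langle a_i, x\rangle + b_i\bigr)_+,
\qquad
f_\mu(x) \;=\; \int_{\Theta} \bigl(\langle a, x\rangle + b\bigr)_+ \,\mathrm{d}\mu(a,b),
$$
where $(t)_+ = \max\{t,0\}$ is the positive part (the ReLU activation) and $f_N$ corresponds to an atomic choice of $\mu$. A "Monte-Carlo rate" in a direct approximation theorem refers to approximating $f_\mu$ by $f_N$ with an error of order $N^{-1/2}$. In the Banach-space-valued setting studied in the paper, the inner product and the positive part are replaced by a duality pairing and the lattice operation of the Riesz space; the precise formulation is the one given in the paper, not the sketch above.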
Original language: English
Publication status: Published - 5 May 2021

Keywords

  • cs.LG
  • cs.NA
  • math.FA
  • math.NA
  • math.PR
  • 68Q32, 68T07, 46E40, 41A65, 65J22

