# Approximation by superpositions of a sigmoidal function

```bibtex
@article{Cybenko1989ApproximationBS,
  title   = {Approximation by superpositions of a sigmoidal function},
  author  = {George V. Cybenko},
  journal = {Mathematics of Control, Signals and Systems},
  year    = {1989},
  volume  = {2},
  pages   = {303--314}
}
```

In this paper we demonstrate that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real variables with support in the unit hypercube; only mild conditions are imposed on the univariate function. Our results settle an open question about representability in the class of single hidden layer neural networks. In particular, we show that arbitrary decision regions can be arbitrarily well…
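Cybenko's density proof is non-constructive, but the form of the approximants, G(x) = Σ_j α_j σ(y_j·x + θ_j), is easy to illustrate numerically. The sketch below is an illustration under assumptions of our own (random inner parameters, outer weights by least squares), not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma(z):
    """Logistic sigmoid, a valid 'sigmoidal' function in Cybenko's sense."""
    return 1.0 / (1.0 + np.exp(-z))

def fit_superposition(f, n_units=50, n_samples=200):
    """Fit G(x) = sum_j alpha_j * sigma(w_j * x + theta_j) to f on [0, 1]."""
    x = np.linspace(0.0, 1.0, n_samples)
    s = rng.uniform(5.0, 30.0, n_units)    # slopes (hypothetical choice)
    c = rng.uniform(-0.2, 1.2, n_units)    # transition centers over the domain
    # sigma(s * (x - c)) equals sigma(w * x + theta) with w = s, theta = -s * c
    features = sigma(np.outer(x, s) - s * c)
    alpha, *_ = np.linalg.lstsq(features, f(x), rcond=None)
    return x, features @ alpha

target = lambda x: np.sin(2.0 * np.pi * x)
x, approx = fit_superposition(target)
max_err = np.max(np.abs(approx - target(x)))
print(f"max grid error: {max_err:.2e}")
```

With 50 hidden units the fitted superposition tracks the target closely on the sample grid, which is exactly the kind of uniform closeness the theorem guarantees is always attainable.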


#### 5,423 Citations

A Single Hidden Layer Feedforward Network with Only One Neuron in the Hidden Layer Can Approximate Any Univariate Function

- Mathematics, Computer Science
- Neural Computation
- 2016

This work algorithmically constructs a smooth, sigmoidal, almost monotone activation function that approximates an arbitrary continuous function to any degree of accuracy.

Approximation properties of a multilayered feedforward artificial neural network

- Mathematics, Computer Science
- Adv. Comput. Math.
- 1993

We prove that an artificial neural network with multiple hidden layers and a kth-order sigmoidal response function can be used to approximate any continuous function on any compact subset of a…

Approximation of functions on a compact set by finite sums of a sigmoid function without scaling

- Mathematics, Computer Science
- Neural Networks
- 1991

This paper gives an existence proof that a linear combination of unscaled, shifted rotations of any sigmoid function can uniformly approximate an arbitrary continuous function on a compact set in R^d.

Neural Networks for Optimal Approximation of Smooth and Analytic Functions

- Computer Science, Mathematics
- Neural Computation
- 1996

We prove that neural networks with a single hidden layer are capable of providing an optimal order of approximation for functions assumed to possess a given number of derivatives, if the activation…

Universal approximation properties of shallow quadratic neural networks

- Computer Science, Mathematics
- ArXiv
- 2021

The efficiency of this new approach is investigated numerically on simple test cases, showing that it requires fewer neurons than standard affine linear neural networks.

Approximation of polynomials by a neural network having rather a small number of units

- 2007

We remark that most neural approximation theorems may not reflect the way approximation is realized by actual neural networks, and propose an idea that neural networks generally realize…

Universal Approximation Using Radial-Basis-Function Networks

- Mathematics, Computer Science
- Neural Computation
- 1991

It is proved that RBF networks having one hidden layer are capable of universal approximation, and that a certain class of RBF networks with the same smoothing factor in each kernel node is broad enough for universal approximation.
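A minimal numerical sketch of the setting the cited result covers: one hidden layer of Gaussian kernels sharing a single smoothing factor h, with outer weights fit by least squares. The cited paper proves density of this class; it does not prescribe this fitting procedure, which is our own illustrative choice:

```python
import numpy as np

# RBF network with one hidden layer and one shared smoothing factor h:
#   G(x) = sum_j alpha_j * exp(-(x - c_j)^2 / (2 * h^2))
# Centers on a grid, common width h; outer weights via least squares.

def fit_rbf(f, n_centers=30, h=0.05, n_samples=200):
    x = np.linspace(0.0, 1.0, n_samples)
    c = np.linspace(0.0, 1.0, n_centers)
    phi = np.exp(-(x[:, None] - c[None, :]) ** 2 / (2.0 * h ** 2))
    alpha, *_ = np.linalg.lstsq(phi, f(x), rcond=None)
    return x, phi @ alpha

target = lambda x: np.abs(x - 0.5)   # continuous but not smooth at x = 0.5
x, approx = fit_rbf(target)
max_err = np.max(np.abs(approx - target(x)))
print(f"max grid error: {max_err:.2e}")
```

The target's kink at x = 0.5 is deliberately non-smooth; a shared-width Gaussian basis still approximates it uniformly well, consistent with the universality claim.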

General approximation theorem on feedforward networks

- Mathematics
- Proceedings of ICICS, 1997 International Conference on Information, Communications and Signal Processing
- 1997

We show that standard feedforward neural networks with as few as a single hidden layer and arbitrary bounded nonlinear (continuous or noncontinuous) activation functions which have two unequal limits…

Approximations of continuous functionals by neural networks with application to dynamic systems

- Mathematics, Computer Science
- IEEE Trans. Neural Networks
- 1993

The paper gives several strong results on neural network representation in explicit form, significantly extending earlier work in which theorems on approximating continuous functions defined on a finite-dimensional real space by neural networks with one hidden layer were given.

#### References

Showing 1–10 of 26 references

Constructive approximations for neural networks by sigmoidal functions

- Mathematics
- 1990

A constructive algorithm for uniformly approximating real continuous mappings by linear combinations of bounded sigmoidal functions is given. G. Cybenko (1989) has demonstrated the existence of…

On the approximate realization of continuous mappings by neural networks

- Mathematics, Computer Science
- Neural Networks
- 1989

It is proved that any continuous mapping can be approximately realized by Rumelhart-Hinton-Williams' multilayer neural networks with at least one hidden layer whose output functions are sigmoid functions.

Multilayer feedforward networks are universal approximators

- Mathematics, Computer Science
- Neural Networks
- 1989

This paper rigorously establishes that standard multilayer feedforward networks with as few as one hidden layer using arbitrary squashing functions are capable of approximating any Borel…

Construction of neural nets using the radon transform

- Mathematics
- International 1989 Joint Conference on Neural Networks
- 1989

The authors present a method for constructing a feedforward neural net implementing an arbitrarily good approximation to any L^2 function over (-1, 1)^n. The net uses n input nodes, a…

On Nonlinear Functions of Linear Combinations

- Mathematics
- 1984

Projection pursuit algorithms approximate a function of $p$ variables by a sum of nonlinear functions of linear combinations:

$$f\left( x_1, \cdots, x_p \right) \doteq \sum_{i=1}^{n} \ldots \tag{1}$$
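To make the ridge-function form concrete, the toy sketch below recovers a single ridge term g(a·x). It is illustrative only, and not the cited paper's algorithm: the target is itself one ridge function, the direction is found by grid search over angles, and g is fit as a quadratic by least squares:

```python
import numpy as np

# Projection pursuit approximates f(x1, ..., xp) by a sum of ridge functions
# g_i(a_i . x). Toy case: f(x) = (a0 . x)^2 is exactly one ridge function,
# so searching directions and fitting a quadratic g recovers it.

rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=(500, 2))
theta0 = np.pi / 4                          # true direction, 45 degrees
a0 = np.array([np.cos(theta0), np.sin(theta0)])
y = (X @ a0) ** 2

best = (np.inf, None)
for theta in np.linspace(0.0, np.pi, 181):  # 1-degree grid includes 45 deg
    a = np.array([np.cos(theta), np.sin(theta)])
    z = X @ a
    coef = np.polyfit(z, y, deg=2)          # fit g as a quadratic in z
    resid = np.max(np.abs(np.polyval(coef, z) - y))
    if resid < best[0]:
        best = (resid, theta)

resid, theta_hat = best
print(f"recovered angle: {np.degrees(theta_hat):.1f} deg, residual: {resid:.2e}")
```

Since the true angle lies exactly on the search grid, the best direction fits the data to near machine precision; real projection pursuit instead refines directions greedily and smooths g nonparametrically.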

Classification capabilities of two-layer neural nets

- Computer Science
- International Conference on Acoustics, Speech, and Signal Processing,
- 1989

The authors show that two-layer nets are capable of forming disconnected decision regions as well and derive an expression for the number of cells in the input space that are to be grouped together to form the decision regions.

What Size Net Gives Valid Generalization?

- Mathematics, Computer Science
- Neural Computation
- 1989

It is shown that if $m \ge O\!\left(\frac{W}{\epsilon}\log\frac{N}{\epsilon}\right)$ random examples can be loaded on a feedforward network of linear threshold functions with $N$ nodes and $W$ weights, so that at least a fraction $1 - \epsilon/2$ of the examples are correctly classified, then one has confidence approaching certainty that the network will correctly classify a fraction $1 - \epsilon$ of future test examples drawn from the same distribution.

An introduction to computing with neural nets

- Computer Science
- IEEE ASSP Magazine
- 1987

This paper provides an introduction to the field of artificial neural nets by reviewing six important neural net models that can be used for pattern classification and exploring how some existing classification and clustering algorithms can be performed using simple neuron-like components.

Neural Net and Traditional Classifiers

- Computer Science
- NIPS
- 1987

It is demonstrated that two-layer perceptron classifiers trained with back propagation can form both convex and disjoint decision regions.

On the Representation of Continuous Functions of Several Variables as Superpositions of Continuous Functions of one Variable and Addition

- Mathematics
- 1991

The aim of this paper is to present a brief proof of the following theorem: Theorem. For any integer $n \ge 2$ there are continuous real functions $\psi_{pq}(x)$ on the closed unit interval $E^1 = [0, 1]$ such…
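The result sketched above is Kolmogorov's superposition theorem. Since this snippet is truncated, the representation it builds toward is given here in its commonly cited form (index conventions may differ slightly from this paper's exact wording):

```latex
% Kolmogorov's superposition theorem (standard statement):
% every continuous f on [0,1]^n admits the exact representation
f(x_1, \ldots, x_n) \;=\; \sum_{q=0}^{2n} \chi_q\!\left( \sum_{p=1}^{n} \psi_{pq}(x_p) \right),
% where the inner functions \psi_{pq} are continuous, monotone, and
% independent of f, and only the outer functions \chi_q depend on f.
```

Cybenko's paper cites this line of work to contrast exact finite representation with the density (approximation) results his theorem provides.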