Universal approximation theorem
Feed-forward neural network with one hidden layer can approximate continuous functions
In the mathematical theory of artificial neural networks, universal approximation theorems are theorems[1][2] of the following form: given a family of neural networks, for each function $f$ from a certain function space, there exists a sequence of neural networks $\phi_1, \phi_2, \dots$ from the family such that $\phi_n \to f$ according to some criterion. That is, the family of neural networks is dense in the function space.
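Written out in symbols, the density statement takes the following form (a minimal sketch in generic notation; the family $\mathcal{N}$, function space $\mathcal{F}$, and metric $d$ are placeholders chosen here for illustration, not notation from the article):

```latex
% Density of the network family \mathcal{N} in the function space \mathcal{F}:
% every target f can be approximated arbitrarily well by some network \phi,
% where d is the metric (e.g. the sup-norm on a compact set) that serves as
% the criterion of closeness.
\forall f \in \mathcal{F},\ \forall \varepsilon > 0,\ \exists \phi \in \mathcal{N} :
\quad d(f, \phi) < \varepsilon
```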
The most popular version states that feedforward networks with non-polynomial activation functions are dense in the space of continuous functions between two Euclidean spaces, with respect to the compact convergence topology.
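As an informal numerical illustration of this version (a sketch, not a construction from the article), the following Python snippet fits one-hidden-layer networks with a non-polynomial activation (tanh) to a continuous target function on a compact interval. The hidden-layer weights are drawn at random and only the output weights are solved by least squares, which is already enough to watch the sup-norm error shrink as the width grows; the specific target, weight scales, and widths are arbitrary choices for the demonstration.

```python
# Sketch: approximating a continuous function on a compact set with a
# one-hidden-layer network using a non-polynomial activation (tanh).
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Continuous function to approximate on the compact set [0, 1].
    return np.sin(2 * np.pi * x)

x = np.linspace(0.0, 1.0, 500)
y = target(x)

for width in (5, 20, 100):
    # Random hidden-layer parameters w_i, b_i (fixed, not trained).
    w = rng.normal(scale=10.0, size=width)
    b = rng.uniform(-10.0, 10.0, size=width)
    # Hidden activations tanh(w_i * x + b_i); tanh is non-polynomial.
    H = np.tanh(np.outer(x, w) + b)          # shape (500, width)
    # Output weights c_i chosen by least squares.
    c, *_ = np.linalg.lstsq(H, y, rcond=None)
    err = np.max(np.abs(H @ c - y))          # sup-norm error on the grid
    print(f"width={width:4d}  max |f - network| = {err:.4f}")
```

The reported error should decrease as the width grows, consistent with the density claim, although the theorem itself guarantees neither this particular construction nor any rate of convergence.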
Universal approximation theorems are existence theorems: they simply state that such a sequence exists, and do not provide any way to actually find it. Nor do they guarantee that any particular method, such as backpropagation, will actually find such a sequence. Any method for searching the space of neural networks, including backpropagation, may or may not find a converging sequence (for example, backpropagation may get stuck in a local optimum).
Universal approximation theorems are limit theorems: they state that for any target function $f$ and any criterion of closeness $\varepsilon > 0$, if there are enough neurons in a neural network, then there exists a neural network with that many neurons that approximates $f$ to within $\varepsilon$. There is no guarantee that any particular finite size, say 10,000 neurons, is enough.