OU Portal
Publication Activity
Record type:
conference proceedings paper (D)
Home Department:
Ústav pro výzkum a aplikace fuzzy modelování (94410)
Title:
Remarks on the Universal Approximation Property of Feedforward Neural Networks
Citation
Kupka, J., Števuliáková, P. a Alijani, Z. Remarks on the Universal Approximation Property of Feedforward Neural Networks.
In:
25th Conference ITAT: 25th Conference Information Technologies – Applications and Theory 2025-09-26 Telgárt.
s. 186-192. ISSN 1613-0073.
Subtitle:
Publication year:
2025
Field of study:
Number of pages:
7
Page from:
186
Page to:
192
Form of publication:
Electronic version
ISBN code:
not stated
ISSN code:
1613-0073
Proceedings title:
25th Conference Information Technologies – Applications and Theory
Proceedings:
International
Publisher name:
not stated
Place of publishing:
not stated
Country of Publication:
Proceedings published abroad
Conference name:
25th Conference ITAT
Conference venue:
Telgárt
Conference start date:
2025-09-26
Event type by nationality of participants:
European event
WoS code:
EID:
Key words in English:
Universal Approximation Theorem; Neural Network; Activation Function
Annotation in original language:
This paper presents a structured overview and novel insights into the universal approximation property of feedforward neural networks. We categorize existing results based on the characteristics of activation functions — ranging from strictly monotonic to weakly monotonic and continuous almost everywhere — and examine their implications under architectural constraints such as bounded depth and width. Building on classical results by Cybenko [1], Hornik [2], and Maiorov [3], we introduce new activation functions that enable even simpler neural network architectures to retain universal approximation capabilities. Notably, we demonstrate that single-layer networks with only two neurons and fixed weights can approximate any continuous univariate function, and that two-layer networks can extend this capability to multivariate functions. These findings refine the known lower bounds of neural network complexity and offer constructive approaches that preserve strict monotonicity, improving upon prior work that relied on relaxed monotonicity conditions. Our results contribute to the theoretical foundation of neural networks and open pathways for designing minimal yet expressive architectures.
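For context, the classical result of Cybenko that the annotation builds on is the density of finite sums of sigmoidal ridge functions in the space of continuous functions on the unit cube; a standard textbook statement (not quoted from the paper itself) reads: for every f in C([0,1]^n), every continuous sigmoidal activation sigma, and every epsilon > 0, there exist N, coefficients alpha_j, theta_j in R and weights w_j in R^n such that

\[
\sup_{x \in [0,1]^{n}} \Bigl| f(x) - \sum_{j=1}^{N} \alpha_{j}\, \sigma\bigl(w_{j}^{\top} x + \theta_{j}\bigr) \Bigr| < \varepsilon .
\]

The paper's contribution, per the annotation, is to obtain such uniform approximation with far more restrictive architectures (fixed weights, two neurons in the univariate case) by constructing suitable activation functions.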
References
Reference
R01: