Publication Activity
Record type:
chapter in a scholarly book (C)
Home Department:
Ústav pro výzkum a aplikace fuzzy modelování (94410)
Title:
Randomness Versus Backpropagation in Machine Learning
Citation:
Perfiljeva, I., Artiemjew, P. and Niemczynowicz, A. Randomness Versus Backpropagation in Machine Learning.
In:
Leszek Rutkowski, Rafał Scherer, Marcin Korytkowski, Witold Pedrycz, Ryszard Tadeusiewicz, Jacek M. Zurada (eds.).
Artificial Intelligence and Soft Computing: 23rd International Conference, ICAISC 2024, Zakopane, Poland, June 16–20, 2024, Proceedings, Part I.
1st ed. Cham: Springer Cham, 2025, pp. 222-233. Lecture Notes in Computer Science. ISBN 978-3-031-84352-5.
Subtitle:
Publication year:
2025
Field:
Form of publication:
Printed version
ISBN code:
978-3-031-84352-5
Book title in original language:
Artificial Intelligence and Soft Computing: 23rd International Conference, ICAISC 2024, Zakopane, Poland, June 16–20, 2024, Proceedings, Part I
Title of the edition and volume number:
Lecture Notes in Computer Science
Place of publishing:
Cham
Publisher name:
Springer Cham
Issue reference (issue number):
1
Published:
abroad
Author of the source document:
Leszek Rutkowski, Rafał Scherer, Marcin Korytkowski, Witold Pedrycz, Ryszard Tadeusiewicz, Jacek M. Zurada (eds.)
Number of pages:
12
Book page count:
392
Page from:
222
Page to:
233
Book print run:
EID:
Key words in English:
Extreme Learning Machine; Single Layer Feedforward Neural Network; Activation function
Annotation in original language:
In neural network theory, we analyze two strategies for learning weights: backpropagation and random selection. The former is common in ANNs with one or more hidden layers, while the latter is becoming popular in ANNs with exactly one hidden layer and weights chosen based on the "extreme learning machine" (ELM) paradigm proposed in [Huang06]. We show that despite the empirical success of ELM, its theoretical platform, proposed in [Huang06], has no sound mathematical basis. We demonstrate a dataset on which neither the ELM training strategy nor backpropagation can obtain satisfactory accuracy.
Annotation in English:
In neural network theory, we analyze two strategies for learning weights: backpropagation and random selection. The former is common in ANNs with one or more hidden layers, while the latter is becoming popular in ANNs with exactly one hidden layer and weights chosen based on the "extreme learning machine" (ELM) paradigm proposed in [Huang06]. We show that despite the empirical success of ELM, its theoretical platform, proposed in [Huang06], has no sound mathematical basis. We demonstrate a dataset on which neither the ELM training strategy nor backpropagation can obtain satisfactory accuracy.
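The annotation contrasts backpropagation with the ELM strategy of drawing the hidden-layer weights at random and fitting only the output layer. Below is a minimal sketch of that random-weights strategy, assuming a single-hidden-layer network with a sigmoid activation; the layer size and toy data are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal ELM sketch: random hidden weights, least-squares output weights.
# Hypothetical sizes and toy data; not the authors' experiment.
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=64):
    """Draw hidden-layer weights at random, then solve the output
    weights by least squares; no backpropagation involved."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input-to-hidden weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid hidden activations
    beta = np.linalg.pinv(H) @ Y                     # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy regression problem (illustrative only).
X = rng.standard_normal((200, 3))
Y = np.sin(X.sum(axis=1, keepdims=True))
W, b, beta = elm_fit(X, Y)
mse = float(np.mean((elm_predict(X, W, b, beta) - Y) ** 2))
print(f"training MSE: {mse:.4f}")
```

The single least-squares solve is what makes ELM training fast; the annotation's claim is that this empirical convenience does not rest on a sound mathematical foundation.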
References
Reference
R01: [Huang06] Huang, G.-B., Zhu, Q.-Y., Siew, C.-K.: Extreme learning machine: theory and applications. Neurocomputing 70(1–3), 489–501 (2006).