Publication Activity
Record type:
chapter in a scholarly book (C)
Home department:
Institute for Research and Applications of Fuzzy Modeling (94410)
Title:
Randomness Versus Backpropagation in Machine Learning
Citation
Perfiljeva, I., Artiemjew, P. and Niemczynowicz, A. Randomness Versus Backpropagation in Machine Learning. In: Leszek Rutkowski, Rafał Scherer, Marcin Korytkowski, Witold Pedrycz, Ryszard Tadeusiewicz, Jacek M. Zurada (eds.). Artificial Intelligence and Soft Computing: 23rd International Conference, ICAISC 2024, Zakopane, Poland, June 16–20, 2024, Proceedings, Part I. 1st ed. Cham: Springer Cham, 2025, pp. 222-233. Lecture Notes in Computer Science. ISBN 978-3-031-84352-5.
Subtitle:
Year of publication:
2025
Field:
Form of publication:
Printed version
ISBN:
978-3-031-84352-5
Book title in the original language:
Artificial Intelligence and Soft Computing: 23rd International Conference, ICAISC 2024, Zakopane, Poland, June 16–20, 2024, Proceedings, Part I
Series name and volume number:
Lecture Notes in Computer Science
Place of publication:
Cham
Publisher:
Springer Cham
Edition (edition number):
1
Published:
abroad
Author of the source document:
Leszek Rutkowski, Rafał Scherer, Marcin Korytkowski, Witold Pedrycz, Ryszard Tadeusiewicz, Jacek M. Zurada (eds.)
Number of pages:
12
Number of pages in the book:
392
Page from:
222
Page to:
233
Number of book copies:
EID:
Keywords in English:
Extreme Learning Machine; Single Layer Feedforward Neural Network; Activation function
Description in the original language:
In neural network theory, we analyze two strategies for learning weights: backpropagation and random selection. The former is common in ANNs with one or more hidden layers, while the latter is becoming popular in ANNs with exactly one hidden layer and weights chosen based on the "extreme learning machine" (ELM) paradigm proposed in [Huang06]. We show that despite the empirical success of ELM, its theoretical platform, proposed in [Huang06], has no sound mathematical basis. We demonstrate a dataset on which neither the ELM training strategy nor backpropagation obtains satisfactory accuracy.
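For orientation, the ELM scheme the abstract contrasts with backpropagation works as follows: input-to-hidden weights are drawn at random and left fixed, and only the output weights are fitted, via a least-squares solve. Below is a minimal Python sketch of that scheme; the toy regression data, network width, and sigmoid activation are assumptions chosen for illustration, not the paper's experimental setup.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: n samples, d input features, a smooth target.
n, d, hidden = 200, 3, 50
X = rng.normal(size=(n, d))
y = np.sin(X.sum(axis=1))

# 1. Random input-to-hidden weights and biases ("random selection");
#    these are never trained.
W = rng.normal(size=(d, hidden))
b = rng.normal(size=hidden)

# 2. Hidden-layer activations with a sigmoid activation function.
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))

# 3. Output weights as the least-squares solution beta = pinv(H) @ y.
beta = np.linalg.pinv(H) @ y

# Fit on the training data.
y_hat = H @ beta
print("training MSE:", np.mean((y - y_hat) ** 2))

On data like this toy target the fit is typically good; the abstract's point is that the paradigm's theoretical justification is unsound in general, and that datasets exist on which neither this random-weights solve nor backpropagation reaches satisfactory accuracy.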
List of citations
Citation
R01: