OU Portal
Publication Activity
Record type:
conference proceedings paper (D)
Home department:
Department of Informatics and Computers (31400)
Title:
Effective black box adversarial attack with handcrafted kernels
Citation
Dvořáček, P., Števuliáková, P. and Hurtík, P. Effective black box adversarial attack with handcrafted kernels.
In:
IWANN: Advances in Computational Intelligence. IWANN 2023. Lecture Notes in Computer Science, vol 14135, 2023-06-19, Ponta Delgada, Portugal.
Springer Cham, 2023, pp. 169-180. ISBN 978-303143077-0.
Subtitle:
Year of publication:
2023
Field:
Informatics
Number of pages:
12
Page from:
169
Page to:
180
Form of publication:
Electronic version
ISBN:
978-303143077-0
ISSN:
0302-9743
Proceedings title:
Advances in Computational Intelligence. IWANN 2023. Lecture Notes in Computer Science, vol 14135
Proceedings:
International
Publisher:
Springer Cham
Place of publication:
not stated
Country of publication:
Conference name:
IWANN
Conference venue:
Ponta Delgada, Portugal
Conference start date:
Event type by nationality of participants:
Worldwide event
UT WoS code:
001155317100014
EID:
2-s2.0-85174494876
Keywords in English:
Black box, Adversarial attack, Handcrafted kernel
Description in original language:
We propose a new, simple framework for crafting adversarial examples for black box attacks. The idea is to simulate the substitution model with a non-trainable model composed of just one layer of handcrafted convolutional kernels, and then train the generator neural network to maximize the distance between the outputs for the original and the generated adversarial image. We show that fooling the prediction of the first layer causes the whole network to be fooled and decreases its accuracy on adversarial inputs. Moreover, we do not train the neural network to obtain the first convolutional layer kernels; instead, we create them using the technique of F-transform. Therefore, our method is highly time- and resource-efficient.
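The core of the framework described above can be sketched in a few lines: a fixed first layer built from a triangular F-transform basis kernel, and a distance objective between its responses on the original and adversarial image that the attack's generator would maximize. This is a minimal NumPy illustration under assumed kernel shapes and a Euclidean distance; the actual kernels, generator architecture, and loss used in the paper may differ.

```python
import numpy as np

def ftransform_kernel(size=3):
    # Handcrafted kernel from a 1-D triangular F-transform basis function
    # (assumed shape; the paper's exact kernel bank may differ).
    half = size // 2
    tri = 1.0 - np.abs(np.arange(size) - half) / (half + 1)
    k = np.outer(tri, tri)
    return k / k.sum()  # normalize so responses stay in the image's scale

def conv2d(img, kernel):
    # Naive 'valid' 2-D correlation, standing in for the fixed conv layer.
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def substitute_distance(orig, adv, kernels):
    # Distance between the non-trainable layer's responses on the original
    # and adversarial image; the generator is trained to maximize this.
    return sum(np.linalg.norm(conv2d(orig, k) - conv2d(adv, k))
               for k in kernels)
```

A generator that perturbs the input so as to increase `substitute_distance` would, per the paper's claim, fool the first layer of the target network and thereby degrade its overall accuracy, without ever querying the target's gradients.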
List of citations
Citation
R01:
RIV/61988987:17310/23:A2402L6D