OU Portal
Publication Activity
Record type:
conference proceedings paper (D)
Home Department:
Department of Informatics and Computers (31400)
Title:
Effective black box adversarial attack with handcrafted kernels
Citation:
Dvořáček, P., Števuliáková, P. and Hurtík, P. Effective black box adversarial attack with handcrafted kernels.
In:
IWANN: Advances in Computational Intelligence. IWANN 2023. Lecture Notes in Computer Science, vol. 14135. 2023-06-19, Ponta Delgada, Portugal.
Springer Cham, 2023, pp. 169-180. ISBN 978-303143077-0.
Subtitle:
Publication year:
2023
Field:
Informatics
Number of pages:
12
Page from:
169
Page to:
180
Form of publication:
Electronic version
ISBN code:
978-303143077-0
ISSN code:
0302-9743
Proceedings title:
Advances in Computational Intelligence. IWANN 2023. Lecture Notes in Computer Science, vol 14135
Proceedings:
International
Publisher name:
Springer Cham
Place of publishing:
not specified
Country of Publication:
Conference name:
IWANN
Conference venue:
Ponta Delgada, Portugal
Conference start date:
Event type by participant
nationality:
Worldwide event
WoS code:
001155317100014
EID:
2-s2.0-85174494876
Key words in English:
Black box, Adversarial attack, Handcrafted kernel
Annotation in original language:
We propose a new, simple framework for crafting adversarial examples for black box attacks. The idea is to simulate the substitute model with a non-trainable model composed of just one layer of handcrafted convolutional kernels, and then to train a generator neural network to maximize the distance between the outputs for the original image and the generated adversarial image. We show that fooling the prediction of the first layer causes the whole network to be fooled and decreases its accuracy on adversarial inputs. Moreover, we do not train a neural network to obtain the first-layer convolutional kernels; instead, we create them using the F-transform technique. Our method is therefore very time- and resource-efficient.
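The core objective described in the annotation can be sketched in a few lines: build a handcrafted kernel, use it as a fixed one-layer substitute model, and measure the distance between its responses to the clean and adversarial image (the quantity a generator would be trained to maximize). This is a minimal NumPy sketch, not the paper's implementation; the triangular basis function, the 3×3 kernel size, and all function names are illustrative assumptions.

```python
import numpy as np

def triangular_kernel(size=3):
    # 2-D kernel from a 1-D triangular membership function,
    # in the spirit of an F-transform basis (assumed shape)
    t = 1.0 - np.abs(np.linspace(-1, 1, size + 2)[1:-1])
    k = np.outer(t, t)
    return k / k.sum()  # normalize so weights sum to 1

def conv2d(img, kernel):
    # naive valid-mode 2-D convolution (no padding)
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def substitute_distance(x, x_adv, kernels):
    # objective the generator would maximize: L2 distance between the
    # fixed substitute model's outputs for the clean and adversarial image
    return sum(np.linalg.norm(conv2d(x, k) - conv2d(x_adv, k))
               for k in kernels)
```

Because the substitute layer is fixed and handcrafted, no querying or training of a surrogate classifier is needed, which is the source of the time and resource savings the annotation claims.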
References
R01:
RIV/61988987:17310/23:A2402L6D