TAG: Attacks

Evaluation of Machine Learning Algorithms in Network-Based Intrusion Detection System

This article proposes a better way to evaluate effectively on the test set, thereby preventing overfitting during training. Experiments show that SVM and ANN are the most resistant to overfitting. Link: https://arxiv.org/abs/2203.05232. Cybersecurity has become one of the focuses of organisations. The number of cyberattacks …

Mind the Box: $\ell_1$-APGD for Sparse Adversarial Attacks on Image Classifiers

Croce F. and Hein M. Mind the box: \(\ell_1\)-APGD for sparse adversarial attacks on image classifiers. In International Conference on Machine Learning (ICML), 2021. Overview: to guarantee \(\|x' - x\|_1 \le \epsilon\), previous \(\ell_1\) attacks …
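Not part of the note above, but for concreteness: a minimal NumPy sketch of projecting a perturbation onto the \(\ell_1\) ball \(\|x' - x\|_1 \le \epsilon\) that the excerpt mentions, using the standard sort-based projection (Duchi et al., 2008). The function name and example values are illustrative; this is not the paper's APGD scheme.

```python
import numpy as np

def project_l1_ball(v, eps):
    """Project vector v onto the l1 ball of radius eps (sort-based method)."""
    if np.abs(v).sum() <= eps:
        return v.copy()                      # already inside the ball
    u = np.sort(np.abs(v))[::-1]             # magnitudes, descending
    css = np.cumsum(u)
    ks = np.arange(1, len(u) + 1)
    rho = np.max(np.where(u * ks > css - eps)[0])  # last index meeting the condition
    theta = (css[rho] - eps) / (rho + 1.0)   # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

delta = np.array([0.5, -0.3, 0.2, 0.1])      # l1 norm 1.1, outside the eps=0.5 ball
p = project_l1_ball(delta, 0.5)              # soft-thresholded back onto the ball
```

After projection the perturbation's \(\ell_1\) norm equals \(\epsilon\), and small components are zeroed out, which is exactly the sparsity-inducing behaviour the title alludes to.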

Broadbandits | The Economist bilingual (EN-CN) close-reading notes

By 王不留 (WeChat official account: 王不留). From the 2021-06-19 issue, Leaders section. Broadbandits: a coinage from broadband + bandit (robber). The new age of cyber-attacks could have huge economic costs. …

Blocking Brute Force Attacks

A common threat web developers face is a password-guessing attack known as a brute-force attack: an attempt to discover a password by systematically trying every possible combination of letters, numbers…
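As a toy illustration of "systematically trying every possible combination" (a sketch, not taken from the article): enumerate all candidates up to a fixed length with itertools. The check function stands in for a real login attempt and is purely hypothetical.

```python
import itertools
import string

def brute_force(check, charset=string.ascii_lowercase, max_len=4):
    """Try every combination of charset up to max_len; return the first accepted guess."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(charset, repeat=length):
            guess = "".join(combo)
            if check(guess):
                return guess
    return None

# Hypothetical target: in reality `check` would be a login attempt against a server.
secret = "cab"
found = brute_force(lambda g: g == secret)
```

Even for 4 lowercase characters this is 26 + 26² + 26³ + 26⁴ ≈ 475,000 guesses in the worst case, which is exactly why the server-side countermeasures the article discusses (throttling, lockouts, CAPTCHAs) focus on limiting attempt rate rather than guess quality.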

Backdoor triggers from a frequency-domain perspective: Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective

Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective. Not yet formally published; available on arXiv (paper link). This paper points out that existing backdoor attacks are under-studied in the frequency domain. It therefore proposes distinguishing backdoor samples using frequency-domain information, and uses this to construct backdoor samples that are invisible in the frequency domain. An intuitive idea is that backdoor samples, compared with natural …

R-SNN: An Analysis and Design Methodology for Robustifying Spiking Neural Networks against Adversari

To appear at the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021). Abstract: Spiking Neural Networks (SNNs) aim, when implemented on neuromorphic chips with event-based Dynamic Vision Sensors (DVS), to provide …

【李宏毅2020 ML/DL】P43-44 More about Adversarial Attack | Images & Audio

I have two years of ML experience; this course series is mainly for filling gaps, so I record details I did not know. Someone has already taken notes (very thorough, strongly recommended): https://github.com/Sakura-gh/ML-notes. Overview of this lecture: it is given by TA 黄冠博 and covers two parts, images and audio. One Pixel Attack: changing just a single pixel is enough to carry out …

OS L8-4: Buffer Overflow Attacks


Paper notes: Adversarial Attacks and Defenses in Deep Learning (adversarial training section)

5. Adversarial defenses typically include adversarial training, randomization-based schemes, denoising methods, certified (provable) defenses, and other approaches. 5.1 Adversarial training: tries to improve a neural network's robustness by training on adversarial examples alongside clean data. It is usually formulated as the min-max game \(\min_\theta \max_{\|x' - x\| \le \epsilon} J(\theta, x', y)\), where \(J\) is the adversarial loss, \(\theta\) the network weights, \(x'\) the adversarial input, and \(y\) the true label…
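To make the min-max formulation concrete, here is a minimal NumPy sketch of the inner maximization under an \(\ell_\infty\) constraint, done as a single FGSM-style gradient-sign step on a toy logistic model. The model, values, and function name are invented for illustration; this is not the surveyed paper's exact procedure.

```python
import numpy as np

def fgsm_step(theta, x, y, eps):
    """One-step inner maximization: x' = x + eps * sign(grad_x J(theta, x, y)),
    for logistic loss J = log(1 + exp(-y * theta.x))."""
    margin = y * theta.dot(x)
    sigma = 1.0 / (1.0 + np.exp(margin))    # = sigmoid(-margin)
    grad_x = -y * theta * sigma             # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)        # ascend the loss, clipped to the linf ball

theta = np.array([1.0, -2.0])               # toy model weights
x, y, eps = np.array([0.5, 0.5]), 1.0, 0.1
x_adv = fgsm_step(theta, x, y, eps)         # perturbed input with higher loss
```

Adversarial training then plugs `x_adv` back into the ordinary training loss, realizing the outer minimization over \(\theta\); stronger defenses iterate this inner step (PGD) rather than taking it once.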

一文看懂Python的控制结构:for、while、if…都有了

The main control structure in traditional Python is the for loop. Note, however, that for loops are rarely used in Pandas, so the for-loop idiom of plain Python does not carry over to Pandas style. Common control structures include: for loops, while loops, if/else statements, try/except statements, generator expressions, list comprehensions, and pattern matching. Every program ultimately needs a way to control its flow of execution…

[Node] Install packages correctly and avoid attacks

Read Dependency Confusion: How I Hacked Into Apple, Microsoft and Dozens of Other Companies. Yarn: yarn install --immutable --immutable-cache --check-cache to ensure matching packages are present. Npm: npm ci to install matching packages without …

Reading notes: Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks

Abstract: This paper proposes the first effective defense against backdoor attacks. Following prior work, we implement three attacks and use them to study two promising defenses, pruning and fine-tuning. Our study shows that neither withstands sophisticated attacks on its own. We then evaluate fine-pruning, a combination of pruning and fine-tuning, and show that it can weaken or …

[Paper notes] Delving into Transferable Adversarial Examples and Black-box Attacks (ICLR 2017)

Main results of this paper: 1. An ensemble method for generating adversarial examples (arguably not the main contribution, and not especially novel: it simply brings ensemble ideas into adversarial-example generation). 2. As the title suggests, the authors analyze the transferability of adversarial examples in depth, including an analysis of their geometric properties, reaching some interesting and somewhat counter-intuitive conclusions. (The geometry part is really hard to follow…

Slide Attacks (paper translation, unfinished)

Authors: Alex Biryukov, David Wagner. Abstract: Most block-cipher designers believe that a cipher is secure, even with weak keys, as long as enough rounds are used. This paper presents a new known-plaintext (chosen-plaintext) attack, called the slide attack, which in many cases is independent of a cipher's number of rounds. We illustrate it on the following ciphers: TREYFER…

Reading notes: Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review

The earliest backdoor attacks focused on computer vision, but they have since extended to other domains: text, audio, ML-based computer-aided design, ML-based wireless-signal classification, and so on. Distinctions: 1. Differences from adversarial examples: 1) the attack pipeline…

Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models

This paper proposes a boundary-based attack method that does not require the model's gradient information…

[Paper notes] DELVING INTO TRANSFERABLE ADVERSARIAL EXAMPLES AND BLACK-BOX ATTACKS

[Abstract] An intriguing property of deep neural networks is the existence of adversarial examples that transfer across different architectures. Such transferable adversarial examples may severely hinder deep applications based on neural networks. Previous work mostly studied transferability on small-scale datasets. In this work, we are the first to study transferability over large models and a large-scale dataset…

[Paper approach and algorithm analysis] Membership Inference Attacks Against Machine Learning Models

白菜苗. 1. Goal of membership inference attacks; 2. Shadow-model construction; 3. Building the attack model; 4. Algorithm analysis; 5. Summary. Whether you stumbled onto this paper by accident or sought it out deliberately, I am glad to share it with you. What is a membership inference attack: unfamiliar and abstract? No worries; let us look at how membership inference attacks against machine learning models are carried out…

Penetration Test - Select Your Attacks(20)

Persistence and Stealth. PERSISTENCE: Scheduled jobs (Cron or Task Manager); Scheduled Task (same as above); Daemons (background processes or services); Back doors (bypass standard security controls); Trojan (malware that looks like it does something useful…

Penetration Test - Select Your Attacks(15)

Privilege Escalation (Windows). WINDOWS-SPECIFIC PRIVILEGE ESCALATION: Cpassword (a Group Policy Preferences attribute that contains passwords, stored in the SYSVOL folder of the Domain Controller as encrypted XML); clear-text credentials in LDAP (Lightweight Directory Access…

Penetration Test - Select Your Attacks(14)

Privilege Escalation (Linux). The Linux superuser is 'root' (user ID 0). LINUX-SPECIFIC PRIVILEGE ESCALATION: SUID/SGID programs (permission to execute a program as the executable's owner/group); ls shows an 's' in the execute bit of the permissions, e.g. -r-sr-sr-x…
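The 's' bit described above can also be checked programmatically. A small sketch (not from the course notes) using Python's stdlib, roughly equivalent to `find <dir> -perm -4000`; the directory argument is illustrative:

```python
import os
import stat

def is_suid(path):
    """True if the file has the set-user-ID bit (the 's' in -r-sr-sr-x)."""
    mode = os.stat(path).st_mode
    return bool(mode & stat.S_ISUID)

def find_suid(root="/usr/bin"):
    """Walk a directory tree and collect SUID files."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            p = os.path.join(dirpath, name)
            try:
                if is_suid(p):
                    hits.append(p)
            except OSError:
                pass        # skip broken symlinks / permission errors
    return hits
```

During a pentest, any unexpected entry in such a listing is a privilege-escalation candidate, since the binary runs with its owner's (often root's) effective UID.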

Penetration Test - Select Your Attacks(9)

Application Exploits, Part II. AUTHENTICATION EXPLOITS: Credential brute forcing (offline cracking, e.g. Hydra); session hijacking (intercepting and using a session token, generally, to take over a valid distributed web session); redirect (sending the user to…

Penetration Test - Select Your Attacks(8)

SQL Injection Demo. Tools: Kali Linux. Target application: DVWA (Damn Vulnerable Web App). Log in to the DVWA site at http://10.0.0.20/dvwa/login.php, set the security level to low, and submit. If the application is not sanitizing input, you can use a single quote…
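The single-quote trick can be reproduced outside DVWA as well. A self-contained sketch (table and payload invented for illustration, using sqlite3 in place of DVWA's MySQL) contrasting vulnerable string concatenation with a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "' OR '1'='1"   # classic injection payload

# Vulnerable: the payload breaks out of the quoted literal,
# turning the WHERE clause into ... name = '' OR '1'='1' (always true).
vulnerable = "SELECT * FROM users WHERE name = '" + payload + "'"
rows_vulnerable = conn.execute(vulnerable).fetchall()   # every row leaks

# Safe: a parameterized query treats the payload as an ordinary string value.
rows_safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (payload,)
).fetchall()                                            # no rows match
```

This is the same behaviour the DVWA exercise demonstrates at the low security level: unsanitized concatenation lets the quote rewrite the query, while placeholders keep attacker input as data.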

Penetration Test - Select Your Attacks(1)

Remote Social Engineering. SOCIAL ENGINEERING: tricking or coercing people into violating security policy; depends on people's willingness to be helpful; human weaknesses can be leveraged; may rely on technical aspects; bypasses access controls and most detection controls…

Advances in Adversarial Attacks

We will overview the recent advances in the topic of adversarial attacks. References: Liu Y., Chen X., Liu C., Song D. Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770