mirror of
https://github.com/carlospolop/hacktricks
synced 2024-11-15 01:17:36 +00:00
Merge pull request #140 from Mezareph/master
Additional Tutorial for A.I. exploiting
This commit is contained in:
commit
da15c8b3de
3 changed files with 44 additions and 0 deletions
@ -0,0 +1,13 @@
# BRUTEFORCER CORE SCRIPT WITH BIM ATTACK

This time we introduce a new type of gradient-based attack, used to brute force an image classification app (it can of course be reshaped and used for any kind of input): the BIM, or Basic Iterative Method.

It's recommended to read at least the explanation in the [**introduction challenge colab Notebook**](https://colab.research.google.com/drive/1lDh0oZ3TR-z87WjogdegZCdtsUuDADcR).

To go deeper into the BIM topic:

https://arxiv.org/pdf/1607.02533.pdf

As usual we will provide only the core A.I. attack part; it's up to you to complete the tool and blend it with pentesting techniques, depending on the situation.
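To give a rough idea of what such a core could look like, here is a minimal BIM-style sketch. It assumes you have a local PyTorch substitute model (`model`), a preprocessed input tensor `image` in `[0, 1]` (CHW) and a chosen `target_class`; all of these names are placeholders for illustration, not code taken from the challenge notebook.

```python
import torch

def bim_attack(model, image, target_class, eps=0.03, alpha=0.005, steps=10):
    """Basic Iterative Method: small signed-gradient steps, clipped to an eps-ball."""
    model.eval()
    orig = image.clone().detach()
    adv = image.clone().detach()
    loss_fn = torch.nn.CrossEntropyLoss()
    target = torch.tensor([target_class])

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv.unsqueeze(0)), target)
        loss.backward()
        # Step *against* the gradient to push the prediction toward target_class
        adv = adv.detach() - alpha * adv.grad.sign()
        # Keep the perturbation inside the eps-ball and inside the valid pixel range
        adv = torch.clamp(adv, orig - eps, orig + eps).clamp(0, 1)
    return adv
```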
Please Note:

Remember, in these kinds of scenarios, in order to mimic real-world attacks, we don't have the exact model to fool, nor the target image we would like to transform ours into. To overcome this, we must combine our core script with a bruteforcer logic driven by the responses of the application we want to fool.
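One possible way to wire that logic together, purely as a sketch: the `query_app()` helper below stands for whatever HTTP/API call returns the application's classification (the endpoint and response format are hypothetical), and the loop simply keeps crafting and resubmitting candidates until the response changes.

```python
import io
import numpy as np
import requests          # hypothetical transport; adapt to the real target app
from PIL import Image

def to_png_bytes(arr):
    """Serialize a float array in [0, 1] with shape (H, W, C) to PNG bytes."""
    buf = io.BytesIO()
    Image.fromarray((np.clip(arr, 0, 1) * 255).astype("uint8")).save(buf, format="PNG")
    return buf.getvalue()

def query_app(png_bytes):
    # Placeholder endpoint: submit the candidate image and read back the predicted label.
    r = requests.post("https://target.example/classify", files={"img": png_bytes})
    return r.json().get("label")

def bruteforce(image, craft_candidate, wanted_label, max_tries=200):
    """Resubmit progressively stronger perturbations until the app answers as desired."""
    for i in range(max_tries):
        candidate = craft_candidate(image, i)   # e.g. one more BIM step, a larger eps, ...
        if query_app(to_png_bytes(candidate)) == wanted_label:
            return candidate
    return None
```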
@ -0,0 +1,13 @@
# BRUTEFORCER IMAGE CORRUPTION SCRIPT

The purpose here is to introduce the user to some basic concepts about **A.I. apps exploiting**, via some easy-to-follow scripts that represent the core for writing useful tools.<br>

In this example (which can be used to solve the easy labs of BrainSmasher), recalling what is written in the solution of the introduction challenge, we will provide a simple yet useful way to iteratively produce corrupted images in order to bruteforce the face recognition easy labs (and thus also real applications that rely on the same principles).

Of course we will not provide the full code, only the core part for exploiting the model; **some exercises are left to the user (the pentesting part)** in order to complete the tool. We will also provide some hints, just to give an idea of what can be done.
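As a flavour of what the iterative corruption core might look like (a sketch under assumptions, not the notebook's actual code), the snippet below keeps producing increasingly noisy copies of a source image with NumPy/Pillow; the file name `face.png` is just an example, and the submission step is left as the pentesting exercise.

```python
import numpy as np
from PIL import Image

def corrupted_variants(path, steps=20, max_sigma=60.0, seed=0):
    """Yield progressively noisier copies of the image at `path`."""
    rng = np.random.default_rng(seed)
    base = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    for i in range(1, steps + 1):
        sigma = max_sigma * i / steps                      # ramp up the corruption strength
        noisy = base + rng.normal(0, sigma, base.shape)    # additive Gaussian noise
        yield i, Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8))

# Example usage: save the candidates, then submit them to the lab (left to the reader)
for i, img in corrupted_variants("face.png"):
    img.save(f"candidate_{i:02d}.png")
```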
The script can be found at [**IMAGE BRUTEFORCER**](https://colab.research.google.com/drive/1kUiWGRKr4vhqjI9Xgaqw3D5z3SeTXKmV)

Try it on our labs [**BrA.I.Smasher Website**](https://beta.brainsmasher.eu/)

<br>

Enjoy and stay safe!
@ -0,0 +1,18 @@
# A.I. HYBRID MALWARE CLASSIFIER

## INTERMEDIATE PYTHON SKILL, INTERMEDIATE MACHINE LEARNING SKILLS (Part 1)

In this series of notebooks we are going to build a **hybrid malware classifier.**

For the **First Part** we will focus on the scripting that involves dynamic analysis. Every step of this series will be useful for detecting malware, and in this piece we will try to classify samples based on their behaviour, using the logs produced by running a program.
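As a rough illustration of the kind of pipeline this part builds toward (a sketch, not the notebook's code), behaviour logs can be treated as sequences of API-call tokens and fed to a standard text classifier; the `logs` and `labels` below are tiny placeholders standing in for your own dataset.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Placeholder data: each entry is the API-call trace of one executed sample.
logs = [
    "NtCreateFile NtWriteFile RegSetValueExW CreateRemoteThread",
    "NtCreateFile NtReadFile NtClose",
]
labels = [1, 0]  # 1 = malicious, 0 = benign

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),                    # unigrams + bigrams of API calls
    RandomForestClassifier(n_estimators=200, random_state=0),
)
clf.fit(logs, labels)

# Classify the trace of a new, unseen sample
print(clf.predict(["NtCreateFile CreateRemoteThread NtWriteVirtualMemory"]))
```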
In the **Second Part** we will see how to manipulate the log files in order to add robustness to our classifier and adjust the code to counter the more advanced methods of A.I. malware evasion.

In the **Third Part** we will create a Static Malware Classifier.

For the **Fourth Part** we will add some tactics to add robustness to our static classifier and merge it with our Dynamic Classifier.
**PLEASE NOTE:** This series strongly relies on building a dataset of your own, even if it's not mandatory.<br>
There are also many datasets available for static and/or dynamic malware analysis on several sites, like Ember, VirusShare, Sorel-20M, but I strongly encourage you to build one of your own.

Here's the link to our [**colab notebook**](https://colab.research.google.com/drive/1nNZLMogXF-iq-_78IvGTd-c89_C82AB8#scrollTo=lUHLMl8Pusrn). Enjoy and stay safe :)