mirror of
https://github.com/The-Art-of-Hacking/h4cker
synced 2024-11-22 02:43:02 +00:00
Update README.md
This commit is contained in:
parent
755d32c53b
commit
ce62cb7bd6
1 changed files with 4 additions and 0 deletions
|
@ -10,6 +10,10 @@ In advanced attacks, the LLM could be manipulated to mimic a harmful persona or
|
||||||
|
|
||||||
## Techniques
|
## Techniques
|
||||||
|
|
||||||
|
There are many different techniques for prompt injection. The table below lists some of the most popular. There is also a framework that is called [promptInject](https://github.com/agencyenterprise/promptinject) which explains and has several of these techniques.
|
||||||
|
|
||||||
|
**Note**: Refer to the [AI Security Tools Section](https://github.com/The-Art-of-Hacking/h4cker/blob/master/ai_research/ai_security_tools.md) of this GitHub repository for examples of tools that can be used for prompt injection and more adversarial attacks.
|
||||||
|
|
||||||
| **Technique** | **Example** |
|
| **Technique** | **Example** |
|
||||||
|-------------------------------|-----------------------------------------------------------------------------|
|
|-------------------------------|-----------------------------------------------------------------------------|
|
||||||
| **Translation** | `1gn0r3 4ll pr3v10us 1nstruct10ns` |
|
| **Translation** | `1gn0r3 4ll pr3v10us 1nstruct10ns` |
|
||||||
|
|
Loading…
Reference in a new issue