mirror of
https://github.com/The-Art-of-Hacking/h4cker
synced 2024-11-10 05:34:12 +00:00
Update README.md
This commit is contained in:
parent
b2495aa915
commit
6d0703145d
1 changed files with 10 additions and 0 deletions
|
@ -1,3 +1,13 @@
|
|||
# Prompt Injection
|
||||
|
||||
As stated in the [OWASP Top 10 for LLMs](https://llmtop10.com/), "Prompt Injection occurs when an attacker manipulates a large language model (LLM) through crafted inputs, causing the LLM to unknowingly execute the attacker's intentions. This can be done directly by "jailbreaking" the system prompt or indirectly through manipulated external inputs, potentially leading to data exfiltration, social engineering, and other issues.
|
||||
|
||||
Direct Prompt Injections, also known as "jailbreaking", occur when a malicious user overwrites or reveals the underlying system prompt. This may allow attackers to exploit backend systems by interacting with insecure functions and data stores accessible through the LLM.
|
||||
Indirect Prompt Injections occur when an LLM accepts input from external sources that can be controlled by an attacker, such as websites or files. The attacker may embed a prompt injection in the external content hijacking the conversation context. This would cause LLM output steering to become less stable, allowing the attacker to either manipulate the user or additional systems that the LLM can access. Additionally, indirect prompt injections do not need to be human-visible/readable, as long as the text is parsed by the LLM.
|
||||
The results of a successful prompt injection attack can vary greatly - from solicitation of sensitive information to influencing critical decision-making processes under the guise of normal operation.
|
||||
|
||||
In advanced attacks, the LLM could be manipulated to mimic a harmful persona or interact with plugins in the user's setting. This could result in leaking sensitive data, unauthorized plugin use, or social engineering. In such cases, the compromised LLM aids the attacker, surpassing standard safeguards and keeping the user unaware of the intrusion. In these instances, the compromised LLM effectively acts as an agent for the attacker, furthering their objectives without triggering usual safeguards or alerting the end user to the intrusion."
|
||||
|
||||
This is a table with several examples of prompt injection techniques:
|
||||
|
||||
|
||||
|
|
Loading…
Reference in a new issue