Update README.md
parent 0dcaaf9207
commit ea1042098b
1 changed file with 10 additions and 1 deletion
@@ -1,4 +1,4 @@
-# AI Risk Management Frameworks and Resources
+# AI Risk Management Frameworks and AI Security Resources

 ## NIST Resources

 - [NIST Risk Management Framework](https://www.nist.gov/itl/ai-risk-management-framework): used to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.

@@ -21,5 +21,14 @@
 ## Cloud Security Alliance

 - [CSA's Securing LLM Backed Systems: Essential Authorization Practices](https://github.com/The-Art-of-Hacking/h4cker/blob/master/ai_research/ai_risk_management/Securing%20LLM%20Backed%20Systems%20-%20Essential%20Authorization%20Practices%2020240806.pdf)

+## Additional Securing AI Resources
+
+- [NSA/DoD - Joint Guidance on Deploying AI Systems Securely](https://media.defense.gov/2024/Apr/15/2003439257/-1/-1/0/CSI-DEPLOYING-AI-SYSTEMS-SECURELY.PDF)
+- [MITRE ATLAS](https://atlas.mitre.org/)
+- [OWASP Top 10 for LLM Applications](https://genai.owasp.org/)
+- [OWASP AI Security and Privacy Guide](https://owasp.org/www-project-ai-security-and-privacy-guide/)
+- [Securing Your AI: A Step-by-Step Guide for CISOs](https://hiddenlayer.com/research/how-well-do-you-know-your-ai-environment/)
+- [Securing Your AI: A Step-by-Step Guide for CISOs PT 2](https://hiddenlayer.com/research/securing-your-ai-a-step-by-step-guide-for-cisos-pt2/)
+- [CSA Securing LLM Backed Systems](https://github.com/The-Art-of-Hacking/h4cker/blob/master/ai_research/ai_risk_management/Securing%20LLM%20Backed%20Systems%20-%20Essential%20Authorization%20Practices%2020240806.pdf)
+
 ## Academia

 - [MIT AI Risk Database](https://airisk.mit.edu/)