Update README.md

Omar Santos 2024-08-18 20:12:54 -04:00 committed by GitHub
parent 0dcaaf9207
commit ea1042098b

@@ -1,4 +1,4 @@
# AI Risk Management Frameworks and Resources
# AI Risk Management Frameworks and AI Security Resources
## NIST Resources
- [NIST Risk Management Framework](https://www.nist.gov/itl/ai-risk-management-framework): used to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
@@ -21,5 +21,14 @@
## Cloud Security Alliance
- [CSA's Securing LLM Backed Systems: Essential Authorization Practices](https://github.com/The-Art-of-Hacking/h4cker/blob/master/ai_research/ai_risk_management/Securing%20LLM%20Backed%20Systems%20-%20Essential%20Authorization%20Practices%2020240806.pdf)
## Additional Securing AI Resources
- [NSA/DoD - Joint Guidance on Deploying AI Systems Securely](https://media.defense.gov/2024/Apr/15/2003439257/-1/-1/0/CSI-DEPLOYING-AI-SYSTEMS-SECURELY.PDF)
- [MITRE ATLAS](https://atlas.mitre.org/)
- [OWASP Top 10 for LLM Applications](https://genai.owasp.org/)
- [OWASP AI Security and Privacy Guide](https://owasp.org/www-project-ai-security-and-privacy-guide/)
- [Securing Your AI: A Step-by-Step Guide for CISOs](https://hiddenlayer.com/research/how-well-do-you-know-your-ai-environment/)
- [Securing Your AI: A Step-by-Step Guide for CISOs PT 2](https://hiddenlayer.com/research/securing-your-ai-a-step-by-step-guide-for-cisos-pt2/)
## Academia
- [MIT AI Risk Database](https://airisk.mit.edu/)