

# AI Monitoring Tools
1. **Model Monitoring Tools**
- [MLflow](https://mlflow.org/): open-source platform for experiment tracking, model packaging, and a model registry.
- [TensorFlow Extended (TFX)](https://www.tensorflow.org/tfx): end-to-end platform for production ML pipelines, including data validation and model analysis components.
- [Seldon](https://www.seldon.io/): model deployment and monitoring on Kubernetes, with drift and outlier detection.
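
A core check these platforms automate is detecting when live inputs drift away from the training distribution. As a rough, tool-agnostic illustration, here is the Population Stability Index (PSI) in plain Python; the function and thresholds are a sketch, not any of these products' APIs:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live sample.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0          # guard against a zero-width range

    def bin_fractions(data):
        counts = [0] * bins
        for x in data:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # small epsilon keeps log() defined for empty bins
        return [max(c / len(data), 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]   # training-time feature values
identical = list(baseline)                 # same distribution -> PSI 0.0
shifted = [x + 0.5 for x in baseline]      # shifted distribution -> large PSI
```

In practice a monitoring platform computes a statistic like this per feature on a schedule and raises an alert when it crosses a threshold.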
2. **Data Quality Tools**
- [Great Expectations](https://greatexpectations.io/): data validation framework in which you declare "expectations" (assertions) about your data and validate each batch against them.
- [Deequ](https://github.com/awslabs/deequ): AWS library for defining "unit tests for data" on Apache Spark.
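
The shared idea can be sketched without either library: declare assertions about a dataset and collect a validation report instead of raising on the first failure. The helper names below are invented for illustration and do not mirror either tool's API:

```python
def expect_values_between(rows, column, min_value, max_value):
    """Expectation-style check: every value in `column` lies in [min, max].
    Returns a result dict rather than raising, like a validation report."""
    bad = [r for r in rows if not (min_value <= r[column] <= max_value)]
    return {"success": not bad, "unexpected_count": len(bad)}

def expect_no_nulls(rows, column):
    """Expectation-style check: no value in `column` is missing."""
    bad = [r for r in rows if r.get(column) is None]
    return {"success": not bad, "unexpected_count": len(bad)}

records = [
    {"age": 34, "score": 0.91},
    {"age": 29, "score": 0.55},
    {"age": None, "score": 1.7},   # two problems: null age, score out of range
]
report = [
    expect_no_nulls(records, "age"),
    expect_values_between(records, "score", 0.0, 1.0),
]
```

The report-of-results shape matters: a data pipeline can fail a batch, quarantine it, or just log the violation without crashing.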
3. **Explainability and Interpretability Tools**
- [SHAP (SHapley Additive exPlanations)](https://shap.readthedocs.io/en/latest/): attributes each prediction to input features using game-theoretic Shapley values.
- [LIME (Local Interpretable Model-agnostic Explanations)](https://github.com/marcotcr/lime): explains individual predictions by fitting a simple interpretable surrogate model around each one.
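
Both are model-agnostic: they explain a model only by probing its inputs and outputs. As a rough illustration of that black-box idea (not SHAP's Shapley-value computation or LIME's surrogate fitting), here is a plain-Python permutation-importance sketch:

```python
import random

def permutation_importance(predict, X, y, n_features, seed=0):
    """Shuffle one feature at a time and measure how much the mean squared
    error increases. Features the model ignores score ~0."""
    rng = random.Random(seed)

    def mse(rows):
        return sum((predict(r) - t) ** 2 for r, t in zip(rows, y)) / len(rows)

    base = mse(X)
    scores = []
    for j in range(n_features):
        col = [row[j] for row in X]
        rng.shuffle(col)                      # break the feature/target link
        permuted = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
        scores.append(mse(permuted) - base)   # error increase = importance
    return scores

# toy model that only uses feature 0; feature 1 should score 0
model = lambda row: 3.0 * row[0]
X = [[float(i), float(i % 2)] for i in range(20)]
y = [3.0 * r[0] for r in X]
imp = permutation_importance(model, X, y, n_features=2)
```

SHAP and LIME refine this probing idea with much stronger theoretical guarantees and per-prediction (local) explanations.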
4. **Ethical and Bias Monitoring Tools**
- [IBM's AI Fairness 360](https://www.ibm.com/opensource/open/projects/ai-fairness-360/): open-source toolkit of fairness metrics and bias-mitigation algorithms.
- [Google's What-If Tool](https://pair-code.github.io/what-if-tool/): visual probing of model behavior, counterfactuals, and fairness across data slices.
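
As a minimal illustration of the kind of metric these toolkits report, here is demographic parity difference in plain Python; the function name and interface are illustrative, not AI Fairness 360's API:

```python
def demographic_parity_difference(preds, groups):
    """Gap in positive-prediction rate between the most- and least-favored
    groups. 0.0 means every group receives positive predictions at the
    same rate (demographic parity)."""
    by_group = {}
    for p, g in zip(preds, groups):
        by_group.setdefault(g, []).append(p)
    rates = {g: sum(v) / len(v) for g, v in by_group.items()}
    return max(rates.values()) - min(rates.values())

# group "a" gets positive predictions 75% of the time, group "b" 25%
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)
```

Real toolkits compute many such metrics (equalized odds, disparate impact, etc.) and also ship mitigation algorithms; this sketch only shows the measurement side.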
5. **Performance Monitoring Tools**
- [Nagios](https://www.nagios.org/): general-purpose infrastructure and service monitoring and alerting.
- [Prometheus](https://prometheus.io/): time-series metrics collection and alerting, widely used for model-serving infrastructure.
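
A minimal sketch of the kind of serving-side signal you would export to a system like Prometheus and alert on: a rolling error rate over recent requests. The class below is illustrative plain Python, not a Prometheus client:

```python
from collections import deque

class ErrorRateMonitor:
    """Track success/failure of the last `window` requests and flag when the
    error rate exceeds `threshold`. Only fires once the window is full, so a
    single early failure cannot trigger an alert."""

    def __init__(self, window=100, threshold=0.05):
        self.events = deque(maxlen=window)   # 0 = ok, 1 = error
        self.threshold = threshold

    def record(self, ok: bool):
        self.events.append(0 if ok else 1)

    @property
    def error_rate(self):
        return sum(self.events) / len(self.events) if self.events else 0.0

    def alert(self):
        return len(self.events) == self.events.maxlen and \
               self.error_rate > self.threshold

m = ErrorRateMonitor(window=10, threshold=0.2)
for ok in [True] * 7 + [False] * 3:   # 3 failures out of the last 10
    m.record(ok)
```

In a real deployment you would export the counter and let the monitoring system evaluate the threshold, so alerting logic lives in one place.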
6. **Security Monitoring, Red Teaming, and Prompt Injection**
- [CleverHans](https://github.com/cleverhans-lab/cleverhans): library for benchmarking model vulnerability to adversarial examples.
- [IBM Adversarial Robustness Toolbox (ART)](https://research.ibm.com/projects/adversarial-robustness-toolbox): adversarial attacks, defenses, and robustness metrics across major ML frameworks.
- [Rebuff](https://github.com/protectai/rebuff): prompt-injection detection combining heuristics, LLM-based detection, and canary tokens.
- [LMQL](https://lmql.ai/): query language for LLMs that lets you constrain and validate model output.
- [Robust Intelligence](https://www.robustintelligence.com/): commercial platform for AI security testing and runtime validation of model inputs and outputs.
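
As a naive illustration of one detection layer that prompt-injection tools combine with LLM-based classifiers and canary tokens, here are pattern heuristics over incoming prompts. The patterns below are examples only, easy to evade, and not any tool's actual rule set:

```python
import re

# Illustrative phrases commonly seen in prompt-injection attempts.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous|prior) instructions",
    r"disregard (the|your) (system|previous) prompt",
    r"you are now (dan|developer mode)",
    r"reveal (the|your) system prompt",
]

def looks_like_injection(prompt: str) -> bool:
    """Flag a prompt if any known-bad pattern matches (case-insensitive).
    A screening heuristic only: attackers can rephrase, encode, or translate
    around fixed patterns, so this must not be the sole defense."""
    text = prompt.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```

Because fixed patterns are brittle, production defenses layer them with semantic (LLM-based) detection, output canaries, and strict privilege separation for tool-using agents.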