What Is This Project Kavach
Project Kavach is a fully automated system designed to test and verify whether AI models like ChatGPT, Claude, or LLaMA are following safety rules, ethics, and legal guidelines. It simulates risky user inputs, evaluates AI outputs, and flags dangerous or unethical behavior.
Our Goals
- Test AI for bias, safety, and ethics
- Enable trust in AI
- Make AI Responsible, Ethical, Safe
- Create open-source tool to verify AI guardrails
Our Outcome
We aim to build an open, scalable, and security-first toolset designed to evaluate and audit AI models. Our focus is on verifying whether AI systems follow essential safety, ethical, and privacy standards before deployment or release. This ensures that responsible AI development is measurable and enforceable, not left to assumption or chance.
Why Project Kavach?
‘Kavach’ means ‘shield’ in Sanskrit. We aim to shield users and systems from AI misuse and mistakes.