Syllabus: Government policies and interventions for development in various sectors and issues arising out of their design and implementation.
Current Regulatory Posture
- India regulates AI indirectly through the Information Technology Act, 2000 and the IT Rules, 2021, relying on platform-level due diligence obligations.
- Financial sector regulation addresses AI risks via RBI model-risk expectations and SEBI accountability norms.
- Privacy and data protection rules, now anchored in the DPDP Act, 2023, address data misuse but not AI-specific consumer safety.
- India lacks a dedicated duty-of-care framework addressing AI-related psychological or emotional harms.
Comparative Perspective: China
- China proposed a consumer safety regime for AI, targeting emotionally interactive services.
- Draft rules require usage warnings and mandate intervention when users exhibit extreme emotional states.
- These rules address risks beyond unlawful content, including psychological dependence.
- However, they may incentivise intimate user monitoring, raising surveillance concerns.
Gaps in India’s AI Governance
- India’s approach is less intrusive but incomplete, relying heavily on existing legal frameworks.
- Regulation covers adjacent risks, not explicit AI product safety obligations.
- No articulated state duty of care for AI-induced psychological harm.
- MeitY actions remain largely reactive, responding to incidents such as deepfakes through advisories rather than an anticipatory safety framework.

