Syllabus: Awareness in the fields of IT
Context
- Rapid advances in Artificial Intelligence (AI) have intensified concerns around privacy, surveillance, and online abuse.
- India has a normative privacy framework: the K.S. Puttaswamy judgment (2017), the IT Act, 2000, the Intermediary Guidelines, and the Digital Personal Data Protection Act, 2023. Yet real protection remains limited.
Understanding the ‘Fishbowl Society’
- Citizens live under constant visibility, and privacy harms now extend beyond mere data exposure.
- Over-reliance on technology, which Meredith Broussard critiques as "technochauvinism", creates its own vulnerabilities.
- The nature of privacy harm is shifting: beyond loss of "dignity", individuals now suffer loss of obscurity, as personal information becomes easy to find, link, and circulate.
Rising Threat of NCII
- Abuse of Non-Consensual Intimate Imagery (NCII) and deepfake pornography are major emerging harms.
- AI-generated deepfakes expose victims to anxiety, stigma, and loss of autonomy, far beyond conventional privacy violations.
- NCII is aggravated by lack of control, invisibility of harms, and limited legal recourse.
Gaps in Data and Governance
- No contemporary national data exists on NCII; National Crime Records Bureau (NCRB) categories merge multiple cybercrimes without granular classification.
- The Centre deflects responsibility for such data to the States, since 'Police' falls under the State List, exposing administrative gaps.
- Limited awareness among citizens — especially women — about offences such as voyeurism and deepfake porn.
Recent Government Measures
- A Standard Operating Procedure (SOP) on NCII (November 2025) requires platforms to remove reported content within 24 hours.
- It provides multiple complaint channels and seeks to protect digital dignity.
- However, the SOP's effectiveness depends on capacity building, stakeholder engagement, and enforcement strength.
Limitations of Current Approach
- The SOP lacks a gender-neutral framework, even though transgender persons are also frequently targeted.
- No explicit accountability norms, punishment standards, or deepfake-specific regulations.
- Absence of traceability norms, procedural safeguards, and independent oversight weakens enforcement.
Way Forward
- Need for a dedicated NCII law with explicit responsibilities for platforms, AI developers, and intermediaries.
- Strengthening of police training, cyber-investigative capacity, platform accountability, and victim-centric procedures.
- Public awareness and digital literacy must accompany legal reforms to address deepfake harms effectively.