Regulation of Artificial Intelligence in India

Syllabus: Awareness in the fields of IT, Space, Computers, Robotics, Nano-technology, Bio-technology and issues relating to intellectual property rights.

Context

  • Ministry of Electronics and Information Technology (MeitY) released the India AI Governance Guidelines.

Purpose

  • Establish consistent regulation as India becomes the second-largest user of LLMs after the U.S.
    • LLM stands for Large Language Model, a type of artificial intelligence that uses deep learning to understand and generate human-like text (a brief illustrative sketch follows this list).
  • Aim: use AI for inclusive development while addressing risks to society and individuals.
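For illustration only, a minimal Python sketch of what "generating human-like text" with an LLM looks like in practice. It assumes the open-source Hugging Face transformers library and the small gpt2 checkpoint, neither of which is named in the guidelines; production systems would use far larger models.

    # Minimal sketch; library and model choice are assumptions for illustration.
    from transformers import pipeline

    # A text-generation pipeline wraps an LLM that repeatedly predicts the next
    # token of a prompt, which is how it produces human-like text.
    generator = pipeline("text-generation", model="gpt2")

    prompt = "Artificial intelligence can support inclusive development by"
    result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

    print(result[0]["generated_text"])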

Key Objectives

  • Promote responsible, people-centric, transparent, and accountable AI usage.
  • Classify risks, assign responsibility, and push AI safety research.

Institutional Framework

  • Proposes an AI Governance Group linking ministries, regulators, and standards bodies.
  • Private sector expected to ensure legal compliance, publish transparency reports, and provide grievance redressal.
  • Relies on the AI Safety Institute (AISI) under the IndiaAI Mission.

Key Recommendations

  • Build AI infrastructure and expand access to compute and datasets.
  • States encouraged to expand AI adoption through data and compute availability.
  • Suggests amendments to copyright law to address AI-related intellectual property disputes.
  • Focus on AI models for Indian languages using culturally relevant datasets.

Alignment with Government Initiatives

  • Consistent with government procurement of GPUs and efforts to enable access to compute infrastructure.
  • Supports integration of Digital Public Infrastructure (e.g., Aadhaar) with AI.
  • Government retains flexibility to enact stricter laws, especially related to deepfake content authentication.
