Regulation of Artificial Intelligence in India
India regulates the use of Artificial Intelligence (AI) through existing frameworks such as the IT Act and Rules, financial sector regulations, and privacy and data protection rules. However, a dedicated consumer safety regime for AI, one that reflects the state's duty of care, has not yet been established.
Comparison with China's Approach
- China has proposed a consumer safety regime targeting emotionally interactive services, requiring companies to warn users against excessive use and to intervene when users exhibit extreme emotional states.
- While these rules are justified in seeking to address psychological dependence, they may encourage invasive monitoring of users.
- India's approach is less intrusive but remains incomplete in addressing AI product safety, especially psychological harms.
Current Regulatory Measures in India
- The Ministry of Electronics and Information Technology (MeitY) uses the IT Rules to curb deepfakes and fraud and to define "synthetically generated" content.
- RBI and SEBI have implemented measures to govern AI usage, focusing on model risk and accountability.
- MeitY's approach is largely reactive, while other measures, such as the financial regulators' rules on model risk, are preemptive.
Challenges and Recommendations
- India lags behind the U.S. and China in developing its own frontier AI models.
- Concern: a "regulate first, build later" approach might hinder the growth of domestic AI capacity.
- Recommendations:
  - Improve access to computational resources and upskill the workforce.
  - Enhance public procurement and translate research into industry applications.
  - Regulate downstream use more assertively without hindering upstream innovation.
  - Expect companies to submit incident reports rather than requiring them to monitor users' emotions.
Conclusion
India should strategically nurture its AI capabilities while ensuring that regulation does not stifle innovation. The focus should be on how AI models are used within Indian markets, rather than on attempting to influence the global technology trajectory.