Privacy and Technology in the Age of AI
The advancement of Artificial Intelligence (AI) presents a paradox in which convenience often conflicts with privacy. India's current legal frameworks, including the Puttaswamy judgment (2017), the Information Technology Act, 2000, and the Digital Personal Data Protection Act, 2023, are insufficient to address the evolving privacy concerns raised by AI technologies.
Issues of Privacy and Obscurity
Society is shifting towards a 'fishbowl' existence in which privacy is construed ever more narrowly. As Meredith Broussard notes in "Artificial Unintelligence", excessive reliance on technology leaves us vulnerable to systems of our own creation, exposing individuals to data breaches and stripping them of obscurity through threats such as Non-Consensual Intimate Imagery (NCII) abuse.
Challenges in Legal and Policy Framework
- The existing legal frameworks are inadequate to address new forms of cyber abuse.
- Surveillance risks are not just about privacy loss but include anxiety, societal stigma, and loss of autonomy.
- Lack of detailed data: the National Crime Records Bureau (NCRB) does not classify cybercrimes related to NCII as a distinct category.
Data and Reporting Challenges
A request for specific data on cyberbullying and cyber-voyeurism revealed the absence of centralized records, with responsibility for maintaining them placed on state governments.
Importance of Digital Literacy and Social Awareness
- Younger demographics, especially women, often lack awareness of their rights concerning offences such as voyeurism and deepfake pornography.
- The fear of stigma or shame discourages victims from reporting incidents.
Recent Initiatives
The Ministry of Electronics and Information Technology (MeitY) issued Standard Operating Procedures (SOPs) to address NCII, mandating content removal within 24 hours and offering multiple complaint platforms, with the stated aim of protecting "digital dignity."
Limitations of Current Measures
- The SOPs are not gender-neutral and fail to address the harms faced by transgender individuals.
- They define no accountability, punishment, or specific regulation for the creation and dissemination of deepfakes.
The Need for Comprehensive Legislation
- A dedicated law on NCII should go beyond traditional legal concepts to include duties for platforms, AI developers, and intermediaries.
- Technological threats to privacy necessitate specific legal protections rather than reliance on platforms' technical capabilities alone.
Conclusion
While the SOPs are a step forward, effectively addressing NCII and deepfake harms requires gender-neutral reforms, police training, platform accountability, AI-specific safeguards, and robust victim-centric legal mechanisms to ensure justice and protection in the digital age.