India's Initiative for a Local Language AI Foundation Model
India is set to back an industry-academia consortium to develop an indigenous artificial intelligence (AI) foundation model focused on local languages. This move aims to place India among a select group of nations, including the US and China, that are developing prominent AI models.
Background and Rationale
- The decision comes in the wake of US government restrictions on exporting AI compute infrastructure and on access to large language models, a market largely dominated by American tech firms.
- Foundation models are AI models trained on large, broad datasets that serve as a base for a wide range of downstream applications (a brief sketch follows this list).
- The long-running debate over whether India should build its own foundation model or rely on existing open-source models has now settled on the view that both are necessary.
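To make the "base for various applications" idea concrete, the following is a minimal sketch, not from the article, of reusing a pretrained multilingual foundation model for a downstream Indic-language task. The checkpoint name (`xlm-roberta-base`), the Hugging Face `transformers` library, and the three-class sentiment task are illustrative assumptions, not details of India's planned model.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative choice: any pretrained multilingual checkpoint could stand in here.
BASE_MODEL = "xlm-roberta-base"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
# The same pretrained base can be adapted to many tasks; here it gets a new
# (untrained) 3-class classification head for, say, Hindi sentiment analysis.
model = AutoModelForSequenceClassification.from_pretrained(BASE_MODEL, num_labels=3)

inputs = tokenizer("यह फ़िल्म बहुत अच्छी थी", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 3]); the head would then be fine-tuned on task data
```

The point of the sketch is the division of labour: the expensive pretraining happens once, and downstream applications only fine-tune a small task-specific layer on top.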
Implementation Strategy
- A technical committee will be formed to engage startups and academic institutions and to provide them with technical and financial support.
- The model, based on Indian languages and datasets, will eventually be open-sourced for broader industry use.
- The urgency of the effort stems from the recent US restrictions on AI technology exports noted above.
Challenges and Opportunities
- The Biden administration capped the number of advanced GPUs that can be exported to India, while President Trump has since backed a $500 billion AI infrastructure initiative in the US.
- Indian-origin tech leaders have highlighted that training such models is costly but necessary.
Global Comparisons and Aspirations
- Chinese AI startup DeepSeek has released a free, open-source AI reasoning model, raising questions about the cost-effectiveness and efficiency of US models.
- There are calls for India to focus on building globally competitive models, not just models tailored to Indic languages.