The Indian government has issued directives to all AI platforms, urging them to obtain official permission before introducing their products to the public. As excitement grows globally about the wide-ranging benefits of Artificial Intelligence (AI), new applications are emerging daily. However, concerns about misuse and potential risks are escalating as well. There is an ongoing international effort to regulate the expansion and use of AI, and India plays a significant role in it.
Taking a bold step in this direction, the Indian government has instructed all AI platforms to seek government approval before unveiling their products to the public. The Ministry of Electronics and Information Technology emphasizes the need to maintain records of AI-generated content and to identify those responsible for creating fake material. Information Technology Minister Rajeev Chandrasekhar stated that, unlike other products entering the market, AI products are not currently subjected to thorough testing, whether they come from major tech companies or startups.
With the growing number of active AI platforms, ensuring the security of users' data is essential. The government's guidelines also stress that AI platforms must not promote bias or discrimination and must not permit activities that compromise the integrity of electoral processes. These directives, which took effect on March 1, require AI platforms to report their compliance to the government within 15 days. Notably, discussions between Rajeev Chandrasekhar and industry representatives in November and December last year paved the way for these guidelines. It was agreed that AI platforms would clearly inform users of the potential legal consequences of using their services to spread unlawful information, including restrictions and, in certain cases, legal penalties. It is hoped that these new directives will help make AI technology secure and effective.