EU Artificial Intelligence Act and borrowable practices for India

Amid the back-and-forth over how and what to regulate in AI, the European Union has taken a step towards a legislative answer. In May 2023, the European Parliament approved the EU Artificial Intelligence Act (AIA) proposal. The proposal now moves to the European Council for final approval; the Council can either accept it or propose amendments. It is widely expected that the Council will approve the proposal by the end of 2023, which would make the AIA the second AI regulation in the world.

The first is China's, with its deep synthesis regulation enacted on 10 January 2023 and its draft regulations on generative AI released on 11 April 2023. Apart from China, the US released its Artificial Intelligence Risk Management Framework 1.0 in January 2023. As the EU's legislation is the first among democracies, the European Parliament's passage of the AIA calls for a more nuanced understanding. As India aims to become a front runner in the global AI race, regulation is one of the pillars on which further development stands. However, India is not inclined towards AI regulation. Against that position, this piece provides a nuanced reading of the AIA and opens a discussion on how India can adopt some aspects of the EU's AI regulation.

European Union’s Artificial Intelligence Act

The European Union Artificial Intelligence Act (AIA) aims to ensure a well-functioning regional market for artificial intelligence systems ('AI systems') in which both the benefits and the risks of AI are adequately addressed at the Union level. The legal basis for the proposal is Article 114 of the Treaty on the Functioning of the European Union (TFEU), which provides for measures ensuring the establishment and functioning of the EU's internal market. The AIA proposal is also one of the core pillars of the EU's digital single market strategy. The EU's approach is thus to create a single, homogeneous market for the development and deployment of AI across the European continent. As one of the first pieces of AI legislation among democracies, the AIA is likely to serve as a template for other countries' laws. This gives an edge to companies that design their AI systems in line with the AIA, as those designs are likely to travel well across jurisdictions.

Another notable aspect of the AIA is its definition of high-risk AI, which covers applications in transport, education, employment, and welfare, among others. Before placing a high-risk AI system on the market or putting it into service in the EU, companies must conduct a prior "conformity assessment" and meet a long list of requirements to ensure the system is safe. High-risk AI systems are also required to have a risk-management process. The risk-management processes prescribed by the EU's AIA and the US's Artificial Intelligence Risk Management Framework are similar; the table below compares the two approaches.

Table: Comparison of the EU and US AI risk management frameworks

EU AI Act (Article 9), European Union | AI RMF 1.0, United States
1. Identify and analyse the possible risks | Map the risks emanating from the use of AI
2. Estimate and evaluate the risks of the AI system | Measure the risks
3. Adopt suitable risk-management processes | Manage the risks by adopting specific processes
4. Run the risk-management process throughout the life cycle of the AI system | Govern: ensure that policies, processes, procedures, and practices for mapping, measuring, and managing AI risks are in place, transparent, and implemented effectively

Beyond risk management, Article 10 of the AIA sets out data governance requirements for training, validation, and testing datasets. Article 12 mandates logging of events while a high-risk AI system is operating, which ensures traceability. Article 13 requires transparency of the AI system's operation towards users: the system must disclose its intended use, its accuracy, and foreseeable misuses. Human oversight requirements appear in Article 14, with the human overseer responsible for minimising risk. This provision effectively creates a new job role in almost every AI company: a team responsible for risk assessment, risk management, and impact assessment of AI systems, drawing on backgrounds in both AI and the social sciences. It will be interesting to see how the education sector picks this up and shapes new curricula.

Aspects that India should adopt

The major and much-discussed arguments on AI regulation are covered in the paragraphs above. But what does the European Parliament's acceptance of the AIA mean for India? In contrast to the EU's legislative approach, IT and Telecom Minister Ashwini Vaishnaw has asserted that the government would not consider any law regulating AI, as it would hamper AI innovation. Since India has no AI regulation, the EU's approach is best analysed against India's data and internet regulation.

India's draft telecommunication bill gives the government an undue advantage in tapping public communications. In the current situation, unregulated AI systems would reinforce the draft bill, as automated interception of communications would become possible for the state. This contrasts with the AIA's approach: the EU mandates prior authorisation even for the use of facial recognition cameras in public spaces, and even then only for tracing a particular person or entity.

In the absence of AI regulation, and with loose data protection laws, Indian AI systems will be more business-friendly than citizen-friendly. The Data Protection Bill, 2022 gives users the right to withdraw consent, but there is no provision to limit the scope of that consent. Once a consent form is floated, a single consent is deemed to apply to an entire set of business processes. This allows AI systems to use data in ways incomprehensible to the user, in contrast to the practices prohibited under Article 5(a) of the AIA.
While defining the term 'gain', the Data Protection Bill, 2022 does not include political gain, coercive control, or knowledge about others obtained without monetary benefit; personal data can therefore be used for these purposes. Further, the term 'harm' does not include psychological harm. These definitions make clear that the bill is anchored to the financial benefits of companies and neglects the psychological and social well-being of the public.

The risk-management procedures detailed in the earlier paragraphs are much needed in Indian AI regulation if the government inclines towards a legislative approach. At a minimum, these risk-management processes should be added to the series of AI approach documents released by NITI Aayog.

The above analysis shows that the Indian approach towards data and AI is business-centric; in sharp contrast to the AIA, it does not centre citizens' rights. To uphold democratic values and practices, India should consider enacting AI regulation and borrowing the best and most relevant practices from the EU's AIA. Since the AIA explicitly conveys that regulation should be dynamic, civil society organisations have significant influence in reshaping it over time. In India, civil society organisations such as the Digital Empowerment Foundation (DEF) can likewise shape AI regulation. Even if the state does not take the initiative in producing citizen-centric AI regulation, it is hoped that civil society organisations will find a way to pressure the state to do so.

Arun Teja Polcumpally, Associate Fellow, Center for Development Policy and Practice, Hyderabad
