Inclusion and anti-discrimination: Impact of effective regulation on Artificial Intelligence


 

Authors: Wendy Kuyoh, Eleonora Margherita Auletta, Andrea Strippoli

 

Background: Global regulatory response towards regulation of Artificial Intelligence

Artificial Intelligence (AI) driven technology has become part of everyday life, from social media applications to marketing chatbots. Over the years, its use has expanded to making decisions that may have serious consequences for human rights.[1] As the deployment of AI applications and algorithmic systems grows in relevance, algorithmic discrimination has become a matter of concern that has prompted regulatory responses across the globe, including in the European Union (EU).[2]

In the EU, for instance, the Council presidency and the European Parliament's negotiators reached a political agreement on the draft AI Act on 8 December 2023, and the final version is expected to be approved shortly. Similarly, in the US, there have been efforts towards the regulation of AI. For instance, on 30 October 2023, President Biden issued a landmark Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence that sets new standards for AI safety and security while promoting innovation and competition, among other aims.[3]

It is also worth noting that several EU Data Protection Authorities have issued guidelines on the data-protection-compliant use of AI. For example, the French Data Protection Authority (CNIL) has published a range of comprehensive resources to help entities overcome the major data protection challenges in the use of AI.[4]

In general, multiple actors have stepped up efforts towards the effective regulation of AI, driven by the growing deployment of AI systems for purposes such as evaluating people's personality or skills, and otherwise making decisions that may have a serious impact on the rights of individuals.

A look into effective deployment of AI on inclusion and anti-discrimination

Algorithms and AI models are designed to model and predict real-life outcomes, and in recent years their potential to greatly benefit people has been overshadowed by growing awareness of risks such as discrimination and inequality.[5] In the Netherlands, for example, thousands of people suffered the consequences of a biased self-learning algorithm in the widely reported "toeslagenaffaire", or childcare benefits scandal, in which the algorithm created risk profiles in an effort to detect childcare benefits fraud. A parliamentary report on the case showed that the victims of this algorithmic profiling experienced distress and increased poverty, with at least one case of attempted suicide.[6]

Similarly, in 2018, Reuters reported that Amazon had tried to use AI to build a tool to screen CVs the company had collected over a number of years. Most of those CVs had been submitted by men, and the consequences of that imbalance were not initially taken into consideration. As a result, the system discriminated against women in the screening of CVs and had to be discarded.[7]

AI systems are built on historical datasets and models that may create stereotypes and false assumptions about certain groups of people based on classifications such as gender, race, sexual orientation, age, and religion. However, with a dedicated effort towards the regulation of such systems and greater awareness of the risks among entities seeking to deploy them, the discussion can shift towards innovation, inclusion, and anti-discrimination.

Implications of algorithmic and data driven discrimination

Many AI systems process personal data, which may be used in the training of machine learning models. Personal data can thus be used to analyze and influence human behavior, and several challenges may arise from algorithmic discrimination.[8]

One challenge is that the deployment of AI applications and algorithmic systems makes the sources of discrimination difficult to identify and address, because the decisions involved are largely supported by machines. AI systems are often non-transparent and may offer no explanation to the individuals affected by their deployment.[9]

It is evident that addressing algorithmic discrimination and disadvantages requires a much greater degree of scrutiny from multiple actors including regulators and organizations seeking to deploy such systems.

Conclusion

Some of the challenges arising from the deployment of AI systems have prompted regulatory responses across the world, which present an opportunity to improve how AI systems are developed, deployed, and used. Several public and private actors have already built AI governance structures in anticipation of incoming regulation that address some of these issues. Steps companies can take to ensure a positive impact when deploying AI systems include implementing the guidance issued by Data Protection Authorities on the deployment of AI systems and embedding fundamental data protection principles, such as transparency, in the building of such applications. In general, effective compliance with current and proposed legal and regulatory frameworks promotes an environment of inclusion.

 

 

[1] Council of Europe, ‘AI & discrimination’, https://www.coe.int/en/web/inclusion-and-antidiscrimination/ai-and-discrimination.

[2] Ivana Bartoletti and Raphaële Xenidis, ‘Study on the impact of artificial intelligence systems, their potential for promoting equality, including gender equality, and the risks they may cause in relation to non-discrimination’, Council of Europe, https://rm.coe.int/prems-112923-gbr-2530-etude-sur-l-impact-de-ai-web-a5-1-2788-3289-7544/1680ac7936.

[3] The White House, ‘Fact Sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence’ (US, 30 October 2023) https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[4] French Data Protection Authority (CNIL), ‘Self-assessment guide for artificial intelligence (AI) systems’, https://www.cnil.fr/en/self-assessment-guide-artificial-intelligence-ai-systems.

[5] Ivana Bartoletti and Raphaële Xenidis, see note 2, p. 9.

[6] Melissa Heikkila, ‘Dutch scandal serves as a warning for Europe over risks of using algorithms’ (Politico, 29 March 2022) https://www.politico.eu/article/dutch-scandal-servesas-a-warning-for-europe-over-risks-of-using-algorithms/.

[7] Jeffrey Dastin, ‘Insight – Amazon scraps secret AI recruiting tool that showed bias against women’ (Reuters, 11 October 2018) https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G/.

[8] European Parliament, ‘The Impact of the General Data Protection Regulation (GDPR) on Artificial Intelligence’, EPRS – European Parliamentary Research Service, June 2020, www.europarl.europa.eu/RegData/etudes/STUD/2020/641530/EPRS_STU(2020)641530_EN.pdf.

[9] Gabriele Spina Alì and Ronald Yu, ‘Artificial Intelligence between Transparency and Secrecy: From the EC Whitepaper to the AIA and Beyond’, European Journal of Law and Technology, Vol 12 No 3 (2021), https://www.ejlt.org/index.php/ejlt/article/download/754/1044/3716.

ICTLC Italy
[email protected]