28/12/2022

FRA: Test algorithms for bias to avoid discrimination

Artificial intelligence (AI)-based algorithms affect people everywhere: from deciding what content people see on their social media feeds to determining who will receive State benefits. AI technologies are typically based on algorithms that make predictions to support or even fully automate decision-making. Among the main goals of using AI in this way are increasing efficiency and enabling systems to operate at scale. But a central question, and a fundamental rights concern, is what happens if algorithms become biased against certain groups of people, such as women or immigrants.

The EU Agency for Fundamental Rights (FRA) highlights in its report ‘Bias in algorithms – Artificial intelligence and discrimination’ that the use of AI can affect many fundamental rights. While algorithms can be a force for good, they can also violate the right to privacy or lead to discriminatory decision-making, which has a very real impact on people’s lives.
The report aims to inform policymakers, human rights practitioners and the general public about the risk of bias when using AI, thereby feeding into ongoing policy developments. Bias can be understood in different ways; this analysis investigates it in the context of non-discrimination, one of the key concerns regarding fundamental rights-compliant AI. The findings pinpoint ways to detect and counteract forms of bias that may lead to discrimination, with the ultimate goal of using AI algorithms in a way that respects fundamental rights.
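
To make the idea of testing an algorithm for bias concrete, the following minimal sketch (not taken from the report; the data, group labels and notion of a "positive" decision are all hypothetical) compares the rate of positive decisions across two groups:

# Minimal sketch of a group-fairness check; illustrative only, not FRA's method.
# The data, group labels and decision semantics below are hypothetical.
from collections import defaultdict

def positive_rate_by_group(decisions, groups):
    """Return the share of positive (1) decisions per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for decision, group in zip(decisions, groups):
        counts[group][0] += decision
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Hypothetical model outputs (1 = benefit granted) and group membership.
decisions = [1, 0, 1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = positive_rate_by_group(decisions, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)               # {'A': 0.8, 'B': 0.2}
print(f"gap = {gap:.2f}")  # gap = 0.60

A large gap between groups does not by itself prove discrimination, but it is exactly the kind of signal that should trigger closer scrutiny of a system before and during deployment.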

The report is divided into four parts:
- artificial intelligence and bias: what is the problem?
- feedback loops: how algorithms can influence algorithms (illustrated in the sketch after this list)
- ethnic and gender bias in offensive speech detection
- looking forward: sharpening the fundamental rights focus on artificial intelligence to mitigate bias and discrimination.
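
As a rough illustration of the feedback-loop problem examined in the report's second part (the numbers and decision rule below are invented for this sketch, not drawn from the report), consider an allocation algorithm that directs attention to wherever most incidents have been recorded, while incidents are only recorded where that attention is directed:

import random

# Toy feedback-loop simulation; all parameters are hypothetical.
random.seed(0)

true_rate = {"area_1": 0.5, "area_2": 0.5}  # identical underlying incident rates
recorded = {"area_1": 11, "area_2": 10}     # a small initial imbalance in records

for _ in range(50):
    # The algorithm targets the area with the most recorded incidents so far.
    target = max(recorded, key=recorded.get)
    # New incidents are only recorded in the targeted area.
    if random.random() < true_rate[target]:
        recorded[target] += 1

print(recorded)  # only area_1 grows: the initial imbalance reinforces itself

Even though both areas have identical underlying rates, the algorithm's past outputs determine what data it sees next, so the initial imbalance never corrects itself.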
