
Algorithmic bias


Algorithmic bias is a term that refers to the systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one group to the detriment of others. This type of bias can be introduced at various stages, including during data collection, programming, and when making design choices. It can mirror pre-existing social and institutional biases, or emerge uniquely within digital contexts. Notable examples include biases in search engine[2] results, social media algorithms, and machine learning[1] systems. Algorithmic bias can also manifest in more specific forms, such as gender, political, or technical biases. Furthermore, it has significant impacts in various sectors, including commercial spheres, legal systems, and online platforms, leading to discriminatory practices and violations of rights.

Terms definitions
1. machine learning. Machine learning, a term coined by Arthur Samuel in 1959, is a field of study that originated from the pursuit of artificial intelligence. It employs techniques that allow computers to improve their performance over time through experience. This learning process often mimics the human cognitive process. Machine learning applies to various areas such as natural language processing, computer vision, and speech recognition. It also finds use in practical sectors like agriculture, medicine, and business for predictive analytics. Theoretical frameworks such as the Probably Approximately Correct learning and concepts like data mining and mathematical optimization form the foundation of machine learning. Specialized techniques include supervised and unsupervised learning, reinforcement learning, and dimensionality reduction, among others.
2. search engine. A search engine is a vital tool that functions as part of a distributed computing system. It's a software system that responds to user queries by providing a list of hyperlinks, summaries, and images. It utilizes a complex indexing system, which is continuously updated by web crawlers that mine data from web servers. Some content, however, remains inaccessible to these crawlers. The speed and efficiency of a search engine are highly dependent on its indexing system. Users interact with search engines via a web browser or app, inputting queries and receiving suggestions as they type. The results may be filtered to specific types, and the system can be accessed on various devices. This tool is significant as it allows users to navigate the vast web, find relevant content, and efficiently retrieve information.
Algorithmic bias (Wikipedia)

Algorithmic bias describes systematic and repeatable errors in a computer system that create "unfair" outcomes, such as "privileging" one category over another in ways different from the intended function of the algorithm.

A flow chart showing the decisions made by a recommendation engine, c. 2001

Bias can emerge from many factors, including but not limited to the design of the algorithm or the unintended or unanticipated use or decisions relating to the way data is coded, collected, selected or used to train the algorithm. For example, algorithmic bias has been observed in search engine results and social media platforms. This bias can have impacts ranging from inadvertent privacy violations to reinforcing social biases of race, gender, sexuality, and ethnicity. The study of algorithmic bias is most concerned with algorithms that reflect "systematic and unfair" discrimination. This bias has only recently been addressed in legal frameworks, such as the European Union's General Data Protection Regulation (2018) and the proposed Artificial Intelligence Act (2021).
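The data-collection pathway mentioned above can be made concrete with a minimal sketch. The numbers and group labels here are hypothetical: both groups are equally qualified in the underlying population, but the collection step records positives from group A and negatives from group B disproportionately, so a naive model learns a group difference that does not exist in reality.

```python
# Minimal sketch (hypothetical data): selection during data collection
# creates a learned disparity between two identical groups.

def learned_rate(samples):
    """Fraction of positive labels a naive base-rate model learns per group."""
    rates = {}
    for group in {g for g, _ in samples}:
        labels = [y for g, y in samples if g == group]
        rates[group] = sum(labels) / len(labels)
    return rates

# Underlying reality (not shown to the model): both groups qualify at 50%.
# Biased collection: 40 qualified / 10 unqualified records kept for A,
# but only 10 qualified / 40 unqualified kept for B.
collected = ([("A", 1)] * 40 + [("A", 0)] * 10 +
             [("B", 1)] * 10 + [("B", 0)] * 40)

rates = learned_rate(collected)
# The model now treats A as far more likely to qualify than B,
# purely as an artifact of how the data was collected.
print(rates)  # {'A': 0.8, 'B': 0.2} (dict order may vary)
```

Any system trained on such a sample would reproduce the disparity in its outputs, which is why the selection and coding of training data is itself a design decision that can encode bias.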

As algorithms expand their ability to organize society, politics, institutions, and behavior, sociologists have become concerned with the ways in which unanticipated output and manipulation of data can impact the physical world. Because algorithms are often considered to be neutral and unbiased, they can inaccurately project greater authority than human expertise (in part due to the psychological phenomenon of automation bias), and in some cases, reliance on algorithms can displace human responsibility for their outcomes. Bias can enter into algorithmic systems as a result of pre-existing cultural, social, or institutional expectations; by how features and labels are chosen; because of technical limitations of their design; or by being used in unanticipated contexts or by audiences who are not considered in the software's initial design.

Algorithmic bias has been cited in cases ranging from election outcomes to the spread of online hate speech. It has also arisen in criminal justice, healthcare, and hiring, compounding existing racial, socioeconomic, and gender biases. The relative inability of facial recognition technology to accurately identify darker-skinned faces has been linked to multiple wrongful arrests of black men, an issue stemming from imbalanced datasets. Problems in understanding, researching, and discovering algorithmic bias persist due to the proprietary nature of algorithms, which are typically treated as trade secrets. Even when full transparency is provided, the complexity of certain algorithms poses a barrier to understanding their functioning. Furthermore, algorithms may change, or respond to input or output in ways that cannot be anticipated or easily reproduced for analysis. In many cases, even within a single website or application, there is no single "algorithm" to examine, but a network of many interrelated programs and data inputs, even between users of the same service.
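A common way to surface the kind of disparity cited in the facial recognition example is to compare error rates across groups. The following is a hedged sketch of such an audit; the group names, record format, and counts are hypothetical illustrations, not real benchmark data.

```python
# Hypothetical fairness audit: compare false positive rates per group.
# A false positive here is a person wrongly flagged as a match.

def false_positive_rate(records, group):
    """FPR = wrongly flagged true negatives / all true negatives in a group."""
    negatives = [r for r in records if r["group"] == group and r["actual"] == 0]
    flagged = [r for r in negatives if r["predicted"] == 1]
    return len(flagged) / len(negatives)

# Illustrative records: 100 true non-matches per group.
records = (
    [{"group": "light", "actual": 0, "predicted": 1}] * 1 +
    [{"group": "light", "actual": 0, "predicted": 0}] * 99 +
    [{"group": "dark",  "actual": 0, "predicted": 1}] * 10 +
    [{"group": "dark",  "actual": 0, "predicted": 0}] * 90
)

fpr_light = false_positive_rate(records, "light")  # 0.01
fpr_dark = false_positive_rate(records, "dark")    # 0.10
# A tenfold gap in false positives means members of one group are
# wrongly flagged far more often, even if overall accuracy looks high.
```

Aggregate accuracy hides this: over 94% of all records above are classified correctly, which is one reason such disparities can persist undetected until errors are broken out by group.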

" Terug naar Woordenlijst Index
Scroll naar boven