SocInfo2020 Tutorial – Discovering Gender Bias and Discrimination in Language

Today we presented our tutorial ‘Discovering Gender Bias and Discrimination in Language’ at the online SocInfo2020 conference. It was a great experience for all of us, and we hope you all enjoyed it! The live session was recorded by the organisers and will probably be shared online very soon.

The tutorial focuses on the issue of digital discrimination, particularly towards gender. Its main goal is to help participants improve their digital literacy by understanding the social issues at stake in digital (gender) discrimination and by learning about technical applications and solutions. The tutorial is divided into four parts, iterating twice through the social and technical dimensions. We draw on our own research in language modelling and Word Embeddings to clarify how human gender biases may be incorporated into AI/ML models. We first offer a short introduction to digital discrimination and (gender) bias, give examples of gender discrimination in the field of AI/ML, and discuss the gender binary (M/F) that is presupposed when dealing with computational bias towards gender. We then move to a technical perspective, introducing the DADD Language Bias Visualiser, which allows us to discover and analyse gender bias using Word Embeddings. Finally, we show how computational models of bias and discrimination are built on implicit binaries, and discuss with participants the difficulties these assumptions raise in times of post-binary gender attribution.

Discovering and Categorising Language Biases in Reddit

Our article “Discovering and Categorising Language Biases in Reddit” was accepted last week at the International AAAI Conference on Web and Social Media 2021 (ICWSM 2021). Although the proceedings will not be ready until early 2021, you can find the author’s version of the paper here.

We present a method to explore language bias in various Reddit communities by comparing, in the embedding space of each community, the words most closely correlated with different concepts. In this way, we study gender bias in r/TheRedPill, religion bias in r/Atheism, and ethnicity bias in r/The_Donald. Among the important biases we discover, r/TheRedPill, for instance, pictures women with words related to externality and physical appearance, such as flirtatious and fuckable, and men through descriptive adjectives serving as indicators of subjectivity, such as visionary and tactician (see our Bias Visualisation Tool).
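To make the idea concrete, here is a minimal sketch (not our exact implementation) of how such biased words can be ranked with gensim word embeddings; the model file name and the target word lists are illustrative placeholders only.

import numpy as np
from gensim.models import KeyedVectors

# Hypothetical file: a word2vec model trained on comments from a single subreddit.
vectors = KeyedVectors.load("theredpill_word2vec.kv")

# Illustrative target sets defining the two concepts being compared.
female_targets = ["woman", "women", "girl", "she", "her"]
male_targets = ["man", "men", "guy", "he", "his"]

def centroid(words):
    # Average (and re-normalise) the vectors of the target words present in the vocabulary.
    vecs = [vectors[w] for w in words if w in vectors]
    c = np.mean(vecs, axis=0)
    return c / np.linalg.norm(c)

female_c, male_c = centroid(female_targets), centroid(male_targets)

def bias(word):
    # Positive: closer to the female centroid; negative: closer to the male centroid.
    v = vectors[word] / np.linalg.norm(vectors[word])
    return float(np.dot(v, female_c) - np.dot(v, male_c))

scores = {w: bias(w) for w in vectors.index_to_key}
print("Most female-leaning words:", sorted(scores, key=scores.get, reverse=True)[:20])
print("Most male-leaning words:", sorted(scores, key=scores.get)[:20])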

In case you are interested in analysing language biases in your own datasets, we are also sharing the code here. Please use it, and feel free to ask any questions or report any errors/problems you find on GitHub!

Abstract. We present a data-driven approach using word embeddings to discover and categorise language biases on the discussion platform Reddit. As spaces for isolated user communities, platforms such as Reddit are increasingly connected to issues of racism, sexism and other forms of discrimination. Hence, there is a need to monitor the language of these groups. One of the most promising AI approaches to trace linguistic biases in large textual datasets involves word embeddings, which transform text into high-dimensional dense vectors and capture semantic relations between words. Yet, previous studies require predefined sets of potential biases to study, e.g., whether gender is more or less associated with particular types of jobs. This makes these approaches unfit to deal with smaller and community-centric datasets such as those on Reddit, which contain smaller vocabularies and slang, as well as biases that may be particular to that community. This paper proposes a data-driven approach to automatically discover language biases encoded in the vocabulary of online discourse communities on Reddit. In our approach, protected attributes are connected to evaluative words found in the data, which are then categorised through a semantic analysis system. We verify the effectiveness of our method by comparing the biases we discover in the Google News dataset with those found in previous literature. We then successfully discover gender bias, religion bias, and ethnic bias in different Reddit communities. We conclude by discussing potential application scenarios and limitations of this data-driven bias discovery method.

A Normative Approach to Attest Digital Discrimination

Our new article “A Normative Approach to Attest Digital Discrimination” has been accepted at the Advancing Towards the SDGs: Artificial Intelligence for a Fair, Just and Equitable World Workshop (AI4EQ) of the 24th European Conference on Artificial Intelligence (ECAI’20)!

In the paper we formalise non-discrimination norms in the context of ML systems and propose an algorithm to check whether ML systems violate these norms. The code is publicly available here.

Abstract. Digital discrimination is a form of discrimination whereby users are automatically treated unfairly, unethically or just differently based on their personal data by a machine learning (ML) system. Examples of digital discrimination include low-income neighbourhoods targeted with high-interest loans or low credit scores, and women being undervalued by 21% in online marketing. Recently, different techniques and tools have been proposed to detect biases that may lead to digital discrimination. These tools often require technical expertise to be executed and for their results to be interpreted. To allow non-technical users to benefit from ML, simpler notions and concepts to represent and reason about digital discrimination are needed. In this paper, we use norms as an abstraction to represent different situations that may lead to digital discrimination. In particular, we formalise non-discrimination norms in the context of ML systems and propose an algorithm to check whether ML systems violate these norms.
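As a toy illustration of what checking such a norm against a trained ML system could look like (the paper's formalisation is more general than this), the sketch below encodes a simple demographic-parity norm and tests whether a set of predictions violates it; the variable names and the threshold are assumptions made for the example.

import numpy as np

def demographic_parity_gap(y_pred, protected):
    # Difference in positive-outcome rates between the two groups of a binary protected attribute.
    y_pred, protected = np.asarray(y_pred), np.asarray(protected)
    return abs(y_pred[protected == 0].mean() - y_pred[protected == 1].mean())

def violates_norm(y_pred, protected, threshold=0.05):
    # Norm: "positive-outcome rates may not differ between groups by more than `threshold`".
    return demographic_parity_gap(y_pred, protected) > threshold

# Toy example: predicted loan approvals for 8 applicants, 4 per group.
predictions = [1, 1, 1, 0, 1, 0, 0, 0]
group = [0, 0, 0, 0, 1, 1, 1, 1]
print(violates_norm(predictions, group))  # True: 0.75 vs 0.25 approval rate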

New article on transparent AI

Tom, Xavi, Jose and Mark wrote an article on transparency in AI and how the concept means different things for different stakeholders and disciplines. It will be published in IEEE Computer magazine; the pre-print version is now available on ResearchGate.

https://www.researchgate.net/publication/342082930_Transparency_for_whom_Assessing_discriminatory_AI

Abstract: AI decision-making can cause discriminatory harm to many vulnerable groups. Redress is often suggested through increased transparency of these systems. But who are we implementing it for? This article seeks to identify what transparency means for technical, legislative and public realities and stakeholders.

DADD x SGL app

Unfortunately, the DADD workshop at Science Gallery London was cancelled due to COVID-19. We would, however, like to share the wonderful app that Steve Brown built for the exhibit, which uses Word Embeddings models built by DADD. Users play a small game to explore language bias in the Google News dataset and in The Red Pill, a notorious community on Reddit.

See http://sgl.stevebrown.co/dadd to play the game and learn more.

DADD Video Lectures: Bias and Discrimination, Interviewing

We have uploaded a web lecture introducing you to issues of bias and discrimination in machine learning, with a particular focus on gender. Dr. Mark Cote and Dr. Xavier Ferrer explain digital discrimination and gender bias. They also introduce the DADD Language Bias Visualiser, a Word Embeddings-powered tool to explore language biases towards gender in several Reddit datasets and the Google News dataset.

Our research student Héloïse Eloi-Hammer has also made a video explaining interviewing as a method: how to go about interviewing, and how to analyse interviews using software such as NVivo.

DADD Language Bias Visualiser

The DADD Language Bias Visualiser is online! The team has used Word Embeddings to connect target concepts such as ‘male’ or ‘female’ to evaluative attributes found in online data, which are then categorised through clustering algorithms and labelled through a semantic analysis system into more general (conceptual) biases. Categorising biases allows us to give a broad picture of the biases present in discourse communities, such as those on Reddit.
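For readers curious about the categorisation step, here is a minimal sketch of how discovered bias words can be grouped into clusters of semantically related terms with scikit-learn; it assumes a gensim KeyedVectors model named vectors and an illustrative word list, and the subsequent labelling through a semantic analysis system is not shown.

import numpy as np
from sklearn.cluster import KMeans

# Illustrative list of biased words discovered in an earlier step.
biased_words = ["flirtatious", "gorgeous", "slender", "visionary", "tactician", "rational"]
present = [w for w in biased_words if w in vectors]   # keep only words in the vocabulary
X = np.stack([vectors[w] for w in present])           # their embedding vectors

# Group the biased words into clusters of semantically related terms.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for label in range(kmeans.n_clusters):
    print(f"Cluster {label}:", [w for w, l in zip(present, kmeans.labels_) if l == label])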

Check it out at https://xfold.github.io/WE-GenderBiasVisualisationWeb/

4Chan and the alt-right workshop at King’s College London

DADD was present at a workshop on digital methods to analyse web platforms such as 4Chan. The workshop was organised by the Department of War Studies at King’s College London. It was widely attended by staff and industry professionals, as the questions raised about radicalisation, weaponisation, and anonymity on such web platforms are currently at the centre of attention.

DADD Workshop at King’s College London

On March 4, 2019, the DADD team was joined by José Ortega from BigML for a workshop on machine learning and digital discrimination.

BigML offers a web-based Machine Learning platform that includes a selection of robustly engineered Machine Learning algorithms, both supervised and unsupervised, such as classification and regression, cluster analysis, anomaly detection, and topic modelling.

Students at the Department of Digital Humanities worked with BigML’s platform to discover discriminatory patterns in different real-world datasets, and presented their findings in one-minute presentations. The key takeaway of the day: datasets are rarely obviously discriminatory. Pruning the dataset and tracing proxy variables are key practices!
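As a small illustration of what tracing a proxy variable can look like in practice (a made-up example, not one of the workshop datasets), the sketch below checks which features of a toy loan dataset correlate most strongly with a protected attribute; strongly correlated features are candidate proxies that deserve a closer look.

import pandas as pd

# Hypothetical dataset: 'postcode_income_band' may act as a proxy for ethnicity.
df = pd.DataFrame({
    "ethnicity":            [0, 0, 0, 1, 1, 1, 0, 1],
    "postcode_income_band": [3, 3, 2, 1, 1, 2, 3, 1],
    "loan_amount":          [20, 18, 15, 14, 22, 16, 19, 17],
})

# Absolute correlation of every candidate feature with the protected attribute.
protected = "ethnicity"
proxies = df.drop(columns=[protected]).corrwith(df[protected]).abs().sort_values(ascending=False)
print(proxies)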