{"id":2177,"date":"2021-03-09T10:15:06","date_gmt":"2021-03-09T10:15:06","guid":{"rendered":"http:\/\/dadd-project.org\/?page_id=2177"},"modified":"2021-03-09T11:18:08","modified_gmt":"2021-03-09T11:18:08","slug":"research-streams","status":"publish","type":"page","link":"https:\/\/dadd-project.org\/research-streams\/","title":{"rendered":"Research Streams"},"content":{"rendered":"\n
Addressing and attesting digital discrimination, and remedying the deficiencies it creates, is a problem that must be faced from a cross-disciplinary perspective, spanning the technical, legal and social dimensions of the problem. In this stream, we study the relationship between these dimensions and how they can be combined to better understand discrimination.
Language carries implicit human biases, functioning both as a reflection and a perpetuation of the stereotypes that people carry with them. ML-based NLP methods such as word embeddings have been shown to learn such language biases with striking accuracy, and this capability has been successfully exploited as a tool to quantify and study human biases. Here we create a data-driven approach to automatically discover, and help interpret, the conceptual biases encoded in the language of online communities.
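To make the idea of quantifying bias in embeddings concrete, the sketch below shows a minimal WEAT-style association measure: a concept's bias is the difference between its mean cosine similarity to two attribute word sets. This is a common illustration of the general technique, not the project's actual discovery pipeline; the word sets, the `emb` lookup, and the random toy vectors are all illustrative assumptions.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(vec, attrs_a, attrs_b):
    """Mean similarity to attribute set A minus mean similarity to set B:
    positive values lean towards A, negative towards B."""
    return (np.mean([cosine(vec, a) for a in attrs_a])
            - np.mean([cosine(vec, b) for b in attrs_b]))

def bias_score(targets, words_a, words_b, emb):
    """Average association of a set of target words (a concept) with two
    attribute word sets, using embeddings learned from a community's text."""
    attrs_a = [emb[w] for w in words_a]
    attrs_b = [emb[w] for w in words_b]
    return float(np.mean([association(emb[t], attrs_a, attrs_b)
                          for t in targets]))

# Toy stand-in: random vectors instead of embeddings trained (e.g. with
# word2vec) on a community's posts, so the score here is meaningless
# beyond showing how the measure is computed.
rng = np.random.default_rng(0)
words = ["nurse", "engineer", "she", "woman", "he", "man"]
emb = {w: rng.normal(size=50) for w in words}
print(bias_score(["nurse", "engineer"], ["she", "woman"], ["he", "man"], emb))
```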
Using this data-driven discovery of biases, we explore the biases present in social media and online communities and present them on a visually appealing, interactive website.
<\/p>\n<\/div><\/div>\n\n\n\n
Biases and discrimination in models and datasets pose a significant challenge to the adoption of ML by companies and public-sector organisations, despite ML's potential to deliver significant cost reductions and more efficient decisions. Here, we use norms as an abstraction to represent the different situations that may lead to digital discrimination, allowing non-technical users to benefit from ML. In particular, we formalise non-discrimination norms in the context of ML systems and propose an algorithm to check whether ML systems violate these norms.
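As a rough illustration of what checking such a norm can look like, the sketch below tests a demographic-parity-style norm on a model's decisions: the positive-outcome rate must not differ between a protected group and everyone else by more than a tolerance. This is one example norm under assumed inputs, not the project's formalisation; the function name, threshold, and data are hypothetical.

```python
import numpy as np

def violates_parity_norm(y_pred, group, epsilon=0.05):
    """Return True if the positive-outcome rate for the protected group
    (group == 1) and the rest (group == 0) differ by more than epsilon."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_protected = y_pred[group == 1].mean()
    rate_rest = y_pred[group == 0].mean()
    return abs(rate_protected - rate_rest) > epsilon

# Hypothetical example: model decisions (1 = loan approved) and a binary
# protected-group indicator for the same individuals.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
protected = [1, 1, 1, 1, 0, 0, 0, 0]
print(violates_parity_norm(decisions, protected))  # True: rates 0.75 vs 0.25
```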