Research
Research at Diverse AI aims to further socio-technical understanding of how to democratise the purpose of, use of, and access to AI and data-driven technologies. Research projects are interdisciplinary and collaborative, and aim to create breakthroughs in new AI algorithms, supporting architectures, processes, input datasets, evaluation and governance.
Research Project: Building Diverse Datasets using Community Participatory Research
Using community participatory research methods, we’re working to address the human rights violations, bias, representational harms and stereotypes associated with AI systems by creating diverse datasets that reflect key characteristics of peoples and cultures from the Global South and other underrepresented groups, who are often misrepresented in AI datasets. These groups include BAME and LGBTQ+ communities, disabled people, different age groups, religious groups, and women. Datasets will be built responsibly, adhering to the Open Data Institute’s (ODI’s) data ethics guidelines and incorporating data ethics practices from Toju Duke’s Responsible AI framework. At the conclusion of each project, the resulting dataset will be open-sourced and made freely available, further contributing to the AI research and practitioner communities. This project will start with re-creating an image dataset.
Interested in joining our research projects? Send an email to research@diverse-ai.org.
Developing Critical AI Cultures
Diverse AI, in partnership with the University of the West of England (UWE) and the University of Sheffield, co-designed and hosted an online dialogue on the cultural implications of AI technologies as part of the “Patterns in Practice” research project. This project is funded by the Arts and Humanities Research Council (UKRI).
Read the full report here: Developing Critical AI Cultures - Dialogue Report