Shifting Power: Artificial Intelligence Researchers’ Perspectives from the Margins
Dr Venetia Brown from the Shifting Power team presented…
Artificial Intelligence and “Justice” are currently brought together under several headings, namely: AI for Social Good; Ethical AI; Responsible AI; Fair, Accountable and Transparent AI; Cyberjustice; and AI for Democracy. Each of these domains has its own conceptualisation of the potential benefits and harms of AI, which is not necessarily informed by marginalised voices or by those who may experience those harms. In addition, these domains sit within a wider context of AI research and governance that has pervasive challenges of representation and accessibility. The primary aim of the fellowship, therefore, is to surface other ways of thinking about AI and its impacts that do not centre (primarily) White, Western European notions of innovation, morality and justice. The project draws on theoretical contributions from queer, intersectional feminist scholarship to focus on two key factors in how AI is likely to affect questions of justice: power and marginalisation.
We have named our research group after a quote from Pratyusha Kalluri of the Radical AI Network, who proposed that asking whether AI is good or fair is not the right question: we have to look at how it “shifts power”. Power relationships preserve inequality within our society in real and material terms. How will AI contribute to those inequalities? Is there any chance AI can help to foster new balances of power and, if so, what would this look like in practice? These are the sub-questions with which our research team is concerned, and they require new datasets, new ways of looking at the data we do have, and long-term views of power over time.
Dr Dan McQuillan gave a talk titled “Resisting AI” for the 2nd Ecology of AI Hybrid-Workshop 2024 at the HHAI 2024 Conference, Malmö, Sweden.
https://youtu.be/WCnSCB19l7A
On Friday, May 17th, the Shifting Power team in KMi, alongside Dr Justin Hunt from Queen Mary University and Arts Council England, and Dr Ben Sweeting from the Radical Research…
In this project, we are working to develop a framework for thinking about the impacts of AI that is broader, more interconnected in terms of short-, medium- and long-term beneficiaries, and entangled with issues such as power, wealth, and their resulting influence on the economic, political and legal factors that make certain applications of AI more likely. One of the challenges of thinking about an "ethical" AI or an "AI for Social Good" is the tendency to move conversations into the cultural realm, where goodness and badness may be viewed as culturally subjective terms. One approach to resolving this problem is to unify different definitions and arrive at a set of "universal" principles. Another is to align work with a set of goals that has already been vetted by an international community of stakeholders. In our view, neither approach adequately considers the longer-term, indirect impacts that are likely to influence matters of justice, nor do they take a wide enough scope of potential beneficiaries and potential harms to anticipate how this technology will affect our world. In this project, we are working with collaborators Soraya Kouadri Mostéfaoui and Syed Mustafa Ali to explore ways of considering the impacts of AI and its subfields that include decolonial perspectives of world-systems thinking.
Complementing a broader, ecological perspective on the current and future impacts of AI within the present paradigm of racialised, industrial capitalism, we also wish to identify what might emerge under different circumstances, or when AI is directed toward goals that are not normatively connected to it. Queerness, as a world-building concept, involves disrupting, dismantling and dissolving normative constructs that exclude and oppress. In what ways are queer people engaging with this technology? What does the project of "queering AI" mean so far? These are some of the questions we will be exploring with artists, writers, researchers and activists associated with the queer community.
One important strand of our research concerns how AI researchers and developers come to hold their viewpoints on the impacts of AI and on their role in determining those impacts. What levels of impact do AI researchers consider? How do they conceptualise the harms and benefits of AI? What principles, if any, do they follow to mitigate potential harms? Where did they learn this? Through this line of research, we hope to understand more about where and how those working on AI begin to formulate their ideas about what AI should become.
Bayer, V., Mulholland, P., Hlosta, M., Farrell, T., Herodotou, C. and Fernandez, M., (2024) Co‐creating an equality diversity and inclusion learning analytics dashboard for addressing awarding gaps in higher education, British Journal of Educational Technology
Farrell, T., Alani, H. and Mikroyannidis, A., (2024) Mediating learning with learning analytics technology: guidelines for practice, Teaching in Higher Education
Kwarteng, J., Farrell, T., Third, A. and Fernandez, M., (2023) Annotators’ Perspectives: Exploring the Influence of Identity on Interpreting Misogynoir, ASONAM 2023: The 2023 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining
Farrell, T. and Kouadri Mostéfaoui, S., (2023) False Hopes in Automated Abuse Detection (Short Paper), CEUR Workshop Proceedings of the Workshops at the Second International Conference on Hybrid Human-Artificial Intelligence (HHAI 2023)
Sides, T., Farrell, T. and Kbaier, D., (2023) Understanding the Acceptance of Artificial Intelligence in Primary Care, 25th International Conference on Human-Computer Interaction
Mensio, M., Burel, G., Farrell, T. and Alani, H., (2023) MisinfoMe: A Tool for Longitudinal Assessment of Twitter Accounts’ Sharing of Misinformation, Adjunct Proceedings of the 31st ACM Conference on User Modeling, Adaptation and Personalization
Kwarteng, J., Farrell, T., Third, A., Burel, G. and Fernandez, M., (2023) Understanding Misogynoir: A Study of Annotators’ Perspectives, 15th ACM Web Science Conference 2023 (WebSci ’23)
Alani, H., Burel, G. and Farrell, T., (2022) Elon Musk could roll back social media moderation – just as we’re learning how it can stop misinformation
Kbaier, D., Ismail, N., Farrell, T. and Kane, A., (2022) Experience of Health Professionals with Misinformation and Its Impact on Their Job Practice: Qualitative Interview Study., JMIR formative research
Kwarteng, J., Coppolino Perfumi, S., Farrell, T., Third, A. and Fernandez, M., (2022) Misogynoir: Challenges in Detecting Intersectional Hate, Social Network Analysis and Mining (SNAM)
Kwarteng, J., Coppolino Perfumi, S., Farrell, T. and Fernandez, M., (2021) Misogynoir: Public Online Response Towards Self-Reported Misogynoir, ASONAM '21: Proceedings of the 2021 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining
Burel, G., Farrell, T. and Alani, H., (2021) Demographics and topics impact on the co-spread of COVID-19 misinformation and fact-checks on Twitter, Information Processing & Management
Piccolo, L., Blackwood, A., Farrell, T. and Mensio, M., (2021) Agents for Fighting Misinformation Spread on Twitter: Design Challenges, CUI 2021 - 3rd Conference on Conversational User Interfaces
Piccolo, L., Bertel, D., Farrell, T. and Troullinou, P., (2021) Opinions, Intentions, Freedom of Expression, ... , and Other Human Aspects of Misinformation Online, CHI EA '21: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems
Farrell, T., Araque, O., Fernandez, M. and Alani, H., (2020) On the use of Jargon and Word Embeddings to Explore Subculture within the Reddit’s Manosphere, WebSci '20: 12th ACM Conference on Web Science
De Lange, P., Göschlberger, B., Farrell, T., Neumann, A. and Klamma, R., (2020) Decentralized Learning Infrastructures for Community Knowledge Building, IEEE Transactions on Learning Technologies
Burel, G., Farrell, T., Mensio, M., Khare, P. and Alani, H., (2020) Co-Spread of Misinformation and Fact-Checking Content during the Covid-19 Pandemic, Proceedings of the 12th International Social Informatics Conference (SocInfo)
Piccolo, L., Puska, A., Pereira, R. and Farrell, T., (2020) Pathway to a Human-Values Based Approach to Tackle Misinformation Online, HCI International 2020
Farrell, T., Fernandez, M., Novotny, J. and Alani, H., (2019) Exploring Misogyny across the Manosphere in Reddit, WebSci19
Farrell, T., Piccolo, L., Perfumi, S. and Alani, H., (2019) Understanding the Role of Human Values in the Spread of Misinformation, Truth and Trust Online
Piccolo, L., Joshi, S., Karapanos, E. and Farrell, T., (2019) Challenging Misinformation: Exploring Limits and Approaches, Lecture Notes in Computer Science
Farrell, T., (2018) Affordances of Learning Analytics for Mediating Learning
de Lange, P., Göschlberger, B., Farrell, T. and Klamma, R., (2018) A Microservice Infrastructure for Distributed Communities of Practice, 13th European Conference on Technology Enhanced Learning, EC-TEL 2018
Farrell-Frey, T., Scavo, B., Ardito, A. and De Liddo, A., (2018) Augmented Reality in Activism: Go March
Farrell, T., Mikroyannidis, A. and Alani, H., (2017) “We’re Seeking Relevance”: Qualitative Perspectives on the Impact of Learning Analytics on Teaching and Learning, EC-TEL 2017: Data Driven Approaches in Digital Education
de Lange, P., Farrell-Frey, T., Göschlberger, B. and Klamma, R., (2017) Transferring a Question-Based Dialog Framework to a Distributed Architecture, EC-TEL 2017: Data Driven Approaches in Digital Education
Farrell Frey, T., Iwa, K. and Mikroyannidis, A., (2017) Scaffolding Reflection: Prompting Social Constructive Metacognitive Activity in Non-Formal Learning, International Journal of Technology Enhanced Learning
Farrell Frey, T., Gkotsis, G. and Mikroyannidis, A., (2016) Are you thinking what I'm thinking? Representing Metacognition with Question-based Dialogue, CEUR Workshop Proceedings
Mikroyannidis, A. and Farrell-Frey, T., (2015) Developing Self-Regulated Learning through Reflection on Learning Analytics in Online Learning Environments, ascilite2015: Globally connected, digitally enabled
The Shifting Power team presented "ChatGPT, your new neoliberal friend", a reflection on the Whiteness of the internet and its impact on informational searches using ChatGPT, at the open forum session "ChatGPT and Friends: How Generative AI is Going to Change Everything" on March 23rd. Slides will be made available on the event website.