
Participatory AI – what now, what next? 

Charlotte O’Neill, MSc

Participatory AI in the spotlight

As stakeholders across the globe grapple with questions about how AI should be governed, calls for ‘Participatory AI’ are growing. Broadly speaking, the challenge currently facing governments, industry and civil society is to balance AI’s potential opportunities and risks, and to do so in a timely, proportionate way. This raises a fundamental question: what does good governance look like for a technology that is evolving so rapidly? Before examining which AI governance mechanisms are put in place, we need to consider how we arrive at them. After all, evidence-based policy-making does not mean making decisions on the basis of scientific evidence and expert advice alone. Rather, it combines the best available scientific evidence with policy-makers’ understanding of a society’s needs (European Parliamentary Research Service, 2021). This requires contextualising evidence in terms of citizens’ expectations, values and preferences. With this in mind, integrating citizens’ perspectives into ongoing AI governance debates is essential for reaching effective policy decisions. Against this backdrop, it’s unsurprising that ‘Participatory AI’ is a burgeoning topic. 

What is ‘Participatory AI’? 

Participatory AI has been defined as “the involvement of a wider range of stakeholders than just technology developers in the creation of an AI system, model, tool or application” (Berditchevskaia et al., 2021). The central idea is that including the perspectives of users and impacted stakeholders leads to more responsible and inclusive AI. Participatory AI practices intersect multiple disciplines. In fact, Delgado et al. (2023) highlight nine schools of thought relevant to the field, including but not limited to user-centred design, value-sensitive design, participatory action research, participatory democracy and deliberation theory. This multidisciplinarity is reflected in the range of actors experimenting with participatory processes related to AI, as well as their varying approaches and goals. For example, Belgium has recently launched the beEU citizen panel, which will see 60 participants discussing AI in the context of EU politics over three weekends, with results set to inform the European strategic agenda 2024–2029 (beEU, n.d.). The independent research centre the Ada Lovelace Institute previously convened a Citizens’ Biometrics Council, bringing together 50 members of the U.K. public to discuss the use of biometric technologies, with results used to better inform public debate (Ada Lovelace Institute, 2021). The Collective Intelligence Project, an incubator for new governance models, has partnered with government, industry and non-government organisations. One of its initiatives, ‘Alignment Assemblies’, is designed to connect public input to AI development and deployment decisions (“The Collective Intelligence Project – Alignment Assemblies”, 2023). 

Nuances in Participatory AI initiatives     

There is currently a lack of consensus on the standards and dimensions that should be used to evaluate participatory mechanisms (Birhane et al., 2022). In response, research is emerging to facilitate shared understanding and analysis. For example, Delgado et al.’s Parameters of Participation framework (2023) provides a useful descriptive schema for mapping participatory AI interventions. It outlines different dimensions of participation (the goal, scope and form of participation) and the modes of participation, which exist on a spectrum (consult, include, collaborate or own). Similarly, Berditchevskaia et al. (2021) proposed four levels of participatory AI: consultation, contribution, collaboration and co-creation. In simple terms, work like this helps to distinguish between, for example: people being asked how they believe an AI system should function in order to indirectly inform policy, people collaborating with developers to co-design an AI application, or people being invited to shape the design of a participatory AI initiative itself. This is important because, as Birhane et al. (2022) argue, a contextual and nuanced understanding of Participatory AI, as well as consideration of who the primary beneficiaries are, is critical to realising the opportunities and benefits that participation brings. Indeed, critical questions have been raised about participatory AI initiatives, including queries over transparency (e.g., who decides what questions are put to participants in the first place) and accountability (e.g., how the results of a participatory process are actioned, if at all). Given the wide range of disciplines Participatory AI draws from, these issues are certainly not novel. However, the rapid proliferation of AI and the pressing need to develop effective governance mechanisms mean they warrant ongoing consideration, especially given the broad appeal that Participatory AI currently holds. 
While a comprehensive critical analysis is beyond the scope of this piece, some preliminary observations are outlined in the following section. 

Participatory AI’s evolving ecosystems and practices 

As mentioned, the Participatory AI ecosystem involves actors spanning government, industry, academia and civil society, often working in collaboration. It is an emerging space, so tracing the relative roles and responsibilities of different actors, as well as how they might evolve over time, will be key to ensuring the promises of participation are realised. Krick (2022) points out that governance increasingly depends on specialised expertise at the same time as calls for participation grow. Organisations that bring academic and professional expertise, such as Stanford’s Deliberative Democracy Lab and The Behavioural Insights Team, have partnered with technology companies; a recent example gathered community perspectives on the future development of AI chatbots (Stanford Deliberative Democracy Lab, 2024). Depending on how future initiatives are designed and implemented, we may see expert organisations like these increasingly play an intermediary role between impacted stakeholders and the AI industry. Krick (2022) also notes that advocacy groups play a key role due to the combination of their experience-based expertise and their mandate to speak on behalf of others. This underscores the importance of civil society organisations in the ecosystem for representing communities affected by AI. Where impacted communities are concerned, the design choices behind participatory initiatives matter. Belgium’s previously mentioned beEU panel notes that citizen selection accounted for linguistic and geographical distribution as well as maximising representativeness of the Belgian population in terms of gender, age, profession and level of education. However, it also noted “a distinct inclusion of young people who will be voting in the European elections for the first time in 2024” (beEU, n.d.). 
The Ada Lovelace Institute’s report on its Citizens’ Biometrics Council states that selection criteria purposefully accounted for “the disproportionate and biased impacts of biometric technologies on underrepresented and marginalised groups” (Ada Lovelace Institute, 2021, p. 15). If Participatory AI continues to scale and more actors begin to engage in its practices, we might expect normative transparency measures to emerge around the design of engagements, given the value-based choices that may be required. The evolution of internet platforms’ transparency reporting offers a reference point: such reports originally emerged to shed light on how platforms responded to government requests for user data and content removals, areas that similarly required them to engage with values-laden topics, in that instance digital privacy and freedom of expression (“TSPA – History of Transparency Reports”, n.d.). While transparency is by no means a silver bullet, a culture of openness fostered by all actors from the earliest stages can support knowledge sharing and the development of best practices in the Participatory AI field. 

Looking ahead 

It’s worth keeping in mind that policies governing new technologies evolve over decades. Drawing on recent history, the privacy laws playing a central role in internet governance today were decades in the making. Far from beginning with the EU’s General Data Protection Regulation in 2018, countries enacted various privacy laws as early as the 1970s and 1980s in response to the growth of personal computing (World Privacy Forum, 2023). So, while the AI governance sphere is currently experiencing a flurry of activity, we are still in the early stages. With the conversation around Participatory AI likely to mature in line with continued AI development and evolving public preferences, there needs to be space for ongoing openness, dialogue and questions – most importantly, from those whose lives will be impacted by AI. 


References 

Ada Lovelace Institute (2021). The Citizens’ Biometrics Council: Recommendations and findings of a public deliberation on biometrics technology, policy and governance. https://www.adalovelaceinstitute.org/wp-content/uploads/2021/03/Citizens_Biometrics_Council_final_report.pdf 

beEU (n.d.). Retrieved from https://belgian-presidency.consilium.europa.eu/en/programme/citizen-participation/ 

Berditchevskaia, A., Peach, K., & Malliaraki, E. (2021). Participatory AI for humanitarian innovation: a briefing paper. Nesta. https://media.nesta.org.uk/documents/Nesta_Participatory_AI_for_humanitarian_innovation_Final.pdf 

Birhane, A., Isaac, W., Prabhakaran, V., Díaz, M., Elish, M.C., Gabriel, I. & Mohamed, S. (2022, October 6-9). Power to the People? Opportunities and Challenges for Participatory AI. Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO ’22), Arlington, VA, USA. https://dl.acm.org/doi/pdf/10.1145/3551624.3555290 

Delgado, F., Yang, S., Madaio, M. & Yang, Q. (2023, October 30-November 01). The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice. Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO ’23), Boston, MA, USA.  https://dl.acm.org/doi/fullHtml/10.1145/3617694.3623261

European Union, European Parliamentary Research Service. (2021). Evidence for policy-making, foresight based scientific advice. https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/690529/EPRS_BRI(2021)690529_EN.pdf 

Krick, E. (2022). Participatory Governance Practices at the Democracy-Knowledge-Nexus. Minerva, 60, 467–487. https://link.springer.com/article/10.1007/s11024-022-09470-z 

Stanford Deliberative Democracy Lab (2024). Meta Community Forum Results Analysis. https://fsi9-prod.s3.us-west-1.amazonaws.com/s3fs-public/2024-03/meta_ai_final_report_2024-04_v28.pdf 

The Collective Intelligence Project – Alignment Assemblies. (2023). Retrieved from https://cip.org/alignmentassemblies 

TSPA – History of Transparency Reports. (n.d.). Retrieved from https://www.tspa.org/curriculum/ts-fundamentals/transparency-report/history-transparency-reports/ 

World Privacy Forum. (2023). Risky Analysis: Assessing and Improving AI Governance Tools. https://www.worldprivacyforum.org/wp-content/uploads/2023/12/WPF_Risky_Analysis_December_2023_fs.pdf 

—————————————————————————————————————————————————-

Charlotte O’Neill is a multi-disciplinary researcher, exploring emerging digital technologies through a socio-technical lens. She holds an MSc. Digital Policy from University College Dublin and a B.A. Business and Economics from Trinity College Dublin. She is a researcher at The Dock, Accenture’s flagship R&D hub and global innovation centre. All views expressed are the author’s own and do not represent the view of her employer.
