A recap of our discussion on leveraging AI for democratic discourse
Since ChatGPT was introduced in late 2022, there's been no shortage of conjecture and concern over how Large Language Models (LLMs) can accelerate online harms. Less attention has been paid to the work of civic groups, technologists and researchers who are exploring how LLMs can also be harnessed to improve our online experiences.
At our latest Pro-Social we spoke with two such researchers, Lisa Argyle and Ethan Busby, about a tool they've developed and tested to help users have more productive conversations on difficult topics.
Built pre-ChatGPT using GPT-3, their chat assistant suggests ways users can rephrase comments, drawing on principles and insights from deliberative democracy and social psychology which hold that conversations work best when participants feel heard and understood. Notably, the chat assistant does not change the core content of a comment or help a user be more persuasive. It also gives users agency: they can take or leave the bot's suggestions, or even edit them. True to the principles behind the chat assistant, Argyle and Busby find in their studies that participants report feeling heard and having more respect for their conversation partner when that partner used the chatbot.
In the Pro-Social interview, Q&A and breakouts, much of the conversation focused on how the chat assistant would work in practice, whether as a browser extension or integrated into text-based social media platforms. Alongside excitement about its potential, attendees cautioned that such a chatbot would have to be rolled out carefully, ensuring, for example, that it did not skew political content.
To learn more about Argyle and Busby's work, read their paper or check out their interview with us below.
The Prosocial Design Network researches and promotes prosocial design: evidence-based design practices that bring out the best in human nature online. Learn more at prosocialdesign.org.
A donation for as little as $1 helps keep our research free to the public.