Q&A
April 24, 2022

A conversation with Chris Messina

Hashtag inventor Chris Messina shares his thoughts on prosocial design.

John Fallot, et al.

Chris Messina is the inventor of the hashtag.

In June 2019, he gave a talk at TEDxBend, in which he said, “The challenge for social technology designers isn’t to replace the phone but it’s instead to augment existing technology to promote prosocial behaviors.”

In this conversation from June 2020, co-founders John and Joel touched base with Chris to learn more about his vision for a prosocial web. They also shared their reasons for creating the Prosocial Design Network, after which Chris shared his thoughts on prosocial design.

An edited transcript of the interview follows.

Prosocial Design Network: You’ve been involved with tech for many years, and were there for some of Web 2.0’s more promising developments. Most notably, there’s the hashtag—which you were instrumental in transforming into the symbol as we all understand it now. What were some other design developments in the mid-2000s that went comparatively unnoticed?

Chris Messina: Yahoo’s Design Pattern Library is exhaustive when it comes to showcasing social media interfaces. But [the library] is from around 2007, 2008, so a lot of its ideas are dated: you’d need to visit the Wayback Machine to see it. Its ontology was pretty good. Christian Crumlish [who was behind the Design Pattern Library at Yahoo] did a whole book on this with Erin Malone: Designing Social Interfaces.

PDN: How might these earlier interventions prove insightful for prosocial design?

CM: I’d suggest checking it out as a way of seeing design patterns just as they were starting to become part of the environment [to better get a sense of why some ideas took off while others did not].

Video: Chris Messina giving his talk, The Technology of Better Humans at TEDxBend.

PDN: What does prosocial design mean to you?

CM: Prosocial design, as I understand it, is a bit like the architecture of social software.

In the architecture of buildings, you have windows, doors, and [door] jambs, and [an accessibility reviewer] has to go through and say, ‘Well, here are the ADA guidelines on wheelchair ramps, etc.’ In a similar way, I think what we’re talking about is: ‘If you’re going to add these components to your web service, or your internet-enabled products, here are the ways in which they will be exploited, and here are the ways to combat those things.’ And they’d be adjustable for different kinds of settings.

It’s almost like you could imagine a spectrum of communities: on one end are the free-for-all spaces, and on the other end are highly moderated, curated kinds of spaces. The interface, the kind of scaffolding in architecture, may cause or allow people to behave in certain ways.

PDN: Given that, what might be some shortcomings for prosocial design?

CM: There are just all sorts of attack vectors for cultural power to be asserted against different people. [In the case of symbols and emojis], whether they're hearts or stars, that can also mean different things in different cultures. So it’s insufficient, of course, to just think about things from a pixel or interface level. You also have to think about the culture. 

One of the things that I wonder about is: even if you were to remove all the trolls online, disable them, where would they go? [Banning them] doesn’t actually reform them. That said, I’m really supportive of what you guys are doing. I think that the framing of ‘creating prosocial software, or technology, or systems’ is the right direction. But we also have to think about prosocial constructs [in society]. Who is empowered and who is disempowered? And is it reinforcing disenfranchisement, or is it allowing for the pro- in pro-social to become truly positive?

PDN: How might disagreements online be a little more productive?

CM: Part of our social evolution was like: if you want to participate in a way that’s conducive to positive outcomes for everybody—including yourself—then here are the ways of getting right with yourself. That way, when you come back and participate in this ecosystem, it’s going to be in a way that’s not hurting yourself or others.

I think one thing that’s very important, if not overlooked then at least not always top of mind, is the act of providing witness for someone else: of being seen and being heard and being acknowledged. It is so powerful and important.

And sometimes, it doesn’t even matter about the content of what’s being expressed: just as long as it can be expressed, and then be received. But that requires an enormous amount of mindfulness and slowing down, and being present. 

Being in a moment with someone, and in [past experiences I’ve had with guided] nonviolent communication, it was an incredible thing to encounter. And it’s highly useful. Just the mechanisms that are described there, which are all pre-internet, are incredibly useful for bringing awareness to yourself and how you’re showing up in a conversation. And that awareness, and [sense of] other-side-ness, gets lost when you’re on social media. It’s like there’s no mirror.

In some ways, I’m reminded about how pharmacies will put a mirror on the ceiling so that people who are going to steal or shoplift will see themselves, which creates cognitive dissonance, and so they won’t actually steal things. In a similar way, you could imagine social software doing that. 

And actually, Twitter launched something today on Android, where if you retweet something—without having clicked the link, and read it first—it actually pauses you and says, ‘Are you sure you want to retweet this without having read it first?’ It’s almost like the software is being designed to force us into a moment of mindfulness and thoughtfulness, because we’re not doing that work ourselves. It’s almost like ‘Well, too bad’, but then again, ‘Well, it’s ameliorative’—for now.
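
The nudge Chris describes boils down to a single check before a reshare goes out. As a rough illustration only (this is not Twitter’s implementation; the names Post, hasOpenedLink, askToConfirm, and publishReshare are all hypothetical), a client-side sketch in TypeScript might look like this:

```typescript
// A minimal, hypothetical sketch of a "read before you amplify" nudge.
// Every name here is invented for illustration.

interface Post {
  id: string;
  linkUrl?: string; // external article link attached to the post, if any
}

// Pretend interaction history: URLs this user has actually opened.
const openedLinks = new Set<string>();

function hasOpenedLink(url: string): boolean {
  return openedLinks.has(url);
}

// Stand-in for a blocking "Are you sure?" dialog; a real client would render UI.
async function askToConfirm(message: string): Promise<boolean> {
  console.log(`[dialog] ${message}`);
  return false; // in this sketch, default to "go read it first"
}

async function publishReshare(post: Post): Promise<void> {
  console.log(`Reshared post ${post.id}`);
}

// The nudge itself: only interrupt when the post carries a link the user never opened.
export async function reshare(post: Post): Promise<void> {
  if (post.linkUrl && !hasOpenedLink(post.linkUrl)) {
    const proceed = await askToConfirm(
      "You haven't opened this link yet. Share it anyway?"
    );
    if (!proceed) return; // user chose to read first; nothing is published
  }
  await publishReshare(post);
}

// Example: resharing a post with an unread link triggers the pause instead of posting.
void reshare({ id: "123", linkUrl: "https://example.com/article" });
```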

PDN: It sounds like you have some reservations about that style of nudge.

CM: I don’t know if it’s always going to be the [ideal solution]. As in, whenever we use an interface, do we want our devices sort of checking in on us to be like, ‘Are you sure you really want to do that because you’re in a bad place right now?’

I’m worried that some of these solutions will just disempower people that are already not powerful: that this will just descend into an arms race [between algorithmic filters and the human drive to have one’s pain recognized], as opposed to actually addressing the underlying tensions in society and resolving what have effectively become generations of trauma. Because we’re not actually able to escape into these [virtual] environments. They’ll just amplify [the feelings of powerlessness and anger] that are already inside of us.

PDN: Part of our impetus for starting Prosocial Design was not that we were going to make some sort of utopia—where all those problems would go away online through censorship or discouragement—but that, “If we’re gonna be having these conversations online anyway, make it so they work better. Make it so they’re more respectful. Make it so people can walk away from them without feeling hurt”.

CM: Yeah. What you guys are doing—I think directionally—feels impactful. It’s not just lists of values and principles. If there ends up being a library of [design patterns] that shows product people the consequences of certain design decisions and how to evaluate them in practice, I think that’s an important thing.

When it comes to adopting any sort of humane tech lens for startups early on, you need to be pragmatic. The reality is that you have to build growth mechanics into a product, or it’s going to die. And so, [for those companies], are you willing to show up to this knife fight in a potato sack? Idealism just doesn’t feel realistic. 

That’s kind of where I’m like, ‘Well, if you’re going to use these mechanisms, provide people with a little bit more choice for awareness.’ It’s a trade-off.

PDN: Walk us through that idea of trade-offs a bit more.

CM: The way I look at it—I don't know if I'm totally generalizing, but I had this experience last fall when I was in Scotland.

I was walking up this hill path, and it was raining. And on the side of the path there was this bramble bush, a light gray bush. I looked down and one of the thorns had gone into my jeans or something. But these were really great blackberries, and there was a range of them: some were really ripe, some were not, some were rotting and falling apart.

And at that moment—I don’t know why; I had probably listened to a podcast about Facebook and advertising or something—it occurred to me that the blackberry bush was [nature’s form of an] advertisement.

Nature itself is constantly manipulating everything around it to do its bidding. So the blackberry bush has two things: one, it has the thorns, obviously, to protect its root structure, so that a fox or something doesn’t [eat] the thing that allows it to grow. But then it produces these really beautiful little succulent juice bombs of sugar that a fox will want to eat, and then go someplace else and poop them out. 

So how does a blackberry bush achieve mobility? Well, it actually hitches a ride on something else. And to do that, it actually gets into a consensual customer relationship with the fox, which is: ‘I will give you sweetness and sugar and calories, which I have extracted from the Sun and from the ground, if you do something for me, which is to eat my berries, and then to go put them down someplace else, and then I will be able to propagate my species.’ Great! That’s what’s called an ecosystem. So that’s a form of manipulation.

PDN: One of the examples we think about a lot, in terms of the debate around ‘should we be manipulating people at all?’ and ‘is manipulation permissible?’, is the high school cafeteria. The idea is that studies show the order of the food you present in a high school cafeteria influences which foods students take, and how much of them. There is no way to avoid that happening. Instead, there’s just a question of what you do about the order of the foods, because you can arrange it totally randomly and say, ‘Oh, we’re not going to manipulate you’, and yet there’s still an outcome. And you’re like, ‘Well, [in that case] let’s just take advantage of the fact that if we put the peas up front, more people will eat them’.

CM: The problem, though, in the internet age, is that we are in a world of coercion where we are being manipulated in ways that we don’t understand: the manipulations aren’t visible to us, and there is no consensual agreement. And so essentially, we are being forced to do things that are oftentimes against our own interests—for example, voting for Trump—because we are being preyed upon through the weaknesses of our psychology. That is the thing that needs to be corrected for.

It isn’t that manipulation is bad. When I go to a horror movie, I am consensually asking to be horrified and to feel fear, but for a brief window of time, and if I get too scared, I can leave. That is a consensual arrangement that feels good, and I’m willing to pay for the experience.

But when it comes to the internet, we are constantly bombarded with information to coerce us to behave in certain ways that we are not aware of, which then causes us to feel out of control and disconnected from ourselves and from our community. And ultimately, those are the things that we need to correct for. 

So, I think that it’s okay for social apps or [other] products to use some of the manipulations and the growth mechanics—call it variable rewards, or incentives, and things like that. But I think it’s important to disclose that: ‘If you use this app, we are going to manipulate you to keep using it, and we will actually create a habit, which is actually something that you want in your life.’

PDN: Disclosure makes a lot of sense. Although wouldn’t that risk becoming a bit like a product’s Terms and Conditions, which become just another thing to click and approve without taking it in?

CM: I think that’s the thing, right? We kind of throw all the terms and conditions up front, to disclaim all responsibility for what we're about to do to you. But we don't really make it clear that these are the things that we're pushing on to get you to behave in a certain way. 

I think that, increasingly, though it may take some time, people, especially younger generations, are going to grow up distrustful of everything. That makes it harder to get them to buy into anything at all, which will require a reset of trust. And so, there will be a set of apps and products that come out that are willing to actually take a pledge or just be transparent.

[Right now] the way that our businesses work is that ‘we get you to use our app every day, and then we put ads in front of you that people are paying for’ and that's essentially how we make things free. And we've tried the thing where we charge for it. 

Tim Ferriss did this famous podcast experiment where, essentially, people were like, ‘Oh, I’ll totally pay, I don’t want the ads.’ And then it was making no money at all, [and this has happened] over and over again. There have been experiments that have tried to allow people to say, ‘Stop abusing me, I will pay for this’, and then they never do. So at the very least, you’re like, ‘Well, we are going to exploit you, because you’ve shown us that you want to be exploited, or are willing to be exploited, to get the benefit that we’re offering you, in order for us to get the calories that we need.

In order for us to propagate our blackberry bushes, so to speak, we actually need you to do something for us, which is to stare at our ads for some period of time.’ 

I think that part of the work is going to be figuring out how to disclose manipulation without, like, awkward [permission] prompts—‘we’d like to access your microphone’, and ‘we’d like to access your camera’—those are just capabilities.

But increasingly, it’s gonna become a lot more subtle. For example, my Nest thermostat wants to know the location of my phone to be able to assess whether [I’m] at home or not, so that I can actually run programs to change my environment. That’s pretty straightforward, right? But that’s the type of thing where it’s like, ‘We are going to manipulate your environment, because that’s why you bought this product’, and then you can turn it off if you don’t like it. So there’s also a set of controls that relate to the manipulations, and I think it’s not about having a wall-of-text Terms of Service fiasco, but about saying, ‘Here are the ways in which we [the product and the user] can work together.’

John Fallot (he/him) is a user experience and graphic designer based in the New York City Metro Area. He co-founded the Prosocial Design Network with colleague Joel Putnam in late 2019, in order to better explore ways that the web could be optimized for prosocial behaviors.

About the Prosocial Design Network

The Prosocial Design Network researches and promotes prosocial design: evidence-based design practices that bring out the best in human nature online. Learn more at prosocialdesign.org.
