Panel 3: Engaging Online

Amy Choi [AC]: Now it’s time to talk about how we talk to each other. We all live online and on social media to a certain extent, and I would hazard a guess that most of us here have conversations and share pictures and stories about ourselves online. The thing is, literally anybody anywhere in the world can access all of us with just a little know-how. Both good guys and bad guys. As a laywoman, I try to avoid thinking about it too hard, because it’s just too terrifying. But I’m glad that there are people who think about this all the time, and we’ve gotten a few of them together here for you.

Aminatou Sow, co-founder of Tech LadyMafia and co-host of Call Your Girlfriend, is moderating a great discussion between Ethan Zuckerman, Director of the MIT Center for Civic Media, and Erin McKean, founder of Wordnik, the largest online dictionary.

Coming up first: a few designers on their personal social media policies and the lines they draw. Maria Giudice of Autodesk will lead us off with why she doesn’t draw any lines at all.

Maria Giudice: One of the things about me is I’m very comfortable with privacy, and the lack of it, which is kind of odd for a woman. A lot of people my age grew up really afraid of giving out personal information. I feel like as long as I go through life being incredibly human and honest (I don’t change the way I talk between an intern and a CEO), I have a lot of integrity in my life, and I have the freedom to be as open and honest as possible. So I quite overshare on Facebook. I do it for the benefit of my kids, because when I die, they’ll have that whole chronology of their crazy mom. You know, I’m pretty open online, and I don’t make a distinction between my online persona and my real-life persona.

Dian Holton: Hi, my name is Dian Holton. I’m the Deputy Art Director with AARP Media. I post my interests when it comes to social media, and those interests are pop culture, fashion, design, education, and diversity as it relates to the arts. I don’t usually discuss government politics on any of my feeds, mainly because I don’t wanna get into a cyber rant with anyone or have something I stated be misinterpreted. Given the current political climate, I don’t wanna be depressed. We live in dark times, and I’d rather spend the majority of my social media time empowering, educating, and inviting people to escape to a land of rainbows and unicorns.

Jenn Maer: Hi I’m Jenn Maer. I’m a Portfolio Director at IDEO. My personal social media policy is to never post or comment while inebriated. I learned that lesson the hard way. I also take social media breaks pretty regularly.

Ti Chang: Hi, my name is Ti, designer and co-founder of Crave. My personal social media policy is basically: if I hesitate before posting, that’s when I pause and think, hm, maybe I shouldn’t post this. If anything I write in the heat of the moment gives me a little bit of pause, I let it marinate a little, and oftentimes I delete it. And usually it’s the right call.

Aminatou Sow [AS]: Hello listeners, it’s Aminatou, and today we’re talking about, well, conversation. How can we make online engagement and interaction more civil? How do we design tools and experiences that facilitate social engagement? How do we develop a language that is more nuanced and less exclusive, so we can talk more deeply about all of these issues? All big, meaty subjects, so I’m really excited to have with me today Erin McKean, lexicographer and founder of the online dictionary Wordnik, and Ethan Zuckerman, Director of the Center for Civic Media at the MIT Media Lab. Welcome to the show, both of you.

Ethan Zuckerman [EZ]: Great to be with you.

Erin McKean [EM]: Thank you.

AS: Great. Erin, can you tell us more about Wordnik and how that came to be and what the reaction to it has seemed to be online?

EM: Sure, how much time do you have? So um –

AS: 20 seconds.

EM: So Wordnik – [laughs] Wordnik is an online-native dictionary. It’s the world’s biggest dictionary by number of words, though sadly not by number of users. The goal of Wordnik is to collect and share every single word of English. Most people aren’t aware that more than half of the unique words in the English language don’t actually have traditional dictionary definitions. So rather than writing definitions for all these words, we’re trying to collect really good example sentences that show how these words are used in context, because most of the words you learn in your life, you probably learn in context rather than by reading a dictionary definition. And I would say the reaction to Wordnik is: if this is the kind of thing you like, you really like Wordnik.

AS: Then we really like Wordnik. How do you think your work there dovetails with the conversation we’re having today about being more inclusive and designing spaces that facilitate the healthy online engagement we’re looking for?

EM: I think – well, Wordnik’s policy is basically one of radical inclusion. If someone uses a word somewhere and they intend to be speaking English, we wanna record that. And I think a lot of the discussion that goes on around some kinds of inclusive language is that people get derailed over whether or not a term is a word. And honestly, they’re all words. What people are really arguing about is whether they want to include that word in their own vocabularies.

AS: That’s fair. Ethan, what does the MIT Media Lab’s research reveal are some of the most pressing issues around online harassment right now?

EZ: So a lot of the work that we’re doing at the Media Lab around online harassment really looks at the role of moderators. It asks the question of who is setting the rules of the road in the community that you’re based in. There’s a lot of tension in communities like Twitter that err toward being radically inclusive, in ways that sometimes mean they’re not actually taking care of users within the space. And within systems like that, you’ll see people getting together and essentially saying, could we make this a safer space, could we make this a more speech-friendly space? Can we find a way to share blacklists, or block people who we know to be harassing? In many ways we’re much more interested in spaces like Reddit, where moderators are able to say, here are the rules of the road for our space, and we’re gonna try to listen to you and evolve those rules with you, so that we can get to a place where people can express themselves but also don’t feel attacked and chased away.

AS: That’s really interesting, what you’re saying about the role of moderators. I guess, how do we make sure that the moderators themselves are diverse, or that the pool we’re choosing from is inclusive?

EZ: It’s a really good question. In a community like Reddit, which tends to skew young and male, you’re likely to have a lot of communities whose rule sets favor jocularity, or favor outrage, in one form or another. The trick is that in communities where people can choose whether to participate, people can gravitate toward the moderators they want to support and work with. So there are spaces on Reddit, like TwoXChromosomes, which tend to be women talking to one another and which have female moderators, that tend to have very different rules of the road than some of the other communities, where people might be supporting harassment or egging people on. I think the real issue is that a lot of communities don’t think about the fact that they are moderating. They see themselves as platforms, and they essentially say, look, everyone’s here on an equal basis. But of course there’s moderation happening behind the scenes. Those moderators in many cases aren’t aware of their own biases. They aren’t aware of the rules of the road that they’re bringing. And this is where we end up with situations like Facebook deciding, across the board, that they’re going to block all images of female breasts, not considering breastfeeding communities or breast cancer survivor communities. There really needs to be a lot more thought about who gets to moderate these spaces, who they’re listening to, and what rules they end up adhering to.

AS: I mean, on some level it’s almost ridiculous that some of these platforms have been around for so long and questions like that are not baked into the process. I think that example you gave about the breastfeeding communities is real. We’ve seen that a lot around communities of people who discuss cancer and mastectomies, too. And really, what’s so apparent is that these questions of what community moderation will look like, or what community conversations will look like, are never part of the initial design of the platform, whatever the platform is. And so some of the fallout that you see around those conversations, or the frustration that users have, is because it’s an afterthought as opposed to something that’s baked into the actual product.

EZ: So one of the things that I think happens, and I’d love to hear what you think about this, Erin, is that people tend to assume that tech is hard and humans are easy. In my experience, tech is easy and humans are hard. And so what happens is companies hire, you know, badass, highly paid, hyper-confident engineers and assume that they’re going to build a platform that is perfect and will expand and grow forever. And then at some point, when they realize they need community managers, that’s who they end up hiring with the rounding error in their budget. I think in many cases it probably needs to be the other way around. All of these social media systems, at their heart, depend on human beings interacting with one another in productive and interesting ways. And it drives me nuts that we hire the engineers first and never the anthropologists.

EM: I think that’s really true, and I also think that the structure of most tech companies is that they very much value what they consider to be scalable, like platforms, and not what they consider to be not scalable, like human moderators. And I think you can’t overlook the fact that that kind of human labor is often very feminized, so it’s valued even less.

EZ: I think in many cases, moderation has really strong aspects of emotional labor associated with it: this concept that not only are you doing the job, but there’s this added layer of work where you’re being polite and caring to people who are often being awful and difficult. And that emotional labor absolutely tends to be feminized. It’s an idea that came out of customer service, out of airline flight attendants, but it has absolutely moved on to community moderation. My experience is that when people are hiring for it, it does end up being a profession that is disproportionately female, and it often ends up being a path into a company for women who are not coming in with a strong technical background. But that does end up feminizing it in exactly the way everyone’s describing.

AS: Yeah, I think it’s interesting too, Ethan, that early on we touched on Reddit, and we talked about the kind of young male moderators on that platform and others. I think a lot of us, or a lot of women, have anecdotal knowledge and personal experience that the people who troll us online are also young and male. And I was wondering, does the Lab have any conclusive research on who online trolls are?

EZ: It’s hard because it’s very difficult to get trolls to stand up and identify themselves. And almost everybody is working through a scrim of anonymity. I think this is why the recent story on This American Life about – and I’m blanking on Lindsay’s last name. Do –

AS: About Lindy – Lindy West confronting –

EZ: Thank you.

AS: her troll who had been –

EZ: Thank you so much.

AS: harassing her online, yeah.

EZ: Right so the –

AS: It’s such a great example –

EZ: So – so when

AS: of that.

EZ: Lindy West wrote that brilliant essay about confronting this troll who had done just horrific things: he had literally impersonated her deceased father to make fun of her online. And then she had a conversation with the guy and discovered that it was really about his loneliness and self-loathing, which had been taken out on her in this absolutely horrible fashion. I do think there is an intuition that there are a lot of young, lonely men involved in this. I think what’s hard is that trolling means so many different things these days. In some ways it really has become a weaponized tool of political discourse in certain corners of the internet, and then you’re really asking the question of who is being led to that form of alt-right politics, who is really engaged in that sort of speech. But it’s really hard to get a troll census. It’s much easier to get troll behavior. It’s actually a lot easier to get evidence of people who have been trolled and harassed, because they reach out to each other. And there we know that it’s disproportionately women. It’s disproportionately women of color. And if anything, it’s disproportionately strong women who are willing to take a stand and voice their opinions online.

AS: Erin, what are your insights on how language becomes weaponized online?

EM: I think people are quick to – to talk about hurtful terms and blame the language. But what it really comes down to is respect for like the personhood of other people. And so coming up with term-

AS: Can you expand more on that?

EM: Sure. I wanna talk a little bit about, you know, choosing what pronouns to use. A lot of people deliberately misgender folks online as a way to demonstrate their disrespect, or they deadname trans people online as a way to demonstrate their disrespect for that person and their preferences. And of course there are a lot of derogatory terms. I think cuck is probably the biggest example right now. The whole premise of that word is that you have a woman in your life who is your partner, but no, she’s really a thing, and disrespect for that thing reflects poorly on your manhood. So it all comes down to how language can be used to disrespect and other some person, either by disregarding their preferences, disregarding their beliefs, or disregarding their personhood.

EZ: Erin, I’m wondering, are you finding yourself doing any work on the etymology of some of the terms that are emerging around online harassment? I find myself thinking of something like sea-lioning, which I understand to be this practice of going into someone else’s conversation and essentially sticking around to force them into pseudo-civil conversation with you. Do you find yourself documenting the coinage of new terms around some of these types of harassment?

EM: I wish that at Wordnik we had the resources to do some of that research. Etymology is really the only thing that’s hard to do with text mining and to automate. But I think it’s really interesting, and kind of the only bright spot in this, how creative people are in picking up these terms and spreading them. I think mansplaining is probably a triumph of addition to the English language, because it started as a very quick throwaway term, but it resonated so much with people that now it’s almost – I would say entirely –

AS: But but it really wasn’t

EM: standard English.

AS: right? I don’t know, I feel – I think that the word mansplaining is really important, for example, and it’s interesting that you kind of coded it that way. But the truth is that there is a thriving feminist community online that has really come out of whatever we think the third-wave feminist moment is. And these online feminist communities have their own language, their own codes of conduct, and really their own arguments for what having skin in the game online means. And mansplaining is something that, yes, seemed like such a throwaway term but really captured the core of what we experience online every day. There’s not a single woman on Twitter who doesn’t have the experience of telling a joke and having a man tell that joke back to her, trying to explain it to her, or doing these very gendered dynamics that seem really silly as one-offs. But when you look at how pervasive they are and how much time and energy they take away from you, I think it’s kind of a testament to the fact that women are fed up online that we come up with this language together.

EM: Oh, totally. I don’t mean to indicate that I think it’s a throwaway term. I think that when it first started to be covered in non-feminist media, it was kind of treated like, ha-ha. But women pushed back and said no, this is a real thing. This is not a joke.

AS: Yeah, you know, I think about the fact that you and I are both members of Tech LadyMafia. One of the things that’s really interesting about TLM is that when we started it, my cofounder Erie Meyer and I were the only two moderators, and I use moderator in the loosest of terms. We’re essentially never around; somebody has to alert us to problems. We have a very loose code of conduct that is essentially: don’t be a jerk, give people the benefit of the doubt, and this is a private space, but also don’t expect your privacy never to be trampled on here, because we can’t trust everybody online. And it’s been really interesting to me that in probably five years of having that community, there have been very few dustups, as opposed to other similar communities that I’m a part of. I don’t know that it speaks specifically to women in tech being a certain kind of person so much as to the fact that we set these very loose boundaries and let everybody participate in enforcing them. I was wondering, as a member, what your take on that is and what your experience has been in the group.

EM: It feels like a very, I would say, benefit-of-the-doubt type space. It’s one of the few big group discussions that I’m a part of where I don’t dread opening the digest email. I always learn something from people in the group, and whenever I have a request to make or a question to ask, I don’t, you know, scrunch up my shoulders prepping myself for the onslaught of “How could you be so stupid?” responses, right? [laughter] And I also feel like it’s one of the places where there’s a deliberate effort made on the part of the participants to try to understand people’s intent, and to assume good intent. And good intent does not obviate bad behavior, but you look at it from the point of view of, “Oh, you were trying to do x but actually you did y, so let’s figure out how we could get you back to x.”

AS: Yeah, you know, I think that that’s in the design of the listserv. That was probably the value that was most important to us: never jumping to conclusions about someone until you’ve figured it out. In some weirdly serendipitous way it has worked out, but yeah, sometimes we wonder about that; I’m like, “This is very strange.” Also, I think it’s because we usually tend not to talk about politics, so that probably helps too.

EZ: I think that assumption of goodwill is so important, and to sort of say, “I know that your intentions must be good so now let’s figure out what just happened there.” One of the groups that I got quite involved with for a while when I had my child was Nuevo Dads, and it was a group of men who’d recently had kids, and just talking about parenting advice and trying to have a space where people could ask questions.

Another father asked a question about sleep training, and I jumped into the thread because Drew had just gone through sleep training, and another member of the thread basically reacted by saying, “You guys are horrible. I can’t believe you would ever make a child sleep in his own bed. What’s wrong with you?” And the entire conversation just sort of came to this screeching halt. And it turns out that sleep training is one of these things like politics, that–

AS: Oh, it’s the third rail. I watched a community of like, over 2,000 women, who had been supportive over the years, completely disintegrate over a sleep-training question.

EZ: I never returned. [laughs] And it was so sad, because, you know, I had tried to jump in and share my experience, and the rest of the community sort of jumped on this guy, essentially saying, “Wait a second, I can’t believe that you’re taking on these people and making them feel bad.”

But for me, it was this reminder of just how fragile these communities are. You really only have to hit one of the speed bumps, and unless you have really careful moderation to get you through it, these things can fall apart as quickly as they come together. The fragility of community amazes me.

AS: Yeah, why do you think it’s been so hard to scale that sort of benefit of the doubt in all of these online communities, whether it’s listservs, or Twitter, or the bigger platforms?

EZ: So my take on it is that a low barrier to entry tends to equal a low barrier to exit. To get involved with this, all you had to do was respond to a couple of emails; if it doesn’t work, I’ll walk away and find another place to be.

And in that sense, online community can feel very, very different from physical community, where if I really have a falling out with my neighbors and it really falls apart, I might have to move. I might have to pick up and move all of my stuff. This isn’t to say that people don’t end up with online communities that become enormously important, major parts of their lives. But I think many of these communities can have that fragility because the cost of entering or exiting ends up being so low.

AS: Yeah, I mean, I think that you’re right, because we see this a lot. I’ll speak as a person of color: whenever there’s whatever the equivalent of the sleep-training or politics question is that has made me leave a listserv, you find that the next place you go becomes smaller, and it tries to exclude the kind of perspective that drove you out. I’ve noticed a lot since the election, for example, that people in my friend circle are communicating more via these large text message threads, you know, five or six or ten people, where you are literally choosing who your people are.

And in some regard I’m like, yes, this feels like a great safe space. And in another way it’s, you know, it’s really sad that people feel driven away for whatever reason from these larger groups.

EZ: So I’m seeing exactly the same thing around closed spaces, I’m seeing a lot of conversations moving to WhatsApp groups. I think the phenomenon of the private Facebook group is a really interesting one. I’ve seen people who have minority political opinions, they’re pro-Trump in an anti-Trump state, sort of move into those spaces. The really interesting downside of all of this is that you lose that visibility of the conversation. And so the echo chambers get even deeper. For us over at Global Voices, it’s a huge problem.

So dialogues that used to happen on blogs, that used to happen on public Facebook pages, are all in private WhatsApp groups now. The conversations are fascinating, but then there are the ethics of how you share them, how you bring them out. I’m on a WhatsApp group with activists in a Middle Eastern nation, and they’ve all stopped writing publicly because they’re really concerned about having their words out in public. They write a ton on this WhatsApp group, but there’s the whole question of, “Are we allowed to share that? And if we don’t share it, do we essentially just have silence coming from this country?” It’s a really weird switch in the online dialogue space.

AS: Are you seeing any platforms or people working in tech to remedy this? Because I think you’re absolutely right, we’re losing so much of the public-facing conversation, and to me at least it seems that some of the responsibility, not all of it but some of it, rests on these platforms becoming spaces where we can do that safely.

But on the other hand, everybody’s so much more aware of surveillance now, everybody is so much more aware of, you know, all of the ways that people have ill intentions towards especially, you know, global justice movement work. And so I think that that’s going to be a delicate balance to strike.

EZ: So I see tons of students coming to my door with ideas for tools to increase dialogue. I see people all the time coming up with an idea for how to fix Twitter or fix Facebook so that you break out of echo chambers or filter bubbles. What I’m not seeing a lot of is responsibility coming from the platform owners themselves. What I would really love to see is these companies start thinking about a double bottom line.

So when you look at high-quality newspapers, like The Washington Post or The New York Times, they’re trying to make money, but they also do things all the time that lose money but are important for civic reasons. They cover stories that they know aren’t going to get a huge amount of attention. You know, Aminatou, I follow West African coverage very closely. That doesn’t get a whole lot of clicks, but it’s critically important so that people know what’s going on with Boko Haram in Nigeria.

AS: Yeah.

EZ: So when The Times is doing that, they are basically balancing the two bottom lines. And I would love to see Facebook think of themselves in terms of two bottom lines. How do they make money? But also, how do they build and deploy tools that are designed to help us have better conversations, help us speak freely, help us listen to and talk with a diverse range of people?

AS: So earlier on we talked about the psychology, the emotional work, and the burden that moderating carries.

Are there any specific technologies you’re seeing that are great tools for facilitating the creation of safer, productive online spaces? And that’s for both of you.

EZ: Sure, so I’ll call on research by one of my doctoral students, Nathan Matias, who is finishing up his dissertation right now. His dissertation focuses on a piece of software he wrote called Civil Servant, and what Civil Servant tries to do is help moderators of online communities test rule changes. A concrete example: the people on Reddit who moderate r/worldnews are pretty concerned about fake news. They’re concerned about stories that are just entirely made up. So they have a list of sources that frequently get accused of fake news, and they have this question: do we just downvote these, do we ban them, do we prevent them?

What they ended up doing was using Civil Servant to do some A/B testing. They had three options: they could put up these stories without any comment; they could put them up and flag them, saying, “Hey, this comes from a platform that often seems to turn out fake news. You should examine it very carefully”; or, in a third case, “You should examine it very carefully, and downvote it if it’s fake news.” So they set up these three conditions and tested them, and it turns out that the second condition, “You should examine it very carefully,” works very well to get fake news out of your feed.

AS: Interesting.

EZ: Telling people to examine it closely and downvote it does not work well, because people on Reddit do not like being told what to do. So the point is not that this is magically a fix to fake news, the real point of this is that this is a way to do experimentation.

So even in a system where you don’t control the algorithm, you don’t have control over all the levers, as a moderator, you can do experimentation, and you can make a system work better for yourself. Nathan points to a whole class of systems that people are starting to call “successor systems.” And these are ways to basically work within a platform that you don’t control, but work for more fairness or work for more worker rights, or work for beneficial outcomes, even if you don’t control the playing field. It’s the successor to the system that you have to work within.
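[Editor’s note: a minimal Python sketch of the experiment pattern Ethan describes. The condition names, outcome metric, and data fields here are hypothetical stand-ins; the real tool is Nathan Matias’s CivilServant software, which runs randomized trials like this against live communities.]

```python
import random
from collections import defaultdict

# The three conditions from the r/worldnews experiment, paraphrased.
CONDITIONS = ["no_note", "examine_carefully", "examine_and_downvote"]

def assign_condition():
    """When a post from an often-accused source arrives, randomly pick its condition."""
    return random.choice(CONDITIONS)

def summarize(completed_posts):
    """After the trial, compare a stand-in outcome metric across conditions.

    Each post dict is assumed to carry the condition it was assigned on
    arrival and where it finally ranked (hypothetical field names).
    """
    outcomes = defaultdict(list)
    for post in completed_posts:
        # Did the community push the suspect story down the feed?
        outcomes[post["condition"]].append(post["final_rank"] > 25)
    return {c: sum(flags) / len(flags) for c, flags in outcomes.items()}
```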

AS: Erin, what do you think?

EM: I see a lot of people kind of always thinking that the next thing around the corner is gonna solve this problem on a technical level. I think a lot of people are pinning hopes on way better automatic image recognition, because I think we’ve seen that text filters are—they just don’t work. People work around them so quickly. And then you also have to worry about the Scunthorpe problem.

And also with those kind of really terrible image memes that go around, you have to do a lot of juggling to get a text-based filter to work with those. So I don’t really see that kind of barrier-type tech helping at all. I think a lot of what Ethan’s talking about is kind of nudging people to be the best version of themselves. Asking them to think about things to kind of short-circuit that direct trigger between seeing something and doing something. Giving people a moment to pause and reflect seems to work a lot. I think probably my favorite text-based moderation tool is pretty old now, it was used a lot on the blog Making Light, “disemvoweling,” where you take people’s negative comments and just take all the vowels out.

AS: Oh wow!

EM (cont): So they’re still there and they’re kind of readable, but it takes just a little bit more cognitive effort to understand what’s going on. And it turns out that trolls get really mad if you just delete their comments. But if you take all the vowels out, it doesn’t seem to have the same effect on them, and then of course people can see what’s going on. [laughs] But there’s something about adding just a little bit of friction, that makes people slow their roll.
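[Editor’s note: two tiny Python sketches of the ideas Erin mentions. The Scunthorpe problem is the false positive a naive substring filter produces on the innocent place name Scunthorpe; disemvoweling is the vowel-stripping trick from Making Light. The word list and sample strings are illustrative only.]

```python
import re

BANNED_SUBSTRINGS = ["cunt"]  # the substring hiding inside "Scunthorpe"

def naive_filter(text):
    """A naive substring filter: flags innocent words, hence the Scunthorpe problem."""
    lowered = text.lower()
    return any(bad in lowered for bad in BANNED_SUBSTRINGS)

def disemvowel(comment):
    """Strip the vowels from a hostile comment, leaving it readable only with effort."""
    return re.sub(r"[aeiou]", "", comment, flags=re.IGNORECASE)

print(naive_filter("Greetings from Scunthorpe"))   # True: a false positive
print(disemvowel("You are all completely wrong"))  # "Y r ll cmpltly wrng"
```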

AS: I’ve just started calling all of my trolls. The thing that’s really funny, on Twitter specifically, is that when somebody with, like, 20 followers says something unpleasant to you, it turns out that when you email them or Google them, the only two things you can find out about them are exactly where they live and exactly where they work. Because they have no other digital footprint.

And so, depending on the offense, I’ve been really mercenary, where I will, like, call them at work and say, “Hi! This is Amina from the Internet.” And those conversations have been hilarious. Twice I had teenagers, and talking to their school yielded perfect results for me. But it’s so much work.

EM: Now, it would be great if you could build some kind of Twilio Twitterbot that you could just point at them and have a recorded message play. That would save you a lot of time.
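[Editor’s note: a hypothetical sketch of the bot Erin jokes about, using Twilio’s Python client. The library and its calls.create method are real; the credentials, phone numbers, and message are placeholders, and actually identifying a troll’s number is left aside.]

```python
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # your Twilio credentials (placeholders)

def call_from_the_internet(troll_number):
    """Place a call that plays a short spoken greeting to the troll."""
    client.calls.create(
        to=troll_number,
        from_="+15555550100",  # a Twilio number you own (placeholder)
        twiml="<Response><Say>Hi! This is Amina from the Internet.</Say></Response>",
    )
```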

AS: Exactly! It would save me a lot of time. I mean, now I have a script. It’s amazing how people will be their worst selves until they realize that you also know something about them, you know? It’s something they never expect. But yeah, unfortunately none of those things are scalable, or I would also be a tech millionaire. [laughs]

EZ: So they might be scalable, as people find ways to start working together, right? There’ve been so many creative efforts on Twitter by women who’ve been facing harassment to exchange blacklists and block lists.

And it would be really interesting to see whether there’s a way to sort of escalate. So you’ve called some of these trolls, but maybe there’s others where you end up throwing your script at them, you get the information, but you decide that they’re not worth your time. They may well be trolling someone else out there, and you may have saved someone the 15 minutes of trying to figure out who they are. The other thing is, the more you talk about it, the more you share the information on it, maybe the more it becomes a normal form of practice. But I have to say, I just love that idea of, “Hi! I’m Amina from the Internet. I’ve actually stepped out of my screen and into your life, even if only through a phone call.” I can imagine the sort of terror that would go through the head of a troll who assumed that he, and I’m assuming most of these people are he, was anonymous–

AS: Oh, 100 percent of the time. [laughs]

EZ (cont): –harassing you with impunity.

AS: Yeah, you know, that Lindy West story you brought up earlier is so illustrative of this whole problem, right? Somebody can do something so despicable to you, but you call them up, you talk to them, and it turns out they’re just another regular human being. And the thing about that that is terrifying, at least with some of the humans I’ve talked to, is the realization that just about anybody can be a troll, you know? Because I think about that man specifically, and he was so well adjusted in his life. It’s this boundary thing, where the interactions you have online really affect the IRL, real-life interactions you have with people, and you see how those norms change and how it affects you emotionally in all of those ways.

But I guess as we kind of wrap up the show, I was wondering, and we’ve done a little bit of this, whether you have specific personal advice for our listeners on creating better conversations and engagement in their communities. Any tactical tips that are not, you know, my weaponized call-people-at-work-and-terrify-them-in-the-middle-of-the-day approach, even though it works. [laughs] What are things you think would be useful for our listeners to hear?

EM: It’s probably just my personal bias as a linguist and a lexicographer, but I always feel that you get a really good perspective on what other people think just by looking at the words they use. What’s the terminology they prefer? What contexts do they use those words in? Can you mirror that language back to them, so that they feel heard and understood? If you go into a conversation intent on using your own vocabulary, even if it’s not the vocabulary of the space, you stick out like a sore thumb. But if you try deliberately to become fully part of the conversation, to use the language of the group, then things, at least in my experience, tend to go more smoothly.

EZ: So I’ll offer two that I try to do. The first is that I’m not expecting the platforms to solve the diversity problem anytime soon, so I try to engage in periodic audits of what I’m reading. I will, you know, take a week and just keep track, in a notebook or a text file, of what I’m reading and who I’m paying attention to, or look at my Twitter feed. And when I feel like I’m getting too concentrated, like I’m listening to too many men, too many white men, too many liberals, I’ll try to do some pruning and some adding. Taking on that responsibility for who you’re choosing to pay attention to ends up being potentially very helpful.

The other is that I try very, very hard in social media to take two or three deep breaths before I respond. To the extent that I get harassment online, it has to do with my role 20 years ago in unleashing the pop-up ad on the world. Every so often, someone will find an article I wrote in The Atlantic three years ago talking about this and, you know, fire off a death threat. So anytime I see someone mentioning pop-ups, I’m sort of expecting hatred. I got a not particularly well-crafted tweet about this a little while ago, and I took two or three breaths and wrote back to the person, and somewhere in the exchange we discovered that he’s a high school student ten miles away from my rural town way out in western Massachusetts. The end of the whole thing is that I ended up coming and speaking to his high school computer science class, which was sort of awesome.

But that would not have happened had I reacted with my initial instinct, which was to assume he was attacking me. So I find that that deep breath helps me a lot.

AS: This was such a great conversation. There’s so much, so much more that we could’ve touched on. For more resources, and to keep the conversation going, head over to x.design.blog. Ethan and Erin, thank you both for taking the time, and I hope to see you both very soon.

EM: Thank you so much!

EZ: Thanks for having us. Great conversation.