How Nextdoor reduced racist posts by 75%

In March of last year, Fusion published an article detailing the ways in which Nextdoor, a social network for neighbors, had become a home for racial profiling. In the “Crime and Safety” forum of many Nextdoor communities, users were reporting people as “suspicious” seemingly based primarily on the color of their skin.

Nextdoor CEO Nirav Tolia told me this week that he was “floored by that article,” having not realized before he read it that the network he co-founded to foster more close-knit communities had a racism problem.

“I’m a person of color so it really cut deep,” Tolia said. “We hated the idea that something we built would be viewed as racist…I hadn’t seen it in my own neighborhood’s Nextdoor and so didn’t realize it was an issue for us. Once I got past that, I was powered by the challenge to do something about it.”

Erasing racism through technology alone is impossible, but Nextdoor found that a few changes to its interface significantly discouraged racial profiling by its users. On Thursday, Nextdoor rolled out those changes to all 110,000 neighborhoods on its platform. Users who post to their neighborhood’s “Crime and Safety” forum are now asked for additional information if their post mentions race. Nextdoor says the new forms have “reduced posts containing racial profiling by 75% in our test markets.”

But what does that mean, exactly? And how does the company know the changes work? Nextdoor is far from the only online platform struggling to deal with its users’ racial bias; other companies could learn from its experience, if the changes really do work. So I spoke with Tolia to find out exactly what Nextdoor did to tackle the racial profiling problem, and why he thinks these changes will make the social network less racist.

In April, Nextdoor started a pilot program to see whether changes to its interface could discourage users from racially profiling people in their posts. A test group of neighborhoods was shown six different variations of the form used to file a “Crime and Safety” report.

Some just saw new language: “Ask yourself: Is what I saw actually suspicious, especially if I take race or ethnicity out of the equation?” Some were asked to say in advance whether they were reporting an actual crime or just “suspicious activity.” Others had their posts scanned for mentions of race, based on a list of hundreds of terms Nextdoor compiled; if a post did mention race, the user got an error message and was asked to submit more information about the person.
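Here’s a rough sketch of how a check like that could work. The term list, the required fields, and the function name below are my own illustration, not Nextdoor’s actual implementation, which hasn’t been published:

```python
import re

# Illustrative term list only: Nextdoor's real list reportedly contains
# hundreds of terms and has not been made public.
RACE_TERMS = ["black", "white", "hispanic", "latino", "asian", "dark-skinned"]
RACE_PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, RACE_TERMS)) + r")\b", re.IGNORECASE
)

# Hypothetical extra fields the form might require once race is mentioned.
REQUIRED_DETAILS = ("clothing", "hair", "shoes")

def validate_crime_safety_post(text: str, details: dict) -> list:
    """Return validation errors; an empty list means the post can go through."""
    errors = []
    if RACE_PATTERN.search(text):
        missing = [field for field in REQUIRED_DETAILS if not details.get(field)]
        if missing:
            errors.append(
                "Your post mentions race. Please also describe the person's "
                + ", ".join(missing) + "."
            )
    return errors
```

Under this sketch, a post like “dark-skinned man near the mailboxes” would be held back until the extra descriptive fields were filled in, which is the friction the pilot was testing.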

The changes were inspired by the work of Stanford psychologist Jennifer Eberhardt, who studies the way race can influence the judicial system and has helped train police officers to recognize and overcome their bias. “We tried to create decision points,” said Tolia. “To get people to stop and think as they’re observing people to cut down on implicit bias.”

Then the company analyzed how the variations changed user behavior. The most effective variation, Tolia said, combined several of the new elements: the tip asking users to reflect on what really counts as suspicious, and the form requesting more information about a person whenever race is mentioned. Introducing friction that made it harder to post about race increased by 50% the number of posts that were started but then abandoned, the assumption being that those abandoned posts would have been potentially racially offensive. Tolia says some companies might be unhappy with a result that makes their users less active, but Nextdoor is pleased.
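The arithmetic behind that comparison is simple. The counts below are invented purely to show the shape of the calculation, not Nextdoor’s data:

```python
# Made-up counts for a control group and a friction variant; only the
# ratio between the two abandonment rates matters here.
control = {"started": 10_000, "abandoned": 1_200}
variant = {"started": 10_000, "abandoned": 1_800}

def abandonment_rate(group):
    return group["abandoned"] / group["started"]

lift = abandonment_rate(variant) / abandonment_rate(control) - 1
print(f"Abandonment rate up {lift:.0%} in the friction variant")  # -> 50%
```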

“The cost of less content is the gain of more helpful and constructive content,” he said.

The company also analyzed every “Crime and Safety” post that did get published in the test areas, manually examining each one that used any of the hundreds of terms on its race list. It then had a diverse group of people from inside and outside the company read those posts and rate their level of racial profiling on a scale of 1 to 4, from “not profiling” to “possibly,” “likely,” or “definitely” profiling. The reviewers didn’t know whether the author of a given post had used the old version of the site or one of the pilot versions of the “Crime and Safety” interface.

“It is difficult to say whether something is racial profiling or not,” said Tolia. The company settled on two patterns: a report of criminal behavior that offers almost no description of the person beyond race, e.g. “dark-skinned man broke into a car,” or a detailed description of someone, including race, that fails to describe them doing anything sufficiently criminal, e.g. “a 6-foot-tall Hispanic man in yellow high-tops was walking around looking suspicious.”
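That working definition is nearly rule-shaped. Purely as an illustration (the actual calls were made by the human reviewers, not software), it could be encoded like this:

```python
def looks_like_profiling(mentions_race, person_details, concrete_crime):
    """Hypothetical encoding of Nextdoor's working definition.

    mentions_race:  does the post identify someone by race?
    person_details: count of non-racial descriptors (clothing, height, ...)
    concrete_crime: does the post describe a specific criminal act,
                    as opposed to someone merely "looking suspicious"?
    """
    if not mentions_race:
        return False
    # "Dark-skinned man broke into a car": race with no other description.
    too_thin = person_details == 0
    # "6-foot-tall Hispanic man ... looking suspicious": detail but no crime.
    no_crime = not concrete_crime
    return too_thin or no_crime
```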

By the end of the summer, the company had collected thousands of posts. According to the reviewers’ categorizations, people using the new version of the site produced 75% fewer racial-profiling posts than those in the control group still using the original version. So the company decided to roll out the new version to all of its users.
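The headline figure is a relative drop in the rate of posts labeled as profiling. With invented numbers (the underlying test data isn’t public), the calculation looks like this:

```python
# Invented rates, for illustration only.
control_rate = 40 / 1_000  # profiling posts per published post, old interface
pilot_rate = 10 / 1_000    # same rate under the new interface
reduction = 1 - pilot_rate / control_rate
print(f"{reduction:.0%} reduction in racial-profiling posts")  # -> 75%
```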

While this should theoretically cut down on posts made that include racial profiling, I asked Tolia about the offensive and biased conversations about race that could follow in the wake of an otherwise “clean” initial post. He said Nextdoor has added “racial profiling” as a category for people who want to flag a conversation as inappropriate and that it has provided additional training on how to recognize toxic conversations about race to the volunteer “neighborhood leads” who are responsible for moderating discussions.

As BuzzFeed notes, it’s not perfect: Nextdoor is aware of “two instances of racial profiling that had slipped through its algorithms in the last few months.”

Tolia says he knows these forms won’t end racism, but the company is convinced they will reduce the amount of bias aired on its platform, and thus contribute to its mission to create and foster close-knit communities.

“We don’t think Nextdoor can stamp out racism,” said Tolia, “but we feel a moral and business obligation to be part of the solution.”

It’s a mission that tech companies are increasingly embracing.
