The bias in our software

Your software is not neutral. Whether you’re Facebook, Twitter, Airbnb, or any other web-based service, you are not an unbiased third party to the users of your software. It’s time to stop pretending you are.

Recently, after Facebook was confronted with charges of suppressing conservative-leaning news, there was a strong push from the company to convince the public at large that it remained neutral and had never presented, and would never present, news with bias. While it might be true that there was no overt wrongdoing, let’s be clear about this: Facebook does influence what you see and what you don’t.

Maybe it’s the algorithm, showing you what the computers think you’re most likely interested in, content that will keep you on the site longer to view more advertisements. Maybe it’s human intervention, faceless editors deciding what is and isn’t newsworthy to promote to the 1.5 billion people using the service. Or maybe it’s your own doing: deciding who you’re friends with, which friends are hidden from your Newsfeed, and the other actions you’ve taken while logged into the network that have helped them build a profile of your interests. Whatever the case, there is nothing neutral about the way Facebook presents information to its users.

Who are the editors, the decision makers at Facebook? Do they represent the diversity of the United States? Of the world? Could a lack of diversity in its ranks explain why it took years for images of breastfeeding to be allowed on Facebook (and Facebook-owned Instagram), while the Confederate flag has always been seen as free expression? How many white men were in the room where that decision was made? How many women of color?

No, Facebook is not neutral at all. But they’re not alone in their belief that they provide the public with a service on par with a utility. Bloomberg’s Sarah Frier writes, “Twitter Inc.’s website has seen major wars of words in the U.S. presidential race, giving rise to passionate voices on all sides — including those who are racist — but that’s what the service is for,” summarizing the argument of Omid Kordestani, Twitter’s executive chairman. It wouldn’t be a stretch to speculate that this is why Twitter stood quietly on the sidelines for so long rather than addressing its abuse problem. All these years, the people in its boardroom saw Twitter as a neutral platform enabling free speech.

What Twitter has yet to realize is that an important part of protecting free speech is protecting the most vulnerable. In Anil Dash’s words: “Allowing abuse hurts free speech. Communities that allow abusers to dominate conversation don’t just silence marginalized people, they also drive away any reasonable or thoughtful person who’s put off by that hostile environment. Common sense tells us that more people will feel free to express themselves in an environment where threats, abuse, harassment, or attacks aren’t dominating the conversation.”

Twitter believes it’s neutral, but like almost all software, their code makes decisions for us. Because I follow Anil Dash, Twitter’s Connect feature suggested similar users in the technology space for me to follow. It recently recommended over half a dozen white men — not one woman, not one person of color. To argue that Twitter doesn’t have a hand in whose voices are heard would be a lie. Twitter would do right by its users and shareholders to understand and respect that responsibility.

And then there’s Twitter’s algorithmic timeline — similar to Facebook’s Newsfeed — that shows you the “best” tweets first. No definition of “best” is provided, beyond what one can assume: the tweets most likely to garner your likes and retweets. That is, by its very nature, not a neutral feature.
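To see why, here is a minimal sketch in Python of what an engagement-ranked timeline amounts to. The field names and weights are hypothetical, not Twitter’s actual model, but any scorer of this shape optimizes for predicted reaction rather than for chronology or neutrality.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    predicted_likes: float     # model's guess at how likely you are to like it
    predicted_retweets: float  # model's guess at how likely you are to retweet it
    age_hours: float           # how long ago it was posted

def engagement_score(tweet: Tweet) -> float:
    """Hypothetical weights: reward predicted engagement, lightly penalize age."""
    return 2.0 * tweet.predicted_retweets + 1.0 * tweet.predicted_likes - 0.1 * tweet.age_hours

def rank_timeline(tweets: list[Tweet]) -> list[Tweet]:
    # "Best first" simply means highest score first; chronology is gone.
    return sorted(tweets, key=engagement_score, reverse=True)
```

Someone chose those weights. Whoever tunes them is making an editorial call about whose tweets surface, whether or not they think of it that way.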

It’s hardly surprising that networks are trending this way. Log in and you’re inundated with information from every angle. To solve that problem, particularly for new users, but also for advertisers who want to be sure our attention is rapt, companies like Twitter and Instagram are moving away from the time-ordered feed. Many worry that this means missing an update from a friend, but I sense a bigger issue: Do algorithms have the same biases that humans do? Do the rich get richer while those from underrepresented backgrounds get pushed further into the margins? Have we really considered how those features, along with users with infamously quick block fingers, silence people with less privilege than Silicon Valley elites?

“How can we have a conversation if we can’t even hear other voices?” Emily Neuberger asks.

The decisions these companies make affect who feels welcome and safe on their platform. Those decisions speak to their values. Twitter’s lack of diversity seems to be at the root of much of this. Twitter does have tools to report abuse, far from perfect as they are. Yet, time and time again, legitimate reports go without appropriate action. They put the burden on their users to surface hateful, harassing comments, and when they do: nothing. Unless, well, you have over 100,000 followers or write for The New York Times. Then maybe a human will take a closer look at your case.

Airbnb should’ve known that its platform is not neutral. USA Today’s Marco della Cava writes, “Racism represents a new hurdle for the start-up, which to date has been in the news due to its battles with cities such as New York and San Francisco over short-term rental rules.”

But I’d argue that hosts discriminating against guests is not a new problem. What’s new is the attention that’s been brought to it. This could’ve been resolved if Airbnb had listened to these experiences earlier — that is, before it became a PR headache.

According to the USA Today report, Airbnb says “a new program is in the works to recruit more underrepresented minorities in computer science and data science.” And how about Airbnb’s support and design teams? Or the company’s leadership, including the board of directors? Sure, hiring underrepresented engineers is the right thing to do, but it does very little to solve this problem. Not every answer lies at the end of a line of code.

Airbnb can start somewhere easy: Require hosts to explain why they declined a booking request. Have humans review those explanations for patterns of discrimination. A human investment shows you care, and signals that you will respond quickly and appropriately to complaints of wrongdoing. That’s more meaningful than banning a few bad eggs who get the most press.

Twitter should hire a more robust support staff, tasked with reviewing any account that gets automatically flagged once it’s been muted or blocked a certain percentage of the time.
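The trigger itself is easy to express; the harder, human part is what happens after. Here is a rough sketch, with a made-up threshold and denominator, of the kind of rule that could route accounts to that staff.

```python
def needs_human_review(interactions: int, mutes: int, blocks: int,
                       threshold: float = 0.05) -> bool:
    """Flag an account for a support staffer once the share of people who have
    muted or blocked it crosses a threshold. The denominator and the 5% figure
    are assumptions for illustration, not anything Twitter has published."""
    if interactions == 0:
        return False
    return (mutes + blocks) / interactions >= threshold

# Example: an account muted or blocked by 800 of the 10,000 people it reached.
print(needs_human_review(interactions=10_000, mutes=500, blocks=300))  # True
```

The point of the flag is to put a person, not another algorithm, on the other end of it.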

Facebook could start by using a small portion of its billions in profit to hire more than 7 black people in a year to ensure that news curation teams reflect the demographics of their region or country. After that, why not give users the ability to select what kind of news and content they do and don’t want to see on the service? A feature like that could be seen as a win-win — it directly gives Facebook more data, while the users get more control over what information they see.
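A feature like that isn’t exotic. As a sketch only, with topic labels and structure that are my assumptions rather than anything Facebook offers, it boils down to applying a per-user preference list before the feed is ranked.

```python
def filter_feed(posts: list[dict], wants: set[str], avoids: set[str]) -> list[dict]:
    """Keep posts whose topic the user opted into and drop topics they opted out of.
    Assumes each post carries a 'topic' label; assigning those labels fairly is
    the genuinely hard, human part of the problem."""
    visible = []
    for post in posts:
        if post["topic"] in avoids:
            continue
        if not wants or post["topic"] in wants:
            visible.append(post)
    return visible

# Example: a user who asked for local news and opted out of celebrity gossip.
feed = filter_feed(
    posts=[{"topic": "local-news", "text": "..."}, {"topic": "celebrity", "text": "..."}],
    wants={"local-news"},
    avoids={"celebrity"},
)
```

The code is the easy part; deciding which topics exist, and who labels them, is where the bias lives.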

These are all places to start, and they will inevitably uncover more issues that need fixing. While there are software-related answers to these problems, it’s not that simple. Twitter’s Engage, which lets you filter your interactions down to the users Twitter deems higher quality, is an interesting attempt, but it’s once again undermined by poor messaging: the app is pitched at celebrities, but it works for everyone.

If the patterns aren’t yet clear, the answer isn’t more code. The answer is more humans, from diverse backgrounds, making decisions for which they can be held accountable.

The worry, of course, is that increased human influence means increased bias. It means removing the veil of neutrality and embracing the responsibility of shaping human interaction. Here’s the thing — our social networks are already making moral choices. In Michael Nunez’s report for Gizmodo, one of Facebook’s anonymous news curators claimed, “We choose what’s trending… There was no real standard for measuring what qualified as news and what didn’t. It was up to the news curator to decide.”

Is this why you didn’t see anything about Black Lives Matter on Facebook? Is Twitter heading down that same path with its commitment to algorithms and supposed neutrality? We wonder about the ethics of future technology, without taking the time to fully consider the decisions that are being made today — decisions often made without the input of major portions of the population, including women, people of color, and people with disabilities, to name a few fairly significant groups of people.

Until there are more people throughout all levels of these companies with real, lived experiences that reflect all the kinds of diversity in our world, many of these issues will continue to take a backseat to a new like button. So, I have to ask: How much do we trust the code? More importantly, how much do we trust the people writing the code? People make mistakes and have biases. It’s simply part of who we are as humans. It’s time to realize their software isn’t perfect, either.

Andy Newman proudly works for Big Cartel and is a filmmaker and writer. He lives in Los Angeles, CA.
