
Twitter to investigate apparent racial bias in photo previews


The first look a Twitter user gets at a tweet might be an unintentionally racially biased one.

Twitter said Sunday that it would investigate whether the neural network that selects which part of an image to show in a photo preview favors showing the faces of white people over Black people.

The trouble started over the weekend, when Twitter users posted several examples showing that, in an image combining a photo of a Black person and a photo of a white person, Twitter’s timeline preview more frequently displayed the white person.

The public tests got Twitter’s attention, and the company is apparently taking action.

“We’re looking into this and will continue to share what we learn and what actions we take,” the company said.

Twitter’s Chief Design Officer Dantley Davis and Chief Technology Officer Parag Agrawal also chimed in on Twitter, saying they’re ‘investigating’ the neural network.

“To address it, we did analysis on our model when we shipped it, but [it] needs continuous improvement,” Agrawal tweeted. “It’s not a scientific test as it’s an isolated example, but it points to some variables that we need to look into.”

The conversation started when one Twitter user posted about racial bias in Zoom’s facial detection.

He noticed that previews of a side-by-side image of him (a white man) and his Black colleague repeatedly showed his face.

After multiple users got in on testing, one user even showed that the favoring of lighter faces held for cartoon characters from The Simpsons.

It’s problematic to claim instances of bias from a handful of examples.

That doesn’t mean the preview question isn’t worth looking into, as this could be an example of algorithmic bias: when automated systems reflect the biases of their human makers or make decisions that have biased implications.

In 2018, Twitter published a blog post that explained how it used a neural network to make photo preview decisions. The system crops each image around the region its model predicts is most “salient” (where a viewer’s eye is likely to land first), and those predictions tend to favor areas of high contrast.

This decision to use contrast as a determining factor might not be intentionally racist, but more frequently displaying white faces than Black ones is a biased result.
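
To make the mechanism concrete, here is a minimal sketch of contrast-driven cropping in Python. It is a toy stand-in, not Twitter’s actual model: the sliding window, the stride, and the use of pixel-intensity standard deviation as a contrast score are all assumptions made for illustration.

import numpy as np

def contrast_crop(image, crop_h, crop_w, step=16):
    """Pick the crop window with the highest local contrast.

    Toy stand-in for a saliency model: every candidate window is
    scored by the standard deviation of its pixel intensities, and
    the highest-scoring window wins. A production system would use
    a trained saliency network instead.
    """
    best_score, best_crop = -1.0, None
    h, w = image.shape[:2]
    for top in range(0, h - crop_h + 1, step):
        for left in range(0, w - crop_w + 1, step):
            window = image[top:top + crop_h, left:left + crop_w]
            score = float(window.std())  # contrast proxy
            if score > best_score:
                best_score, best_crop = score, window
    return best_crop

# Illustration: on a mid-gray canvas, a light patch produces more
# local contrast than a dark patch, so the crop lands on the light one.
canvas = np.full((256, 512), 128, dtype=np.uint8)
canvas[96:160, 64:128] = 230    # lighter patch
canvas[96:160, 384:448] = 90    # darker patch
print(contrast_crop(canvas, 128, 128).mean())  # mean > 128: light patch won

The point of the sketch is that a crop rule with no notion of race can still produce racially skewed output whenever its scoring signal correlates with skin tone.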

There’s still a question of whether these anecdotal examples reflect a systemic problem.
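
One way to answer that question is to replace anecdotes with a controlled experiment: generate many paired images that differ only in which face appears where, record which face the cropper selects, and test whether the split departs from the 50/50 expected of an unbiased model. A minimal sketch using SciPy’s binomial test, with purely hypothetical counts:

from scipy.stats import binomtest

# Hypothetical tally from repeated, randomized paired-image trials
# (illustrative numbers, not real measurements):
n_trials = 200          # paired images shown to the cropper
k_white_selected = 130  # previews that selected the white face

# Null hypothesis: an unbiased cropper picks each face with p = 0.5.
result = binomtest(k_white_selected, n_trials, p=0.5, alternative="two-sided")
print(f"observed rate: {k_white_selected / n_trials:.2f}")
print(f"p-value: {result.pvalue:.4f}")  # small value -> unlikely under 50/50

A consistently small p-value across such runs would point to a systemic skew rather than a handful of unlucky examples; a large one would suggest the viral screenshots were cherry-picked.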

