An artificial-intelligence recommendation system asked Facebook users who had watched a newspaper video featuring Black men whether they wanted to “keep viewing videos about primates.”
Facebook said the recommendation “was clearly an unacceptable error,” and that the system had been disabled while an investigation was underway.

“We apologize to anyone who may have been offended by these suggestions,” the company added.
It’s the latest in a long line of gaffes that have sparked worries about AI’s racial bias.
Google’s Photos software labelled photographs of Black people as “gorillas” in 2015.
The firm said it was “appalled and genuinely sorry,” yet Wired reported in 2018 that its fix was simply to block photo searches and tags for the word “gorilla.”
Twitter revealed in May that its “saliency algorithm” had racial biases in the way it cropped image previews.
Studies have also found racial biases in the algorithms behind some facial-recognition systems.
In 2020, Facebook announced a new “inclusive product council” – as well as a new equity team on Instagram – that would examine, among other things, whether its algorithms were biased against people of colour.
A Facebook spokeswoman told BBC News that the “primates” recommendation “was an algorithmic error” and did not reflect the content of the video.
“As soon as we realised this was happening, we blocked the entire topic-recommendation tool so we could examine the source and prevent it from happening again.”
“As we’ve previously stated, while our AI has improved, we recognise that it isn’t flawless and that we still have work to do.”