YouTube’s Recommender AI Is Still Promoting Misinformation, Study Finds
New research published July 7 by Mozilla suggests that YouTube’s AI has failed to back up its promise to curb misinformation on its platform.
The crowdsourced study drew on data from over 37,000 YouTube users. Mozilla claims it is the largest investigation into YouTube’s recommender algorithm to date.
TechCrunch suggests that YouTube has been able to dodge scrutiny by keeping the algorithm and associated data hidden from oversight via “commercial secrecy.”
Mozilla suggests three things to rein in YouTube’s AI: laws that mandate transparency into AI systems; protection for independent researchers so they can investigate algorithmic impacts; and more control for platform users, including the ability to opt out of “personalized” recommendations.
TechCrunch asked Brandi Geurkink, Mozilla’s senior manager of advocacy and the lead researcher for the project, what she felt was the most concerning finding of the study.
“One is how clearly misinformation emerged as a dominant problem on the platform. So to see that that is what is emerging as the biggest problem with the YouTube algorithm is really concerning to me,” said Brandi Geurkink, Mozilla senior manager of advocacy, via TechCrunch.