YouTube’s recommendation system will now surface fewer videos that “could misinform users in harmful ways”, the company announced in a blog post, as the platform seeks to address criticism that it amplifies conspiracy theories and extremism.
The Google-owned video-sharing platform did not provide a clear definition of what it considers harmful misinformation, but cited as examples “videos promoting a phony miracle cure for a serious illness, claiming the Earth is flat, or making blatantly false claims about historic events like 9/11”. The changes will also affect “borderline content” — videos that come close to violating the company’s content rules without technically crossing the line.
“We think this change strikes a balance between maintaining a platform for free speech and living up to our responsibility to users,” the company said in the blog post, noting that the shift would affect only recommendations, not which videos remain on the platform.
The company said the change will apply to less than one per cent of the content on YouTube and, again, will not determine whether a video is allowed on the site. “This change relies on a combination of machine learning and real people. We work with human evaluators and experts from all over the United States to help train the machine learning systems that generate recommendations,” the blog post added. “These evaluators are trained using public guidelines and provide critical input on the quality of a video.”