Recent terror attacks around the world have once again put the spotlight on YouTube, Facebook, Twitter, and other online services, all of which have been urged to do more to remove extremist content from their platforms.
In a piece written for the Financial Times and published on Sunday, Kent Walker, senior vice-president and general counsel of YouTube owner Google, insisted the video-streaming site is working hard to deal with the issue of violent extremism online, though he admitted that, “as an industry … more needs to be done.” With that in mind, he said YouTube is now introducing four new measures to run alongside its existing efforts.
First, the company promises to start making more use of technology aimed at automatically identifying extremist content. Walker said YouTube plans to “devote more engineering resources to apply our most advanced machine learning research to train new ‘content classifiers’ to help us more quickly identify and remove” extremist content.
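Walker's piece doesn't say how those classifiers work internally, but the general idea is standard supervised learning: train a model on examples labeled by human reviewers, then use its scores to surface likely violations faster. Here is a minimal, purely illustrative sketch using scikit-learn; the training snippets, labels, and review threshold are all invented for the example, and YouTube's real systems operate on video and audio signals at a vastly different scale:

```python
# Toy "content classifier": learn from labeled examples, then flag new
# items whose predicted probability exceeds a human-review threshold.
# All data and parameters here are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled snippets (1 = violates policy, 0 = acceptable).
texts = [
    "join our fight and take up arms against the innocent",
    "documentary examining the history of extremist movements",
    "graphic call to violence against civilians",
    "news report on yesterday's counter-terrorism operation",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score new uploads; anything over the threshold is queued for human
# review rather than removed outright -- the model assists, people decide.
REVIEW_THRESHOLD = 0.7  # hypothetical
for snippet in ["incitement to armed violence", "cooking tutorial for beginners"]:
    p = model.predict_proba([snippet])[0][1]
    status = "flag for review" if p > REVIEW_THRESHOLD else "no action"
    print(f"{p:.2f}  {status}  {snippet!r}")
```

The point of the threshold is the trade-off Walker's next measures address: automated scores speed up triage, but borderline cases still go to humans.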
Next, YouTube plans to expand the pool of human evaluators in its Trusted Flagger program, adding 50 expert NGOs to the 63 organisations already taking part.
Trusted Flaggers are called on in cases where a more nuanced judgement is needed about “the line between violent propaganda and religious or newsworthy speech.” Walker notes that while flags from regular users can sometimes be inaccurate, reports from Trusted Flaggers prove accurate 90 percent of the time.
The company said it will also take a new approach to videos “that do not clearly violate our policies” but come close to the mark. Such content will now appear behind a warning notification and will play without ads.
Viewers won’t be able to recommend or comment on the video, a move designed to limit engagement and make it harder to find. “We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints,” Walker wrote.
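Walker's piece describes this borderline state only in product terms, but conceptually it amounts to flipping a bundle of per-video switches. The sketch below is purely illustrative; none of these names reflect YouTube's actual systems:

```python
# Illustrative model of the "borderline" state described above: the video
# stays up, but monetization, recommendations, and comments are switched
# off and an interstitial warning is shown first. All names hypothetical.
from dataclasses import dataclass

@dataclass
class VideoRestrictions:
    interstitial_warning: bool = False
    ads_enabled: bool = True
    recommendable: bool = True
    comments_enabled: bool = True

def apply_borderline_state(v: VideoRestrictions) -> VideoRestrictions:
    """Restrict a video that is close to, but not over, the policy line."""
    return VideoRestrictions(
        interstitial_warning=True,
        ads_enabled=False,
        recommendable=False,
        comments_enabled=False,
    )

print(apply_borderline_state(VideoRestrictions()))
```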
Finally, YouTube intends to double down on its counter-radicalization efforts, using targeted ads aimed at potential ISIS recruits to direct them to anti-terror videos designed to dissuade them from joining.
Walker promised to continue efforts to tackle dangerous content, adding, “Extremists and terrorists seek to attack and erode not just our security, but also our values; the very things that make our societies open and free. We must not let them.”
Last week, Facebook highlighted its own efforts to keep extremist content off its service, including the use of AI to quickly identify and remove content previously taken down by the company. Facebook also employs a team of human reviewers to back up those automated efforts in cracking down on terror-related posts.
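Re-identifying previously removed content is commonly done by fingerprint matching: store a hash of every item taken down, then compare new uploads against that database. The sketch below is a loose illustration only; production systems use perceptual hashes that survive re-encoding and cropping, whereas the exact cryptographic hash used here for brevity would not:

```python
# Simplified sketch of re-upload detection via fingerprint matching.
# The hash function and database are stand-ins, not Facebook's method.
import hashlib

def fingerprint(data: bytes) -> str:
    """Stand-in fingerprint: SHA-256 of the raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical database of fingerprints from previously removed content.
known_removed = {fingerprint(b"previously-removed-video-bytes")}

def check_upload(data: bytes) -> str:
    """Block exact re-uploads of content already taken down."""
    if fingerprint(data) in known_removed:
        return "blocked: matches previously removed content"
    return "accepted"

print(check_upload(b"previously-removed-video-bytes"))  # blocked
print(check_upload(b"some-new-video-bytes"))            # accepted
```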