
How good is Google’s AI?

YouTube has tried for years to keep violent and hateful videos off its service. The Google unit has hired thousands of human moderators and put some of its best minds in artificial intelligence to work on the problem.

On Thursday, none of that stopped a gunman from using social media to broadcast his massacre at a New Zealand mosque, or legions of online posters from tricking YouTube’s software into rebroadcasting the attacker’s video.

When the attack was broadcast live on Facebook, police alerted the social network, which removed the video. But by then it had been captured by others, who reposted it on YouTube. Google said it was “working vigilantly to remove any violent footage.” Even so, many hours later the video could still be found, a disconcerting reminder of how far the giant internet companies still have to go in understanding and controlling the information shared on their services.

“Once it has been determined that the content is illegal, extremist or a violation of its terms of service, there is absolutely no reason why, within a relatively short period, this content cannot be automatically deleted at the point of upload,” says Hany Farid, a professor of computer science at the School of Information at the University of California, Berkeley. “We have had the technology to do it for years.”


YouTube has worked for years to prevent certain videos from appearing on its site. One tool, called Content ID, has existed for more than a decade. It gives copyright owners, such as film studios, the ability to claim content as their own, receive payment for it and remove pirated copies. Similar technology has been used to blacklist other illegal or undesirable content, such as child pornography and terrorist propaganda videos.
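At its core, this kind of blacklisting is fingerprint matching: each reference video is reduced to a compact signature, and every new upload is compared against the stored signatures. The minimal sketch below illustrates the idea with a toy average-hash over synthetic 8x8 grayscale frames; the hash, the function names and the matching threshold are assumptions made for illustration, not YouTube’s actual Content ID implementation.

```python
# Toy sketch of fingerprint-based blacklisting, in the spirit of Content ID:
# reduce each reference video to a compact signature, then compare uploads
# against the stored signatures. Frames are synthetic 8x8 grayscale grids and
# all names and thresholds here are illustrative assumptions.

def frame_hash(frame):
    """Average-hash an 8x8 grayscale frame into a 64-bit fingerprint."""
    pixels = [p for row in frame for p in row]
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= avg else 0)
    return bits

def video_signature(frames):
    """A video's signature is simply the list of its per-frame hashes."""
    return [frame_hash(f) for f in frames]

def hamming(a, b):
    """Number of fingerprint bits that differ between two frame hashes."""
    return bin(a ^ b).count("1")

def matches_blacklist(upload_sig, blacklist, max_distance=10):
    """Flag an upload if most of its frames are close to a blacklisted video."""
    for ref_sig in blacklist:
        close = sum(1 for u, r in zip(upload_sig, ref_sig) if hamming(u, r) <= max_distance)
        if close >= 0.8 * min(len(upload_sig), len(ref_sig)):
            return True
    return False

# A re-upload with a mild, uniform brightness change still matches the reference.
reference = [[[(r + c) * 16 for c in range(8)] for r in range(8)] for _ in range(3)]
reupload = [[[min(255, p + 3) for p in row] for row in frame] for frame in reference]
blacklist = [video_signature(reference)]
print(matches_blacklist(video_signature(reupload), blacklist))  # True
```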

Google revealed that it was using AI techniques

About five years ago, Google revealed that it was using AI techniques such as machine learning and image recognition to improve many of its services, and the technology was applied to YouTube. At the beginning of 2017, 8 percent of the videos flagged and removed for violent extremism were taken down with fewer than 10 views. After YouTube introduced a machine-learning-driven flagging system in June 2017, more than half of the videos removed for violent extremism had fewer than 10 views, the company reported in a blog post.

Google executives have testified several times before the US Congress

The subject was violent and extremist videos distributed through YouTube. The message, repeated each time, is that YouTube is improving, sharpening its algorithms and hiring more people to deal with the problem. Google is widely seen as the company best equipped to handle this because of its AI expertise.

So why couldn’t Google prevent a single video, one that is clearly extremist and violent, from being published again and again on YouTube?

“There are many ways to trick computers,” says Rasty Turek, chief executive of Pex, a company that builds technology that competes with YouTube’s Content ID.

Making minor changes to a video, such as placing a frame around it or flipping it on its side, can confuse the software that has been trained to identify problematic images, Turek explains.
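Seen through the fingerprinting sketch above, it is easy to see why such edits work: mirroring a frame or padding it with a border changes the pixels the hash is computed from, so its bits drift away from the stored reference. Another toy illustration with synthetic frames, not any real detection system:

```python
# Why small edits defeat naive fingerprints: mirroring a frame or adding a
# border changes which pixels the hash sees, so its bits drift away from the
# stored reference. Same toy 8x8 average-hash idea as sketched above.

def frame_hash(frame):
    pixels = [p for row in frame for p in row]
    avg = sum(pixels) / len(pixels)
    return sum((1 if p >= avg else 0) << i for i, p in enumerate(pixels))

def mirror(frame):
    """Flip the frame horizontally, one of the simple evasions Turek describes."""
    return [list(reversed(row)) for row in frame]

def add_border(frame, value=255):
    """Pad the frame with a bright border, then crop back to 8x8 for hashing."""
    padded = [[value] * 10] + [[value] + row + [value] for row in frame] + [[value] * 10]
    return [row[:8] for row in padded[:8]]

# Synthetic frame: a simple diagonal gradient.
original = [[(r + c) * 16 for c in range(8)] for r in range(8)]

for name, variant in [("mirrored", mirror(original)), ("bordered", add_border(original))]:
    changed = bin(frame_hash(original) ^ frame_hash(variant)).count("1")
    print(f"{name}: {changed} of 64 fingerprint bits changed")
```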

Big problems with live streaming

The other big problem is live streaming, which by its very nature gives AI software no complete video to analyze before the clip goes online. Savvy uploaders can take an existing video they know YouTube will block and rebroadcast it live, second by second, essentially replaying it as a stream to get around Google’s software. By the time YouTube recognizes what is happening, the video has already played for 30 seconds or a minute, no matter how good the algorithm is, Turek says.

“Live broadcasting reduces speed to a human level,” he says. It is a problem that YouTube, Facebook, Pex and other companies working in the field are all grappling with, he added.

This rebroadcasting trick is a particular problem for YouTube’s approach of maintaining blacklists of videos that break its rules. Its AI-powered software is trained to recognize such a clip automatically and block it if anyone tries to upload it to the site again.

It still takes a while for the AI software

It still takes a while for the AI software to be trained before it can identify other copies. And, by definition, the video must exist online before YouTube can start this learning process. And that is before people start splitting the offensive content into short clips to broadcast live.
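Stated schematically, the weakness is a race: the blacklist only knows a clip’s fingerprint after at least one copy has been seen and processed, so uploads that arrive before that step finishes are not matched. The toy simulation below, with made-up timings and fingerprints, is meant only to show that lag, not how YouTube’s pipeline actually works.

```python
from dataclasses import dataclass, field

# Toy simulation of the lag described above: the blacklist only learns a clip's
# fingerprint some minutes after the first copy is seen, so re-uploads that
# arrive before then slip through. Timings and fingerprints are made up.

@dataclass
class Blacklist:
    known: set = field(default_factory=set)

    def matches(self, fingerprint: str) -> bool:
        return fingerprint in self.known

    def learn(self, fingerprint: str) -> None:
        self.known.add(fingerprint)

PROCESSING_DELAY = 5  # assumed minutes between first sighting and blacklist update

uploads = [(0, "attack_clip"), (2, "attack_clip"), (4, "attack_clip"), (9, "attack_clip")]

blacklist = Blacklist()
first_seen = None
for minute, fingerprint in uploads:
    # Once enough time has passed since the first sighting, the system has
    # finished fingerprinting the clip and can block further copies.
    if first_seen is not None and minute >= first_seen + PROCESSING_DELAY:
        blacklist.learn(fingerprint)
    if blacklist.matches(fingerprint):
        print(f"t={minute} min: upload blocked")
    else:
        print(f"t={minute} min: upload goes live")
        if first_seen is None:
            first_seen = minute
```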

Another complicating factor is that edited clips are also being published by reputable news organizations as part of their coverage of the event. If YouTube removed a news report simply because it included an excerpt of the video, press-freedom advocates would object.

The New Zealand shooter used social networks to get maximum exposure. He posted on internet forums used by right-wing and anti-Muslim groups, tweeted about his plans and then began his Facebook live broadcast on the way to the site of the attack.

He published a manifesto full of references to internet and alternative culture, probably designed to give journalists more material to work with and thus spread his notoriety, says Jonas Kaiser, a researcher affiliated with the Berkman Klein Center for Internet &amp; Society at Harvard.

“The patterns seem to be very similar to those of previous events,” Kaiser said.
