
How AI is used to detect harmful videos


Artificial intelligence can be used to improve video content moderation at tech companies such as Facebook and YouTube.

Hashing

The individual frames of a video deemed to breach standards are analysed to generate a unique code, or hash. Any video with characteristics matching the coded video can then be identified and removed. Facebook used hashes to detect and remove the Christchurch massacre video, but these “checksum” techniques are relatively easy to bypass by manipulating the video.
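A minimal sketch of the checksum idea in Python, using the standard library's hashlib. The sample data and function name are illustrative, not Facebook's actual system, but the sketch shows both why exact matching works and why small manipulations defeat it:

```python
import hashlib

def frame_hash(frame_bytes: bytes) -> str:
    """Generate a unique code (checksum) for a frame's raw bytes."""
    return hashlib.sha256(frame_bytes).hexdigest()

# A frame from a video flagged as harmful is hashed once...
original = bytes([10, 20, 30, 40] * 100)   # stand-in for raw pixel data
blocklist = {frame_hash(original)}

# ...and any upload whose frames match a blocklisted hash is removed.
print(frame_hash(original) in blocklist)   # exact re-upload: True (caught)

# Changing even one byte produces a completely different checksum,
# which is why these techniques are easy to bypass by editing the video.
tampered = bytes([11]) + original[1:]
print(frame_hash(tampered) in blocklist)   # manipulated copy: False (missed)
```

Perceptual hashes, which tolerate small edits, are one response to this weakness, but the basic trade-off above is why hash matching alone was not enough.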

Digital watermarks

The video or image is encoded with a digital signature when it is uploaded to a social network, making it easier to track across the platform and target for removal. The approach is resistant to image tampering, but it only works if digital watermarks are widely deployed.
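A simplified illustration of how a signature can be hidden in pixel data, assuming a basic least-significant-bit scheme. The function names and data are hypothetical, and production watermarks are far more robust than this, but the round trip of embedding and recovering a signature is the core idea:

```python
def embed_watermark(pixels: bytes, signature: bytes) -> bytes:
    """Encode a digital signature into the least-significant bits of pixel data."""
    bits = [(byte >> i) & 1 for byte in signature for i in range(8)]
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the lowest bit
    return bytes(out)

def extract_watermark(pixels: bytes, length: int) -> bytes:
    """Recover the signature so the platform can track the file."""
    sig = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        sig.append(byte)
    return bytes(sig)

pixels = bytes(range(200))                  # stand-in for image data
marked = embed_watermark(pixels, b"NZ1")    # hypothetical 3-byte signature
print(extract_watermark(marked, 3))         # b'NZ1'
```

Because only the lowest bit of each byte changes, the marked image looks identical to the original, yet the platform can still read the signature back out.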


Video classification

Machine learning algorithms are trained to detect types of content in videos by analysing large quantities of similar videos. This could be very effective, but it requires a large amount of computing power, can be time-consuming and needs a substantial collection of videos for the AI system to learn from.
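A toy sketch of the learning step, using a nearest-centroid classifier over made-up feature vectors. Real systems use deep neural networks trained on vastly more data, so the labels, vectors and function names here are purely illustrative:

```python
# Each "video" is reduced to a feature vector; training averages the
# examples for each label into a centroid the classifier can compare against.
def train(examples):
    """examples: list of (feature_vector, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for vec, label in examples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify(model, vec):
    """Assign the label whose centroid is closest to the new video's features."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, vec))
    return min(model, key=lambda lbl: dist(model[lbl]))

# Hypothetical training set: many labelled example videos.
training = [([0.9, 0.8], "harmful"), ([0.95, 0.7], "harmful"),
            ([0.1, 0.2], "benign"), ([0.05, 0.15], "benign")]
model = train(training)
print(classify(model, [0.85, 0.75]))   # harmful
```

Even this toy version shows the dependency the article describes: the classifier is only as good as the quantity and variety of labelled videos it learns from.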

Source: University of Otago

This article was first published in the March 30, 2019 issue of the New Zealand Listener.