How AI is used to detect harmful videos
by Peter Griffin
Artificial intelligence can be used to improve video content moderation at tech companies like Facebook and YouTube.
A video deemed to breach standards is analysed, frame by frame, to generate a unique code, or hash. Any uploaded video with characteristics matching that hash can then be identified and removed. Facebook used hashes to detect and remove copies of the Christchurch massacre video, but these "checksum" techniques are relatively easy to bypass by manipulating the video: re-encoding, cropping or mirroring the footage changes its fingerprint.
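The idea of frame hashing can be illustrated with a toy "average hash": each frame is reduced to a 64-bit fingerprint, and near-matches are found by counting differing bits. This is a minimal sketch only, assuming frames have already been downscaled to an 8x8 grayscale grid; the matchers platforms actually deploy are far more robust.

```python
# Toy "average hash" for a video frame, assuming the frame has already
# been downscaled to an 8x8 grid of grayscale values (0-255).
# Real perceptual-hashing systems are considerably more sophisticated.

def average_hash(frame):
    """Return a 64-bit hash: bit is 1 where a pixel is brighter than the mean."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests a near-duplicate frame."""
    return bin(h1 ^ h2).count("1")

def matches(frame_hash, banned_hash, threshold=5):
    """A frame matches a banned hash if it differs in only a few bits."""
    return hamming_distance(frame_hash, banned_hash) <= threshold
```

Small pixel-level edits leave the hash almost unchanged, which is why near-match thresholds are used; larger manipulations, however, can push the distance past any reasonable threshold, which is the weakness the article describes.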
With digital watermarking, a video or image is encoded with an imperceptible signature when it is uploaded to a social network, making it easier to track across the platform and target for removal. The approach is resistant to image tampering, but it only works if digital watermarks are widely deployed.
Machine-learning algorithms are trained to detect types of content by analysing large quantities of similar videos. This can be very effective, but it requires a large amount of computing power, can be time-consuming and needs a substantial collection of labelled videos for the AI system to learn from.
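The train-then-classify loop can be sketched with a nearest-centroid classifier over hand-made frame features; the feature vectors and labels here are invented for illustration. Real moderation systems use deep neural networks trained on millions of labelled examples.

```python
# Minimal sketch of learning from labelled examples: average the feature
# vectors for each label (training), then assign new frames to the label
# with the closest centroid (classification). Illustrative only.

def train(examples):
    """examples: list of (feature_vector, label). Returns per-label centroids."""
    sums, counts = {}, {}
    for vec, label in examples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify(centroids, vec):
    """Return the label whose centroid is nearest (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, vec))
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))
```

The cost the article mentions shows up even in this toy: accuracy depends entirely on how many labelled examples are available and how well the features separate the classes.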
Source: University of Otago
This article was first published in the March 30, 2019 issue of the New Zealand Listener.