AI, or Artificial Intelligence, has come a long way since its inception and now touches many aspects of everyday life. It can drive cars, moderate content such as statuses, images, and videos on social media, and even decide which series Netflix recommends to you next. Yet it still cannot reliably identify and block violent media, pointing to a basic flaw that persists in Artificial Intelligence today.
The latest example of this shortcoming was the terrorist attack in New Zealand, where the attacker live-streamed the assault on Facebook and the platform's AI failed to track, report, or shut down the broadcast. The video aired for a full 17 minutes before the police department had it taken down, by which time the footage and related posts had already spread like wildfire across social media.
This raises a simple question: why can AI, which readily identifies problematic images, videos, and status updates, do so little against violence? One of the main reasons is that violence is subjective, and much depends on the context in which a violent video is streamed. Most social media sites combine human moderators with Artificial Intelligence for screening, but the sheer volume of content uploaded every day makes it impossible to keep up, even with both working together.
Moreover, according to machine-learning experts, AI systems rely heavily on learned patterns when deciding whether to block content or let it through. Violent images and videos look very different in different contexts, and that disparity is enough to confuse the AI. External conditions such as poor lighting and low video quality compound the problem. It is clear, then, that much work remains before AI can be trusted to moderate violence on its own.
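To make the pattern-reliance concrete, here is a toy sketch of a confidence-threshold moderator. Everything in it is invented for illustration (the function names, the signals, the weights; no platform's real system works from a dict like this), but it shows the failure mode described above: degraded video lowers the classifier's confidence, so the same violent content that would be blocked in a clean clip slips under the threshold in a dark, grainy stream.

```python
# Toy sketch: threshold-based content moderation. All names and weights
# are hypothetical, for illustration only.

def violence_score(frame_features: dict) -> float:
    """Stand-in for a trained classifier: combines a few invented
    signals into a confidence score in [0, 1]."""
    score = 0.0
    if frame_features.get("weapon_detected"):
        score += 0.5
    if frame_features.get("blood_detected"):
        score += 0.3
    # Poor lighting or low resolution reduces the classifier's certainty,
    # one reason real violence can fall below the blocking threshold.
    score *= frame_features.get("video_quality", 1.0)
    return min(score, 1.0)

def moderate(frame_features: dict, threshold: float = 0.7) -> str:
    """Block only when confidence clears the threshold; everything
    else stays up (or is queued for human review)."""
    return "block" if violence_score(frame_features) >= threshold else "allow"

# A clear, well-lit clip clears the threshold (0.8) and is blocked...
print(moderate({"weapon_detected": True, "blood_detected": True,
                "video_quality": 1.0}))   # -> block
# ...but the same content in a grainy stream scores 0.48 and is allowed.
print(moderate({"weapon_detected": True, "blood_detected": True,
                "video_quality": 0.6}))   # -> allow
```

The hard threshold is the crux: a real pipeline faces millions of borderline scores a day, and wherever the line is drawn, some genuine violence lands just below it.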