AI AND ONLINE CONTENT MODERATION


A WORLD WHERE EVERYONE IS CONNECTED

It is no secret that our world has become more connected than ever before with the help of social media. Some of these platforms have billions of users, which makes it challenging to properly moderate posted content. Governments around the world continue to pressure these platforms to take responsibility for filtering out dangerous content such as live-streamed terror attacks, cyberbullying, political manipulation, and more.

THE ROLE OF AI MODERATORS

Without AI, a user would have to report illegal content, which would then be reviewed by a human moderator before a decision could be made. This process is not only slow and inefficient but also exposes moderators to dark and vulgar content.

There are many ways in which an AI program can ease the burden of moderating such content. These include controlling which kinds of harmful content a human moderator is exposed to, and giving the moderator the ability to ask the AI questions to prepare for what they are about to see. Of course, the real reason social media companies prefer AI over human moderators is that it is more efficient and saves money.

At this point, some human oversight is still necessary for a variety of reasons. Ofcom and Cambridge Consultants have released a report based on their research that highlights some of these reasons. It appears that fully automated content moderation is not possible at the present time.

COMMON MODERATION TECHNIQUES

Most online platforms use one or both of the following techniques when moderating. Pre-moderation takes place when an AI checks content before it is checked by a human moderator and before publication. Post-moderation takes place when content is flagged as inappropriate and taken down after publication. As AI systems improve, they will conduct pre-moderation more effectively while also assisting humans with any content flagged after publication. In other words, the AI will help humans correct its own mistakes, and each correction increases the amount of training data the AI has access to, improving its performance in the long term.
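To make the two stages concrete, here is a minimal sketch of such a pipeline in Python. The classifier, threshold, and helper names (score, pre_moderate, post_moderate) are illustrative assumptions, not any real platform's API; the keyword-based score merely stands in for a trained model.

```python
# Hypothetical sketch of a pre-/post-moderation pipeline.
# All names and thresholds are illustrative assumptions,
# not any real platform's API.

from dataclasses import dataclass, field

@dataclass
class ModerationPipeline:
    threshold: float = 0.8            # confidence above which the AI acts alone
    training_data: list = field(default_factory=list)

    def score(self, content: str) -> float:
        """Stand-in for a trained model's 'harmfulness' probability."""
        banned = {"terror", "abuse"}
        hits = sum(word in content.lower() for word in banned)
        return min(1.0, hits / 2)

    def pre_moderate(self, content: str) -> str:
        """Check content before publication."""
        p = self.score(content)
        if p >= self.threshold:
            return "blocked"          # AI is confident: block outright
        if p > 0.3:
            return "human_review"     # uncertain: route to a human moderator
        return "published"

    def post_moderate(self, content: str, human_says_harmful: bool) -> None:
        """Handle a flag on already-published content.

        The human decision is recorded as a new labeled example,
        so each correction grows the training set."""
        self.training_data.append((content, human_says_harmful))

pipeline = ModerationPipeline()
print(pipeline.pre_moderate("holiday photos"))        # published
print(pipeline.pre_moderate("terror abuse content"))  # blocked
pipeline.post_moderate("a flagged post", human_says_harmful=True)
print(len(pipeline.training_data))                    # 1
```

The key design point is the feedback loop: every human decision on flagged content is stored as a new labeled example, which is exactly how post-moderation grows the training data that future pre-moderation models learn from.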

There are also ways to improve an AI program at the pre-moderation stage, before mistakes are made. These involve the use of generative adversarial networks, or GANs, which have been covered in a previous post. Synthetic pictures of inappropriate content generated by a GAN can supplement existing examples of harmful content, which improves the training process.
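As a rough illustration of the idea, here is a minimal GAN sketch in Python using PyTorch. The image size, network shapes, and the random tensors standing in for real harmful examples are all toy assumptions; a production system would train on a curated dataset.

```python
# Minimal GAN sketch (PyTorch) for augmenting a moderation training set.
# Assumption: 28x28 grayscale images, with random tensors standing in
# for real harmful examples. Illustrative only.

import torch
import torch.nn as nn

LATENT = 64
IMG = 28 * 28

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),          # pixel values in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),         # P(input is real)
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_images = torch.rand(512, IMG) * 2 - 1   # stand-in for real harmful examples

for step in range(200):
    batch = real_images[torch.randint(0, 512, (32,))]
    fake = generator(torch.randn(32, LATENT))

    # Discriminator step: push real toward 1, fake toward 0.
    opt_d.zero_grad()
    loss_d = bce(discriminator(batch), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator (fake toward 1).
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake), torch.ones(32, 1))
    loss_g.backward()
    opt_g.step()

# Synthetic examples to supplement the real training set:
synthetic = generator(torch.randn(100, LATENT)).detach()
augmented = torch.cat([real_images, synthetic])
print(augmented.shape)   # torch.Size([612, 784])
```

After training, samples drawn from the generator are simply concatenated onto the real dataset, giving the moderation classifier more harmful examples to learn from.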

THE REPORT

The full paper by Cambridge is available for reading here. It examines the capabilities of AI technologies in meeting the challenges of moderating online content, and how improvements are likely to change the game entirely over the next five years. The most recent jump in content-moderation capability came with the development of deep neural networks, which enabled systems to recognize features in complex data inputs such as human speech, images, and text.
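To give a sense of what such a network looks like for text, here is a minimal classification sketch in Python with PyTorch. The vocabulary, labels, and training sentences are toy assumptions for illustration only.

```python
# Toy deep-neural-network text classifier. Vocabulary, labels,
# and data are illustrative assumptions, not a real moderation model.

import torch
import torch.nn as nn

vocab = {"<unk>": 0, "i": 1, "hate": 2, "love": 3, "you": 4, "cats": 5}

def encode(text: str) -> torch.Tensor:
    """Map words to token ids, shape (1, sequence_length)."""
    return torch.tensor([[vocab.get(w, 0) for w in text.lower().split()]])

class ToxicClassifier(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 16):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)   # averages word vectors
        self.net = nn.Sequential(
            nn.Linear(dim, 32), nn.ReLU(),
            nn.Linear(32, 2),                           # benign vs. harmful
        )

    def forward(self, token_ids):
        return self.net(self.embed(token_ids))

texts = ["i hate you", "i love cats"]
labels = torch.tensor([1, 0])                    # 1 = harmful (toy label)
batch = torch.cat([encode(t) for t in texts])    # both sequences have 3 tokens

model = ToxicClassifier(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(batch), labels)
    loss.backward()
    opt.step()

print(model(encode("i hate cats")).argmax(dim=1))  # likely tensor([1])
```

Even in this toy setting, the embedding layer learns per-word feature vectors and the dense layers combine them into a decision, which is the same feature-recognition pattern the report describes at much larger scale.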

With this new technology, AI will be better able to assist humans with the daunting task of moderating online content.
