Deepfakes: Meta criticized for its shortcomings in the fight against AI-generated content
AI is profoundly transforming the flow of information on social media, to the point that artificially created images and videos can imitate real-life situations with disturbing accuracy.
Faced with this shift, the major platforms are trying to adapt their moderation systems. In Meta's case, however, those efforts still appear far from convincing.
Indeed, the company's Oversight Board, responsible for evaluating content moderation decisions, has just pointed out several shortcomings in the handling of deepfakes on Facebook, Instagram, and Threads. According to the board's published report, the current mechanisms are insufficient to limit the spread of misleading AI-generated content.
Detection deemed too limited against deepfakes
After examining several videos, including one AI-generated video showing alleged damage to buildings in Israel, the board concluded that the detection methods used by Meta are "not robust or comprehensive enough."
The video in question, which circulated last year during a conflict between Israel and Iran, was initially left online by the platform. The Board ultimately overturned that decision and alerted Meta to the limits of its current strategy.

One of the problems identified is an excessive reliance on self-reporting by content creators. In practice, platforms often depend on users to declare whether an image or video was generated by artificial intelligence, an approach considered unrealistic in the face of organized disinformation campaigns. The situation is further complicated when content circulates across multiple platforms: in the case studied, the video was reportedly first published on TikTok before being shared on Facebook, Instagram, and X, which makes moderation harder.

The Oversight Board calls for an overhaul of moderation

Given these limitations, the Oversight Board is asking Meta to thoroughly review its approach to AI-related moderation, and several avenues are proposed to improve transparency and the detection of synthetic content.

Among the recommendations is the wider deployment of media provenance standards such as C2PA (Content Credentials), which is designed to identify the origin of a digital file. This type of technology would clearly indicate to internet users whether an image or video has been generated or modified by AI (an illustrative sketch of checking for such provenance metadata appears below).

The Board also asks Meta to develop more effective detection tools and to generalize the labeling of AI-generated content, so that this information becomes visible at scale and users can more easily identify misleading media. The creation of a separate community standard dedicated to AI-generated content is also requested, since such a rule would allow for better regulation of deepfakes and a faster response when misleading content circulates.

These recommendations come amid growing tensions around disinformation and the role of platforms. As AI tools become more accessible, the ability of social networks to distinguish truth from falsehood is becoming a major issue for the reliability of online information.
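For readers curious what a C2PA (Content Credentials) check could involve in practice, here is a minimal sketch in Python. It is an illustration under stated assumptions, not Meta's system or a full C2PA verifier: it merely scans a file's raw bytes for the "c2pa" label that Content Credentials manifests embed, which can only hint that provenance metadata is present. Real verification requires parsing the manifest and validating its cryptographic signatures with a complete C2PA toolchain.

    # Rough illustration only (not Meta's system or a full C2PA verifier):
    # scan a media file's raw bytes for the "c2pa" label embedded by
    # Content Credentials manifests. Presence of the label only hints that
    # provenance metadata exists; it proves nothing about authenticity.

    import sys
    from pathlib import Path

    def has_c2pa_marker(path: str) -> bool:
        """Return True if the file's bytes contain the 'c2pa' label."""
        return b"c2pa" in Path(path).read_bytes()

    if __name__ == "__main__":
        for name in sys.argv[1:]:
            verdict = ("possible Content Credentials found"
                       if has_c2pa_marker(name) else "no C2PA marker")
            print(f"{name}: {verdict}")

A platform adopting the standard would go much further than this heuristic: after validating a manifest, it could surface a label to users whenever the provenance data indicates that the media was generated or edited with AI, which is precisely the kind of large-scale labeling the Board is requesting.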