Disinformation, the deliberate spreading of false information to distort perceptions, has afflicted societies for centuries. The digital sphere, however, has amplified its virulence, especially with the growing availability of generative artificial intelligence (GenAI) tools such as DeepSwap, ChatGPT, and DALL-E. Coupled with the vast distribution capabilities of digital media, these technologies have made it far harder to curb the spread of potentially harmful counterfeit content.

Indeed, this form of fake news, bolstered by cutting-edge AI, has been identified by the World Economic Forum as one of the most pressing global threats of the coming years. The concern centres on its potential for exploitation amid escalating political and societal tensions and during critical events such as elections.

As the technology improves, unskilled individuals with limited resources can create convincing imitations of people, images, or events, lending false credibility to conspiracy theories. During the 2024 elections, for instance, when over two billion eligible voters across 50 countries were set to participate, disinformation became a significant concern: its capacity to manipulate public opinion and undermine trust in media and democratic processes was hotly debated. There is an interesting twist, however: while AI can be used to manipulate content, it can also strengthen our ability to identify threats and defend against them.

Combatting AI-Generated Disinformation

In response to this alarming trend, governments and regulators across the globe have introduced guidelines and legislation to shield the public from AI-generated disinformation. In November 2023, eighteen countries, including the UK and the US, signed a non-binding AI safety agreement. The European Union, meanwhile, approved an AI Act that restricts certain AI applications. Similarly, the Indian government, faced with a wave of deepfakes during its election cycle, enacted rules requiring social media companies to remove flagged deepfakes or risk losing their protection from liability for third-party content.

However, the rapidly evolving AI landscape often outpaces authorities’ ability to develop relevant expertise and build consensus among diverse stakeholders across government, civil society, and the commercial sector. Social media platforms have therefore set up their own safeguards, including heightened scanning for fake profiles and directing users towards reliable sources, especially around elections. Financial pressures, though, have led to the downsizing of AI ethics and online safety teams, raising questions about platforms’ ability to police false content effectively.

Technical challenges also persist in identifying and containing misleading content. With information spreading at a volume and speed unparalleled in history across social media platforms, detection is not easy: posts carrying damaging claims can go viral within hours, as engagement often takes precedence over accuracy. Although automated moderation has improved, it still struggles to detect evasions such as altered hashtags, non-English words, coded keywords, and deliberate misspellings.
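To see why deliberate misspellings defeat simple moderation, consider a minimal sketch comparing an exact-match blocklist with a fuzzy one. The blocklist terms here are hypothetical, and real moderation pipelines are far more elaborate, but the sketch shows the basic trade-off: fuzzy matching catches near-misses like "scamc0in" at the cost of more false positives.

```python
import difflib

BLOCKLIST = {"scamcoin", "fakecure"}  # hypothetical flagged terms

def naive_match(token: str) -> bool:
    """Exact-match filter: defeated by trivial misspellings."""
    return token.lower() in BLOCKLIST

def fuzzy_match(token: str, cutoff: float = 0.8) -> bool:
    """Similarity-based filter: catches near-misses such as
    'scamc0in', but risks flagging innocent look-alike words."""
    token = token.lower()
    return any(
        difflib.SequenceMatcher(None, token, term).ratio() >= cutoff
        for term in BLOCKLIST
    )

# 'scamc0in' shares 7 of 8 characters with 'scamcoin' (ratio 0.875),
# so the fuzzy filter flags it while the exact filter misses it.
assert naive_match("scamc0in") is False
assert fuzzy_match("scamc0in") is True
```

Raising the cutoff makes the filter stricter and reduces false positives; lowering it widens the net. Tuning that threshold is one of the recurring judgment calls in automated moderation.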

Furthermore, disinformation can be amplified unintentionally when mainstream media or influencers share unverified content. The Irish Times, for instance, published an AI-generated article in May 2023 after a lapse in its editing and publication process. Around the same time, a fake AI-generated image of an explosion at the Pentagon circulated rapidly on Twitter, briefly denting the stock market despite its swift debunking by US law enforcement.

What Can Be Done?

Despite its role in fuelling disinformation, not every use of AI is troublesome. In fact, AI has the potential to address the limitations of human content moderation, reducing reliance on human moderators and thereby improving efficiency and cutting costs. These advantages come with caveats, however. Large language models (LLMs) used for content moderation often overreach in the absence of adequate human oversight, which remains necessary to interpret context and sentiment, itself a tightrope walk between blocking harmful content and suppressing legitimate dissent. Issues such as biased training data, AI hallucinations, and flawed algorithms also remain significant hurdles.

Another plausible approach, already mandated in China, is to “watermark” AI-generated content to aid identification. Such watermarks typically escape human notice, yet deep-learning models and detection algorithms can pick them up readily. Even so, digital forensic investigators face an acute, ongoing challenge: they must constantly innovate to keep pace with the adaptive methods of malicious actors exploiting these technologies.
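The core idea of watermarking, a signal invisible to readers but trivial for software to detect, can be illustrated with a toy example. This is not an actual scheme used by any AI provider (real provenance watermarks, such as statistical biases in token sampling, are far more sophisticated and robust); it merely encodes a hidden bit string using zero-width Unicode characters.

```python
# Toy illustration only: hide a bit string in zero-width characters.
ZWJ, ZWNJ = "\u200d", "\u200c"  # zero-width joiner / non-joiner

def embed_watermark(text, bits):
    """Append a bit string encoded as invisible zero-width characters."""
    return text + "".join(ZWJ if b == "1" else ZWNJ for b in bits)

def detect_watermark(text):
    """Return the hidden bit string, or None if no watermark is present."""
    tail = "".join("1" if c == ZWJ else "0"
                   for c in text if c in (ZWJ, ZWNJ))
    return tail or None

marked = embed_watermark("This paragraph looks ordinary.", "1011")
# The marked text renders identically on screen, but a detector
# recovers the hidden payload:
assert detect_watermark(marked) == "1011"
assert detect_watermark("No watermark here.") is None
```

A scheme this naive is also trivially stripped (copy-pasting through a plain-text filter removes the zero-width characters), which hints at why robust watermarking, and the arms race around removing it, is an active research problem.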

Enhancing Digital Literacy

The fight against disinformation also depends on users’ ability to critically evaluate AI-generated content. Identifying and reporting misleading or harmful material demands constant vigilance, yet public understanding of AI’s capabilities, and the skills needed to spot fake content, remain limited. Skepticism has long been encouraged when consuming print media; with today’s technology, it is time to extend that attitude to audio-visual content as well.

Testing Ground

As malicious actors exploit AI to create and spread disinformation, the 2024 elections will serve as a landmark test of how well companies, governments, and citizens can counter the threat. Authorities will need to double down on protective measures against AI-driven disinformation to safeguard people, institutions, and political processes. Equally important is equipping communities with the digital literacy and vigilance needed to defend themselves where other measures fall short.
