
Disinformation: Coming to a Business Near You


Dave DeWalt, Founder and CEO, NightDragon, and Dr. Bilyana Lilly, author of “Russian Information Warfare: Assault on Democracies in the Cyber Wild West”

In recent years, we’ve seen the impact of disinformation rising, with concerns around election interference, false information leading to violence, the spreading of fake news by nations to further their strategic objectives, and more. This problem has grown so great that the World Economic Forum named disinformation as the top global risk for 2024.

While disinformation itself is not new, the digital age and its ability to spread any narrative so rapidly (and from any voice, whether authentic or synthetic) is amplifying the reach and virality of false information – often more quickly than the truth. According to one study, false news is about 70% more likely to be reshared than true news, which proliferates false narratives and often sows confirmation bias or confusion about the current state of events.

By all signs so far in 2024, the disinformation threat landscape is becoming even more complex. With more than 60 countries around the world participating in elections this year, awareness of disinformation’s use by various actors and the ability to separate truth from falsehoods is paramount.

AI-generated content has already raised serious election-related disinformation concerns, with the number of deepfakes created increasing by 900% this year alone. With such rampant disinformation, the legitimacy of the election process and of newly elected governments is being called into question.

Gen AI (Generative AI) has opened a world of opportunity for content creation, new ideas, and answers at our fingertips – but its recent advances also heighten this risk, helping bad actors create realistic content more quickly and amplify the effects of disinformation across social networks and other channels.

Disinformation Risk Has Entered the Corporate Sector

Financial Risk

While disinformation is often discussed in the context of elections or social media companies, the reality is that enterprises in every industry face a rising and significant threat. Publicly traded companies lose about $39 billion annually to disinformation-related stock market losses, part of an estimated $78 billion lost globally each year.

For example, in May 2023 an AI-generated image of an explosion near the Pentagon spread via verified accounts on X. Within minutes, the stock market briefly shed roughly half a trillion dollars in value.

Reputation Damage Risk

False or misleading information can tarnish long-held brand reputation and erode consumer trust. As few as four negative articles can cost an organization up to 70% of its prospective customers. With social media largely replacing traditional advertising, organizations must be prepared to combat false narratives that can be shared easily and rapidly.

For example, 2020 saw allegations that storage cabinets from online retailer Wayfair were related to a child trafficking ring, leading to significant reputational damage for the company and removal of the implicated items from its website.

Operational Disruption Risk

Disinformation campaigns surrounding an organization's products, services, or operations can lead to internal mayhem, as well as disruptions to supply chains and partnerships.

For example, many events surrounding the emergence of COVID-19 fell victim to disinformation, including conspiracy theories that 5G technology caused COVID-19, leading to arson attacks against telecom infrastructure in the UK.

Cybersecurity Risk

Business leaders should also be aware of, and protect against, the spread of disinformation through deepfake phishing or “CEO fraud,” in which a highly convincing deepfake is used to impersonate an organization’s top leaders. The advancing technology behind these tactics can easily be overlooked if employees are not proactively adopting a defensive mindset.

General awareness of these tactics has increased following the elaborate and infamous case of a finance worker tricked into paying out $25 million during a video conference call with bad actors impersonating the company’s CFO.

Legal and Regulatory Risk

Disinformation campaigns can also violate laws and regulations related to defamation, intellectual property rights, consumer protection, and data privacy.

For example, in 2018, the Securities and Exchange Commission (SEC) charged numerous hedge funds for shorting firms and spreading disinformation about them.

Business leaders need to ensure that their organizations comply with relevant laws and regulations and mitigate the risks associated with non-compliance. By adopting a proactive approach to monitoring, detection, and resilience, businesses can detect and mitigate disinformation before it transforms from a potential risk into a real threat to the business.

Innovators Tackling the Disinformation Challenge

Fortunately, innovators are addressing these problems, and investors are ready to back them. In 2022, VC funds invested $187.7 million in combating disinformation. What’s more, a Crunchbase analysis of just 16 companies in this space found that they had raised a collective $250 million for technology development in this sector.

There are many ways that innovation and technology, including AI, can help mitigate disinformation. For instance, startups today are leveraging AI to uncover fake social media profiles, bot activity, and content made using Gen AI to expose threats, while also providing real-time alerts and actionable insights. Others use machine learning to automatically flag harmful narratives and malicious behavior, allowing service providers to analyze the relationship between users and the content shared.

A separate category of companies uses AI to help platform companies, such as social media organizations, detect disinformation, hate speech, harmful content, and other similar negative items. These technologies work to mitigate disinformation at its source, as it is often spread through social media platforms.

These startups are growing and providing accessible defenses against the bad actors exploiting AI. With a global market size of $3.86 billion for deepfake detection alone, the sector is expected to expand at a compound annual growth rate of 42% through 2026.

The Future is AI

Leaders in every industry should be aware of disinformation threat scenarios that are pertinent in their sector. Proactive measures to mitigate this risk include continuous monitoring and detection of disinformation narratives relevant to the company, incident response and crisis management plans including disinformation aspects, raising awareness among staff members about disinformation and its pernicious effects, and introducing tabletop exercises to practice scenarios including disinformation.

What’s clear is that disinformation isn’t going away any time soon, especially with AI on the rise. In terms of both the risk posed by bad actors and the opportunity for innovators, 2024 may be just the tip of the iceberg.

The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.


NightDragon

NightDragon is an investment and advisory firm focused on growth and late-stage investments within the cybersecurity, safety, security and privacy industries.
