11 ways to spot disinformation on social media
Social media disinformation is meant to be deceptive and can spread quickly. Here are some ways to spot it.
Fake news is not new, but the rate at which it can spread is. Many people have a hard time sorting real news from fake news on the internet, causing confusion.
With the U.S. presidential election in November 2024, AI-generated content on social media has taken center stage, spreading images of political figures paired with false messages. Spam AI posts are flooding people's Facebook feeds with random content reshared by bot accounts for engagement, and fake X, formerly Twitter, accounts are pushing presidential candidates with stolen images and deepfakes.
There are also foreign interference and national security concerns. National intelligence officials have found increasing election interference efforts using AI-generated content from China, Iran and Russia. Iran is accused of making threats against former President Donald Trump and spreading disinformation about his campaign. The Biden administration has seized Kremlin-run websites that were intended to influence the U.S. presidential election with disinformation. An FBI affidavit refers to these influence campaigns as Doppelganger, and the government unveiled criminal charges against Russian nationals, sanctions on entities and the seizure of internet domains. These efforts aim to stop the spread of disinformation to the American public and attacks on U.S. politicians.
This isn't Russia's first use of social media to spread disinformation. Russia built a digital barricade to prevent its citizens from accessing information about its war in Ukraine. Cut off from global information, Russian citizens must rely on whatever information their authorities permit. The free and open internet does not exist in Russia.
One of the main problems with this digital barricade is the spread of disinformation. Russians receive false information, such as the assertion that Ukraine is the aggressor in the conflict.
Social media is becoming a more common way for readers to get their news and information. However, not all information on these sites can be trusted. For example, the Red Cross had to address rumors that the federal government was deliberately withholding aid from victims of Hurricane Helene. The spread of this false information hurts relief efforts in affected areas because people become reluctant to donate money and supplies.
Disinformation can cause mistrust, as its main goal is deception. And it can spread through bots, bias, sharing and hackers. Keep reading to learn 11 ways to spot disinformation on social media.
What is fake news?
Fake news consists of articles that are intentionally false and designed to manipulate readers' perceptions of events, facts, news and statements. The information looks like news but either cannot be verified or did not happen. This fabricated content often mimics real news media but lacks its credibility and accuracy.
Things that can cause suspicion in a news story include the following:
- Unverifiable information.
- Pieces written by nonexperts.
- Information not found on other sites.
- Information that comes from a fake site.
- Stories that appeal to emotions instead of stating facts.
Categories of fake news include the following:
- Clickbait. This uses exaggerated, questionable or misleading headlines, images or social media descriptions to generate web traffic. These stories are deliberately hyperbolic to attract readers.
- Propaganda. This spreads information, rumors or ideas to harm an institution, country, group of people or individual -- typically for political gain.
- Imposter content. This content impersonates general news sites to deceive readers.
- Biased/slanted news. This attracts readers to confirm their own biases and beliefs.
- Satire. This creates fake news stories for parody and entertainment.
- State-sponsored news. This operates under government control and potentially creates and spreads disinformation to residents.
- Misleading headlines. These stories may not be completely false but are distorted with misleading headlines and small snippets displayed in news feeds.
Fake news is harmful because it can create misunderstanding and confusion on important issues. Spreading false information can intensify social conflict and stir up controversy. These stories can also cause mistrust.
What contributes to disinformation?
Fake news spreads more rapidly than other news because it appeals to emotions, grabbing attention. Here are some ways disinformation spreads on social media:
- Continuous sharing. It's easy to share and like content on social media. The number of people who see a piece of content increases each time a user shares it with their social network.
- Recommendation engines. Social media platforms and search engines provide readers with personalized recommendations based on past preferences and search history, which further shapes who sees fake news.
- Engagement metrics. Social media feeds prioritize content using engagement metrics, such as how often readers share or like stories. Accuracy is not a factor, as the sketch after this list illustrates.
- AI. AI systems can also promote disinformation. AI can create realistic fake material based on the target audience. An AI engine can generate messages and test them immediately for effectiveness at swaying targeted demographics. It can also use bots to impersonate human users and spread disinformation.
- Hackers. These people can plant stories in real news media outlets, making them appear to come from reliable sources. For example, Ukrainian officials reported that hackers broke into government websites and posted false news about a peace treaty.
- Trolls. Fake news can also appear in the comments of reputable articles. Trolls deliberately post to upset and start arguments with other readers. They are sometimes paid for political reasons, which can play a part in spreading fake news.
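To make the engagement-metrics point concrete, here is a minimal Python sketch of engagement-based ranking. The score formula, weights and sample posts are illustrative assumptions, not any platform's actual algorithm; the point is simply that nothing in the ranking rewards accuracy.

```python
from typing import NamedTuple

class Post(NamedTuple):
    title: str
    likes: int
    shares: int
    is_accurate: bool  # known to fact-checkers, but unused by the ranking below

def engagement_score(post: Post) -> float:
    # Illustrative weighting: only engagement counts; accuracy plays no role.
    return post.likes + 2 * post.shares

feed = [
    Post("Calm, accurate report", likes=120, shares=15, is_accurate=True),
    Post("Outrageous fabricated claim", likes=900, shares=400, is_accurate=False),
]

# Rank the feed purely by engagement, highest first.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(post.title, engagement_score(post))
# The fabricated post ranks first because it generates more engagement.
```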
Misinformation vs. disinformation
Misinformation and disinformation are often used interchangeably; however, the two terms have different meanings and intent.
Misinformation is inaccurate information shared without any intention to cause harm. Misinformation can be shared unintentionally either due to lack of knowledge or understanding of the topic. Typically, people spread misinformation unknowingly because they believe it to be true.
Disinformation is spread to deceive deliberately, and there is typically an objective behind it. For example, some of the most prominent disinformation campaigns come from governments, such as the Russian government's campaigns about its war with Ukraine that aim to build public support. These campaigns post false information the government wants people to believe.
11 ways to spot disinformation on social media
The first step of fighting the spread of disinformation on social media is to identify fake news. It's best to double-check before sharing with others. Here are 11 tips to recognize fake news and identify disinformation.
1. Check other reliable sources
Search other reputable news sites and outlets to see if they are reporting the story, and check for credible sources cited within the story. Credible, professional news agencies have strict editorial guidelines for fact-checking articles.
2. Check the source of the information
If the story is from an unknown source, do some research. Examine the web address of the page, and look for strange domains other than .com, such as .infonet or .offer. Also check for misspellings of the company name in the URL.
Consider the reputation of the source and its expertise on the matter. Bad actors may create webpages that mimic professional sites to spread fake news. When in doubt, go to the organization's homepage and check for the same information. For example, if a story looks like it is from the U.S. Centers for Disease Control and Prevention, go to the CDC's official website and search for that information to verify it.
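Some of these URL checks can be roughed out in code. The following Python sketch flags unusual top-level domains and near-miss spellings of well-known outlet names; the allowlists and the similarity threshold are illustrative assumptions, and a real check still requires human judgment.

```python
from urllib.parse import urlparse
from difflib import SequenceMatcher

# Illustrative assumptions: a tiny allowlist of well-known outlet names and a
# similarity threshold for catching look-alike (typosquatted) spellings.
TRUSTED_NAMES = {"cdc", "bbc", "reuters", "apnews"}
COMMON_TLDS = {"com", "org", "net", "gov", "edu"}

def url_warning_signs(url: str) -> list[str]:
    """Return a list of reasons a URL looks suspicious (empty if none found)."""
    reasons = []
    host = (urlparse(url).hostname or "").lower()
    labels = host.split(".")
    tld = labels[-1]
    name = labels[-2] if len(labels) >= 2 else host  # second-level label, e.g. "reutters"

    if tld not in COMMON_TLDS:
        reasons.append(f"unusual top-level domain: .{tld}")

    for trusted in TRUSTED_NAMES:
        similarity = SequenceMatcher(None, name, trusted).ratio()
        if 0.75 <= similarity < 1.0:
            reasons.append(f"'{name}' looks like a misspelling of '{trusted}'")

    return reasons

# Example: a misspelled outlet name on a strange domain, as described above.
print(url_warning_signs("https://www.reutters.infonet/breaking-story"))
```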
3. Look at the author
Perform a search on the author. Check for credibility, how many followers they have and how long the account has been active.
Scan other posts to determine whether the account shows bot behaviors, such as posting at all times of the day and from various parts of the world. Check for qualities such as a username that contains numbers and suspicious links in the author's bio. If the account mostly reposts content from other accounts and pushes highly polarized political content, it may be a fake bot account.
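Here is a minimal Python sketch of how those bot signals might be combined. The account fields, thresholds and example values are hypothetical; real platforms expose different data, and no single signal proves an account is a bot.

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    # Hypothetical summary of an account; real platforms expose different fields.
    username: str
    posting_hours: set[int]   # distinct hours of the day (0-23) the account posts in
    repost_ratio: float       # fraction of posts that are reshares of other accounts
    polarized_ratio: float    # fraction of posts that are highly polarized political content

def bot_warning_signs(account: AccountActivity) -> list[str]:
    """Collect the heuristic warning signs described above; none is proof by itself."""
    signs = []
    if any(ch.isdigit() for ch in account.username):
        signs.append("username contains numbers")
    if len(account.posting_hours) >= 20:
        signs.append("posts at nearly all hours of the day")
    if account.repost_ratio > 0.8:
        signs.append("mostly reshares content from other accounts")
    if account.polarized_ratio > 0.8:
        signs.append("mostly highly polarized political content")
    return signs

suspect = AccountActivity("newsfan84123", set(range(24)), 0.93, 0.88)
print(bot_warning_signs(suspect))
```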
4. Search the profile photo
In addition to looking at the author's information and credibility, check their profile picture. Run an image search on the profile photo in Google, and make sure it is not a stock image or a photo of a celebrity. If the image doesn't appear to be original, the article is likely unreliable because its author is effectively anonymous.
5. Read beyond the headline
Consider whether the story sounds unrealistic or too good to be true. A credible story conveys plenty of facts through expert quotes, official statistics and survey data, and it may also include eyewitness accounts.
If the facts beyond the headline are not detailed or consistent, question the information. Look for evidence that the event actually happened, and make sure facts are not being used solely to back up a particular viewpoint.
6. Develop a critical mindset
Don't let personal beliefs cloud judgment. Biases can influence how someone responds to an article. Social media platforms suggest stories that match a person's interests, opinions and browsing habits.
Don't let emotions influence views on the story. Look at a story critically and rationally. If the story is trying to persuade the reader or send readers to another site, it could be fake news.
7. Determine if it is a joke
Satirical websites present stories as parody or jokes. Check whether the website consistently posts funny stories and is known for satire. One well-known example is The Onion.
8. Watch for sponsored content
Watch for stories labeled "sponsored content" or a similar designation, which often appear at the top of search results or webpages. These stories often have catchy photos and appear to link to other news stories, but they are ads designed to appeal to the reader's emotions.
Check the page for labels such as "paid sponsor" or "advertisement." Whether legitimate or deceitful, these articles are baiting readers into buying something. Some may also take users to malicious sites that install malware, which can steal data from devices, cause hardware failure or make a computer or network inoperable.
9. Use a fact-checking site
Fact-checking sites can also help determine if the news is credible or fake. These sites use independent fact-checkers to review and research the accuracy of the information by checking reputable media sources. They are often part of larger news outlets that identify incorrect facts and statements. Popular fact-checking sites include the following:
- PolitiFact. This Pulitzer Prize-winning site researches claims from politicians to check accuracy.
- FactCheck.org. This site from the Annenberg Public Policy Center also checks the accuracy of political claims.
- Snopes. This is one of the oldest and most popular debunking sites on the internet, focusing on news stories, urban legends and memes. Its independent fact-checkers cite all sources at the end of each debunking.
- BBC Verify. This site, which checks facts for news stories, is part of the British Broadcasting Corporation.
10. Check image authenticity
Modern editing software makes it easy to create fake images that look real. Look for telltale signs such as oddly cast shadows or jagged edges in the photo. A reverse image search in Google is another way to check where the image originated and whether it has been altered.
11. Watch for AI-generated fakes
AI continues to evolve, and at a glance, it can be tricky to tell what is real and what is fake. Advances in AI have created crisper, clearer images, and voice cloning is extremely accurate. However, watch for distortions in areas such as hands, fingers and eyes. These parts of the body typically show irregularities in AI-generated content, such as eyes that don't blink properly. Also watch for audio and facial expressions that don't line up properly.
What are social networks doing to combat disinformation?
Social media platforms are cracking down on false information. In October 2023, the Israel-Hamas war took center stage on social media as disinformation spread quickly, prompting platforms to take precautions.
Israel-Hamas war
Platforms issued statements about how they are handling disinformation on the war, which may be used to incite hate and violence. Here is what some social platforms released:
- TikTok. TikTok released a statement that said it launched a command center to manage safety globally. The company plans to improve the software to detect and remove any graphic or violent content, and it also hired Arabic and Hebrew linguists to moderate content.
- Facebook and Instagram. Meta, the parent company of Facebook and Instagram, said it launched a special operations center with experts who speak Arabic and Hebrew to monitor content. The company also tightened its rules and lowered its enforcement thresholds to limit questionable content.
- X. X announced it increased resources for the crisis and is monitoring content around the clock, especially content about hostages.
- YouTube. YouTube has removed videos since the attack and says it continues to monitor hate, graphic images and extremism, according to community guidelines.
- Telegram. The messaging app Telegram restricted channels operated by or closely associated with Hamas, the militant group. These channels are no longer accessible to Telegram users.
Regular moderations to prevent disinformation
Facebook runs one initiative, the Facebook Journalism Project, and funds another, the News Integrity Initiative, to address the broader rise of disinformation. These organizations highlight the problems with fake news and spread awareness. Facebook also takes action against pages and individuals that share fake news, removing them from the site.
Instagram and Facebook use a "false information" label to combat disinformation. Third-party fact-checkers review posts and identify potentially false claims. If they determine the information is untrue, they flag the post with a label notifying users that it contains misinformation. Readers who want to view a labeled post must first click an acknowledgment that the information is not true, and if they try to share it, they get a warning that they are about to share false information.
X released a statement that it does not tolerate disinformation but is taking a community-driven approach to combat the spread.
LinkedIn also encourages users to report disinformation. If a review deems the information false, LinkedIn removes the post. LinkedIn also has a strict user agreement, and users who do not comply are removed.
To fight fake news on social media, users must first recognize what is false. If the user deems the information as fake news, it's best to report it to the platform.
Amanda Hetler is senior editor and writer for WhatIs, where she writes technology explainer articles and works with freelancers.