How new cybersecurity problems emerge from fake news
As fake news continues to spread, new cybersecurity challenges arise for IT professionals. Learn why we should continue to care about cyber propaganda and what we can do about it.
Fake news is a popular term, especially in a post-truth world. In fact, both terms recently entered the mainstream lexicon: post-truth was named a word of the year in 2016 and fake news in 2017.
Many cybersecurity experts argue that fake news is simply propaganda and should be dealt with in the same way that propaganda has historically been handled. In this article, we'll discuss why that response fails both security teams and consumers, and why cyber propaganda is one of many cybersecurity problems facing the industry.
Propaganda has a long history in conflict for the simple reason that it works. One of the most famous examples of deception is the Trojan horse. More recently, propaganda has appeared in cartoons, songs, broadcasts and even social programs in war zones. All of these approaches and methods required carefully crafted messages and rapid delivery -- requirements that have not changed.
Today's fake news -- cyber propaganda and deceptive data -- differs on several fronts, including its rapid and dynamic customization, its dissemination and its interactive nature. Each of these points deserves an explanation and a comparison to earlier methods, because the differences are significant enough to warrant examining new counter-methods and approaches.
Historically, when governments engaged in propaganda campaigns, they spent time learning the cultures and values of their targets. This aspect remains unchanged, as messages today are still carefully crafted and delivered. If the message is accepted, it works; if not, it is identified as deceptive and either ignored or discarded.
In 21st-century cyber propaganda, messages have a higher success rate because they can be customized for various groups within a population using previously collected data that reveals preferences and values. For example, the data analytics generated by Cambridge Analytica (CA) amounted to predicting what a person would prefer based on years of collected data -- one of the many cybersecurity problems that arose from the CA scandal.
Chatbots exacerbated the problem by intentionally inflaming users' emotions. These AI-driven programs were trained using data analytics from CA to steer and predict user responses. In a virtual environment where several human senses are ineffective, the ability to verify the entity on the other end of a conversation is compromised.
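To make the verification problem concrete, here is a minimal sketch of a behavioral heuristic for flagging accounts that behave like automated programs. The signals and thresholds (account age, posting rate and the share of near-duplicate posts) are illustrative assumptions, not a validated detection method.

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int          # days since the account was created
    posts_per_day: float   # average posting rate
    duplicate_ratio: float # fraction of posts that are near-duplicates (0.0 to 1.0)

def bot_likelihood(acct: Account) -> float:
    """Return a rough 0.0-1.0 score; higher means more bot-like behavior."""
    score = 0.0
    if acct.age_days < 30:       # very new accounts are weakly suspicious
        score += 0.3
    if acct.posts_per_day > 50:  # sustained high-volume posting
        score += 0.4
    score += 0.3 * acct.duplicate_ratio  # repetitive, copy-and-paste content
    return min(score, 1.0)

# Example: a week-old account posting 80 near-identical messages a day
print(bot_likelihood(Account(age_days=7, posts_per_day=80.0, duplicate_ratio=0.9)))
```

Real platforms combine far richer signals, but even this toy score illustrates why trust in the other end of a conversation has to be inferred from behavior when the usual human senses are unavailable.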
The growing sophistication of AI, along with this new, unintended use, created an opportunity for a new type of interaction, and a new dynamic emerged. The win-lose dynamic gave way to an environment in which lose-lose was acceptable, as long as the opponent lost.
In addition to greater targeting abilities, delivery also represents a uniquely 21st-century approach: the ability to quickly disseminate and amplify a message at internet speed enables a lie to spread faster than the truth, as demonstrated by the 2018 MIT study, "The Spread of True and False News Online," published in Science magazine.
The result is that a targeted user may attempt to look up other sources to validate a story, but the responses they find reinforce the false narrative due to saturation, creating an echo chamber. This echo chamber emerges and becomes established before its existence can be detected through monitoring.
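One way to reason about that saturation is to measure how uniformly a claim is echoed across sources. The sketch below uses simple token overlap to estimate what fraction of sampled headlines repeat a claim; the sample data, threshold and similarity measure are assumptions chosen for illustration, not a production monitoring tool.

```python
def tokens(text: str) -> set[str]:
    """Lowercased words with basic punctuation stripped."""
    return {w.strip(".,!?") for w in text.lower().split()}

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two headlines (0.0 to 1.0)."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def saturation(claim: str, headlines: list[str], threshold: float = 0.6) -> float:
    """Fraction of sampled headlines that closely echo the claim."""
    matches = sum(1 for h in headlines if jaccard(claim, h) >= threshold)
    return matches / len(headlines) if headlines else 0.0

sample = [
    "candidate x secretly funded by foreign agents",
    "Foreign agents secretly funded candidate x, sources say",
    "Local team wins championship game",
]
print(saturation("candidate x secretly funded by foreign agents", sample))  # roughly 0.67
```

A spike in this kind of score across otherwise unrelated outlets is the signature of the echo chamber described above, and it typically appears only after the chamber has already formed.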
Fake news and the bad data it produces may appear to be more of an IT problem than a security problem; however, this is not only a security problem, it might be the most profound security problem the industry faces today. If the data being passed through the various networks and systems is bad, then what use are traditional security controls?
An implicit assumption in data security is that the data being protected has value. Falsified data may have value to its source while it is being created, but once it is sent, it immediately loses that value and renders any protections around it superfluous.
Who is most knowledgeable?
Since 21st-century fake news relies on the internet and social media to propagate, cybersecurity professionals should recognize that the domain they are protecting has been, and is still being, used as a weapon.
A fundamental pillar of internet security has been shown to be vulnerable: the trust model of believing an entity without verifying it. Those most knowledgeable about internet technologies and how to secure them should be the ones defending against their misuse.
The fake news problem is no longer merely a sociopolitical issue; it is now a component of much larger cybersecurity problems that have gone unchecked for too long. The problem is undoubtedly challenging, but that should not deter the security community from finding unique ways to counter it.
The promising work that has been performed in computational linguistics, game theory and pattern analysis reflects the growing realization that fake news is indeed one of many recent cybersecurity problems, and it reinforces that this modern-day issue requires a 21st-century response. Just as biologists are called on to find solutions that counter biological agents, internet security professionals must counter fake news and other bad data.
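To illustrate what the pattern analysis side of that work can look like, the following is a minimal sketch of a text classifier that learns stylistic patterns separating sensational from conventional headlines. It assumes scikit-learn is installed, and the handful of labeled examples is invented for illustration; a credible system would require a large, carefully curated corpus and far more than surface word patterns.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, invented training set: 1 = deceptive/sensational style, 0 = conventional reporting
headlines = [
    "SHOCKING secret they don't want you to know",
    "You won't BELIEVE what happens next",
    "Miracle cure doctors are hiding from you",
    "City council approves new budget for road repairs",
    "Researchers publish peer-reviewed study on vaccine safety",
    "Quarterly earnings report released by local utility",
]
labels = [1, 1, 1, 0, 0, 0]

# Word and word-pair frequencies feed a simple linear classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

for text in ["Doctors are HIDING this one weird trick",
             "Council publishes annual transparency report"]:
    prob = model.predict_proba([text])[0][1]  # estimated probability of the deceptive class
    print(f"{prob:.2f}  {text}")
```

Surface patterns alone are easy to evade, which is why research in this area pairs linguistic features with game theory and broader pattern analysis rather than relying on any single signal.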