“If the Internet says so, it might be true”

Feb 9, 2022

 

Written by Robert Muscat

It is no longer a rarity for us to question the integrity of a news article, its author, or worse, the media portal as a whole.

 

This behaviour may have become more habitual when targeted disinformation campaigns hit home at the end of last year, as a substantial number of fake media portals mimicked some of the most prominent local media houses.

Without debating whether the correct terminology was used throughout the news reports highlighting the disinformation activity, one word was common across all of them – spoofing.

In computing literature, spoofing is a deception technique in which threat actors disguise themselves as legitimate identities, usually to scam victims, steal sensitive information, or damage a target’s reputation. Irrespective of the objective, threat actors always need a medium through which to deliver their attacks, and according to several researchers, the medium most commonly chosen for cyber-attacks is electronic mail (e-mail).

According to the Federal Bureau of Investigation’s (FBI) 2020 Internet Crime Report, from the spectrum of attacks that involve spoofing, Business E-mail Compromise continued to be the costliest, with 1.8 billion US dollars lost to cyber criminals. This attack is a sophisticated scam that targets both businesses and individuals who perform financial transactions. The attacker usually compromises legitimate business e-mail accounts through social engineering or computer intrusion techniques. By spoofing the legitimate identity, the attacker instructs victims to transfer funds to attacker-controlled accounts, with the victims often not realising that the legitimate account has been accessed without authorisation. Such funds are frequently converted to cryptocurrencies, which are known to complicate the tracking of identities.
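To make the idea of a spoofed sender identity a little more concrete, here is a purely illustrative Python sketch of one common red flag in such scams: a “From” header whose display name shows a trusted address while the message was actually sent from a lookalike domain. The addresses and domains below are invented for the example, and this is only one of many possible warning signs, not a complete check.

# Illustrative sketch: flag a mismatch between the address shown to the reader
# and the actual sending address, a common sign of a spoofed e-mail.
# The addresses and domains are made up for this example.
from email import message_from_string
from email.utils import parseaddr

raw_headers = (
    'From: "accounts@real-newsportal.com" <accounts@real-newsp0rtal.net>\n'
    "Subject: Urgent payment instruction\n"
    "\n"
)

msg = message_from_string(raw_headers)
display_name, actual_address = parseaddr(msg["From"])

# If the display name itself looks like an e-mail address but does not match
# the real sending address, treat the message as suspicious.
if "@" in display_name and display_name.lower() != actual_address.lower():
    print("Suspicious: message displays", display_name,
          "but was actually sent from", actual_address)

Mail filters use far more elaborate checks than this, but the underlying observation is the same: what the victim sees is not necessarily where the message came from.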

 


But financial motivation is not always the case. Most of us witnessed how the term ‘fake news’ flooded social media platforms during the 2016 U.S. presidential election, with then-candidate Donald Trump using it as a propaganda tool to counter the ever-growing accusations from the Democratic Party. At the same time, the world was witnessing a sharp rise in sophisticated disinformation campaigns built on mud-slinging attacks against election candidates.

The term ‘disinformation’ is important here because it refers to false information deliberately created with the intent to harm a person or an organisation. This differs from ‘misinformation’, where there is no intent to cause harm.

In most of the malicious websites mimicking the identity of local news portals, a technique known as typosquatting was used. It involves deliberately creating and registering a web address containing a misspelling of the targeted brand’s name or trademark web address, in order to capture consumers or web users who mistakenly type in the wrong address, one that often differs from the genuine one by only a character or two, as the sketch below illustrates.
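To give a rough sense of how close a typosquatted address can sit to the real one, here is a minimal Python sketch that scores how similar candidate domains are to a legitimate one. The domain names are invented for the example, and the threshold is arbitrary; this is an illustration, not a detection tool.

# Illustrative sketch: how close a typosquatted domain can be to the real one.
# The domain names and the 0.8 threshold are invented for this example.
from difflib import SequenceMatcher

def similarity(a, b):
    # Return a 0..1 ratio of how similar two domain names are.
    return SequenceMatcher(None, a, b).ratio()

legitimate = "localnewsportal.com"
candidates = [
    "localnewsp0rtal.com",   # zero instead of the letter 'o'
    "localnewsportal.net",   # same name, different top-level domain
    "weather-site.example",  # unrelated site, for contrast
]

for candidate in candidates:
    score = similarity(legitimate, candidate)
    verdict = "suspiciously similar" if score > 0.8 else "not obviously related"
    print(f"{candidate}: {score:.2f} ({verdict})")

A high similarity score alone does not prove malice, but combined with signals such as how recently a domain was registered, it is the kind of clue brand-protection and takedown services typically look for.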

Threat actors use automated web scraping tools to copy logos, website layouts and content formats, replicating the look and feel of legitimate news portals and making it difficult for unwary web users to notice the difference at first glance.

To further understand the phenomenon of disinformation activity and the sophistication behind the campaigns, we can break disinformation down into three key vectors, referred to as the ABC framework – short for Actors, Behaviour, and Content – presented by Camille François from Harvard University.

Actors are those executing the disinformation campaign, and they are not necessarily the individuals or organisations that ultimately benefit from its impact. In fact, actors now actively advertise disinformation campaign services, also known as ‘Disinformation-as-a-Service’, on what is called the Dark Web. Organisations can hire service providers for such campaigns for as little as a few hundred dollars, and the criminals behind them will craft full-scale disinformation campaigns that generate false positive propaganda about the client – or negative campaigns designed to tarnish rivals with lies and malicious material.

 

This shadow industry is quietly booming, with research from the University of Oxford finding that since 2018, more than 65 firms have offered computational propaganda as a service. Such firms not only have the human resources to create fake websites and articles, but also employ native speakers of the language of the audience they are trying to influence, graphic designers, content writers, campaign trackers and many more. 2018 marks an important year for the discovery of disinformation service providers, following the news of the fully-fledged campaign Cambridge Analytica had run during the 2016 U.S. elections, using various disinformation tactics to influence voters.

 

Behaviour in this regard encompasses the variety of tactics and techniques actors may use to enhance and exaggerate the reach and impact of their campaigns, the end goal being to create the perceived impact of a genuine campaign that is thriving organically. As an example, disinformation service providers were found to be studying the social media platform algorithms that determine which trending articles are shown to users, in order to understand how to manipulate suggested content.

In fact, social media platforms have witnessed the deployment of bot armies and troll farms, which amplify the intended messages and, in the case of troll farms, shoot down content that does not conform to their campaign objectives.

 

The content of these messages varies and is delivered through different media, such as fake news portals, manipulated photos and manipulated videos, most notably deepfakes. With deepfakes, videos are altered by artificial intelligence tools either to misrepresent an event that occurred or to manufacture an event that never occurred. Categories of “harmful content” vary and can include violent extremism, hate speech, terrorism and others, but it is worth noting that content is the most visible vector of the three, since every user can see and form an opinion on what is posted. By contrast, attributing messages to deceptive actors or observing behavioural patterns in a deception operation can be a difficult task.

 

This means that we, the general public, have an influence on how much power a disinformation campaign accumulates. Every time a user shares, likes or comments on content, it increases the likelihood that others will see it. Thus, by staying vigilant and understanding the motive behind each post we encounter, we can avoid engaging with fake content, crippling the flow of disinformation in the process.

 

