
Disinformation by imagery in two divisive conflicts

Reporters have been relaying the horrors of conflict zones since the 18th century, connecting global citizens with the realities of conflicts far beyond their own borders. The Irishman William Howard Russell was at the forefront of what historians have identified as the birth of modern war journalism, with his dispatches from the Crimean War of 1854-56. 

In the 20th century, advances in audio-visual equipment allowed photojournalists to capture snapshots of human suffering that have become iconic images of a world at war.

The most recent decade, however, has seen the recording of conflict passed from the almost exclusive hands of official media or military sources into the palms of citizens. 

The proliferation of imagery and video straight from the epicentre of conflicts, taken largely on mobile devices, has saturated online media.

With this sea change, we have never felt so close to, nor so familiar with, images of conflict. It has also become much harder to sift through the content on every social media platform, website and messaging service and understand where it comes from, who took it and why. And there is an important question for all of us viewing these images daily: to what extent is this an unfiltered view from the frontline, and to what extent part of a larger campaign of manipulation by one ‘side’ or another?

Let’s look more closely at how disinformation campaigns have been weaponising the happenings in two of the most high-profile and intensely chronicled conflict zones, in Gaza and Ukraine. 

War and social media

The early stages of conflicts and crises provide fertile ground for the propagation of misinformation in various forms. Nowadays, that involves the repurposing of old footage and photographs, the sharing of artificially generated images and even video game footage being passed off as war reporting. 

Even discerning consumers of media can be caught out in the fog of war, especially when emotions are high.  

The 2023/2024 Israeli siege and assault on Gaza, and the Hamas attacks of 7 October, have been no exception. In fact, the level of misinformation emerging around what’s happening in the region has shown what a powerful vector social media can be. This conflict has already provided a long list of examples worth bearing in mind in the wars and crises to come. 

Facebook, X (formerly Twitter), TikTok and LinkedIn, to name a few, have all seen misinformation carried on their platforms. X has come under particular criticism for its lax approach to monitoring its users’ activity, but no social media site is immune. 

It’s not just average users and self-styled journalists who are guilty of spreading falsehoods via social media though. As in all conflicts, state propaganda has had a significant role to play in muddying the waters. We have seen examples from Israel, Russia and Hamas in recent times. 

Although the means of communication may change, the spread of false information in times of war is nothing new. “In war, truth is the first casualty” is a saying often attributed to the Greek playwright Aeschylus, writing around 500 BC. 

People’s biases come into play in conflict situations, something that Valerie Wirtschafter, a fellow in Foreign Policy and the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution, has explained in relation to Israel and Hamas. 

“People are susceptible to this type of information because of really basic underlying cognitive processes,” she said. “We have our beliefs. We like to have our beliefs confirmed. This is confirmation bias. And we’re going to seek out information that confirms those beliefs or discount information that maybe goes against it. 

“It’s very motivated reasoning. And so I think that people want to have their priors confirmed. They have certain opinions, very strong emotional reactions, particularly to this crisis going on right now. And so that tendency to seek out information that confirms what we already think is very common here.”

So, let’s look at some of the most prominent examples of misinformation - particularly visual content - that have been doing the rounds online since 7 October, and how to identify similar playbooks in the future.

Artificial images

A major technological development in the area of misinformation has been the emergence of AI (artificial intelligence) image-generation programs such as Midjourney and DALL-E. 

Artificially generated images have been shared on a variety of social media platforms in relation to the Israel-Hamas conflict, but the technology still suffers from some identifiable shortcomings. 

One early case of this in the current conflict involved a supposed image of a child crying in the rubble of a destroyed building. It was posted on X on 18 October by a user who describes herself as a “Palestinian journalist”. The post led people to believe the image was of a child hurt in a bombing in Gaza.

But it had already been posted on LinkedIn months earlier, around the time of the devastating earthquake that hit Turkey and Syria in early 2023. 

The image was not authentic to begin with, though.

If viewed through the link to X, or LinkedIn, above, there are some giveaways once an initial (and understandable) emotional reaction has passed. The lighting and colouring are cartoonish for a start, as is the child’s facial expression. There are exaggerated lines on the brow and chin. And then, of course, there are the fingers, something image-generating software has yet to replicate convincingly. 

The picture has been posted across multiple social media platforms under various pretences. It has even been used as the banner image for a GoFundMe campaign ostensibly raising money for people affected by the October 2023 earthquake in Afghanistan.

(It should be noted, of course, that the distribution of the image has been diminished by the sheer and increasing volume of all too real images of children killed or maimed by collapsed buildings as the attack on Gaza went on.)

Another artificial image shared since the 7 October Hamas attack depicts what looks like a refugee camp for Israelis, with one tent prominently featuring the Israeli flag. 

However, there is a tell that the image is fake - a flag in the middle distance appears to have two Stars of David on it, rather than one. In the top right corner, you can also see the renderings of the tents become indistinct. The tents in the foreground are odd on closer inspection as well: no two appear alike, each having its own wonky shape. 

The image was shared by users across multiple platforms by both pro-Palestinian and pro-Israeli accounts.

A third example of an artificially generated image that was widely shared online in the early days of the Israeli bombardment and assault on Gaza purported to show Atletico Madrid football fans holding a massive Palestinian flag.

This one is perhaps the most obvious of the examples. At first glance, the saturated colouring and unnatural lighting should at least arouse suspicion. Once again, a quick zoom does the trick. 

When it comes to depicting crowds of people, image generation programs struggle to get the details of individual crowd members right. 

Can you match each arm to its owner in the section below? Also, at least one of these spectators appears to be missing most of their head.

But it isn’t just imagery generated by models like DALL-E and Midjourney that has been misleading social media users since 7 October. People have also attempted to pass off video game footage as war reporting. 

How to spot video game footage?

Arma 3 is a military simulation game developed by Czech company Bohemia Interactive. Gameplay footage from Arma 3 is often misrepresented as real footage from conflict zones.

On 8 October, the day after Hamas’ attack, Arma 3 gameplay footage was posted by @AGCast4 on X (Twitter), with the user claiming it showed a salvo of Hamas rockets. The post has garnered over 147,000 views, 1,000 likes and around 180 reposts at the time of writing. 

The post is likely an exercise in trolling (the account mostly posts about guns and anime girls), rather than a deliberate effort to promote disinformation for political ends, although the results can easily be the same; once online, the content is out of the original poster’s hands.

Users pointed out the actual origin of the video using the website’s Community Notes function, but the post remains on the site. 

There are many ways to identify the video as a fraud. Pausing on one of the most brightly lit frames reveals that the footage is clearly computer generated. Smoke is still difficult to reproduce with computer graphics. 

Halfway through the video, a rifle icon appears in the centre of the screen, part of the HUD (heads-up display) shown to the player in the game. 

The jerky movement of the camera will also be familiar to anyone who has played a first-person video game. The movements are not natural, with the player taking repeated and unnecessary steps back and forth, something gamers typically do when idling. 

On top of this, the rockets all fire perfectly straight into the air and the smoke trails they leave behind are equally straight, showing no sign of being blown by the wind. 

Arma 3 footage has also been used misleadingly, on multiple occasions, since the full-scale Russian invasion of Ukraine in February 2022. The game has been misused in other conflicts, too, like in this Twitter post from 2020 which claims to show US forces shooting down Iranian missiles.

The developers at Bohemia Interactive have posted an article on their website acknowledging that Arma 3 gameplay footage has been misrepresented as war videography. 

"While it's flattering that Arma 3 simulates modern war conflicts in such a realistic way, we are certainly not pleased that it can be mistaken for real-life combat footage and used as war propaganda,” said the company’s PR manager Pavel Křižka in that post, which made most reference to the war in Ukraine but also mentioned other conflicts. 

The developer listed some tips to bear in mind when viewing videos apparently showing the events of a war. 

More tips from video game developers

  • Very low resolution
    • Even dated smartphones can record video in HD quality. Fake videos are usually of much lower quality, and are intentionally pixelated and blurry to hide the fact that they’re taken from a video game.
  • Shaky camera
    • To add dramatic effect, these videos are often not captured in-game. Authors film a computer screen with the game running in low quality and with an exaggerated camera shake.
  • Often takes place in the dark/at night
    • The footage is often dark in order to hide the video game scene’s insufficient level of detail.
  • Mostly without sounds
    • In-game sound effects are often distinguishable from reality.
  • Doesn't feature people in motion
    • While the game can simulate the movement of military vehicles relatively realistically, capturing natural looking humans in motion is still very difficult, even for the most modern of games.
  • Unrealistic vehicles, uniforms, equipment
    • People with advanced military equipment knowledge can recognise the use of unrealistic military assets for a given conflict. For instance, in one widely spread fake video, the US air defence system C-RAM shoots down a US A-10 ground attack plane. Units can also display non-authentic insignias, camouflage, etc.

Repurposed images

While fake images produced by ever more powerful technology may be the most eye-grabbing form of misinformation in today’s online age, good old-fashioned misrepresentation of real images is still the biggest offender. 

Footage and photos from other conflicts, particularly the civil war in Syria, have been repurposed and presented as reporting from Gaza or Israel. Some of these have already been debunked by The Journal FactCheck. 

This form of misinformation is more difficult to detect as the images are real, but mislabelled. In cases like these, it is important to scrutinise the source and look for confirmation of claims elsewhere. 
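One practical way fact-checkers find the original posting of a mislabelled image is reverse image search, which at its core relies on perceptual hashing: visually similar pictures produce similar hashes even after recompression or resizing. The toy sketch below illustrates the average-hash idea using small grayscale grids in place of real image files; the grids and function names are illustrative assumptions, not taken from any specific tool.

```python
# Toy illustration of perceptual (average) hashing, the idea behind
# reverse image search: similar pictures yield similar hashes.
# Real tools decode actual image files; here a "picture" is simply an
# 8x8 grid of grayscale values (0-255). All names are hypothetical.

def average_hash(grid):
    """Return a 64-bit hash: 1 where a pixel is brighter than the mean."""
    pixels = [p for row in grid for p in row]
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

# A simple "image": bright top half, dark bottom half.
original = [[200] * 8 for _ in range(4)] + [[30] * 8 for _ in range(4)]
# The same image after mild recompression (slightly shifted values).
recompressed = [[190] * 8 for _ in range(4)] + [[40] * 8 for _ in range(4)]
# An unrelated image: bright left half, dark right half.
unrelated = [[200] * 4 + [30] * 4 for _ in range(8)]

d_same = hamming_distance(average_hash(original), average_hash(recompressed))
d_diff = hamming_distance(average_hash(original), average_hash(unrelated))
print(d_same, d_diff)  # prints "0 32": the repost matches, the unrelated image does not
```

Because the hash only records which pixels are brighter than average, it survives the recompression and cropping that reposted images typically undergo, which is why services like reverse image search can surface a photo's earlier appearances online.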

In one example, Mario Nawfal, a user with over one million followers on X despite a history of misleading posts, posted a video which he claimed showed a salvo of Hamas rockets. The video was actually first posted in 2020 and showed footage taken during the war in Syria. The misleading post has since been deleted. 

In another post from the same account, footage taken during an Israeli bombardment in the vicinity of the Indonesian hospital in Gaza was presented as though it showed the Israeli assault on Shifa hospital. The original video was posted by Middle East Eye on their TikTok account on 9 November. The Israeli raid on Shifa hospital took place the following week. 

This post on X shows a video taken inside a tunnel. The poster claims it shows Israeli soldiers entering a tunnel built by Hamas in Gaza, but it is actually footage taken from a YouTube video posted in October 2022. Videos taken from inside tunnels should always be treated with suspicion, especially if they don’t show the tunnel entrance. 

Again, scrutinising the source and looking for the footage or story elsewhere online is the best way to avoid being taken in by content like this. 

Images and videos of events that took place outside of conflict zones have been used to mislead people online, too. 

This video posted on X purported to show festival-goers running from Hamas militants during the 7 October attack. In reality, it was footage of people running to the stage when the gates opened at a Bruno Mars concert in Tel Aviv, which took place a couple of days before the attack at the Negev Desert festival. The video was originally posted on TikTok.

In another example, a fireworks display in Algeria was posted widely on social media sites with the accompanying claim that it showed rockets flying over Gaza. The original video was taken in Algiers in July 2023 and shows football fans celebrating a league win.

Exercising a healthy level of scepticism and scrutinising sources is the best way to avoid being taken in by this kind of misrepresentation. It is also helpful to take a moment before sharing a post or video that has made you feel strongly about something. 

Fact-checkers in Israel and Gaza right now are also trying to work despite their own experiences. In an interview with the New York Times, journalist Achiya Schatz explained: “We work hard to stick to what we know or don’t know, and to leave aside our political opinions. Especially now, in a time of war, we have to work carefully to not let our opinions cloud what is factual and what is not.” 

The toolkit on this website provides extra practical tips about how to spot AI images, believable conspiracy theories and erroneous headlines. 

Published: March 14, 2024

David MacRedmond

Journalist with The Journal

The Journal
Knowledge Bank

FactCheck is a central unit of Ireland’s leading digital native news site, The Journal. For over a decade, we have strived to be an independent and objective source of information in an online world that is full of noise and diversions.

Our mission is to reduce the noise levels and bring clarity to public discourse on the topics impacting citizens’ daily lives.

Contact us at: factcheck@thejournal.ie

Visit thejournal.ie/factcheck/news/ to stay up to date on our latest explainers

