After the outbreak of conflict in the Middle East, videos and images that do not correspond to reality have spread across social networks. The case of X (formerly Twitter) is the most alarming, and the attitude of its owner, Elon Musk, toward the situation is not convincing.
Hoaxes and hate speech have multiplied on social networks since the Hamas attack on Israel on October 7. Faced with the situation, the European Commission has warned X (formerly Twitter) and Facebook that the two platforms are being used to spread disinformation, and has asked them to remove any illegal content.
The European Union has the Digital Services Act, a regulation that requires all internet platforms to remove fake images. In the event of repeated violations, platforms can be fined the equivalent of 6% of their annual turnover.
Meta claimed to have experts working to remove such content. The Commission does not rule out opening an investigation: it is already analyzing a response letter from the platform and deciding on the next steps.
X and the Musk era
Disinformation of this kind usually surges across all social media during serious crises, such as wars. But in this case, the conflict was a litmus test for the platform led by Elon Musk. And for the experts, the first assessment is not good at all.
Since Hamas's assault on Israel began, there have been chilling images of children in cages, documents purporting to prove the US approved $8 billion in aid to Israel, and videos suggesting that atrocities against civilians were staged by the Hebrew State.

All this fake news was widely disseminated on X.
Thierry Breton against Elon Musk
For the European Union, this is already too much. In an open letter, Thierry Breton, the European Commissioner in charge of the digital agenda, gave Elon Musk, the boss of X, 24 hours to clean up the content on his platform. Failing this, the European Commission reserves the right to impose a penalty equivalent to 6% of the social network’s annual turnover for non-compliance with European regulations on digital services.
Hamas’ attack on Israel was “a real stress test for digital platforms and their content moderation policies,” said Hamza Mudassir, co-founder of British consultancy Platypodes and professor of business strategy at the University of Cambridge.
The violence of recent days has sparked endless reactions on social networks. More than 50 million messages about the conflict have been published on X alone since Saturday. This explosion of content has been accompanied by a proliferation of “fake news” and propaganda messages, according to fact-checking sites around the world.
All social networks, from TikTok to Instagram, have been affected, but X in particular. Thierry Breton did not choose Elon Musk as a target by chance. “It’s clear he’s part of the problem,” said Sander van der Linden, a professor of social psychology at the University of Cambridge and an expert on misinformation on social media.
Elon Musk as ‘part of the problem’
“This is the real baptism of fire for the X imagined by Elon Musk, since it is the first major international crisis to occur while he is in charge. The war in Ukraine started before he bought the site (in April 2022),” explained Jon Roozenbeek, a specialist in disinformation at the University of Cambridge.
First impressions are not favorable for X.
First, because Elon Musk “is clearly part of the problem.” “Personally, he doesn’t seem very interested in better content moderation,” Jon Roozenbeek said. Musk’s response to Thierry Breton says a lot about his position: “Could you list the violations (of European standards) to which you refer so that everyone can see them?” he asked on X.

This is either a sign that he is unaware that something is wrong on X, or “a way of prolonging the discussions,” Roozenbeek argued.
Elon Musk was even caught promoting accounts known for spreading false information. On October 7, he invited people to follow @WarMonitors and @sentdefender to “follow the war in real time.” The problem is that these two accounts were already “among the main spreaders of false information about an explosion near the White House (which never happened),” as the Washington Post reported. @WarMonitors has also been accused of publishing anti-Semitic messages.
Incentives to spread rumors
But it is above all the dismantling of protections established before the Musk era that worries experts. “The main problem comes from the new verification policy, which allows anyone to get the blue badge of a verified account provided they pay a monthly subscription. As a result, it has become much more difficult to know who to trust on Twitter,” concluded Roozenbeek.
“Elon Musk’s decision to reinstate the ‘super-spreaders’ of false information also contributes to the virality of certain problematic content, because we know that it only takes a few very influential accounts to make a difference,” added Sander van der Linden.
Nor should we forget the new remuneration policy for content creators. “They are paid based on the number of times one of their messages is viewed. As a result, they will be tempted to retweet the most viral messages – often those with the most inflammatory content – without necessarily checking whether or not they are misinformation,” Van der Linden said.

This conflict is also beginning to show the effects of the drastic cuts made to content moderation teams. For example, fake clash videos used footage from the war simulation game Arma 3. “This wouldn’t have happened in Twitter’s earlier days, because the moderation teams had tools to learn from their mistakes, and the same type of fake videos, using clips from the same game, circulated during the war in Ukraine,” Van der Linden explained.
Instead of the old moderation rules, Elon Musk relies primarily on community policing. “He introduced Community Notes, which allow users to add context or flag misleading content. It’s a system that works quite well,” said Hamza Mudassir. The disinformation experts interviewed recognize the usefulness of this mechanism, but “it means that the responsibility for monitoring information falls 100% on the community,” Roozenbeek said.
Real-world consequences
Thus, the explosion of violence in the Middle East is likely to expose “the flaws in Twitter’s anti-disinformation shield and their consequences,” said Hamza Mudassir. And this is not just bad news for Elon Musk. Spreading false information on social media “can have very real consequences,” Van der Linden stressed.
For example, “rumors spread (previously) on social networks have led to acts of violence in India,” the expert added. In 2018, the British broadcaster BBC documented a series of tragic consequences around the world following the spread of fake news.
The problem is that “it is difficult to require the head of a private company who needs to cut costs to spend more in an area that has no impact on Twitter’s profitability,” explained Hamza Mudassir.

Unless X ends up paying the price for this barrage of disinformation. “We must not forget that Elon Musk presented his Twitter as the network to consult to be informed before anyone else,” said Mudassir. But if that information is false, what is the point? Is this enough to drive away those who, even before the resurgence of violence in the Middle East, already believed that Elon Musk’s promises rang hollow in the face of the growing power of disinformation on X?
But it is above all advertisers who risk taking a dim view of the way disinformation spreads on the platform.
Source: Latercera
