Victory for Hire: Agents of Chaos and the Privatisation of Deception
in the Age of Misinformation
Michele Fossi in conversation with Omer Benjakob
Disinformation is a pressing issue today, with fake news spreading across social media and digital platforms. This has led to concerns about the impact that disinformation can have on elections, political processes, and public opinion.
Recently, a collective of investigative journalists revealed a previously unknown disinformation-for-hire group called ‘Team Jorge’, which provides election interference and online media manipulation services worldwide. Three of them—Gur Megiddo from TheMarker, Frédéric Métézeau from Radio France, and our interviewee Omer Benjakob from Haaretz—gained access to the group by going undercover, posing as potential clients and secretly recording their conversations. They eventually met the group of digital mercenaries in person, uncovering information about their tactics, strategies, and impact.
The revelations from these encounters with Team Jorge shed light on the disinformation industry and provided insights into how disinformation campaigns operate.
Team Jorge claims to have been involved in over 30 presidential elections. The investigation also uncovered their involvement with Cambridge Analytica, as well as technologies used to create thousands of fake accounts, spreading disinformation online and even fake news through mainstream media.
DUST met with Omer Benjakob to discuss the significant discoveries from the Team Jorge investigation and their implications for our understanding of how deliberate misinformation campaigns can be utilised to achieve desired outcomes in varied contexts.
Michele Fossi – What makes this investigation—published in February across various international outlets like Le Monde and The Guardian—truly remarkable is its several-month duration and the participation of nearly one hundred journalists.
OMER BENJAKOB – The investigation is a product of the diligent work carried out by Forbidden Stories, a French nonprofit organisation dedicated to continuing the work of journalists who have been assassinated, threatened, or imprisoned. We initially came together for a global investigation into the misuse of spyware, specifically Pegasus, developed by the Israeli firm NSO Group. This investigation served as a fascinating proof of concept, highlighting the importance of collaboration amongst journalists when addressing international and technological abuses on a global scale. We realised that reporting these stories could not be adequately done from a single location or outlet; effective coverage requires collaboration across borders rather than exclusivity.
M.F. – Can you please give me more context about the investigation and how it began?
O.B. – The Pegasus investigation was based on a massive leak of phone numbers that were potentially selected for surveillance by the state clients of firms like NSO. Forbidden Stories brought together reporters from across the world, and the investigation helped spark a global debate about spyware and state surveillance—the EU’s PEGA committee has just wrapped up its inquiry and made its policy recommendations, and a similar discussion is taking place right now in the US. After this project, reporters from the group decided to shift their focus and identify which topics we, as journalists, were most interested in investigating in the wake of the Pegasus Project. We found that hacking and disinformation were particularly pressing and deeply connected issues.
M.F. – Are the two linked together?
O.B. – Absolutely, they are often two sides of the same coin. It’s worth noting that, when it comes to spyware technology, hacking and stealing information is often insufficient unless it can be weaponised. After our investigation into spyware, we all learned that the output of hacked materials is often disinformation—which is also disproportionately used to target journalists, particularly female reporters. Once someone gains access to a journalist’s phone, it becomes much easier to discredit or shame them, particularly in conservative societies. This backdrop served as the impetus for our investigation. In June last year, Forbidden Stories gathered us all in Paris to discuss how we, as a group, could tackle the disinformation industry. The project was dedicated to the memory of Gauri Lankesh, an Indian reporter killed for her work, which had increasingly focused on fake news propagated by supporters of the ruling party, with its implicit support. Disinformation helped fuel the hatred behind her death in the same way it has helped undermine the media’s credibility and fuelled a new type of populism in various countries.
During the Paris meeting, our colleague Gur Megiddo suggested a change in strategy. Instead of waiting for a whistleblower or leaked information, we needed to take action ourselves and go undercover to discover the truth we knew existed but couldn’t yet prove.
M.F. – As the proverb goes, “If the mountain won’t come to Muhammad…”
O.B. – We realised that sometimes a small lie is necessary to unveil the greater truth. To paraphrase Ludwig Wittgenstein’s words, a lie can be compared to a ladder that, once utilised to overcome the barrier obstructing our perception of reality, can be discarded and left behind. These firms existed, we knew it—they had technologies of disinformation, and we knew that, too—the question was how to reveal it to the rest of the world.
I embarked on this adventure with two colleagues: Frédéric Métézeau from Radio France, who played the mild-mannered French consultant, and Gur Megiddo from TheMarker, who played the pushy Israeli.
We put together a backstory—we represented an unnamed businessman close to an unnamed government in an unnamed African country. Our goal: postpone the election—the reason: none. And then the disinformation market said: hold my beer!
We started taking meetings using this backstory. No one asked us who we really were. We learned that no one has a real name in a secret world, and we could use that to our advantage. We may not be professional spies, but we banded together and utilised our collective knowledge to construct a credible cover. We disseminated a rumour and patiently waited for someone to take the bait. The field we were investigating is very shady and dark, so we used that darkness to our advantage and eventually brought it to light.
M.F. – I assume that if you’re interested in buying fake news or election-disrupting services, you won’t find them advertised on a website.
O.B. – No, of course not. As investigative journalists reporting on arms deals, intelligence and spyware, we were already familiar with the existence of a group of people you could call the middlemen—the door openers. These people know people and help put together various deals operating in the shadow of legitimate industries like arms deals or political consultancy. We utilised this knowledge to navigate the wave of rumours we had initiated. By doing so, we successfully infiltrated this world and gained access to its inner workings.
M.F. – Did you not fear they would investigate you in turn and quickly discover your true identities, given that your faces are widely available online?
O.B. – Yes, but they didn’t. Once you ride that wave, no one will investigate your background; everything is happening in the shadows. As we started talking with intermediaries, we began introducing rumours, dropping names—suggesting that an unstable African country, in practice we were talking about Chad, was considering postponing its election—and saying we needed help with this matter. Through this tactic, we managed to create a ripple effect and gain access to other individuals in this dark web of mediators; we could step into this world and gather firsthand information until someone finally said: “You should call Jorge.”
M.F. – And so you did.
O.B. – Correct, we were posing as potential clients and expressing interest in buying fake news and election-disrupting services. Our story was so appealing that the intermediaries we contacted realised they had a big client and passed us on to more sinister actors. They introduced us to Jorge. They called him and gave us a burner number. Over five meetings—four on Zoom and one in real life that included hidden cameras—he pitched his services.
M.F. – Could you provide us with a few examples?
O.B. – He introduced us to AIMS—their Advanced Impact Media Solutions software, which they sometimes called their online social media ‘dominator’ program. It was a shocking experience as we witnessed the scale of disinformation that this software could produce. The software knew how to automatically create fake accounts—not shitty bots, but believable, complex digital personas with an actual phone number, a real email, and a presence across several platforms, from Amazon to Airbnb, from Twitter to Instagram. The software could also operate these fake accounts and use them to push out any message or link they wanted. There was also an AI mechanism that could automatically generate posts for those avatars to push out and a program for creating fake news websites for sale. These technologies were being marketed together as part of a service that included political consulting and hacking services for clients! They called it ‘active intelligence’ and even boasted about their ability to hack targets and leak confidential documents of rival parties. This tactic—known as ‘hack-and-leak’—is a classic disinformation technique, famously used in the 2016 hack of the emails of John Podesta, chair of Hillary Clinton’s US presidential campaign, and still in use to this very day.
M.F. – How does it work?
O.B. – The technique is built on manipulating credibility—you take a high-profile target, show that they have been exposed, and manipulate the materials you’ve hacked. Let’s say 80 per cent of the leaked material is real. They can add 20 per cent of fake information, and people will tend to believe it because most of the content is genuine.
M.F. – You are saying that fake news involves more than just falsehoods: it also entails manipulating information within a framework of genuine facts.
O.B. – True. As Jorge told me during one of our meetings:
“Fake news is about belief. You need credibility, and then you can manipulate it. What defines something as fake is not whether it is real, but rather the extent to which people believe it.”
They created effective disinformation campaigns based on this philosophy—you must establish credibility and then gradually manipulate it. Such campaigns are insidious precisely because most of their content is true, a well-known tactic used in Russian state disinformation campaigns.
M.F. – The Soviets’ disinformation tactics have since been developed into an art by private players.
O.B. – We initially believed governments were the primary culprits, but we soon understood that it goes beyond that. We knew that similar campaigns and disinformation tactics had been employed in various countries, including India, Brazil, Hungary, and Israel. But while these campaigns often align with typical right-wing populist ideologies, emphasising themes like anti-elitism, anti-media, or anti-judicial oversight, what is particularly significant today is the complete privatisation of the disinformation industry. For a relatively small investment of $100,000, any client can access disinformation services, and the demand extends beyond just politicians and election campaigns. Oligarchs and even crypto companies have used these services. It’s a worrying trend that reveals the extensive use of disinformation tactics in our contemporary society. What was once the domain of militaries looking to gain an advantage is now being sold by private companies worldwide.
M.F. – Who is behind Team Jorge?
O.B. – This company consisted mainly of three people with different domain expertise, with the main person being Jorge. His real name is Tal Hanan. He’s an Israeli who previously worked in the military and now operates in the defence contracting space, with many contacts in South America, Africa, and Southeast Asia. By recording our Zoom meetings with Jorge, we collected information about their tech, the services offered, and details about previous projects. This footage was then shared with reporters from The Guardian, Der Spiegel, Le Monde, Die Zeit, Paper Trail Media, OCCRP, El País, and others, who spent six months digging into it, conducting fact-checking and more.
M.F. – Did you meet Jorge in person? How did the encounter go?
O.B. – Our first and final meeting with him took place in January. The purpose was to obtain a definitive confirmation of his identity by capturing a photograph of his face. Although we already knew who he was, we wanted visual evidence linking Hanan to Jorge. Initially, we attempted to arrange a meeting at a café in Tel Aviv under the guise of planning a trip to Israel. However, he suggested meeting at his office instead. I was the first to meet him, equipped with a hidden camera. Witnessing his incessant boasting about the services he could offer was astounding. He was even more forthcoming in person than he had been over Zoom. It exemplified the fact that once you’re on the inside, no one expects any obstacles. Having gained entry, we were able to survey everything. The meeting yielded significant information, including an astonishing reference he made to Cambridge Analytica and Brittany Kaiser that left us speechless.
M.F. – What did he reveal about Cambridge Analytica—the infamous political consulting firm involved in harvesting and analysing Facebook data to influence the outcomes of political campaigns?
O.B. – As you’ll remember, the firm was accused of using illegally obtained data to create targeted political ads to sway voters in various elections, including the 2016 US presidential election and the Brexit referendum. The controversy surrounding Cambridge Analytica led to increased scrutiny of social media platforms and their handling of user data. We know all of that because Brittany Kaiser was a whistleblower, or at least wanted to be perceived as such, and dished the dirt on her former employers. Jorge told us he had put Brittany Kaiser inside Cambridge Analytica. He said she worked for him.
Cambridge Analytica targeted individuals based on political preferences using psychological profiling and data. What specific content they presented remained uncertain, but Jorge’s involvement revealed a darker aspect and filled in a missing piece of the puzzle. He was the dark side of at least one of their campaigns, in Nigeria. In front of us, Jorge revealed that he had assisted Brittany Kaiser in spreading damaging information about political opponents as part of Cambridge Analytica’s operations.
M.F. – What other juicy stories did listening to their sales pitch lead to?
O.B. – During that meeting in their offices, Jorge showed us a clip from BFMTV, a leading French news channel, and claimed he could get fake news items broadcast for his clients. Our colleague Frédéric Métézeau looked into it, and as incredible as it may sound, he found that an anchor at the channel, Rachid M’Barki, had put fake content on air on behalf of Team Jorge during a late-night newscast. The bots then posted these clips from the respected news channel to amplify the claims. Eventually, in February, M’Barki was fired, but the false content he made was still being shared by automated fake accounts. This incident highlights that fake news and disinformation can be disseminated through various forms of media, traditional as well as digital.
M.F. – How did Team Jorge’s disinformation machine interfere with political elections?
O.B. – This seems to be their primary expertise. They even have a showreel, like a clip of all their past projects, showing how they hacked election committee websites and knocked down the internet or cellular reception in countries on election day.
They referred to it as ‘D-Day’, treating democracy as if it was a battleground. They even boasted about their efforts to suppress voter turnout. This mindset reveals that, to them, democracy is merely a game to be played—a war game.
It’s not about people or politics even; it’s just about impact and influence for the highest bidder—that’s what’s so scary; you have military-grade skills and tactics being used for non-defensive purposes—be it a political party which knows it’s about to lose or some crypto bro looking to make another easy million.
While we were having our conversations with Team Jorge throughout last summer, we quickly understood they were meddling in the ongoing Kenyan election! One of our meetings even coincided with the day the results would come out, and Jorge joked about him expecting the results to be contested for some time. They were spreading fake news and even hacked someone’s phone as the election was still going on. It’s disturbing to think about how this impacted the election results there or elsewhere.
M.F. – You mentioned the use of ‘automated fake accounts’ or bots. Using bots can significantly amplify the impact of disinformation campaigns, making it difficult for individuals to distinguish between real and false information and creating a sense of confusion and distrust in the online space.
O.B. – Yes, bots are often used in disinformation campaigns to amplify the reach of false information. But we need to stop talking about bots and start thinking more about avatars. Bots are automated accounts that can be programmed to share, like, and comment on posts—but avatars are much less automated. They are ‘cyborgs’—half automated, half manual—and can be active on a number of different topics or social media, creating a much richer illusion of widespread support for a particular narrative or idea. These avatars can also impersonate real people, which allows disinformation to spread further. They can be programmed to engage in coordinated attacks on individuals or groups—targeting them with false information, harassment, or even threats—or deployed as single agents. Team Jorge wasn’t the only group we discovered. We also revealed a seemingly more legitimate group called Percepto; they use so-called ‘deep avatars’, fake personas developed over the years, some of which have made names for themselves as investigative journalists or researchers—though they are fake. They don’t exist in the world, but they exist online, and there they have tons of credibility, which can be bought and sold.
M.F. – However, Facebook, Google, and Twitter have been trying to prevent people from opening accounts automatically and using them at scale.
O.B. – Yes, but the disinformation industry has found ways to hack this system. During our meetings, Jorge and his colleagues showed us what I’d call their DaaS—disinformation as a service—the software called AIMS, which we had never heard of before. It not only allows you to create fake social media accounts, but it also creates believable fake people, which is what makes it so interesting.
M.F. – How do you create a ‘believable’ fake person?
O.B. – Cyborg avatars are a unique combination of automation and manual operation in the online realm. They are crafted to resemble real individuals and are used to create fake Twitter accounts. Creating a fictitious persona requires only a phone number and an email address—typically a Gmail account and a local number capable of receiving SMS messages. It may seem astounding, but this is how digital identity verification works in the modern age. In essence, possessing genuine Gmail and WhatsApp accounts connected to actual IP addresses presents an authentic appearance. This underscores the reality of being a human in the 21st century, where services like Gmail and Facebook mandate using a genuine phone number. The prevalence of fake news mechanisms online revolves around this system, wherein the definition of authenticity has shifted and assumed a digital form.
M.F. – In your Haaretz article, I read that Team Jorge claimed to have sent a sex toy, delivered via Amazon, to a politician’s home, intending to give his wife the false impression he was having an affair.
O.B. – That’s a great example of how disinformation with avatars can sometimes require only one avatar and have a target of one person.
What’s concerning is that these avatars can materialise in the physical world and seem very real; paradoxically, they are more real than my mother, who doesn’t have a Gmail account. In today’s digital age, possessing a phone number and an email address has become an essential aspect of being human—and that is all it takes to create an avatar.
M.F. – I recently experienced losing my phone, and it made me realise that without it, I could not prove my identity and access any of my apps. Even governments rely on SMS or phone verification as a means of identity confirmation. But are you saying that this widely-used security measure is not secure enough?
O.B. – These SMS-based verification systems are susceptible to social engineering tactics and basic hacking techniques. Jorge’s hacking capabilities were built on his access to the global telecom system in a way that allowed him to intercept those very SMSes and thus gain access to his targets’ Telegram accounts, for example. People should use authentication apps rather than relying on SMS-based two-step verification.
It’s not just a technical vulnerability but also a political and social one, and hackers are well-versed in navigating the complexities of modern identity. It’s a fascinating yet concerning development that underscores the importance of cybersecurity and the need for heightened awareness in safeguarding our online identities, not just by security experts but also by all of us. We all have to be much more careful. Our phone is not just our wallet or notebook; it’s who we are.
M.F. – With the emergence of ChatGPT, we are witnessing a new phase in Artificial Intelligence, one that has the potential to revolutionise the disinformation industry. On the other hand, Artificial Intelligence could also be used to combat disinformation by detecting and flagging false information and providing users with accurate and reliable sources of information.
O.B. – ChatGPT’s advanced capabilities can facilitate the spread of disinformation, enhancing its believability. This includes creating sophisticated fake news and deep-fake content, which can potentially manipulate public opinion and disseminate false narratives. This could undermine our ability to discern what is true and what is fake, creating a world where, as the Russians say: “Nothing is true, and everything is possible.” I think the point to remember is humanism, on the one hand, and the idea of credibility, on the other. Humans have credibility, and fake human entities can exploit that credibility in a way that we have not fully understood yet. The ultimate impact of these developments remains uncertain. However, we should realise that the transformation from a knowledge-based society to an information-based society over the past two decades has had a more profound impact than the emergence of Artificial Intelligence. This shift is monumental and influences every facet of our existence.
M.F. – What marks the passage from knowledge to information?
O.B. – Previously, humans created knowledge recorded in books, but now digital forces generate information intended for computers. We’ve been in this information age for a long time now, and the shift from knowledge to information is one that has massive social and cultural ramifications we don’t yet fully notice. I became interested in this through Wikipedia. While most early disinformation reporters were focused on social media, where people primarily interact with one another, I was focused on Wikipedia—a platform where people actually go to learn. It’s fascinating to me as a journalist because it allows us to witness the social process of fact-making for the first time ever. The digital age allows us to see how facts are made and reported on and how they are exposed.
One of my early and most significant stories involved a massive hoax on Wikipedia, in which a fake historian created false sources to support his claims. This was part of a broader effort by the Polish government to manipulate information and create a wider intellectual atmosphere of fakeness. The dual nature of the digital age is so clear: we have so much access to information, but so much disinformation goes hand in hand with it; we have information, but not knowledge that we know how to spread in the same way. The same things that help spread knowledge also threaten it. The age of information is, therefore, by definition, also the age of dis- and misinformation. I think it has also always been like that: after the printing press was invented, there was a massive wave of fake news about alchemy and prophecies, with so-called ‘wonder books’ using the same technological advances—which would eventually bring us to the Scientific Revolution, the Enlightenment, and the Modern Age—for the exact opposite purpose. Technology always breeds progress and ignorance; you cannot have one without the other.
M.F. – What is the main consequence of living in an information-based society compared to a knowledge-based society?
O.B. – The shift from a knowledge-based society to an information-based society has profoundly impacted our ability to act as autonomous agents. While opinions and collective efforts can be helpful in combating fake news, the assumption that no one knows the truth and that technology is the only means of achieving it is problematic. That is the core of Wikipedia: no one is an expert, but with the help of this system and its rules, we will reach the truth together. That’s why Wikipedia is simultaneously incredible at fighting fake news and also partially to blame for the fragmentation of human agency and truth in the Digital Age. The fact that each Wikipedia is different in every language is a good metaphor for our world.
Now, Wikipedia seems quaint because humans still make it for humans, but the next phase is already so much more explicitly anti-humanistic. In the current era of the Fourth Industrial Revolution, this issue is further exacerbated by using fake avatars and chatbots.
It’s alarming that we’re seeing a gradual erosion of human logic and agency across various fields, including art, culture, and knowledge. This trend is marginalising humans and is increasingly seen as problematic.
Living in an information-based society undermines our capacity to act as autonomous agents and perceive ourselves as such. The threat isn’t just from entities like the Chinese government using ‘deep fakes’ but also from the erosion of our intellectual capacity and agency in the broader field.
M.F. – With ChatGPT, we’ve even begun to anthropomorphise language models and treat them like agents…
O.B. – We need to be cautious not to overly romanticise this technology. While knowledge and technology can be incredibly beneficial, my concern lies in how we, as humans, perceive ourselves in relation to these advancements. It’s not that AI or non-human agents creating content is inherently wrong—after all, the algorithm on Twitter is powered by AI—but it’s rather a question of how much power we give them and how much we feel inferior in comparison. We must not lose sight of our intellectual capabilities and continue to view ourselves as active participants in creating and disseminating knowledge.
I love that designers and journalists now have access to things like Dall-E or Midjourney and that the threshold for participation in things once considered ‘technical’ is being lowered. It would be tragic for creatives not to utilise that and instead cede more territory to non-human creative agents.
M.F. – For decades, there have been contrasting perspectives surrounding the topic of technology. One viewpoint argues that technology surpasses human capabilities, leading to complete reliance on it and the potential loss of our own skills. An example is Google’s driverless cars, where driving skills may become obsolete. On the other hand, the other viewpoint sees technology as a means to augment our abilities without diminishing them. These opposing views persist in various domains, highlighting the diverse ways of engaging with technology. It is crucial to recognise that technology can either foster laziness, forgetfulness, and a loss of control or serve as a tool to enhance our capabilities and enable us to reach new levels of achievement.
O.B. – It is intriguing to reflect on the history of technology and how it often goes unnoticed. Taking a broader view, we can identify significant revolutions sparked by print, electricity, phones, and trains, with everything else being just a footnote. Communicating remotely through devices like phones and telegraphs was a groundbreaking achievement. For the first time in human history, individuals could connect regardless of physical distance, transcending the limitations of space.
By comparison, platforms like Twitter or devices like the iPhone seem small. They are just a small part of a much larger narrative. In the early modern era, technologies transformed the world in profound and incomprehensible ways. Similar to the transformative impact of horses on transportation, the electricity grid, accessible to all, represents a revolution surpassing many digital advancements. These developments can reshape our perception of agency, akin to the paradigm shift brought about by the printing revolution. Creatives must avoid fetishising technology and fixating on constant innovation. We must not forget the lessons of history. While ongoing debates surround the impact of new technologies, we should recognise the wealth of knowledge and experience already at our disposal. Let us not lose our human essence or become detached from our cultural roots. Technology is for us, by us, not the other way around.
M.F. – One last question: what happened to ‘Jorge’? Is he in jail now?
O.B. – No… the investigation is still ongoing. However, this situation highlights how humans are caught between incompetent local and national governments, deliberately oblivious social media platforms and technology giants who show little interest in resolving these issues.
To me, this represents a humanism issue, perhaps even a human rights issue. The challenge is combating disinformation through technological regulation as well as civil and human rights frameworks.
I believe that disinformation poses a threat to human rights and humanity itself. It undermines our ability to enjoy the freedoms we deserve, creating a world where non-human entities demand their own rights. Avatars have no right to free speech, and granting them that right undermines the value of genuine human expression.
I cannot exercise my right to free speech effectively in this environment because my speech is not truly free if I am not fully informed. By introducing non-existent voices into a debate, the entire discourse is compromised. Safeguarding the integrity of democratic processes should be a fundamental human rights priority today.