Deceptive AI campaigns: from job scams to election interference

Artificial intelligence tools like ChatGPT have revolutionized productivity across many sectors, but they are also being put to increasingly malicious uses. A recent OpenAI report revealed how various international actors are abusing these systems to run disinformation campaigns, commit fraud, and carry out covert political operations.

One of the most striking campaigns was identified by chance, when an OpenAI researcher received a fraudulent SMS message. The scam, known as “Wrong Number”, used large language models to generate fake job offers. Victims were lured with the promise of earning more than five dollars for simply ‘liking’ posts on social networks, a rate wildly out of line with the market, where a thousand ‘likes’ typically sells for less than ten dollars, or under a cent each.

The operators followed a well-known formula: first they hooked targets with an enticing offer (ping), then paid out small token sums to ‘prove’ legitimacy (zing), and finally pressured victims into transferring money or investing in cryptocurrency (sting). All of this took place on platforms such as WhatsApp and Telegram, using AI-crafted conversations to win people’s trust.

From Russia to China: multilingual disinformation for electoral purposes

OpenAI also dismantled a Russia-linked operation dubbed “Operation Helgoland Bite”. This network used ChatGPT to generate German-language content aimed at influencing Germany’s 2025 federal election. The posts, distributed through a Telegram channel and an account on the social network X that gained more than 27,000 followers, attacked the United States and NATO and promoted the far-right party Alternative for Germany (AfD).

Although OpenAI estimates that the real-world impact was limited, the concern lies in the automated, scalable use of AI models to reproduce propaganda narratives without direct human intervention. Beyond generating text, the operators also used the chatbots to research opposition activists and to translate propaganda into German, making it more effective at reaching its audience.

China, for its part, deployed several regionally targeted campaigns. One of them, “Sneer Review”, generated critical comments on geopolitical issues on platforms such as TikTok and X, notably targeting Pakistani activists such as Mahrang Baloch. Another operation, “Uncle Spam”, flooded American political debates with contradictory opinions and used AI-generated photos of fake military veterans to lend credibility to fictitious profiles.

Conclusion: AI as a tool for global manipulation

According to the report, OpenAI detected a total of ten coordinated campaigns originating in countries including Russia, China, Iran, Israel, and North Korea, in which AI technology was used for political, fraudulent, or outright criminal purposes. These activities range from job scams and election manipulation to the creation of fake media outlets and fabricated personas.

In response, the company has suspended the accounts involved and blocked suspicious activity, in addition to working with authorities to monitor misuse of its technologies. However, the growing sophistication of these campaigns shows that the fight against the misuse of artificial intelligence is only just beginning. The ability to produce coherent, convincing text in multiple languages makes tools like ChatGPT a double-edged sword, particularly in contexts where disinformation can have serious consequences.

This type of abuse underscores the urgency of establishing ethical standards and robust detection mechanisms on the part of developers, governments, and technology platforms alike. As AI continues to grow in adoption and power, protecting users from these risks will be as essential as continuing to innovate.