AI in the Hands of Scammers: New Fraud Schemes and Fake Charities

June 6

Rising AI Fraud Risks in Charitable Giving

The global threat of AI-powered charity fraud is growing. Experts warn that malicious actors could soon deploy AI agents for fraudulent fundraising at scale, producing highly convincing, emotionally charged content designed to deceive a broad audience. An AI agent is an autonomous program built on large language models that can analyze data, make decisions, and learn from interactions.

Automated Scams and Deepfakes on the Rise

Projections indicate that the risks of AI agent-driven fraud will grow as these technologies become more integrated across sectors. Such schemes are expected to become more prevalent in the second half of 2025 or early 2026 as the underlying tools become more accessible. AI agents can target thousands of users simultaneously with personalized messages, and scammers will be able to use them to generate deepfakes and automate cyber-fraud schemes end to end: creating and distributing malicious content and even carrying on conversations with victims, sharply reducing the time each scheme requires.

Dangers of Excessive Autonomy and Emotional Manipulation

The use of AI agents by scammers poses a real and immediate danger: by saving malicious actors time and resources, these agents let them ensnare more victims. The primary risk lies not so much in the programs themselves as in AI's ability to generate content that evokes strong emotions, such as videos of sick animals or touching stories that quickly go viral and reach a large audience. Excessive autonomy in AI systems, such as automatically sending payments or transmitting banking details, could also lead to data breaches or financial manipulation.

Novel Deception Methods and Lack of Mass Fraud Precedents

AI agents can autonomously scan social media to identify emotionally vulnerable individuals. They can then launch mass personalized outreach campaigns, mimicking genuine interest or offering false support. If a victim responds, scammers can escalate the attack using deepfakes, creating realistic video calls or voice messages, which significantly increases the victim's trust. This allows malicious actors to extort money under the guise of urgent help, blackmail victims, or redirect them to phishing sites to steal data.

Despite these threats, no cases of mass AI agent-driven fraud have been widely reported in the US or Europe so far, although isolated incidents involving neural networks in fraudulent schemes have already occurred.