Cybercriminals' operational use of artificial intelligence (AI) has raised significant concerns, according to a report from the Google Threat Intelligence Group and Google DeepMind. Rather than creating entirely new classes of threats, AI is enhancing the effectiveness and precision of existing attack methods, allowing cyber threats to be planned and executed with greater speed and consistency.
The report highlights that large language models are accelerating the transition from planning to execution in cyberattacks. Adversaries are mining publicly available information to build detailed profiles of targets, including executives and vendors, and in social engineering campaigns they are using AI to generate tailored narratives and fabricated identities that deceive individuals.
Malware developers, meanwhile, are using generative tools to refine their exploits and rapidly produce variants, a pattern that undermines conventional signature-based defenses. The report identifies several malware families that call AI APIs for stealth and automation. Notable examples include HONESTCUE, which uses the Gemini API to generate executable code dynamically; PROMPTFLUX, a VBScript dropper that rewrites its own source code hourly; PROMPTSTEAL, which queries a model to generate system commands on demand; and the COINBAIT phishing kit, which clones cryptocurrency exchange interfaces to improve credential theft.
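What makes this kind of just-in-time generation hard to catch with file signatures is that the interesting content only exists after a network round trip to a hosted model. The sketch below illustrates that round trip in a deliberately benign form; the library usage and model name are assumptions based on Google's publicly documented google-generativeai Python SDK, and the sketch only prints the model's reply, so it neither generates nor executes code.

```python
# Benign sketch of the runtime round trip such malware abuses: a program asks
# a hosted model for new text while it runs, so the generated content never
# exists in the file that defenders scan on disk. Library and model name are
# assumptions based on Google's public Gemini SDK.
import os

import google.generativeai as genai

# The API key is read from the environment here; real malware would embed or
# fetch credentials instead.
genai.configure(api_key=os.environ["GEMINI_API_KEY"])

model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

# Any prompt works; the point is that the response payload is decided by the
# model at runtime, not fixed in the binary at compile time.
response = model.generate_content("Write a one-line summary of VBScript.")
print(response.text)
```

Because each response can differ from run to run, even an unchanged binary can behave differently on every execution, which is why hash- and signature-based detection struggles against families like PROMPTFLUX.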