Generative AI and Its Potential Nefarious Uses

Author: Chris McGowan
Date Published: 1 August 2023

Generative artificial intelligence seemed to explode onto the scene in late 2022. Before that, few people were really talking about it. But now everyone (well beyond the IT world) is not only talking about it but using it. That is both exciting and problematic.
Generative artificial intelligence (AI) refers to a class of machine learning (ML) algorithms that can autonomously generate new content, such as images, text or audio, by learning patterns and structures from large amounts of data. The technology has advanced significantly in recent years, resulting in AI models that can create remarkably realistic and convincing outputs.

The introduction of generative AI has brought accelerated advancements in various fields, revolutionizing creative processes, content generation and decision-making systems. Although generative AI presents many positive applications, like any powerful tool, it can also be used for malicious purposes. Its misuse in the realm of cyberattacks poses a grave concern and brings into focus the inherent risk and far-reaching implications that emerge when generative AI intersects with malicious activities. Organizations must be aware of, and ready to combat, the possibility of generative AI being used to conduct cyberattacks.

WormGPT

One tool that has gained notoriety on underground forums is WormGPT. WormGPT is a generative AI tool built using the GPT-J language model, which was developed in 2021. It offers a diverse set of features including unlimited character support, chat memory retention and the ability to handle code formatting efficiently. It is a powerful tool for adversaries to execute sophisticated phishing and business email compromise (BEC) attacks.1

WormGPT presents itself as a black hat alternative to standard GPT models because it is purposefully designed for malicious intent. Cybercriminals can leverage WormGPT to automate the creation of highly convincing fake emails that are expertly personalized to each recipient, which can significantly increase the success rate of their attacks.

The introduction of WormGPT and its unscrupulous purpose highlights the alarming threat posed by generative AI, enabling even inexperienced cybercriminals to launch large-scale attacks without technical expertise. Enterprises need to be aware of these new tools and be particularly sensitive to new phishing and BEC attacks. One way to help mitigate this increased risk is through updated training programs and/or reviewing and enhancing email verification processes.
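To make the email verification point concrete, the following Python sketch is an illustrative example only: it checks whether a sending domain publishes a DMARC policy, one of several signals that can feed an email verification workflow. It assumes the third-party dnspython package is installed, and example.com is a placeholder domain.

```python
# Illustrative sketch: look up a sending domain's DMARC policy as one signal
# in an email verification workflow. Requires the third-party dnspython package.
import dns.resolver


def get_dmarc_policy(domain: str):
    """Return the domain's published DMARC record, or None if it has none."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for record in answers:
        txt = b"".join(record.strings).decode("utf-8", errors="replace")
        if txt.lower().startswith("v=dmarc1"):
            return txt
    return None


if __name__ == "__main__":
    # "example.com" is a placeholder; substitute the purported sender's domain.
    policy = get_dmarc_policy("example.com")
    if policy is None or "p=none" in policy.lower():
        print("Missing or weak DMARC policy; treat mail claiming this domain with extra caution.")
    else:
        print(f"DMARC policy found: {policy}")
```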

PoisonGPT

Another concerning scenario arises from the potential modification of an existing open-source AI model to spread disinformation. Such a manipulated model can be uploaded to public repositories such as Hugging Face,2 which is a prominent open-source community that specializes in developing tools that empower users to build, train and deploy ML models using open-source code and technologies. This practice is known as large language model (LLM) supply chain poisoning.

The success of this technique, dubbed PoisonGPT, relies on uploading the altered model under a name that impersonates a reputable organization, allowing it to blend in seamlessly and go unnoticed. PoisonGPT underscores the urgent need for heightened vigilance in the face of the evolving cyberthreat landscape.3
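One practical way to reduce exposure to this kind of supply chain poisoning is to treat model downloads like any other third-party dependency: obtain them only from a vetted publisher and pin an exact revision, rather than pulling whatever is newest under a familiar-looking name. The sketch below assumes the Hugging Face transformers library is installed; the repository name and commit hash are hypothetical placeholders.

```python
# Illustrative sketch: load a model only from an explicitly vetted repository
# and pin it to a specific commit so a silently altered upload cannot slip in.
# Assumes the "transformers" package; repository name and hash are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

TRUSTED_REPO = "example-org/example-model"  # hypothetical, vetted publisher and model
PINNED_REVISION = "0123456789abcdef0123456789abcdef01234567"  # hypothetical commit hash

tokenizer = AutoTokenizer.from_pretrained(TRUSTED_REPO, revision=PINNED_REVISION)
model = AutoModelForCausalLM.from_pretrained(TRUSTED_REPO, revision=PINNED_REVISION)
```

Pinning to a known commit is only one layer; reviewing who actually published the repository and how it has changed over time matters just as much.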

Combating both WormGPT and PoisonGPT requires a robust security program that includes educating users on the latest cybercriminal tactics, integrating multifactor authentication (MFA) and implementing strong access control measures, all of which bolster organizational security against potential malicious applications of this cutting-edge generative AI technology. Adhering to these best practices enables organizations to fortify their defenses against such threats.

Polymorphic Malware

Generative AI can also be used to construct malware. Polymorphic malware is a type of malicious software that continuously changes its code to evade detection by antivirus software and other security measures. What makes it particularly formidable is its use of adaptable cutting-edge AI technology to generate new code with each iteration. This adaptive behavior makes the malware incredibly challenging to detect and counteract.
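To illustrate why signature-based defenses struggle against code that rewrites itself (using harmless strings rather than actual malware), the short Python sketch below shows how even a trivial mutation produces a completely different file hash, leaving hash- or signature-matching detection one step behind every new variant.

```python
# Benign illustration: two payloads that "do" the same thing differ only by
# junk padding, yet their SHA-256 fingerprints have nothing in common.
import hashlib
import secrets

base_payload = b"do_the_same_thing()"  # harmless stand-in for real code
variant = base_payload + b"  # " + secrets.token_hex(8).encode()  # trivial mutation

print(hashlib.sha256(base_payload).hexdigest())
print(hashlib.sha256(variant).hexdigest())
# A detector keyed to the first hash misses the second variant entirely,
# which is why behavior-based and anomaly-based detection matter here.
```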

Many other advanced persistent threats (APTs) also rely on AI-driven techniques, magnifying the need for unwavering vigilance when securing IT assets.

Conclusion

Generative AI has the potential to revolutionize various aspects of life in positive ways. However, its misuse for conducting cyberattacks presents significant risk to individuals, organizations and society. By understanding and recognizing the potential threats and taking proactive measures, cybersecurity professionals can harness the benefits of generative AI while safeguarding against malicious exploitation, thereby fostering a safe and secure digital environment.

Endnotes

1 Mahirova, S.; "What Is WormGPT? The New AI Behind the Recent Wave of Cyberattacks," 18 July 2023
2 The Hacker News; "WormGPT: New AI Tool Allows Cybercriminals to Launch Sophisticated Cyber Attacks," 15 July 2023
3 Shenwai, D.; "Meet PoisonGPT: An AI Method To Introduce A Malicious Model Into An Otherwise-Trusted LLM Supply Chain," MarkTechPost, 14 July 2023

Chris McGowan

Is the information security professional practices principal on the ISACA® Content Development and Services team. In this role, he leads information security thought leadership initiatives relevant to ISACA's constituents. McGowan is a highly accomplished US Navy veteran with nearly 23 years of experience spanning multidisciplinary security and cyberoperations.