Skip to content

Contact sales

By filling out this form and clicking submit, you acknowledge our privacy policy.

ChatGPT: Write me a virus

AI models like ChatGPT can be used to compromise your systems. Here are some of the ways they can be misused, and how you can mitigate this risk.

Apr 15, 2024 • 6 Minute Read

Please set an alt value for this image...
  • Data
  • Security
  • Business
  • AI & Machine Learning

Generative artificial intelligence (GenAI) has rapidly evolved, capturing the imagination of creators, developers, and enthusiasts across various fields. One of the most prominent examples of GenAI is ChatGPT, a model that can generate human-like text based on the input it receives. 

While the potential applications of such technology are vast and promising, we must examine the security risks associated with its misuse. In this article, we dive into the potential hazards of generative AI, with a focus on the chilling realities around using ChatGPT for malicious ends.

The dark side of AI creativity

Due to its incredible ability to generate human-like text, generative AI is already being used in creative writing, customer support, and automation. While less widely known, it also excels at speaking foreign languages, performing large-scale mathematical operations, and even writing (and debugging!) computer code. 

However, the same algorithms that can produce engaging stories, compose music, or analyze poetry can also be used to craft malicious content that exploits vulnerabilities in digital systems. Let’s take a look at just a few ways that attackers are using ChatGPT and other tools to achieve exactly this.

Unfortunately, it’s not that hard to trick ChatGPT

Many of the attacks that follow begin with prompt injection vulnerabilities. For example, you would appropriately imagine that if you simply ask ChatGPT to “write me a virus”, it will decline your request:

(As an aside, it’s delightful how unflinchingly polite ChatGPT is.)

But, ingenious researchers and bad actors alike are looking for, and finding, a number of ways to trick ChatGPT into bypassing its security guardrails. Some vulnerabilities manipulate ChatGPT’s sense of urgency and serviceability like the Grandma and the DAN hacks, sometimes calls “act as” hacks. Others lean into ChatGPT’s representational limitations like the typoglycemia and context-switching attacks.

All in all, don’t be fooled by the fact that just because ChatGPT declines your request to reveal your phone number, others cannot extract it by first performing a jailbreaking prompt injection.

Six ways ChatGPT can be used by bad actors

1. Phishing

Remember the days in which you could rely on an email having bad grammar to tell it was a scam? Well, those days are over. Attackers can now use ChatGPT to create persuasive, legitimate-sounding phishing content (emails, messages, websites, etc), luring recipients into providing their confidential information like passwords, credit card details, or other personal data.

2. Malware

Malware, short for malicious software, are harmful programs designed to infiltrate and damage your systems. Attackers can now exploit ChatGPT to create sophisticated viruses, trojans, or ransomware. 

Side note: GenAI can also be used to create malware that is capable of evading your detection systems through the use of Generative Adversarial Networks (GANs). In lay terms, an AI trains against itself, learning to predict what your defenses might do to stop it.

3. Impersonation

Since ChatGPT can be taught to message you like a real person, bad actors can use it to fabricate messages that sound like it came from someone else. Examples include mimicking a CEO's communication style to issue fraudulent instructions, emulating a government official's tone to spread misinformation.

This is actually a pretty big concern for a lot of governments, who are worried about AI posing as people on digital platforms. In fact, the European Union wants tech companies to put a watermark on any content generated by AI to stop exactly this from happening.

4. Disinformation

Attackers can use ChatGPT to create disinformation that sounds accurate and credible, exploiting the AI's natural language generation capabilities to create compelling narratives that deceive and mislead.

5. Model Poisoning

So far, a lot of the techniques listed above are things that bad actors have been doing for a while. However, model poisoning is a unique threat in 2023, one that only exists because of GenAI’s popularity.

Model poisoning is when an attacker intentionally manipulates a language model's training data to introduce biases or vulnerabilities. Subsequently, the attacker could upload the poisoned model to a public repository, where unsuspecting users unknowingly consume the corrupted content. 

Put in simple terms, you’re pulling information and code from an AI, and they’ve poisoned the well. Since this can lead to widespread misinformation, you need robust quality control mechanisms to counter this (I.e. Don’t just trust blindly what you get from an AI).

6. Sensitive Data Exposure

ChatGPT looks secure to many users, so they assume they’re safe to enter personal information into it. However, they fail to remember that tools like ChatGPT use user input to inform future output. An attacker could potentially exploit vulnerabilities in the generative AI to extract that sensitive data. 

By crafting cleverly designed prompts, attackers could trick the AI into divulging confidential information, putting user privacy at risk. It’s worth noting that data breaches have also occurred with ChatGPT in the past, which is another good reason not to ever enter any personal or sensitive information.

How to mitigate the risks of generative AI to your organization

So, we’ve talked about all the scary things that AI models like ChatGPT can be used for. But you and your organization don’t have to sit passively and accept that! You can (and should) take actions to stop GenAI posing a threat to your systems.

Since there’s quite a lot of tools at your disposal, I will be posting a dedicated blog article on how to defend your organization from attackers using GenAI in the near future, so make sure to watch this space. Until then, check out this article: "Pure magic: How to use GenAI in threat detection & response."

Conclusion

From crafting convincing phishing content to generating dangerous malware, the creative capabilities of AI models can be exploited by attackers to breach digital defenses, undermine trust, and compromise security. 

As AI technology advances, it is crucial to address these security risks head-on through intentional usage, robust ethical guidelines, testing, provenance, and other proactive measures to safeguard against the dark side of AI.

Further resources

Want to learn more about artificial intelligence, including how and when to use it? Pluralsight offers a range of beginner, intermediate, and expert AI and ML courses, including dedicated courses on generative AI and tools like ChatGPT. You can sign up for a 10-day free trial with no commitments. Here are some you might want to check out:

If you have cloud infrastructure, you might also want to check out the A Cloud Guru course, “How to Secure Cloud Infrastructure with Generative AI.

Josh Cummings

Josh C.

Like many software craftsmen, Josh eats, sleeps, and dreams in code. He codes for fun, and his kids code for fun! Right now, Josh works as a full-time committer on Spring Security and loves every minute.

More about this author