How to prepare your business for quantum and AI security risks

Uncover quantum and AI security risks, what they mean for your organization, and how upskilling helps your people defend against these threats.

Feb 18, 2026 • 4 Minute Read

Please set an alt value for this image...

In the age of AI and quantum computing, protecting your organization can feel overwhelming. But even if tech is changing, the underlying principles of cybersecurity remain the same:

  • Confidentiality: Protecting data and systems from unauthorized access

  • Integrity: Ensuring data and systems are accurate, trustworthy, and haven’t been tampered with

  • Availability: Being able to access data and systems as expected, even if there’s a natural disaster or security incident

When it comes to securing your organization, you don’t necessarily need to reinvent the wheel. Instead, you can adapt your existing policies and practices for new quantum and AI security risks.

Post-quantum encryption (PQE): Why it matters and how to prepare

The strength of cryptography is generally determined by how long it would take a computer to “guess” the key that encrypts (or protects) your data. It relies on two things: algorithm and key size. 

The problem with quantum computers is that they can “guess” some cryptographic keys really fast. This allows them to decrypt data that classic computers wouldn’t be able to. To protect your data from quantum threats, you need to change your algorithms. 

There are two types of cryptographic algorithms: symmetric and asymmetric. Symmetric algorithms use the same key to encrypt and decrypt your data. If you use symmetric cryptography, you may need to change some old algorithms and double existing key lengths. 

Asymmetric algorithms use different keys to encrypt and decrypt your data. If you use asymmetric cryptography, you need to change all of your algorithms for post-quantum encryption.

Evaluate risk and make a post-quantum cryptography plan

To prepare for quantum, take these three steps:

  1. Conduct an inventory and determine where you use non-quantum safe algorithms and key lengths.

  2. Create a plan to switch to post-quantum algorithms (for asymmetric cryptography) and modern algorithms and longer keys (for symmetric cryptography).

  3. Evaluate the risk of harvest now, decrypt later attacks for your data.

Ideally, organizations should start implementing post-quantum encryption today because data that has historically been considered “safe” is now at risk. Attackers can steal sensitive data now and decrypt it later when they have access to a quantum computer.

In reality, PQE is important but not urgent for most organizations. You’ll want to start preparing and upskilling your people, but think of it more as a long-term process, rather than an immediate concern—unless you’re in national security or defense aligned sectors.

Upskilling tip: Start building quantum computing and PQE skills now

While quantum can seem far off, upskilling your teams now will prevent you from falling behind (and reduce the risk of harvest now, decrypt later attacks). 

Your Governance, Risk, & Compliance (GRC) specialists need to understand new post-quantum algorithms and advise the business on how to handle them. Information security professionals, architects, and engineers will need the technical expertise to manage the actual implementation.

These resources can help them get started with post-quantum cryptography:

AI security risks to watch for

AI introduces new risks, both from external sources and from internal use within the organization. Before you can mitigate AI risk, you need to know what to look for.

Data poisoning

Data poisoning occurs when attackers inject malicious or corrupted data into a training dataset. This can cause the AI model to learn incorrect patterns, and ultimately generate biased, inaccurate, or malicious outputs.

For example, an attacker could poison a spam-detection model's training data to ensure their phishing emails are always classified as legitimate.

Prompt injection

With prompt injection, an attacker inserts malicious instructions or hidden commands into a user's prompt. This can cause the AI to ignore its original safety instructions and perform a malicious action, like divulging confidential information or generating harmful content. This is a significant risk for large language models (LLMs).

Hallucinations

When a large language model “hallucinates,” it generates an inaccurate or nonsensical output. Hallucinations aren’t necessarily a cybersecurity issue—but they could be, if those hallucinations then create a data integrity issue.

Access rights amnesia

Sometimes, a model “forgets” what information a user is authorized to access because it has access to all of the organization’s information. As a result, it might provide sensitive information to unauthorized users within the organization, breaking the security practice of assigning rights based on the need to know.

For example, someone might ask to see upcoming product roadmap updates or organizational structure changes that they shouldn’t be able to see.

AI agents

AI agents can perform complex tasks without human oversight. But they also introduce new external entry points and internal risks, like providing unauthorized access. That’s why it’s critical to keep a human in the loop. (This can also mitigate skill decay caused by overreliance on AI).

Learn more about the future of AI and how to prepare your organization.

Upskilling tip: Invest in cybersecurity awareness training and AI fundamentals

AI benefits both attackers and defenders. Attackers can use AI to:

  • Find vulnerabilities

  • Create more advanced attacks at scale

  • Enhance content, presentation, and believability of phishing

  • Create realtime voice and video cloning

At the same time, defenders can use AI to:

  • Find vulnerabilities—and patch them

  • Conduct penetration testing

  • Better detect phishing

  • Detect deepfakes

But they need the right skills to use AI for defense and protect against AI-enabled attacks. As AI threats evolve, employees need more than yearly compliance training to keep their organizations safe.

Everyone needs updated cybersecurity awareness training that covers how attackers use AI and deepfakes. This is especially prevalent for those in finance and IT service desk roles.

Cyber security professionals, especially risk specialists and security architects, need general AI upskilling and training for specific products. 

These resources can help them get started: 

Staying informed is the first defense against new threats

The threat landscape is always changing. The only way to stay on top of it is to stay informed.

Build your team’s security skills for AI and quantum with Pluralsight SecureReady, an end-to-end security program built for modern enterprises. Learn more.

It is impossible to assess risk and secure what you don’t understand.
John Elliott

Security Advisor and Pluralsight Author Fellow

 

Want more tips to defend your organization from sophisticated AI threats and attacks? Watch the on-demand webinar

Julie Heming

Julie H.

Julie is a writer and content strategist at Pluralsight.

More about this author