
Critical thinking: A must-have soft skill in the age of GenAI

In an age where content and code can be made up in moments, being able to tell fact from fiction is vital, both for leaders and operational staff.

Apr 15, 2024 • 7 Minute Read

  • AI & Machine Learning
  • Learning & Development

My grandfather is turning 90 this year. I’m proud of him because he’s not afraid of technology: he's quite happy to use his tablet and smartphone to interact with the world. However, a byproduct of this is that every time I visit, I spend an hour or two on tech support. Most of that time is spent discussing the numerous phishing emails and scam calls he keeps getting hit with.

“Is there something you can just do to stop getting hit with these scams?” his wife asked. I could tell she was looking for a silver bullet: some app you could simply install, and all the scammers would just go away. It broke my heart to explain there was no such thing. I'd already told them all the old tricks: how to check URLs, how to verify a caller by ringing back on a number from the organization's official website, and so on.

“You’ve just got to be naturally cautious,” I said. “If someone is asking for your details, they might not be who they really say they are.” 

What I was trying to explain to them was the art of critical thinking, or how to make reliable judgments based on reliable information. And now, with the rise of generative AI, it's a must-have soft skill for both your career and your personal life. If you're a leader, it's also a skill you want your staff to have, as a lack of critical thinking can lead to poor business decisions.


What is critical thinking, and how can it affect a business?

Critical thinking is about arriving at your own well-researched conclusions instead of blindly accepting something at face value. You examine an issue objectively, relying solely on hard evidence, rather than other people’s personal opinions or biases. By doing this, you can make better, more sound decisions.

For example, a staff member comes to you, super pumped to try out a new AI product. They’ve heard it’s the silver bullet to all your problems, and it’s going to magically optimize and maintain the company website. You check out the product reviews on this AI company's website, and they’re all overwhelmingly positive. Sounds great so far, right?

However, you decide to put your critical thinking hat on. You analyze the reviews and realize they've all got suspiciously similar language. So you check Reddit, where you find that lots of people are unhappy with the product. Worse, there are reports that the company’s sales team resorts to harassment if you don’t sign up with them: they’ll even ring up your CEO and badmouth you.

Bullet dodged, right? But if you’d just accepted all the hype at face value, you might have saddled your company with a bad product and a barrage of nasty calls. 

These sorts of decisions need to be made all the time in an organization. That’s why a lack of critical thought can greatly affect business outcomes, whether it’s at the operational or strategic level. In short, critical thinking goes hand-in-hand with being a good decision-maker.

If you were checking for hotel room bookings, would you be able to tell this image isn’t real? Fake hotel reviews are becoming more frequent as well.

Why critical thinking is more important now with generative AI

Thinking critically had already become important with the “post-truth” era of the mid-2010s. But since 2023, there has been an explosion of generative AI (GenAI) tools that make it practically trivial to create deceptive works:

  • MidJourney, DALL-E, and Stable Diffusion can create photos that require a keen eye to spot as fake. 

  • AI tools like VoiceLab can be used to deepfake people’s voices, often used in phone calls where an AI pretends to be a loved one.

  • There are a plethora of deepfake video services out there, and there is minimal to no specific legislation around deepfakes in many countries.

Even tools like ChatGPT can be used for deception, often unintentionally. It is well known to “hallucinate” sources that don’t exist, or lie convincingly about events that never happened. All of this means that you could be basing your business or personal decisions on fake information, and presenting this to other people as if it were true.

There’s a now infamous incident of a lawyer using ChatGPT to prepare a legal brief only to be fined because it was filled with fake citations. It's actually fairly easy to reproduce. Below, I've asked ChatGPT for two fake references, and one real reference.

Hard to tell the difference, right? In actual fact, none of them are real, despite ChatGPT's claims to the contrary. How did I find this out? By not taking ChatGPT's word at face value, and searching for these references myself.

Why tech professionals need critical thinking skills the most

If you work in technology, as a programmer or data analyst for example, generative AI is an amazing tool. You can get it to generate code for you, but it can also invent functions that don’t exist. Here is an example of ChatGPT inventing a time.log() function and presenting it as truth. If you don’t review the outputs, it’s easy to miss.
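To see why this slips through, consider Python's standard library: `time` has no `log()` function, so plausible-looking AI-generated code that calls one fails only when it actually runs, not in a casual read-through. A minimal sketch:

```python
import time

# A plausible-looking hallucinated line: Python's time module has no log()
# function, so this raises AttributeError at runtime rather than failing
# in a quick visual review.
try:
    time.log("backup started")
except AttributeError as err:
    print(f"Hallucinated API caught: {err}")
```

Running the snippet confirms the function simply doesn't exist, which is exactly the kind of check a critical reviewer does before shipping AI-generated code.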

"But this would throw an error, so we'd catch it," you might say. True, assuming you test it first. But generative AI can still produce harmful code that doesn't throw any errors at all, mostly because it might not understand your business use case.

For example, you might ask a GenAI model to produce code for banking software. The code runs without any errors, but it refunds a customer an unlimited amount of money, since the model doesn't know refunds should be capped. This would obviously be catastrophic for a business! If your product went out with this code in it, the error might not be noticed until it's too late.
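Here's a toy illustration of that failure mode (the function names and the $500 cap are hypothetical, not from any real banking system). Both versions run cleanly; only a reviewer who knows the business rule can tell them apart.

```python
# Hypothetical refund logic; names and the cap value are illustrative only.

MAX_REFUND = 500.00  # business rule a model has no way of knowing

def refund_unchecked(amount: float) -> float:
    """What a model might plausibly generate: no errors, no upper limit."""
    return amount

def refund_capped(amount: float) -> float:
    """The same logic after a human reviewer applies the business rule."""
    if amount <= 0:
        raise ValueError("refund must be positive")
    return min(amount, MAX_REFUND)

print(refund_unchecked(10_000.00))  # silently over-refunds: 10000.0
print(refund_capped(10_000.00))     # capped: 500.0
```

Nothing here throws an exception, which is the point: testing for errors alone won't catch code that violates a business rule the AI never knew about.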

This might sound supremely unlikely. However, these kinds of mistakes are more common than you think. In terms of bugs, humans have a known error called "automation bias" (currently unpatched) in which they have far too much faith in automated systems, which often leads to over-reliance and then disaster.

Before GenAI, there were already programmers who pushed code live without testing. Now, it’s going to become much more tempting, especially if an AI produces a ton of code you a) don't quite understand, and b) can't be bothered reading.

The solution? Tech professionals from 2023 onward should all be well-versed in critical thinking, particularly when it comes to questioning anything output by an AI. Additionally, any workflow that involves AI should also have a human-checking component to mitigate the inherent risk of AI hallucinations and inaccuracies. This is especially true if you're using self-fixing or auto-code tools.
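One lightweight way to build that human-checking component into a pipeline is to make deployment refuse any AI output that nobody has signed off on. A minimal sketch, with hypothetical names (a real pipeline would wire this into your CI/CD tooling):

```python
from typing import Callable

# Minimal human-in-the-loop gate; function names are illustrative,
# not from any real CI/CD tool.

def deploy_ai_patch(patch: str, approved_by: Callable[[str], bool]) -> str:
    """Ship an AI-generated patch only if a human reviewer approves it."""
    if not approved_by(patch):
        raise PermissionError("AI output rejected by human review")
    return f"deployed: {patch}"

# The reviewer callback would normally read the diff and run the tests;
# here a stubbed decision stands in for that step.
print(deploy_ai_patch("fix: cap refund amounts", approved_by=lambda p: True))
```

The design choice is simply that approval is mandatory and explicit: the happy path cannot be reached without a human (or at least a deliberate override) in the loop.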

The amount of superficial information is going to increase

Since it’s easy to produce content, there is going to be a lot more content, and it’s not all going to be well-written or fact checked. AI is already being used to create entire spam sites. And the appetite for regular businesses to create low-quality, high-volume content is there. For example, with GenAI you can:

  • Auto-post ChatGPT-powered news articles moments after the event

  • Populate a recipe site from scratch

  • Pick a city and have an AI argue in favor or against a political position on a local issue

Some of these applications aren’t new, either. In 2014 an AI wrote about an L.A. quake three minutes after it happened. What has changed is the access and power of the models available.

On top of this, as AI models begin to digest AI-generated content containing misinformation, lies are unwittingly perpetuated and the models develop a distorted view of reality. This feedback loop is known as model collapse: each generation of models trained on synthetic data drifts further from the real-world distribution it was meant to learn.

Why just looking for “tells” isn’t going to work with GenAI

Up until now, there’s been a lot of “tells” to figure out if something is fake. For instance:

  • With phishing emails, you check to see if it’s a badly written email with a lot of grammar mistakes

  • With images, people check the teeth or the number of fingers (these tend to be out of place)

But these tells are evaporating quickly. Even now, MidJourney can generate fake images with the proper number of fingers, and easy access to ChatGPT means grammar mistakes are far less frequent. As with my grandfather, there’s no single ‘silver bullet’ trick that always gives misinformation away, and thinking there is gives you a false sense of security when the rules change.

The only solution is to have a critical thinking mentality which allows you to analyze evidence in real time, evaluate its trustworthiness, and make an educated decision. This may involve looking for tells like the ones above, but it should never rely on them to the exclusion of critical thought.

A fake image of Ron Perlman. The teeth and jacket are off, but he has the correct number of fingers.

Conclusion: Stay informed and shore up your critical thinking skills

Make critical thinking one of the key soft skills you work on honing, both for yourself and for your team (if you’ve got one). Not only can it save you from being taken advantage of, it can make sure any business decisions you make are built on strong foundations.

Another good move is to make sure you're fully informed about generative AI. Not only does this help identify what it can do, it also allows you to make use of it. When used properly, AI models like ChatGPT have a lot of business benefits, and more organizations are leveraging these tools in an assistive capacity.

Further learning about ChatGPT and AI

Being informed is the best way to make measured decisions on how to handle AI use at your organization. There are a number of courses that Pluralsight offers that can help you learn the ins and outs of AI — you can sign up for a 10-day free trial with no commitments. Here are some you might want to check out:

If your organization is looking to jump right into integrating its products and services with ChatGPT, such as with ChatGPT plugins, here are some resources your technologists can use:

If you’re wondering how to deal with your company’s usage of ChatGPT and similar products, here are some articles that may help:

Adam Ipsen


Adam is the resident editor of the Pluralsight blog and has spent the last 13 years writing about technology and software. He has helped design software for controlling airfield lighting at major airports, and has an avid interest in AI/ML and app design.
