
Is prompt engineering dead? No, and it won’t be for a long time

OpenAI claims it killed prompt engineering with the release of DALL-E 3. Here's why that's simply not true, and it won't be the case for decades.

Sep 22, 2023 • 7 Minute Read


On Wednesday, OpenAI announced the release of DALL-E 3, the latest version of its existing AI image synthesis model. “No prompt engineering required,” they claimed, effectively declaring prompt engineering dead. No longer would we suffer under the yoke of systems that “force users to learn” how to write prompts. 

Immediately, the late adopters celebrated. In the comments section of Ars Technica, they patted themselves on the back, proudly lauding the fact they had “ignored learning that prompt engineering thing” since it sounded “like a fad”, and now their feet-dragging had been rewarded. 

... Except for the fact this isn't the case at all.

Prompt engineering won’t be dead for the foreseeable future. You might think that’s an incredibly bold claim to make — in the vein of things like “Web 3.0 is the next big thing!” or “I bet we’ll all be in the Metaverse soon” — especially since the field is very, very new. 

So, do I have a crystal ball? No, it’s simple logic.

Why prompt engineering isn’t going anywhere

1. You still need to write a prompt to give the AI the requirements

Until we’ve all got fancy human-machine interfaces wired into our heads, or we miraculously invent Asimov’s telepathic robots, an AI isn’t going to be able to read our thoughts. That means we’ve got to communicate what we want, and we’ve got to use words to do it. Even in OpenAI’s announcement, they admitted if DALL-E 3 didn’t deliver the exact image that you want, “you can ask ChatGPT to make tweaks with just a few words.” 

That’s called a prompt, and you still need to engineer it. Are we going to stop needing to write these anytime soon? The answer is no.

2. Sharing your needs is an art, and humans suck at it

People are, as a whole, not great at communicating (even with other humans!). The number one cause of relationship breakdowns? Poor communication. Making an AI smarter isn’t going to fix this, because the problem sits on your side of the conversation: you still have to articulate what you want.

A long time ago, I spent some time working as an in-house graphic designer. I loved working in the Adobe suite, so the role seemed worth a try. However, one manager at the time gave very vague briefs, and I got used to hearing one sentence from her in particular:

“Just make something that pops.”

When I asked for more details, I got zilch. So I went out on a limb and made a graphic, and sadly, more often than not, it wasn’t what she wanted. Unfortunately, this made her think I was dumb (because I didn’t understand what was clear and obvious to her), and it made me frustrated (because I felt there was no way to succeed). 

The result? A lengthy, painful refining process where I had to reinvent the graphic over and over again, until we finally got something that matched what was in her head.

Now, even if I were a genius artist (spoiler: I’m not, which is why I didn’t make a career of it), I couldn’t have succeeded without a proper brief. The same goes for making the AI smarter, or a better artist: it can’t succeed if you never give it a proper brief.

That’s prompt engineering in a nutshell: saving yourself time and effort by getting the brief right the first time. Again, that isn’t changing anytime soon.

3. Most of the time, people don’t know what they want until they see it

To lather on the misanthropy a bit more, another thing I’ve learnt is that most people don’t have much of an imagination. You show them a wireframe of an app UI, and they genuinely think that’s what you’re going to go out the door with. 

The reason? They can’t conceive of what the final product will look like. The consequence is that they might brief you on something, then change their minds entirely when they actually see it. (On a side note, this can be frustrating if you’re the sort of person who can see what they’re asking for and knows it won’t work, but the client insists on it anyway.)

This describes the majority of people who will be using tools like ChatGPT and DALL-E 3: the non-creatives, the people who need these tools precisely because they can’t do the work themselves. This imagination gap means prompts will often deliver something they’re not quite after, so they’ll need to adjust their wording accordingly.

4. There are just too many variables for the AI to consider alone

Unless you’re happy with “whatever”, there’s a lot of information you’ve got to provide in your prompt, such as:

  • For images: Task, colors, subjects, lighting, composition, stroke type, resolution, lens type, art style. 

  • For prose: Task, tone, topic, audience, purpose, medium, structure, length, contextual data.

  • For coding: Task, programming language, sample code, purpose, length.

For instance, you might ask DALL-E for an “image of a mist-covered field.” But what if you got an image with no trees, and you really wanted one with a forest in the background? What if the grass is green, and you wanted it gray? What if it’s low-res and you wanted high-res? If you shoot from the hip with short briefs, you’re going to get results that are off the mark.
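
To make that concrete, here’s a rough sketch of what filling in those variables might look like in practice. It’s plain Python with no API calls, and the helper name and field list are purely illustrative, not an official template:

```python
# A minimal sketch of turning the variables above into a single, detailed prompt.
# The function name and field list are illustrative only, not an official schema.

def build_image_prompt(task, subject, background, palette, lighting, style, resolution):
    """Combine the pieces of an image brief into one prompt string."""
    return (
        f"{task}: {subject}, with {background} in the background. "
        f"Color palette: {palette}. Lighting: {lighting}. "
        f"Art style: {style}. Render at {resolution}."
    )

# Vague brief: "image of a mist-covered field"
# Detailed brief:
prompt = build_image_prompt(
    task="Create an image",
    subject="a mist-covered field of gray, dew-heavy grass",
    background="a dark pine forest",
    palette="muted grays and cool greens",
    lighting="soft dawn light, low contrast",
    style="photorealistic, shot on a wide-angle lens",
    resolution="high resolution",
)
print(prompt)
```

The exact wording doesn’t matter; the point is that every variable you name up front is one fewer round of “that’s not what I meant” later.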

Knowing what to ask for, and how, is the essence of prompt engineering. Learning about variables like these means you can get more mileage out of these tools, instead of asking them to do something once and then declaring, “It didn’t do what I wanted, it’s rubbish!”

5. Refining your brief helps hone your requirements

“Make me a marketing plan for a campaign on Diet Cola.” Obviously, this prompt is missing a lot of context. However, the act of providing this context helps you iron out your thinking. For instance:

  • What are the strategic objectives of the plan? Too many, and it spells disaster.

  • What is the target audience? Are these the right people to go for?

  • What are the key messages? Again, having a handful is key.

  • What are the required assets? Are these the right assets? Should we be spending more or less on this campaign?

… And so on. This is just one example, but the process of creating a brief — whether it’s for an image asset, some copy, a marketing plan, or what have you — is in itself a valuable task. Cutting out this process and shoving it all onto the AI is a recipe for sub-par outcomes, because the AI does not yet possess the human skill of critical thinking. Creating a longer prompt with appropriate context is a mini-scale version of this process.
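
As a rough illustration of how answering those questions ends up inside the prompt itself, here’s a sketch comparing the bare request with a context-rich version. The campaign details are invented for the example, not a real brief:

```python
# Illustrative only: the campaign details below are made up to show how the
# answers to the questions above become part of the prompt.

vague_prompt = "Make me a marketing plan for a campaign on Diet Cola."

detailed_prompt = """Draft a marketing plan for a Diet Cola summer campaign.

Strategic objective: grow trial among existing cola drinkers (one objective only).
Target audience: health-conscious 25-40 year olds who already buy soft drinks.
Key messages: same taste, zero sugar; an easy everyday swap.
Required assets: three short social video scripts, one landing page outline,
and a budget split across paid social and in-store displays.
Format: a one-page plan with headings for objective, audience, messages,
channels, and budget."""

# Either string could be pasted into a chat model; the difference is how much
# of the thinking you've already done before you hit enter.
print(detailed_prompt)
```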

Conclusion: Prompt engineering will become more natural, not extinct

Over time, prompt engineering is going to become the new “Google fu”: the art of knowing what to type into a search engine to get what you want. It will be second nature to people who grew up with the technology, but it will be far from obsolete. In that sense, it will become more widespread, not less. However, it may become something most people don’t actively have to learn, and “prompt engineer” may well fall short of becoming an established profession in its own right. (That said, we do have professional Googlers: it’s called tech support!) 

Putting your head in the sand and saying “prompt engineering is dead” is akin to saying you don’t need to learn how to Google something, or how to use word processing software, because it’s not going to be a big thing. AI tools are being used more, not less, and having an intuitive grasp of how to brief them makes your life a whole lot easier. 

After all, no AI is a mind reader. 

Want to learn more about AI? 

Amber Israelsen has an excellent course on Pluralsight that covers how to get started with prompt engineering if you’re looking to dip your toes in. In general, Pluralsight offers a range of beginner, intermediate, and expert AI and ML courses, including dedicated courses on generative AI and tools like ChatGPT. Since you can sign up for a 10-day free trial with no commitments, it’s a great way to take some professionally authored courses with a set course structure, rather than just scouring YouTube for whatever randomly pops up.

Adam Ipsen


Adam is the resident editor of the Pluralsight blog and has spent the last 13 years writing about technology and software. He has helped design software for controlling airfield lighting at major airports, and has an avid interest in AI/ML and app design.

More about this author