The Midjourney /describe command is life-changing

Rob Young
ILLUMINATION’S MIRROR
6 min read · Sep 29, 2023

Struggling to get the results you want in Midjourney? Use the /describe command to reverse engineer the perfect prompt

Made with Midjourney

A few weeks ago, I was helping my sister brainstorm art ideas for a book that she’s writing.

We wanted to create art in a specific illustrative style and maintain that style throughout the book. The goal was to conceptualize the art with Midjourney, before working with an artist to create the final illustrations.

There were a few reasons for not wanting to use the Midjourney images directly, but a big one is that you currently can’t copyright AI-generated images, which I wrote about for DataDrivenInvestor.

Copyright wasn’t an issue at this point, though, because we straight-up couldn’t achieve the output we wanted, even after hundreds of prompts.

The problem? You can’t always get what you want

We found out the hard way that the style we wanted was virtually impossible to re-create in Midjourney using the words that we were intuitively describing it with.

And that happens to pretty much everyone I’ve talked to, including the experts.

We wanted a textured, muted, paper collage art style for the illustrations.

But we were getting more anime-esque results like this:

Made with Midjourney

The differences are kind of subtle, but definitely noticeable.

And, if you spend a lot of time creating things in Midjourney, you probably aren’t looking for “good enough” all the time.

You want what you want.

The solution: Midjourney’s /describe command

After tons of prompts across various platforms, we got pretty close outside of Midjourney.

Close enough that it was worth attacking it from a new angle.

I won’t get into which platform we used, because this story is specific to Midjourney, but it produced the reference image below:

Not made with Midjourney, but still made with AI

Now that we had an image to work with, we had a perfect opportunity to try out the /describe feature in Midjourney.

If you aren’t aware, /describe is a command built into Midjourney that effectively reverse-engineers an image into text. It’s the inverse of the typical /imagine workflow: instead of starting with a prompt, you start with /describe and give Midjourney an image.

It will then create a series of prompts for you to try to emulate the original image.
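If you haven’t used it before, the flow inside the Midjourney Discord looks roughly like this (the file name is just a placeholder):

/describe
→ upload your reference image (e.g., cabin-reference.png)
→ Midjourney replies with four numbered prompt suggestions
→ click 1–4 to run a suggestion as an /imagine job, or 🔄 to regenerate the suggestions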

And I’d be lying if I said they were all good. Not all of them worked, but one did, and that’s all we needed.

The /describe process

In this case, the reference image generated some pretty interesting prompts.

Screenshot by the author

I basically never get that detailed with Midjourney prompts these days, and I definitely didn’t know who Keith Negley, Martin Ansin, or Brian Despain were.

Spoiler alert: that was definitely my loss — go check those guys out. Their work is insane.

As it turned out, a combination of artist styles and seemingly randomized wording is exactly what we needed.

Here is what those prompts generated.

Prompt outputs and image results

I’m going to Tarantino this one and go in an illogical order to feed into my own storyline and narrative. Don’t @ me.

But no, really, I’m going from worst to best.

The worst: #4

Prompt 4 was “/imagine a summer vacation house in the mountains, in the style of graphic design-inspired illustrations, golden hues, muted, earthy tones, detailed wildlife, romantic riverscapes, captures the essence of nature, smokey background --ar 128:71”
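If you’re newer to Midjourney, the --ar parameter at the end just sets the aspect ratio of the output image; the unusual 128:71 value here presumably comes from /describe matching the dimensions of our reference image.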

We got big digital illustration vibes, but it’s still pretty anime-ish and far less detailed than I was looking for. It also lacked the whole “paper collage” kind of aesthetic we wanted.

Made with Midjourney

The second worst: #3

Prompt 3 was “/imagine a florescent forest cabin art print, in the style of martin ansin, muted earth tones, detailed background elements, brian despain, keith negley, realistic landscape paintings, gray and amber --ar 128:71”

This one was also highly anime-ish and got a little weird with the sky.

But, it was a little closer to the level of detail that we wanted and looked like it was trying to do the paper collage thing, which is why it takes #3 on our list.

Made with Midjourney

The second best: #1

Prompt 1 was “/imagine a modern art illustration of house in the forest, in the style of muted earth tones, romantic riverscapes, gray and amber, romanticized depictions of wilderness, hikecore, teal and orange, golden light --ar 128:71”

This one was getting much closer. The trees no longer looked like they came out of a ’90s anime, and instead looked like they were cut on a Cricut machine.

The only thing we didn’t like was that the paper collage look gave way to a strangely reflective water illustration, which was definitely not paper collage art.

Close, but no cigar.

Made with Midjourney

Finally, the best: #2

If you’ve read this far and put up with my Tarantino bullsh*t, thank you. We’re about to land the plane.

Prompt 2 was “/imagine a wallpaper woodland home painting autumn forest forest, in the style of keith negley, photo-realistic landscapes, precise, detailed architecture paintings, realistic, detailed rendering, calm waters, cabincore, high-contrast shading --ar 128:71”

First of all — wtf is cabincore?

Second of all, you probably missed this in the last prompt, but also WTF is hikecore?

Either way, I think the problem was that our original prompts were missing some kind of obscure “-core” style keyword.

It finally worked!

Made with Midjourney

Apparently my man Keith Negley was way ahead of us, and cabincore = paper collage cabin art. Whatever the case, this was precisely the output we had been trying to achieve.

What we learned from this process

Midjourney, and all image generators for that matter, are sometimes unpredictable.

We tried tons of variations of words that we thought would intuitively make sense to the model, but there’s really no way to know in advance which words the model actually associates with a given style.

The models are trained on huge amounts of data, and the words that produce a given visual style depend on the metadata, like alt text, captions, and file names, attached to the images they were trained on.

Now, generally the models are incredible at understanding what you want. But, if you want something perfect, it’s also not uncommon to spin the wheel 95 times before getting your desired output.

The /describe command lets you drastically reduce those spins of the wheel, and in doing so, spend way less time getting results.
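If it helps, here’s the loop that worked for us, boiled down:

1. Get a reference image in the style you want, from any tool or source.
2. Run /describe on it in Midjourney to get four candidate prompts.
3. Run each candidate through /imagine and compare the results against your reference.
4. Steal the winning prompt’s unexpected keywords (hello, cabincore) for future prompts.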

This is a go-to in my Midjourney toolbox, and I hope it helped you as well!

I hope you enjoyed this content! If you did, I post daily news, tips, and tutorials to help you navigate the digital world. Follow for more!
