DALL·E vs. Midjourney: How AI Image Generators See Human Concepts Very Differently

A side-by-side comparison of results from two popular platforms

Jeff Hayward · Published in Geek Culture · 6 min read · Sep 11, 2022


Feature image produced using DALL·E. All other images produced with DALL·E and Midjourney.

If you’ve been reading my work (thanks if you have!), you’ll know I’ve recently plunged face-first into the world of AI imaging with DALL·E from OpenAI. Since then I’ve also discovered Midjourney, another ambitious project that generates images from simple text prompts through Discord. The results from both platforms are truly incredible, and I’ve already spent money on both after my free trials ended.

While these trained neural networks can produce remarkable images that would take a human artist hours or days to complete, the “art” of text prompting lies in learning to use the right words in the right order to bring your original ideas to life. Generally, the more descriptive and specific your text, the closer the result may be to what’s in your mind’s eye. It takes some practice (I’m still a rookie), but when you nail a concept, it’s very exciting.

Since I’m now on board with both DALL·E and Midjourney, I thought I’d try a head-to-head experiment. Not to see which one is better (that’s a hard call so far), but to see how each would interpret single-word prompts. Below are some samples I generated from both, showing how each lab envisions these human concepts.
