A Weekend with Wordle

Jason M. Heim
Jan 18, 2022

The viral success of Wordle is a healthy reminder that it’s possible to reach millions of people in a positive way with a simple, fun, and well-executed idea. Of course, I had to spoil it by tinkering with writing a “Solver” since I didn’t have anything urgent to do over the three-day weekend. The results were interesting enough to share, so here we go.

I’ll skip explaining the game here. If you’re not familiar, the best way to “get it” is to just play it. It runs in the browser, so there’s no need to install any apps, or deal with aggravating pop-up ads. Some of you may fondly remember that a lot of the internet used to be like this, but I digress…

As for “solving” this game, the problem space is simple* enough that you can write a decent solution without needing to chew up a lot of computing power. To me, any approach or algorithm that guarantees a “win” in 6 or fewer guesses is a valid solution.

From there, if you want, you can tinker further to try to find the “optimal” solver.

(*By “simple” I mean that there are only 2,315 possible solutions. Players can choose from an additional set of 10,657 words as guesses to help find letters more quickly, which makes for 30,030,180 combinations of “first guess” to potential target words. This is small enough that most algorithms to sift through all those options can fit in memory on a reasonably modern device.)

It’s been a while since I tinkered like this, so I had a bit of fun too. What surprised me is that the best-performing solution I found was also the easiest and cheapest.

The basics aren’t so basic

Before trying to implement a “solver” algorithm, it helps to start with some basic utility classes and functions (a rough sketch follows this list):

  • Write a function to compare an input word against a “target” word, producing a list of five “letter matches”. Wrap each letter match in a LetterMatch class that remembers the letter, index, and type of match. Wrap the input word and its matches in a Guess class.
  • Write a function that, given a Set<Guess>, tells you if a potential target word is still possible. Once you have at least one Guess, this can be used to divide up the remaining problem space. Note that I used a Set for this because you shouldn’t duplicate guesses, and the order of guesses you’ve already made has no effect on what your next guess should be.
  • Write a function that checks an input word against Set<Guess>. This method should determine if the input word can offer any new information. For example, if the input word contains at least one letter that has not yet been used in any prior Guess, then using the input word will tell us new information about that letter.
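
Here’s a rough sketch of the shapes I have in mind. The names are illustrative rather than what my real code uses, and the matching logic itself (which turned out to be the tricky part) is sketched in the next section:

enum class MatchType { EXACT, MISPLACED, MISSING }

data class LetterMatch(val letter: Char, val index: Int, val type: MatchType)

data class Guess(val word: String, val letterMatches: List<LetterMatch>)

// Compare this word against a target, producing the five letter matches.
// (The repeated-letter subtleties are covered in the next section.)
fun String.tryGuess(target: String): Guess = TODO("sketched below")

// A candidate target is still possible if this guess, replayed against the
// candidate, would have produced exactly the pattern we actually saw.
fun Guess.isStillPossible(candidate: String): Boolean =
    word.tryGuess(candidate).letterMatches == letterMatches

fun Set<Guess>.allows(candidate: String): Boolean =
    all { it.isStillPossible(candidate) }

// Can this word still teach us something new, e.g. does it contain a letter
// that no prior guess has tried?
fun String.offersNewInformation(guesses: Set<Guess>): Boolean =
    any { letter -> guesses.none { it.word.contains(letter) } }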

Armed with these the rest of the work gets a lot easier and a lot more fun, but…

Wordle’s UI cleverly hides information in plain sight

I expected to spend most of my time tinkering with scoring methods. As it turned out, I spent the bulk of my time chasing down bugs in my “basic” utility methods. Because of how it handles repeated letters, the output from matching a guess to a target is more nuanced than the instructions suggest:

Screenshot of the instructions for playing Wordle
Wordle’s instructions

Matches are highlighted in green. Letters that are in the target word, but misplaced, are highlighted in yellow. Letters that aren’t in the target word are highlighted as grey. When I print a LetterMatch in code, I represent these with M, o, and _, respectively. So WEARY in the instructions looks like M____, PILLS looks like _o___, and VAGUE looks like _____.
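
Printing that notation is a one-liner over the sketch classes above (again, treat the names as illustrative):

// M for an exact match, o for a misplaced letter, _ for a miss.
// (A fourth symbol gets added further down.)
fun Guess.pattern(): String = letterMatches.joinToString("") {
    when (it.type) {
        MatchType.EXACT -> "M"
        MatchType.MISPLACED -> "o"
        else -> "_"
    }
}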

It gets more complicated though if a word has repeated letters, like EERIE or TENET. If EERIE is the target, then guessing TENET would look like _M_o_. If you switch this, guessing EERIE for TENET would look like oM___.

These two guesses have similar looking scores, but the second guess produced an interesting bit of extra information. EERIE has the letter ‘E’ three times, but TENET only has it twice, so when the third E is matched, it’s shown as _.

So while the instructions tell us that a grey letter means “not in the target word”, that’s not the whole story. In fact, if you write your code to interpret things this way, you will have pretty severe bugs. Similarly, a naive scoring method might yield oM__o when guessing EERIE on TENET, which will also cause bugs since that’s not how Wordle does things.

When you guess EERIE for TENET, the last E shows up as grey, but ‘E’ is present in the target word. The extra information we get from this is that the target word has exactly two of the letter E.

By contrast, when you guess TENET for EERIE, the result _M_o_ tells you that the target word has at least two of the letter E. This result is slightly less informative than its inverse. Since our goal is to learn as much as we can with each guess, it’s important to keep track of this distinction!

It gets cumbersome to continuously check for this edge case if your only notation options are M, o, and _. So in code, I made things easier for myself by adding a fourth type of match called “overflow”, which means that the letter is present in the target word but the guessed word has too many of that letter. When I print these, I use . for this. So now, guessing EERIE for TENET would look like oM__.. Guessing EERIE for BEAST would look like .M__..
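
Here’s a sketch of the two-pass matching that produces this behavior. It fills in the tryGuess I left as a TODO earlier, and the names are still illustrative rather than my real code; MatchType gains the fourth “overflow” entry:

// Replaces the three-value enum from the earlier sketch.
enum class MatchType { EXACT, MISPLACED, OVERFLOW, MISSING }

fun String.tryGuess(target: String): Guess {
    // First pass: claim exact matches, and count the target letters left unclaimed.
    val exact = BooleanArray(length)
    val leftover = mutableMapOf<Char, Int>()
    for (i in indices) {
        if (this[i] == target[i]) exact[i] = true
        else leftover[target[i]] = (leftover[target[i]] ?: 0) + 1
    }
    // Second pass: a non-exact letter is MISPLACED while unclaimed copies remain,
    // OVERFLOW when the target has the letter but every copy is already used up,
    // and MISSING when the letter isn't in the target at all.
    val matches = mapIndexed { i, c ->
        val type = when {
            exact[i] -> MatchType.EXACT
            (leftover[c] ?: 0) > 0 -> {
                leftover[c] = leftover[c]!! - 1
                MatchType.MISPLACED
            }
            target.contains(c) -> MatchType.OVERFLOW
            else -> MatchType.MISSING
        }
        LetterMatch(c, i, type)
    }
    return Guess(this, matches)
}

// "EERIE".tryGuess("TENET") prints as oM__.    "EERIE".tryGuess("BEAST") prints as .M__.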

With this extra notation, it gets a lot easier to write checks that filter out words that are no longer possible. For example, if we know the target word has exactly one E in it, then we can eliminate TENET and EERIE as possible targets!
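
As a concrete example using the sketches above (reusing BEAST as the hypothetical target from earlier): once a guess of EERIE comes back as .M__., the consistency check rules out any candidate with a different number of Es:

val guess = "EERIE".tryGuess("BEAST")     // pattern .M__. means exactly one E
println(guess.isStillPossible("TENET"))   // false: TENET would have produced oM__.
println(guess.isStillPossible("EERIE"))   // false: EERIE itself would be MMMMM
println(guess.isStillPossible("BEAST"))   // true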

Scoring approaches

Once I was convinced that I’d ironed out the bugs in my utility methods, I started tinkering with different ways to score each possible guess. This part was a lot more fun.

I cooked up a variety of ways to “score” an input word against the remaining possible targets. At the start of any game, there are 2,315 possible targets and 12,972 potential input words. We have no information about the target word yet, but each time we make a Guess, we narrow down the problem space, until we either get lucky and match the target, or reduce the remaining possible words down to one.

You can make your “score” function an abstraction, and use a simple loop-of-loops to test every potential input word against every possible target word. Choose the input word that scores the best, and you know which word (or words, in the case of a tie) should be tried next.

You can apply this recursively, so long as you trim the set of possible target words based on the results of the Guesses you’ve made.
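
Putting that together, the overall loop looks roughly like this (still a sketch built on the earlier utility sketches; the scorer parameter is a placeholder for whichever scoring idea follows):

fun playGame(
    target: String,
    guessWords: List<String>,
    allTargets: List<String>,
    scorer: (word: String, remainingTargets: List<String>) -> Int,
    lowerIsBetter: Boolean
): List<Guess> {
    val guesses = mutableListOf<Guess>()
    var remaining = allTargets
    while (guesses.none { it.word == target }) {
        // If only one target can remain, just guess it; otherwise score every
        // candidate against the remaining targets and pick the best.
        val next = if (remaining.size == 1) remaining.first() else {
            val scored = guessWords.map { it to scorer(it, remaining) }
            if (lowerIsBetter) scored.minByOrNull { it.second }!!.first
            else scored.maxByOrNull { it.second }!!.first
        }
        guesses += next.tryGuess(target)
        // Trim the possible targets using everything we know so far.
        remaining = remaining.filter { candidate ->
            guesses.all { it.isStillPossible(candidate) }
        }
    }
    return guesses
}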

Option 1: Fewest Remaining Possible Words

This approach is expensive, but given how thorough it is, I also expected it would be tough to beat. It’s a valid solution, since it solves every target in six or fewer guesses, but it turned out not to produce the best average score.

The basic idea is to test each input word against all possible remaining targets, and compute how many valid targets remain after the resulting Guess. The largest remaining target set in this loop indicates the worst case result. So we “score” a guess by the size of that set.

After that, we just pick the lowest of those scores, since in this algorithm, a lower score is better.

As pseudo-Kotlin:

val firstGuess = allGuessWords.map { word ->
    val largestRemainingSetSize = allTargets.map { target ->
        val guess = word.tryGuess(target)
        // How many targets would still be valid after this guess?
        allTargets.count { candidate ->
            guess.isStillPossible(candidate)
        }
    }.max()
    WordScore(word, largestRemainingSetSize)
}.minBy { wordScore -> wordScore.score }

Some quick notes:

  • In reality, there can be ties, so tracking the “best worst” in the outer loop is useful so that you can collect a set of all the ties.
  • Early iterations took multiple hours to run because I wanted to fully compute the score of every word and output that as a sorted map, which would show me both the best and worst first guesses. If you track the “best worst” as you go, you can speed this up by several orders of magnitude; just break the inner loop whenever you encounter a score that is larger (worse) than the best worst seen up to that point. You can’t get a full ranking of every first possible guess this way, but you get your best guess a lot faster. (There’s a sketch of this short-circuit right after these notes.)
  • To make this trivially fast, we could assume that the worst case will always be a no-match result, _____, and skip the inner loop entirely. I did not try this; it just came to mind as I wrote this, so I’ll leave it as an exercise for the reader…
  • The real code I wrote can be used in a recursive function and takes prior guesses into account. I made a “GameState” class that has a Set<Guess> and this is what implements the isStillPossible() method. The pseudo-code in this post skips this for brevity.
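
Here’s roughly what that short-circuit (plus the tie tracking from the first note) looks like, using the same illustrative names as the earlier sketches:

var bestWorst = Int.MAX_VALUE
var bestWords = mutableListOf<String>()
for (word in allGuessWords) {
    var worstCase = 0
    for (target in allTargets) {
        val guess = word.tryGuess(target)
        val remaining = allTargets.count { guess.isStillPossible(it) }
        worstCase = maxOf(worstCase, remaining)
        // Already worse than the best word seen so far: give up on this word early.
        if (worstCase > bestWorst) break
    }
    when {
        worstCase < bestWorst -> {
            bestWorst = worstCase
            bestWords = mutableListOf(word)
        }
        worstCase == bestWorst -> bestWords.add(word)   // collect ties
    }
}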

RESULTS

This algorithm produced a five way tie for a recommended first guess:

AESIR, REAIS, SERAI, ARISE, RAISE

Unsurprisingly, these all contain the same five letters. If you guess any of these words, then the worst case will be a no-match result, which will leave behind only 168 possible target words. So at worst, you’ve cut down the possible remaining targets from 2,315 to 168, eliminating almost 93%!

After adding the optimizations mentioned above, I was able to use my solver recursively and test each of these guesses against all possible targets and see how they performed in aggregate. Here are the results, sorted by lowest average score:

First guess: RAISE
Worst game: WOOER; [["RAISE":o___o], ["OUTED":o__M_], ["CHAWK":___o_], ["AMOLE":__M_o], ["WOOER":MMMMM]]
Average score: 3.642764578833693
Games with 1 guesses: 1
Games with 2 guesses: 42
Games with 3 guesses: 833
Games with 4 guesses: 1346
Games with 5 guesses: 93
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
First guess: ARISE
Worst game: LAPEL; [["ARISE":o___o], ["ETHAL":o__oM], ["ABAMP":o_._o], ["PANEL":oM_MM], ["LAPEL":MMMMM]]
Average score: 3.6591792656587474
Games with 1 guesses: 1
Games with 2 guesses: 31
Games with 3 guesses: 840
Games with 4 guesses: 1327
Games with 5 guesses: 116
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
First guess: AESIR
Worst game: JAUNT; [["AESIR":o____], ["CANTY":_Moo_], ["DIGHT":____M], ["AJIVA":oo__.], ["JAUNT":MMMMM]]
Average score: 3.6786177105831532
Games with 2 guesses: 39
Games with 3 guesses: 774
Games with 4 guesses: 1394
Games with 5 guesses: 108
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
First guess: REAIS
Worst game: GONER; [["REAIS":oo___], ["COTED":_M_M_], ["MAWKY":_____], ["BAGHS":__o__], ["GONER":MMMMM]]
Average score: 3.6829373650107993
Games with 2 guesses: 35
Games with 3 guesses: 778
Games with 4 guesses: 1388
Games with 5 guesses: 114
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
First guess: SERAI
Worst game: PAPER; [["SERAI":_ooo_], ["AGLET":o__M_], ["CONKY":_____], ["ADVEW":o__M_], ["PAPER":MMMMM]]
Average score: 3.687257019438445
Games with 2 guesses: 32
Games with 3 guesses: 786
Games with 4 guesses: 1371
Games with 5 guesses: 126
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

This brute-force analysis suggests using RAISE as your first word, with an average score of ~3.64, solving every possible game in five guesses or fewer. Pretty good!

ROOM FOR IMPROVEMENT

To see how well this did, I peeked at the average scores and suggested first words produced by other folks who wrote their own solvers. It didn’t take long to find better average scores. On a whim, I decided to try their recommended first guesses to see how they would score under Option 1’s algorithm. It turned out that Option 1 works better with the suggestions from other algorithms! Some examples:

First guess: SOARE
Worst game: PAPER; [["SOARE":__ooo], ["AGLET":o__M_], ["RUMPY":o__o_], ["ACRED":o_oM_], ["PAPER":MMMMM]]
Average score: 3.618574514038877
Games with 2 guesses: 31
Games with 3 guesses: 929
Games with 4 guesses: 1247
Games with 5 guesses: 108
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
First guess: RAINE
Worst game: WOOER; [["RAINE":o___o], ["OGEES":o_.M_], ["CHAWK":___o_], ["CLOMP":__M__], ["WOOER":MMMMM]]
Average score: 3.6198704103671706
Games with 2 guesses: 38
Games with 3 guesses: 899
Games with 4 guesses: 1283
Games with 5 guesses: 95
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
First guess: RAILE
Worst game: WOOER; [["RAILE":o___o], ["OGEES":o_.M_], ["CHAWK":___o_], ["AMORT":__Mo_], ["WOOER":MMMMM]]
Average score: 3.628941684665227
Games with 2 guesses: 35
Games with 3 guesses: 880
Games with 4 guesses: 1309
Games with 5 guesses: 91
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
First guess: ROATE
Worst game: JAUNT; [["ROATE":__oo_], ["CLINT":___MM], ["DIGHT":____M], ["AJIVA":oo__.], ["JAUNT":MMMMM]]
Average score: 3.628941684665227
Games with 2 guesses: 34
Games with 3 guesses: 859
Games with 4 guesses: 1354
Games with 5 guesses: 68
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
First guess: COATE
Worst game: POUND; [["COATE":_M___], ["GLORY":__o__], ["BUMFS":_o___], ["AHEAP":____o], ["POUND":MMMMM]]
Average score: 3.634989200863931
Games with 2 guesses: 32
Games with 3 guesses: 863
Games with 4 guesses: 1338
Games with 5 guesses: 82
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

All of these performed better than RAISE, despite scoring worse as a first guess according to the algorithm. The best performer so far, SOARE, scored worse as a first guess because its worst case leaves 183 possible words instead of 168.

All this being said, it’s worth noting that with a good first guess this algorithm is guaranteed to solve any Wordle game in five guesses or fewer.

But it’s still clearly not “optimal”, at least not in terms of lowest average score. There’s more to the game than just getting the smallest worst case set, which led to the next attempt…

Option 2: Fewest Remaining Letters to Guess

This method takes Option 1 as a starting point, but then tries to add some nuance to how the remaining possible word set is scored.

Option 1 just uses the size of the remaining possible word set. To build on this, I took the remaining possible word set and dug into what letters are found, and where. My theory was that the more we reduce the remaining possible letters, the easier it will be to get to the solution.

Since letters from the guesses are expected to be in the remaining possible set, I tried to normalize things a bit by removing any letter/index pairs that were already part of the input word.

In pseudo-Kotlin:

val firstGuess = allGuessWords.map { word ->
    val remainingLetterCount = allTargets.map { target ->
        val guess = word.tryGuess(target)
        val remainingTargets = allTargets.filter { candidate ->
            guess.isStillPossible(candidate)
        }.toSet()
        // Get the letter/index combos we already guessed
        val removeThese = guess.letterMatches.map {
            // 'indexedLetter' returns an IndexedValue<Char>
            // from the LetterMatch
            it.indexedLetter()
        }.toSet()
        // Count how many indexed letters are left.
        remainingTargets.map { remaining ->
            // This gets IndexedValue<Char> as well.
            remaining.toList().withIndex()
        }.flatten().toSet().filter {
            !removeThese.contains(it)
        }.size
    }.max()
    WordScore(word, remainingLetterCount)
}.minBy { wordScore -> wordScore.score }

RESULTS

This algorithm recommends RAINE as the first guess (no ties). What’s fascinating is that this guess was tested with Option 1’s algorithm and beat all five of Option 1’s recommendations. This suggests that the best algorithm may depend on the current state, e.g. use strategy X for the first guess but strategy Y for all subsequent guesses.

This is supported by feeding this first guess back into the Option 2 algorithm: with Option 2 driving the rest of the game, RAINE’s average score is significantly worse than it was under Option 1:

First guess: RAINE
Worst game: HOVER; [["RAINE":o___o], ["OUTED":o__M_], ["COWKS":_M___], ["HEXYL":Mo___], ["HOMER":MM_MM], ["HOVER":MMMMM]]
Average score: 3.6725701943844493
Games with 2 guesses: 40
Games with 3 guesses: 874
Games with 4 guesses: 1206
Games with 5 guesses: 194
Games with 6 guesses: 1
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

With Option 1’s algorithm, RAINE’s average score is 3.620, which means that Option 1 does a better job using this first guess. So… what happens if we pass RAISE into Option 2?

First guess: RAISE
Worst game: MOVER; [["RAISE":o___o], ["OGEED":o_.M_], ["CHAWK":_____], ["BEVER":_.MMM], ["LOVER":_MMMM], ["MOVER":MMMMM]]
Average score: 3.6989200863930884
Games with 1 guesses: 1
Games with 2 guesses: 41
Games with 3 guesses: 838
Games with 4 guesses: 1213
Games with 5 guesses: 218
Games with 6 guesses: 4
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

RAISE did better with Option 1.

While it did yield a better first guess, Option 2 seems to perform slightly worse on average than Option 1 once your first guess is made. That being said, it is a valid solution. It will choose RAINE to start and solve all but one target word with five or fewer guesses and the remaining case is solved in six.

Option 3: Best Aggregate “Score”

I won’t expand much on this because it created more problems than it solved. The basic idea was to give each type of match a weighted value, e.g. 0 for no match anywhere in the word, 4 for right letter, wrong position, 10 for an exact match, etc. Tally up all of the points, pick whichever is best after testing a first guess against all possible solutions.
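
For what it’s worth, the idea looks something like this sketch, using the example weights above (my real attempts tried various weightings, and the names are still illustrative):

fun matchWeight(type: MatchType): Int = when (type) {
    MatchType.EXACT -> 10       // exact match
    MatchType.MISPLACED -> 4    // right letter, wrong position
    else -> 0                   // overflow, or not in the word at all
}

// Higher totals are better here, so the best first guess is the maxBy of these scores.
fun weightedScore(word: String, targets: List<String>): Int =
    targets.sumOf { target ->
        word.tryGuess(target).letterMatches.sumOf { matchWeight(it.type) }
    }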

I spent some time trying to tune the weights, but wasn’t able to get this to be a valid solution; it was prone to producing games that needed too many guesses. It’s possible I just needed to invest more time in this, but it got boring so I abandoned it.

Option 4: Best Result Set Variety

This method takes a different path to trying to find the most effective first guess. Instead of computing the largest remaining set, just compare every possible first guess against every possible target, and collect the resulting LetterMatch lists into a set, i.e. a Set<List<LetterMatch>>.

The theory is that bigger result sets mean more variety, and more variety means that the set of possible solutions is broken up into more “buckets”. Don’t even bother checking the size of each bucket; we can try that later if needed. Since the set of possible solutions is a fixed size, having more buckets means that the average size of each bucket is smaller.

The worst hypothetical case would be a word that is made up entirely of letters that don’t appear in any target word. This hypothetical word would evaluate to _____ for every target word. Since every List<LetterMatch> would be the same, the resulting Set<List<LetterMatch>> would only have one entry.

The best hypothetical case would yield a unique list of matches for each possible solution; the size of the Set<List<LetterMatch>> would be the same as the size of the set of potential targets, and you’d be able to solve every game in just two guesses. Of course that isn’t feasible in reality (there are at most 1024 buckets, but 2315 words), but the idea still seemed pretty sound.

This approach is both very fast and very easy to implement. It looks something like the pseudo-Kotlin code below. Note that this scoring algorithm favors higher scores, so we use maxBy instead of minBy:

val firstGuess = allGuessWords.map { word ->
    val letterMatchListSet = allTargets.map { target ->
        word.tryGuess(target).letterMatches
    }.toSet()
    WordScore(word, letterMatchListSet.size)
}.maxBy { wordScore -> wordScore.score }

It’s so simple and crude, this can’t be effective, can it?

RESULTS

This algorithm recommends TRACE as a first word (no ties). This word produces 150 unique lists of matches. Other words that score well on this metric are:

  • SALET: 148
  • REAST: 147
  • CARTE: 146
  • CARET: 145

Note that TRACE, CARTE, and CARET all have the same letters, but TRACE does a better job of dividing into more buckets so it scores the best.

Contrast with Option 1. All of these words scored the same, and quite poorly. Where RAISE scored 168, TRACE scored 246 (recall that lower is better in Option 1).

So according to Option 1, these are terrible first guesses. But what counts is letting Option 4 play all the games and see what happens. On this metric, Option 4 absolutely crushes Option 1:

First guess: TRACE
Worst game: BOXER; [["TRACE":_o__o], ["DINES":___M_], ["LUMPY":_____], ["ROWTH":oM___], ["JOKER":_M_MM], ["BOXER":MMMMM]]
Average score: 3.488120950323974
Games with 1 guesses: 1
Games with 2 guesses: 51
Games with 3 guesses: 1157
Games with 4 guesses: 1031
Games with 5 guesses: 73
Games with 6 guesses: 2
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

While this does have two games that require six tries, this solution solves more than half of the games in three or fewer guesses! This brings its average score down quite a bit to just 3.488, well below the best score I’d seen to this point (3.619 for SOARE with Option 1).

I also tested using some of the other best-scoring first guesses to see how they performed:

First guess: REAST
Worst game: CHILL; [["REAST":_____], ["COLIN":M_oo_], ["BIFID":_o_._], ["CLICK":MoM._], ["CHILL":MMMMM]]
Average score: 3.488120950323974
Games with 2 guesses: 47
Games with 3 guesses: 1153
Games with 4 guesses: 1053
Games with 5 guesses: 62
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
First guess: SALET
Worst game: ROVER; [["SALET":___M_], ["NIDOR":___oM], ["WHOMP":__o__], ["CORBY":_Mo__], ["AGAVE":___oo], ["ROVER":MMMMM]]
Average score: 3.491144708423326
Games with 2 guesses: 55
Games with 3 guesses: 1126
Games with 4 guesses: 1079
Games with 5 guesses: 52
Games with 6 guesses: 3
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
First guess: CARTE
Worst game: ROVER; [["CARTE":__o_o], ["FINDS":_____], ["LUMPY":_____], ["BAWKS":_____], ["AARGH":__o__], ["ROVER":MMMMM]]
Average score: 3.4980561555075593
Games with 2 guesses: 52
Games with 3 guesses: 1126
Games with 4 guesses: 1072
Games with 5 guesses: 62
Games with 6 guesses: 3
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
First guess: CARET
Worst game: BOXER; [["CARET":__oM_], ["DINGS":_____], ["ROUPY":oM___], ["WHELM":__o__], ["JOKER":_M_MM], ["BOXER":MMMMM]]
Average score: 3.5041036717062637
Games with 2 guesses: 52
Games with 3 guesses: 1115
Games with 4 guesses: 1078
Games with 5 guesses: 69
Games with 6 guesses: 1
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

Interestingly, REAST has a slightly different score distribution but exactly the same average score as TRACE. Also, REAST never needs six guesses, though it never gets the lucky first guess either, since it is not in the list of possible targets.

The other words are ranked as slightly worse, and in the same order that they were scored as a first guess.

For completeness’ sake, I tried some of the recommended first guesses from exploring Options 1 and 2: RAINE, RAISE, and SOARE:

First guess: RAINE
Worst game: BOOBY; [["RAINE":_____], ["PLUSH":_____], ["WEDGY":____M], ["EBBET":_oo__], ["BOOBY":MMMMM]]
Average score: 3.51792656587473
Games with 2 guesses: 39
Games with 3 guesses: 1101
Games with 4 guesses: 1112
Games with 5 guesses: 63
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
First guess: SOARE
Worst game: BOOBY; [["SOARE":_M___], ["CUNIT":_____], ["GOOLD":_MM__], ["ABBAS":_oo__], ["BOOBY":MMMMM]]
Average score: 3.521814254859611
Games with 2 guesses: 33
Games with 3 guesses: 1123
Games with 4 guesses: 1077
Games with 5 guesses: 82
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
First guess: RAISE
Worst game: GONER; [["RAISE":o___o], ["DETER":_._MM], ["BLOWY":__o__], ["VOUCH":_M___], ["AJUGA":___o_], ["GONER":MMMMM]]
Average score: 3.524406047516199
Games with 1 guesses: 1
Games with 2 guesses: 41
Games with 3 guesses: 1089
Games with 4 guesses: 1114
Games with 5 guesses: 67
Games with 6 guesses: 3
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

All of these perform significantly better than they did in Options 1 and 2, but none of them do as well as TRACE and the other top scorers from Option 4.

One last thing: Option 1 thinks that TRACE is a bad first guess, but what happens if we open with TRACE using Option 1’s algorithm?

First guess: TRACE
Worst game: GONER; [["TRACE":_o__o], ["SOLEI":_M_M_], ["RHOMB":o_o__], ["GOPAK":MM___], ["GONER":MMMMM]]
Average score: 3.571922246220302
Games with 1 guesses: 1
Games with 2 guesses: 51
Games with 3 guesses: 972
Games with 4 guesses: 1205
Games with 5 guesses: 86
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

Earlier I’d found that SOARE did the best with Option 1 by scoring 3.619, but I hadn’t considered TRACE since it had scored so poorly as a first word. This test proves that assumption wrong, bringing the average score down to 3.572!

Conclusions and other notes

First, huge kudos to Josh Wardle for creating Wordle. One of its best features is that it limits games to one per day, which gives it a similar vibe to solving a daily crossword puzzle, but with a lower time commitment. It also ensures that everyone is playing the same game, which makes sharing much more fun. Also, as someone with mild color blindness I appreciate the high contrast and dark mode options.

As for my solver, I’m quite happy with Option 4 and its suggested first guess of TRACE. My personal laptop is pretty old but Option 4 was able to compute the first guess and then play all 2315 games in about 45 seconds without any parallelism. Options 1 and 2 were much slower, especially before I added various short-circuit optimizations, and their results were not as good.

One of the more interesting solvers whose score I was trying to beat was this solution by Aditya Sengupta, which seems to share a lot with my Option 4. Aditya’s solution seeks to maximize “differential entropy” across the potential result buckets, whereas my Option 4 seeks only to maximize the number of buckets. While my algorithm is less sophisticated, it seems to perform better and chooses a different first word. That being said, I haven’t read this solution in depth enough to tell how the “overflow” case is being factored into the computation. My algorithm treats these separately from “not in the word”, which gives me 1024 (4⁵) potential buckets, where Aditya is dividing into 243 (3⁵) potential buckets. Since I have more potential buckets, I think I’m able to identify more variety, which is sort of related to the “entropy” in Aditya’s solution.
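
To make the comparison concrete, here’s a hedged sketch of the two scoring ideas side by side. The entropy version is just my reading of that general approach, not a reproduction of Aditya’s code:

import kotlin.math.ln

// Option 4: count how many distinct result patterns a guess can produce.
fun bucketCount(word: String, targets: List<String>): Int =
    targets.map { word.tryGuess(it).letterMatches }.toSet().size

// Entropy-style scoring (my interpretation): also reward guesses whose targets
// spread evenly across those patterns, not just guesses with many patterns.
fun entropyScore(word: String, targets: List<String>): Double =
    targets.groupingBy { word.tryGuess(it).letterMatches }
        .eachCount().values
        .sumOf { count ->
            val p = count.toDouble() / targets.size
            -p * ln(p)
        }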

Lastly, no discussion of Wordle would be complete without mentioning the “adversarial” version of Wordle called Absurdle by qntm. “Adversarial” means that it “cheats” by leaving its options open until you can force it to choose from a single remaining possible word. I let my Option 4 play Absurdle, expecting to score 6, since that’s the result I get for BOXER. Happily, my algorithm wins in 5:

A game of Absurdle, guessing TRACE, SOILY, DOGMA, FAENA, PUPPY
Option 4 wins a PUPPY!

With that, I’m happy to quit tinkering, and just play Wordle without the solver. That being said, I plan to open with TRACE until I see something that works better :-)
