Unboxing the Puzzle: How I Mastered NYT Letter Boxed with an Algorithm

Andrew Nordstrom
9 min read · Jan 12, 2024


In those fleeting moments of stillness, sandwiched between the bustling rhythms of daily life, like waiting for my computer to finally finish updating when I need it most, I often find refuge in an unexpected place: the New York Times Games app. Here, amidst a digital sea of puzzles that offer a welcome reprieve from reality, Letter Boxed has become my personal favorite. It's like a secret clubhouse for word nerds, and I'm a proud member.

Letter Boxed is not your average word game; it's a cerebral dance of letters and logic. Picture a square, its four sides each lined with letters like guests at a very awkward dinner party, waiting to be introduced. The player's mission? To spell words by hopping between these letters, with two twists: consecutive letters can never come from the same side, and each new word must begin with the last letter of the one before it. The goal is to use every letter on the board in as few words as possible, so the game transcends a mere test of vocabulary; it's a strategic labyrinth, pushing you to stitch together words in the most efficient way possible. It's like doing a crossword puzzle while playing hopscotch — challenging yet oddly satisfying.
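
For the code-curious, here is a minimal sketch of what those rules look like in Python. The function names and the board-as-four-strings representation are my own shorthand for this article, not code pulled straight from the final solver.

def is_valid_word(word, sides, min_length=3):
    """Check one word against the Letter Boxed rules.

    `sides` is a list of four strings, e.g. ["YDE", "OLA", "FIN", "UVT"].
    Every letter must appear on the board, and consecutive letters may
    never come from the same side.
    """
    word = word.upper()
    if len(word) < min_length:
        return False
    side_of = {letter: i for i, side in enumerate(sides) for letter in side}
    prev_side = None
    for letter in word:
        if letter not in side_of:          # letter isn't on the board at all
            return False
        if side_of[letter] == prev_side:   # same side twice in a row
            return False
        prev_side = side_of[letter]
    return True

def is_valid_chain(words, sides):
    """Words chain when each one starts with the previous word's last letter."""
    return all(is_valid_word(w, sides) for w in words) and all(
        a[-1].upper() == b[0].upper() for a, b in zip(words, words[1:])
    )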

Letter Boxed Game Board

During my particularly introspective winter break, the kind where you promise to be productive but end up rearranging your entire bookshelf alphabetically, my fascination with Letter Boxed reached new heights. A curious thought struck me: Could an AI like ChatGPT outsmart this game? Despite ChatGPT's linguistic finesse, it stumbled over the game's intricate rules. In particular, it struggled to chain words together.

This series of experiments was my lightbulb moment, the catalyst that propelled me towards an ambitious goal — to craft a solution uniquely tailored to this puzzle. Driven by a mix of curiosity and a dash of overconfidence, I embarked on a quest to develop an algorithm that could master Letter Boxed. This journey wasn't just about solving a puzzle; it was about merging my penchant for procrastination with the elegance of algorithmic problem-solving, transforming a casual pastime into a venture as engaging and intricate as the game itself.

Demo of web app solving today's puzzle

Console view:

Received data: {'top': 'YDE', 'right': 'OLA', 'bottom': 'FIN', 'left': 'UVT'}
Possible words enumerated: 629 words found.

GitHub Repository Link: https://github.com/AndrewNordstrom/NYT_LetterBoxed_Solver

Crafting the Code: A Synopsis of the Solver's Mechanics

Flask Framework for Web Interaction: The project begins with Flask, a Python web framework that forms the backbone of the web app. It's the stage where users interact with the puzzle solver, providing a user-friendly experience.

Efficient Word Management with the Trie: Central to the puzzle-solving logic is the Trie, a data structure that efficiently manages a large dictionary of words. It's designed for quick word searches and retrievals, crucial for easily sifting through hundreds of thousands of words.
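
As a rough illustration, here is a stripped-down Trie along the lines described above; the class and method names here are illustrative shorthand rather than a copy-paste from the repo.

class TrieNode:
    __slots__ = ("children", "is_word")

    def __init__(self):
        self.children = {}   # letter -> TrieNode
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for letter in word:
            node = node.children.setdefault(letter, TrieNode())
        node.is_word = True

    def contains(self, word):
        node = self._walk(word)
        return node is not None and node.is_word

    def has_prefix(self, prefix):
        """True if any stored word starts with `prefix` (the key to pruning)."""
        return self._walk(prefix) is not None

    def _walk(self, text):
        node = self.root
        for letter in text:
            node = node.children.get(letter)
            if node is None:
                return None
        return node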

Web Scraping with BeautifulSoup: To keep the puzzle solver up-to-date, BeautifulSoup is employed for web scraping. It fetches the latest puzzle data from the NYT website, ensuring the solver works with the current day's challenge.
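
Here is a hedged sketch of what that scraping step can look like. The URL is the public puzzle page, but the exact page structure, and therefore the regex used to dig out the sides, is an assumption on my part and may not match how the repo actually parses it.

import json
import re

import requests
from bs4 import BeautifulSoup

PUZZLE_URL = "https://www.nytimes.com/puzzles/letter-boxed"

def fetch_todays_sides():
    """Fetch the Letter Boxed page and pull today's sides out of a script tag.

    Assumes the page embeds a JSON blob containing a "sides" field; the real
    markup may differ, so treat this as a sketch rather than gospel.
    """
    html = requests.get(PUZZLE_URL, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for script in soup.find_all("script"):
        text = script.string or ""
        match = re.search(r'"sides"\s*:\s*(\[[^\]]*\])', text)
        if match:
            return json.loads(match.group(1))   # e.g. ["YDE", "OLA", "FIN", "UVT"]
    raise RuntimeError("Could not locate puzzle data on the page")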

Algorithm for Puzzle Solving: The core algorithm works by enumerating all possible words from the puzzle's letter grid. It leverages the Trie for fast word generation and checks each word against the game's rules to ensure validity.
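
In code, that enumeration is a depth-first search over the board, leaning on the Trie sketch above to abandon dead-end prefixes early. Again, the names and the length cut-off here are illustrative.

def enumerate_words(sides, trie, max_length=12):
    """Enumerate every dictionary word that can be traced on the board.

    Extends a path one letter at a time, never reusing the side it just
    came from, and records any path the Trie recognises as a complete
    word of three or more letters.
    """
    words = set()

    def extend(prefix, last_side):
        if len(prefix) >= 3 and trie.contains(prefix):
            words.add(prefix)
        if len(prefix) == max_length:
            return
        for side_index, side in enumerate(sides):
            if side_index == last_side:
                continue
            for letter in side:
                candidate = prefix + letter
                if trie.has_prefix(candidate):   # prune hopeless branches early
                    extend(candidate, side_index)

    for side_index, side in enumerate(sides):
        for letter in side:
            extend(letter, side_index)
    return sorted(words)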

Graph Theory with Networkx: For an added layer of analysis, networkx is used to create a graph representation of the puzzle. This helps visualize potential word connections, enhancing the solver's decision-making process.
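
A small sketch of that graph-building step: each word becomes a node, and a directed edge from one word to another means the second can legally follow the first because it starts with the first word's last letter. The helper below is illustrative rather than the repo's exact code.

import networkx as nx

def build_word_graph(words):
    """Directed graph of chainable words (u -> v when v starts with u's last letter)."""
    graph = nx.DiGraph()
    graph.add_nodes_from(words)
    by_first_letter = {}
    for word in words:
        by_first_letter.setdefault(word[0], []).append(word)
    for word in words:
        for nxt in by_first_letter.get(word[-1], []):
            if nxt != word:
                graph.add_edge(word, nxt)
    return graph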

Flask Routes as Interactive Gateways: Various Flask routes provide the pathways for user interactions, such as fetching puzzle data and receiving the solutions. These routes are integral to the app's functionality.
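
As a sketch of those gateways, the routes below show the general shape: one endpoint hands the scraped puzzle to the front end, another accepts a board and returns candidate words. The route names and payloads are illustrative, and fetch_todays_sides, enumerate_words, and the pre-built Trie are borrowed from the sketches earlier in this synopsis.

from flask import Flask, jsonify, request

app = Flask(__name__)

# `trie`, `fetch_todays_sides`, and `enumerate_words` are assumed to come from
# the earlier sketches, with the Trie loaded from the dictionary at startup.

@app.route("/puzzle")
def puzzle():
    """Return today's scraped sides, e.g. {'top': 'YDE', 'right': 'OLA', ...}."""
    sides = fetch_todays_sides()
    return jsonify(dict(zip(("top", "right", "bottom", "left"), sides)))

@app.route("/solve", methods=["POST"])
def solve():
    """Accept four sides from the client and respond with every playable word."""
    data = request.get_json()
    sides = [data["top"], data["right"], data["bottom"], data["left"]]
    words = enumerate_words(sides, trie)
    return jsonify({"word_count": len(words), "words": words})

if __name__ == "__main__":
    app.run(debug=True)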

Bringing It All Together: The final application synergizes these components: the Trie for word processing, BeautifulSoup for data fetching, and the puzzle-solving algorithm. This combination offers users a robust tool to conquer the daily NYT Letter Boxed puzzles.

Graph Theory at a Glance: Mapping the Puzzle’s Connections

Explore the puzzle's structure with a force-directed graph, where each node represents a word, colored by its puzzle-solving significance. This visual maps out the complex web of word connections, revealing the strategic power of graph theory in the project.

For this graph, I used the January 7th Letter Boxed letters (YDE, OLA, FIN, UVT), which produced 629 nodes and 30,845 links.

As node connections increase, colors transition from dark purple to bright yellow

Degree's Effect on Colors and Sizes: The nodes in the graph change color and size based on their ‘degree,’ a term in graph theory for the number of connections a node has. The more connections, the higher the degree, the brighter the color, and the larger the node. This visual cue helps identify the words pivotal to solving the puzzle.

Graph Theory in Action: The connections and layout unveil the intricate web of word possibilities. Clusters emerge, showing groups of words that can be targeted for efficient solving, while isolated nodes might represent less common paths.
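
For anyone who wants to recreate the picture, the recipe is roughly the one below: a force-directed (spring) layout, with degree driving both the colormap and the node size. This is an illustrative plotting snippet, not the project's exact visualization code.

import matplotlib.pyplot as plt
import networkx as nx

def draw_degree_colored(graph):
    """Force-directed layout where node color and size scale with degree."""
    pos = nx.spring_layout(graph, seed=42)          # force-directed placement
    degrees = [graph.degree(node) for node in graph.nodes()]
    nx.draw_networkx_nodes(
        graph, pos,
        node_color=degrees, cmap=plt.cm.viridis,    # dark purple -> bright yellow
        node_size=[20 + 3 * d for d in degrees],
    )
    nx.draw_networkx_edges(graph, pos, alpha=0.05, arrows=False)
    plt.axis("off")
    plt.show()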

Navigating the Labyrinth of Development

The C++ Conundrum and Python Pivot

Initially, armed with a blend of optimism and naivety, I chose C++ as my steed for this quest. With a dictionary as hefty as the one in my project (250k+ words!), C++ seemed like the obvious choice for its speed and power. But as it turns out, C++ was more like a high-maintenance sports car, fantastic when it works but a real pain when it doesn't.

Enter Python, the trusty, versatile Swiss Army knife. Not only was Python more approachable for experimenting with algorithms, but it also opened the door to creating a sleek web app interface. Python made life easier, especially when dealing with the intricacies of web development.

Tackling the Trie: A Data Structuring Odyssey

The quest then led me to a crossroads: how to efficiently sift through a mountain of words? I dabbled with the idea of using an A* search algorithm, a pathfinding maestro. However, A* was like trying to use a GPS in a maze—a bit overkill and not quite the right tool. That's when the Trie data structure caught my eye. Picture a Trie as a tree that's really good at playing Scrabble. It's a way to store words where each node is a letter, making it super quick to find whether a word or prefix exists in the project's massive dictionary.

Implementing the Trie was like learning to play a new instrument — fumbling at first, but then music to my ears. The challenge was not just building it, but turbocharging it for speed. After what felt like a caffeine-fueled hackathon, the Trie was not just puttering along; it was sprinting.

Debugging Debacles and Eureka Moments

In one of my initial C++ attempts, the algorithm churned through over 10 million combinations before VS Code waved a white flag and crashed. In another iteration, the solution it spit out was a 30+ word salad, with each word being a meager 3 letters long.

This puzzle-solving escapade wasn't a sprint; it was a marathon with hurdles. I rebuilt the code from scratch six times. Six! Each iteration was a learning curve, bending and twisting my approach until I struck the right balance between efficiency and effectiveness.

Python Code: The Final Frontier

The final code, written in Python with Flask, became my Excalibur. It included a Trie for rapid word searches, web scraping using BeautifulSoup to fetch the latest NYT puzzle data, and a Flask app to serve as the interface for puzzle-solving enthusiasts. This digital alchemist could now transform a jumble of letters into coherent, rule-abiding solutions.

Testing and Optimization

Testing the Letter Boxed solver was akin to navigating a labyrinth with countless twists and turns. The goal was to ensure that, regardless of the puzzle's complexity, the code could find the shortest and most efficient path to the solution. To achieve this, I ran a series of tests simulating different puzzle scenarios, from the simplest to the most complex.

Types of Tests:

  1. Basic Puzzle Challenge: Starting with the basics, puzzles with just a few letters on each side were tackled. These 'warm-up' tests were crucial for ensuring the algorithm could handle standard puzzles. The results? Success! The solver navigated these with ease, finding optimal solutions within seconds.
  2. Puzzle with Common Letters: This test was like walking through a familiar neighborhood. With common letters, the algorithm had a field day, generating a multitude of valid solutions, demonstrating its versatility in handling puzzles rich in language possibilities.
  3. Rare Letters Conundrum: Here, the solver faced its first real test of strength. Puzzles with rare letters were like navigating through a foggy, unfamiliar path. The algorithm had to work harder to find solutions, if any, showcasing its ability to handle even the most challenging puzzles.
  4. Repeated Letters Puzzle: This scenario was akin to a mirror maze, with repeated letters creating the potential for confusion. The algorithm's task was to handle these duplicates without breaking the rules. The outcome? A resounding success, proving the solver's capability to deal with nuances.
  5. Minimalist Puzzles: Stripping down to the bare minimum, puzzles with fewer letters per face were tested. This was like sprinting through a straightforward path in the maze. The solver efficiently produced valid solutions, proving its effectiveness even in minimal scenarios.
  6. The Large Puzzle Test: This was the ultimate challenge, simulating puzzles with an extensive array of letters. While the solver took longer to compute solutions, it demonstrated its endurance and robustness in tackling large-scale puzzles.
  7. Vowels-Only and Consonants-Only Puzzles: These edge cases were like hidden corners in the maze, testing the solver's ability to handle less typical scenarios. The outcomes varied, reflecting the algorithm's adaptability to diverse puzzle structures.
  8. One-Letter Per Face Puzzle: This test was like finding the quickest route in a small, confined space. The solver adeptly managed these limited scenarios, showcasing its flexibility. (A sketch of how a couple of these checks might look in code follows this list.)
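
To give a flavor of how a couple of these checks can be written, here is a pytest-style sketch. solve_puzzle is a hypothetical stand-in for the solver's entry point (four sides in, an ordered list of words out); the real test suite and function names may differ.

from solver import solve_puzzle   # hypothetical module and entry point

def covers_all_letters(solution, sides):
    return set("".join(sides).upper()) <= set("".join(solution).upper())

def test_basic_puzzle_finds_a_covering_solution():
    sides = ["YDE", "OLA", "FIN", "UVT"]          # the January 7th board
    solution = solve_puzzle(sides)
    assert solution, "expected at least one valid solution"
    assert covers_all_letters(solution, sides)

def test_words_chain_end_to_start():
    sides = ["YDE", "OLA", "FIN", "UVT"]
    solution = solve_puzzle(sides)
    for first, second in zip(solution, solution[1:]):
        assert first[-1].upper() == second[0].upper()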

Optimization and Limitations

Throughout the testing process, the algorithm was fine-tuned for speed and efficiency. One significant adjustment was limiting the user interface to three letters per side, a decision guided by performance metrics. This restriction ensured that the solver remained practical and user-friendly without compromising on its ability to find solutions.

A Maze Well Navigated

In summary, the extensive testing phase was a journey of discovery and refinement. Each test, from the simplest to the most complex, contributed to honing the algorithm, making it a reliable and robust tool for solving a wide range of Letter Boxed puzzles. These tests provided the necessary assurance that no matter the complexity of the puzzle, the solver was ready to take on the challenge.

Letter Configuration's Effect on Performance

Letter side configurations: 1–6 letters per side were tested

Testing: 500 tests were run per configuration. Each test was randomly generated.
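
A harness along these lines is enough to reproduce that kind of measurement. As in the test sketch above, solve_puzzle is a hypothetical stand-in for the solver's entry point, and the timings will obviously depend on your machine and dictionary.

import random
import string
import time

from solver import solve_puzzle   # hypothetical entry point, as above

def random_sides(letters_per_side):
    """Four sides of distinct random letters, mirroring the randomized trials."""
    letters = random.sample(string.ascii_uppercase, 4 * letters_per_side)
    return ["".join(letters[i:i + letters_per_side])
            for i in range(0, len(letters), letters_per_side)]

def benchmark(trials=500):
    for letters_per_side in range(1, 7):          # 1-6 letters per side
        start = time.perf_counter()
        for _ in range(trials):
            solve_puzzle(random_sides(letters_per_side))
        elapsed = time.perf_counter() - start
        print(f"{letters_per_side} letters/side: "
              f"{elapsed / trials * 1000:.1f} ms per puzzle on average")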

Closing Thoughts

As I look back on this project, I can't help but chuckle. It's been a delightful detour from my usual 'save-the-world' endeavors into a world where the biggest crisis is a jumble of letters. I embarked on this journey not to solve life's big problems but to tackle the pressing issue of bored puzzle enthusiasts like myself. It was a passion project in the truest sense, fueled by equal parts enthusiasm and my uncanny ability to get absurdly obsessed with word games. Building this solver was a blend of joy, nerdiness, and a stubborn refusal to just play the game like everyone else. It helped remind me why coding is fun; it's not just about the logic and the problem-solving. It's also about those moments where you think, "Well, this might not change the world, but it sure is going to solve this puzzle!" And for anyone who wanders into my GitHub repository, I hope it inspires a chuckle and perhaps a spark to dive into your own quirky coding adventures.
