Authors: Mary Wahl, Shaheen Gauher, Fidan Boylu Uz, Katherine Zhao

Summary

We used reinforcement learning and CNTK to train a neural network to guess hidden words in a game of Hangman. Our trained model does not rely on a reference dictionary: it takes as input a variable-length, partially-obscured word (consisting of blank spaces and any correctly-guessed letters) and a binary vector indicating which letters have already been guessed. In the Git repository associated with this post, we provide sample code for training the neural network and deploying it in an Azure Web App for gameplay.

Motivation

In the classic children's game of Hangman, a player's objective is to identify a hidden word of which only the number of letters is originally known. In each round, the player guesses a letter of the alphabet: if the letter is present in the word, all instances of the letter are revealed; otherwise, one of the hangman's body parts is drawn on a gibbet. The game ends in a win if the word is entirely revealed by correct guesses, and in a loss if the hangman's body is completely drawn instead. To assist the player, a visible record of all letters guessed so far is typically maintained.

A common Hangman strategy is to compare the partially-revealed word against all of the words in a player’s vocabulary. If a unique match is found, the player simply guesses the remaining letters; if there are multiple matches, the player can guess a letter that distinguishes between the possible words while minimizing the expected number of incorrect guesses. Such a strategy can be implemented algorithmically (without machine learning) using a pre-compiled reference dictionary as the vocabulary. Unfortunately, this approach will likely give suboptimal guesses or fail outright if the hidden word is not in the player’s vocabulary. This issue occurs commonly in practice, since children selecting hidden words often choose proper nouns or commit spelling errors that would not be present in a reference dictionary.
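
As a concrete illustration (and separate from the model we trained), such a dictionary-based guesser might be sketched in Python as follows; the function names and the most-common-letter heuristic here are our own simplifications:

```python
from collections import Counter

def matches(word, pattern, guessed):
    """Check whether a vocabulary word is consistent with the revealed pattern."""
    if len(word) != len(pattern):
        return False
    for w_ch, p_ch in zip(word, pattern):
        if p_ch == '_':
            # A blank cannot hide an already-guessed letter: all of its
            # instances would have been revealed on that guess.
            if w_ch in guessed:
                return False
        elif w_ch != p_ch:
            return False
    return True

def dictionary_guess(pattern, guessed, vocabulary):
    """Guess the unguessed letter that appears in the most remaining candidates."""
    candidates = [w for w in vocabulary if matches(w, pattern, guessed)]
    counts = Counter(ch for w in candidates
                     for ch in set(w) if ch not in guessed)
    return counts.most_common(1)[0][0] if counts else None
```

Picking the letter present in the most remaining candidates is a simple proxy for minimizing expected incorrect guesses; a fuller implementation would weigh the candidates' letter distributions more carefully.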

An alternative strategy robust to such issues is to make guesses based on the frequencies of letters and letter combinations in the target language. For an English-language game, such strategies might include beginning with vowel guesses, guessing the letter U when a Q has already been revealed, recognizing that some letters or n-grams are more common than others, etc. Because of the wide array of learnable patterns and our own a priori uncertainty of which would be most useful in practice, we decided to train a neural network to learn appropriate rules for guessing hidden words without relying on a reference dictionary.

Model Design and Training

Our model has two main inputs: a partially-obscured hidden word, and a binary vector indicating which letters have already been guessed. To accommodate the variable length of hidden words in Hangman, the partially-obscured word (with “blanks” representing any letters in the word that have not yet been guessed) is fed into a Long Short-Term Memory (LSTM) recurrent network, from which only the final output is retained. The LSTM’s output is spliced together with the binary vector indicating previous guesses, and the combined input is fed into a single dense layer with 26 output nodes representing the network’s possible guesses, the letters A-Z. The model’s output “guess” is the letter whose node has the largest value for the given input.
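
A minimal sketch of this architecture, assuming CNTK's Python layers API, might look as follows; the 27-symbol one-hot encoding (26 letters plus a blank) and the hidden dimension are illustrative assumptions rather than our exact hyperparameters:

```python
import cntk as C

NUM_SYMBOLS = 27   # assumption: 26 letters plus one "blank" symbol, one-hot encoded
HIDDEN_DIM = 128   # assumption: LSTM hidden-state size

# Variable-length sequence of one-hot characters for the partially-obscured word
word_seq = C.sequence.input_variable(NUM_SYMBOLS)
# Fixed-length binary vector indicating which letters have already been guessed
guessed = C.input_variable(26)

# Run the LSTM over the word and retain only its final output
lstm = C.layers.Recurrence(C.layers.LSTM(HIDDEN_DIM))(word_seq)
final_output = C.sequence.last(lstm)

# Splice the LSTM summary with the previous-guess vector and score all 26 letters
combined = C.splice(final_output, guessed)
scores = C.layers.Dense(26)(combined)

# At play time, the model's guess is the highest-scoring letter (argmax of scores)
```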

We created a wrapper class called HangmanPlayer to train this model using reinforcement learning. The hidden word and model are provided when an instance of HangmanPlayer is created. In the first round, HangmanPlayer queries the model with an appropriately-sized series of blanks (since no letters have been revealed yet in the hidden word) and an all-zero vector of previous guesses. HangmanPlayer stores the input it provided to the model, as well as the model’s guess and feedback on the guess’s quality. Based on the guess, HangmanPlayer updates the input (to reveal any correctly-guessed letters and indicate which letter has been guessed), then queries the model again… and so forth until the game of Hangman ends. Finally, HangmanPlayer uses the input, output, and feedback it stored to further train the model. Training continues when a new game of Hangman is created with the next hidden word in the training set (drawn from Princeton’s WordNet).
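
In code, the gameplay loop might be sketched as below. This is a hypothetical reconstruction of the behavior just described; the model.guess interface, the ±1 rewards, and the six-wrong-guess limit are our assumptions, not necessarily the repository's exact implementation:

```python
import numpy as np
import string

class HangmanPlayer:
    """Hypothetical sketch of the training wrapper described above."""

    def __init__(self, word, model, max_wrong_guesses=6):
        self.word = word.lower()
        self.model = model
        self.max_wrong = max_wrong_guesses
        self.guessed = np.zeros(26)             # binary vector of previous guesses
        self.revealed = ['_'] * len(self.word)  # blanks for not-yet-guessed letters
        self.history = []                       # (state, guess, reward) records

    def play(self):
        wrong = 0
        while '_' in self.revealed and wrong < self.max_wrong:
            state = (''.join(self.revealed), self.guessed.copy())
            letter = self.model.guess(*state)   # assumed model interface
            idx = string.ascii_lowercase.index(letter)
            if self.guessed[idx] or letter not in self.word:
                wrong += 1
                reward = -1.0                   # assumption: wrong or repeated guess penalized
            else:
                for i, ch in enumerate(self.word):
                    if ch == letter:
                        self.revealed[i] = letter
                reward = 1.0                    # assumption: correct guess rewarded
            self.guessed[idx] = 1
            self.history.append((state, letter, reward))
        return self.history                     # replayed afterwards to train the model
```

After each game, the stored (state, guess, reward) records supply the inputs, outputs, and feedback used to update the network's weights.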

Operationalization

Instructions and sample files in our Git repository demonstrate how to create an Azure Web App that operationalizes the trained CNTK model for gameplay. This Flask web app is heavily based on Ilia Karmanov’s template for deploying CNTK models using Python 3. The human user visiting the web app selects their own hidden word (which they never reveal directly) and provides feedback on the model’s guesses until the game ends in either a win or a loss.
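
To give a flavor of the serving side, a hypothetical minimal Flask endpoint might look like the following; the route, payload shape, model filename, encoding helper, and argument order are our own illustrative assumptions, not the repository's actual code:

```python
import string
import cntk as C
import numpy as np
from flask import Flask, request, jsonify

app = Flask(__name__)
model = C.load_model('hangman_model.cntk')   # assumed filename for the trained model

def encode_state(pattern, guessed):
    """One-hot encode the revealed pattern (blank = index 26) and the guess vector."""
    word = np.zeros((len(pattern), 27), dtype=np.float32)
    for i, ch in enumerate(pattern):
        word[i, 26 if ch == '_' else string.ascii_lowercase.index(ch)] = 1.0
    vec = np.zeros(26, dtype=np.float32)
    for g in guessed:
        vec[string.ascii_lowercase.index(g)] = 1.0
    return word, vec

@app.route('/guess', methods=['POST'])
def next_guess():
    state = request.get_json()
    # Assumed payload: {"pattern": "_an__an", "guessed": ["a", "n"]}
    word, vec = encode_state(state['pattern'], state['guessed'])
    # Assumes the model's first argument is the word sequence, the second the guess vector
    scores = model.eval({model.arguments[0]: [word], model.arguments[1]: [vec]})
    letter = chr(ord('a') + int(np.argmax(scores)))
    return jsonify({'guess': letter})
```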

For more information on this project, including sample code and instructions for reproducing the work, please see the Azure Hangman Git repository.
