
This article is a follow-up to the previous one, but I’ll provide enough context that you can drop in on this one. Be forewarned that this one’s going to be more technical. All code is available in this GitHub repo.
As with the previous article, this one also assumes some computer science knowledge on the reader’s part, in particular how the tree data structure works. Intermediate knowledge of JavaScript (ES6+) is required.
This article has one simple goal:

  1. Implement a Monte Carlo Tree Search (MCTS) algorithm to play a game given its rules.

That’s it. Performance? Maybe next time. This whole thing is going to be instructional and hands-on. I will provide brief explanations of the linked code snippets, and the hope is that you, reader, will follow along and take the time to understand tricky bits in the code.
Let’s begin.

Create the Skeleton Files

In game.js:

```js
/** Class representing the game board. */
class Game {

  /** Generate and return the initial game state. */
  start() {
    // TODO
    return state
  }

  /** Return the current player's legal plays from given state. */
  legalPlays(state) {
    // TODO
    return plays
  }

  /** Advance the given state and return it. */
  nextState(state, play) {
    // TODO
    return newState
  }

  /** Return the winner of the game. */
  winner(state) {
    // TODO
    return winner
  }
}

module.exports = Game
```

In monte-carlo.js:

```js
/** Class representing the Monte Carlo search tree. */
class MonteCarlo {

  /** From given state, repeatedly run MCTS to build statistics. */
  runSearch(state, timeout) {
    // TODO
  }

  /** Get the best move from available statistics. */
  bestPlay(state) {
    // TODO
    // return play
  }
}

module.exports = MonteCarlo
```

In index.js:

```js
const Game = require('./game.js')
const MonteCarlo = require('./monte-carlo.js')

let game = new Game()
let mcts = new MonteCarlo(game)

let state = game.start()
let winner = game.winner(state)

// From initial state, take turns to play game until someone wins
while (winner === null) {
  mcts.runSearch(state, 1)
  let play = mcts.bestPlay(state)
  state = game.nextState(state, play)
  winner = game.winner(state)
}
console.log(winner)
```

Take a moment to look over the code. Build a scaffold of the subparts in your mind, and make sense of it. This is a mental checkpoint; make sure you understand how it all fits together. Otherwise, leave a comment and I’ll see what I can do.

Finding the Right Game

In the context of developing an MCTS-playing agent, we can think of our real program as the code that implements the MCTS framework: the code in monte-carlo.js. The game-specific code in game.js is interchangeable, plug-and-play; it is the interface through which we use our MCTS framework. We’re primarily interested in making the brains behind MCTS, and it should really work with any game we decide to run it on. After all, we’re interested in general game-playing.
To test our MCTS framework, though, we’ll need to pick a specific game and run our framework using that. We want to see our framework spit out decisions that make sense for our chosen game at each step of the way.
How about tic-tac-toe, then? It’s what virtually every introductory game-playing instructional uses, and it has some very desirable properties:

  • Everyone has played it before,
  • Its rules are simple to implement algorithmically,
  • It has perfect information and is deterministic,
  • It is an adversarial 2-player game,
  • The state space is simple enough to mentally model,
  • The state space is complex enough to demonstrate the algorithm’s power.

But tic-tac-toe’s really boring, isn’t it? Plus, there’s some chance that you, reader, already know the optimal strategy for tic-tac-toe, and that takes some of the magic away. There are so many games to choose from. Let’s pick another one: how about connect four? It has all the benefits listed above, except that it’s somewhat less popular than tic-tac-toe, and its state space is harder to model mentally.
[Image: a Connect Four board]
For our implementation, we’ll be using Hasbro’s dimensions and rules. That’s 6 rows by 7 columns, where vertical, horizontal, and diagonal runs of 4 count as wins. Discs are dropped from above, and settle in the first free slot from the bottom (thanks, gravity!).
A quick note before we move on, though. If you’re confident, you can go ahead and implement any game you want by yourself, as long as it adheres to the given game API. Just don’t come crying when you mess up and it doesn’t work. Keep in mind that games like chess and Go are way too complex for even MCTS to (effectively) tackle on its own; Google fixed that in AlphaGo by adding a healthy sprinkling of machine learning to MCTS. If you’re flying your own game, you can skip the next two sections.

Implement Connect Four

At this point, go ahead and rename game.js to game-c4.js, and rename the class to Game_C4. Also, create two new classes: State_C4 in state-c4.js to represent game states, and Play_C4 in play-c4.js to represent state transitions.
Although this isn’t the main chunk of this article, think about how you would build this yourself:

  • How would you represent a game state in State_C4?
  • How would you represent a state transition (i.e. a play, or a move) in Play_C4?
  • How would you take State_C4, Play_C4, and the rules of connect four — and put that in cold, hard code in Game_C4?

Remember, we need connect four in the form demanded by the high-level API methods defined in the game-c4.js skeleton.
Maybe think about it for a while. Or you could just get the completed [play-c4.js](https://github.com/quasimik/medium-mcts/blob/master/play-c4.js), [state-c4.js](https://github.com/quasimik/medium-mcts/blob/master/state-c4.js), and [game-c4.js](https://github.com/quasimik/medium-mcts/blob/master/game-c4.js) that I made.
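To give a sense of the shape before you dive in, here’s a minimal sketch of what Play_C4 and State_C4 could look like. The completed files linked above differ in the details, so treat this as an assumption-laden outline rather than the actual code:

```js
/** A play: dropping a disc that settles at (row, col). */
class Play_C4 {
  constructor(row, col) {
    this.row = row
    this.col = col
  }
  hash() {
    return this.row.toString() + ',' + this.col.toString()
  }
}

/** A state: the play history plus the resulting board and player to move. */
class State_C4 {
  constructor(playHistory, board, player) {
    this.playHistory = playHistory // Play_C4[]: the plays made to reach this state
    this.board = board             // 6x7 array: 0 = empty, 1 = player 1, -1 = player 2
    this.player = player           // player to move: 1 or -1
  }
  isPlayer(player) {
    return player === this.player
  }
  hash() {
    // Hash on play order, not just board position (see the note on State.hash() later)
    return JSON.stringify(this.playHistory)
  }
}
```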


Phew! That was a lot of work, wasn’t it? (It was — at least for me.) The code requires some knowledge of JavaScript, but should be quite readable after some squinting. The most work goes into Game_C4.winner(), which builds runs of points in four separate boards, all in checkBoards. Each check board accounts for one possible winning orientation (horizontal / vertical / left diagonal / right diagonal). The check boards are one cell larger than the actual game board on 3 sides, to provide convenient zero padding for the algorithm.
I’m sure there are better ways to do this. The run-time performance of Game_C4.winner() is not great; specifically, in big-O notation, it’s O(rows × cols) not great. This could be drastically improved by storing checkBoards within the state object, and only updating them with the last played cell (which would also be included in the state object). Maybe you can try this optimization later.
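As a rough illustration of that idea, here’s a hypothetical incremental check. It assumes the state stores the last played cell as state.lastPlay (not part of the skeleton above), inspects only the four line directions through that cell, and omits the full-board draw check; it’s a sketch of the optimization, not code from the repo:

```js
/** Hypothetical O(1)-per-move winner check around the last played cell. */
winnerFromLastPlay(state) {
  if (!state.lastPlay) return null // no plays made yet
  const { row, col } = state.lastPlay
  const player = state.board[row][col]
  const dirs = [[0, 1], [1, 0], [1, 1], [1, -1]] // horizontal, vertical, two diagonals
  for (const [dr, dc] of dirs) {
    let run = 1 // count the last played cell itself
    for (const sign of [1, -1]) { // walk both ways along the direction
      let r = row + sign * dr
      let c = col + sign * dc
      while (r >= 0 && r < 6 && c >= 0 && c < 7 && state.board[r][c] === player) {
        run += 1
        r += sign * dr
        c += sign * dc
      }
    }
    if (run >= 4) return player
  }
  return null // caller still needs a separate draw (full-board) check
}
```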

Play Connect Four

Here, we’re going to test Game_C4 by simulating 1000 games of connect four. Grab this program file: [test-game-c4.js](https://github.com/quasimik/medium-mcts/blob/master/test-game-c4.js).
Run node test-game-c4.js in a terminal. On a relatively modern processor and a recent version of Node.js, the 1000 iterations should run in under a second:

```
$ node test-game-c4.js
[ [ 0, 0, 0, 0, 0, 0, 2 ],
  [ 0, 2, 0, 0, 0, 0, 2 ],
  [ 0, 1, 0, 1, 2, 1, 2 ],
  [ 0, 2, 1, 2, 2, 1, 2 ],
  [ 0, 1, 1, 2, 1, 2, 1 ],
  [ 0, 1, 2, 1, 1, 2, 1 ] ]
0.549
```

Player 2 is internally represented by -1, for convenience of calculations in game-c4.js; the bit of code replacing -1 with 2 is just there to tidy the board output. The program outputs only one board for brevity, but it really does play 999 other games. After the single board output, it prints the fraction of player-1 wins over all 1000 games — expect a value around 55%, because the first player enjoys a first-mover advantage.
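For reference, a test harness in this spirit can be only a few lines; here’s a minimal sketch (not the actual test-game-c4.js) that plays 1000 games with uniformly random plays and reports player 1’s win fraction:

```js
const Game_C4 = require('./game-c4.js')

let game = new Game_C4()
let p1Wins = 0
const N_GAMES = 1000

for (let i = 0; i < N_GAMES; i++) {
  let state = game.start()
  let winner = game.winner(state)
  // Play random legal moves until the game ends
  while (winner === null) {
    let plays = game.legalPlays(state)
    let play = plays[Math.floor(Math.random() * plays.length)]
    state = game.nextState(state, play)
    winner = game.winner(state)
  }
  if (winner === 1) p1Wins += 1
}
console.log(p1Wins / N_GAMES) // expect something around 0.55
```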

Where We Are Now

Alright. We’ve got a working game, with API methods that work with game states represented by nice State objects. Where are we at right now?

Goal: Implement a Monte Carlo Tree Search (MCTS) algorithm to play a game given its rules.

Of course, we’re not there yet. The previous section does one very important thing for us: it provides a tangible goal, forming the backbone for testing our implementation of MCTS. Now, we move on to the main event.

Implement MCTS

Reading the previous article — particularly the MCTS in Detail section — should help with understanding the rest of this article. Here, I’ll follow a similar organization as in MCTS in Detail. I’ll also quote myself in some places to elucidate certain points.

Implement Search Tree Nodes

[Diagram: the MCTS search tree]

To store the statistical information gained from these simulations, MCTS builds its own search tree from scratch…

At this point, invoke your knowledge of trees. MCTS is a tree search, so it’s no surprise that we’ll need tree nodes. We will implement these nodes in their own class MonteCarloNode, in monte-carlo-node.js. Then, we’ll use that to build the search tree in MonteCarlo.

```js
/** Class representing a node in the search tree. */
class MonteCarloNode {
  constructor(parent, play, state, unexpandedPlays) {

    this.play = play
    this.state = state

    // Monte Carlo stuff
    this.n_plays = 0
    this.n_wins = 0

    // Tree stuff
    this.parent = parent
    this.children = new Map()
    for (let play of unexpandedPlays) {
      this.children.set(play.hash(), { play: play, node: null })
    }
  }

  ...
```

Again, make sure this all makes sense:

  • parent is the parent MonteCarloNode,
  • play is the Play made from the parent to get to this node,
  • state is the game State associated with this node,
  • unexpandedPlays is an array of legal Plays that can be made from this node,
  • this.children is built from unexpandedPlays, and is a Map from Plays to child MonteCarloNodes (not quite; see below).

MonteCarloNode.children is a map from play hashes to an object containing (1) the play object and (2) the associated child node. We include the play object here for convenient recovery of play objects from their hashes.
Importantly, Play and State should provide hash() methods. We’ll use these hashes as keys to JavaScript Maps in several places, like in MonteCarloNode.children.
Note that two State objects should be considered different by State.hash() — even if they have the same board state — if each reached that identical board state through different play orders. With this in mind, we can simply make State.hash() return a stringified ordered array of Play objects, representing the moves made to reach that state. If you grabbed my copy of state-c4.js, this is already done.
We’ll now add member methods to MonteCarloNode.

```js
  ...

  /** Get the MonteCarloNode corresponding to the given play. */
  childNode(play) {
    // TODO
    // return MonteCarloNode
  }

  /** Expand the specified child play and return the new child node. */
  expand(play, childState, unexpandedPlays) {
    // TODO
    // return MonteCarloNode
  }

  /** Get all legal plays from this node. */
  allPlays() {
    // TODO
    // return Play[]
  }

  /** Get all unexpanded legal plays from this node. */
  unexpandedPlays() {
    // TODO
    // return Play[]
  }

  /** Whether this node is fully expanded. */
  isFullyExpanded() {
    // TODO
    // return bool
  }

  /** Whether this node is terminal in the game tree,
      NOT INCLUSIVE of termination due to winning. */
  isLeaf() {
    // TODO
    // return bool
  }

  /** Get the UCB1 value for this node. */
  getUCB1(biasParam) {
    // TODO
    // return number
  }
}

module.exports = MonteCarloNode
```

That’s a lot of methods!
In particular, MonteCarloNode.expand() replaces null (unexpanded) entries in MonteCarloNode.children with real nodes. This method will be a part of Phase 2: Expansion in the four-phase MCTS algorithm. The other methods are self-explanatory.
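To make that concrete, here’s roughly how expand() and isFullyExpanded() could be written against the children map described above (a sketch to compare against the completed file, not a verbatim excerpt):

```js
/** Expand the specified child play: replace its null entry with a real node. */
expand(play, childState, unexpandedPlays) {
  if (!this.children.has(play.hash())) throw new Error('No such play!')
  let childNode = new MonteCarloNode(this, play, childState, unexpandedPlays)
  this.children.set(play.hash(), { play: play, node: childNode })
  return childNode
}

/** Whether this node is fully expanded: no child entry is still null. */
isFullyExpanded() {
  for (let child of this.children.values()) {
    if (child.node === null) return false
  }
  return true
}
```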
As usual, you can implement these yourself or you can grab the completed [monte-carlo-node.js](https://github.com/quasimik/medium-mcts/blob/master/monte-carlo-node.js). Even if you do it yourself, I recommend checking against my completed program to make sure everything’s OK before moving on.
If you just grabbed my completed program, have a quick glance over the implementation, just as another mental checkpoint to re-center your overall understanding. These are short methods. You’ll get through them in no time.
In particular, MonteCarloNode.getUCB1() is an almost direct translation of the UCB1 formula into code:

$$ \frac{w_i}{n_i} + c \sqrt{\frac{\ln N_i}{n_i}} $$

Here, $w_i$ is the win count of this node, $n_i$ is the simulation count of this node, $N_i$ is the simulation count of the parent node, and $c$ is the exploration parameter. This whole equation is explained in detail in the previous article. Go take another look; it’s not that hard to understand and it’s worth it.
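As a sketch, a near-direct translation could look like this (note that the bias parameter sits inside the square root, so the UCB1ExploreParam passed in later plays the role of $c^2$):

```js
/** UCB1 for this node: exploitation term plus exploration term. */
getUCB1(biasParam) {
  return (this.n_wins / this.n_plays) +
    Math.sqrt(biasParam * Math.log(this.parent.n_plays) / this.n_plays)
}
```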

Update the MonteCarlo Class

The current version is monte-carlo-v1.js, a mere skeleton. The first update to the class is to include MonteCarloNode and to add a constructor.

```js
const MonteCarloNode = require('./monte-carlo-node.js')

/** Class representing the Monte Carlo search tree. */
class MonteCarlo {

  constructor(game, UCB1ExploreParam = 2) {
    this.game = game
    this.UCB1ExploreParam = UCB1ExploreParam
    this.nodes = new Map() // map: State.hash() => MonteCarloNode
  }

  ...
```

MonteCarlo.nodes allows us to get any node given its state; this will be useful. As for the other member variables, it just makes sense for them to be associated with MonteCarlo.

```js
  ...

  /** If given state does not exist, create dangling node. */
  makeNode(state) {
    if (!this.nodes.has(state.hash())) {
      let unexpandedPlays = this.game.legalPlays(state).slice()
      let node = new MonteCarloNode(null, null, state, unexpandedPlays)
      this.nodes.set(state.hash(), node)
    }
  }

  ...
```

This lets us create the root node. It also lets us create arbitrary nodes, which could be useful. Maybe.

```js
  ...

  /** From given state, repeatedly run MCTS to build statistics. */
  runSearch(state, timeout = 3) {
    this.makeNode(state)

    let end = Date.now() + timeout * 1000
    while (Date.now() < end) {
      let node = this.select(state)
      let winner = this.game.winner(node.state)

      if (node.isLeaf() === false && winner === null) {
        node = this.expand(node)
        winner = this.simulate(node)
      }
      this.backpropagate(node, winner)
    }
  }

  ...
```

Finally, we arrive at the heart of the algorithm. Quoting verbatim from the first article, here’s what’s happening:

  1. In phase (1), existing information is used to repeatedly choose successive child nodes down to the end of the search tree.
  2. Next, in phase (2), the search tree is expanded by adding a node.
  3. Then, in phase (3), a simulation is run to the end to determine the winner.
  4. Finally, in phase (4), all the nodes in the selected path are updated with new information gained from the simulated game.

This 4-phase algorithm is run repeatedly until enough information is gathered to produce a good move.

```js
  ...

  /** Get the best move from available statistics. */
  bestPlay(state) {
    // TODO
    // return play
  }

  /** Phase 1, Selection: Select until not fully expanded OR leaf */
  select(state) {
    // TODO
    // return node
  }

  /** Phase 2, Expansion: Expand a random unexpanded child node */
  expand(node) {
    // TODO
    // return childNode
  }

  /** Phase 3, Simulation: Play game to terminal state, return winner */
  simulate(node) {
    // TODO
    // return winner
  }

  /** Phase 4, Backpropagation: Update ancestor statistics */
  backpropagate(node, winner) {
    // TODO
  }
}
```

Here are stub methods that we’ll fill in shortly. We’re now at version monte-carlo-v2.js.

Implement MCTS Phase 1: Selection

[Diagram: Phase 1, Selection]

Starting from the root node of the search tree, we go down the tree by repeatedly (1) selecting a legal move and (2) advancing to the corresponding child node. If one, several, or all of the legal moves in a node do not have a corresponding node in the search tree, we stop selection.

```js
  ...

  /** Phase 1, Selection: Select until not fully expanded OR leaf */
  select(state) {
    let node = this.nodes.get(state.hash())
    while (node.isFullyExpanded() && !node.isLeaf()) {
      let plays = node.allPlays()
      let bestPlay
      let bestUCB1 = -Infinity
      for (let play of plays) {
        let childUCB1 = node.childNode(play).getUCB1(this.UCB1ExploreParam)
        if (childUCB1 > bestUCB1) {
          bestPlay = play
          bestUCB1 = childUCB1
        }
      }
      node = node.childNode(bestPlay)
    }
    return node
  }

  ...
```

This function uses the available statistics by querying the UCB1 value of each child node. It selects the child with the highest UCB1 value, then repeats the process for that child’s children, and so on.
When the loop terminates, the selected node is guaranteed to have at least one unexpanded child, unless that node is a leaf node. This case is handled by the calling function MonteCarlo.runSearch(), so we don’t have to worry about it here.

Implement MCTS Phase 2: Expansion

[Diagram: Phase 2, Expansion; the newly expanded node is bolded]

After selection stops, there will be at least one unexpanded move in the search tree. Now, we randomly choose one of them and we then create the child node corresponding to that move (bolded in the diagram). We add this node as a child to the last selected node in the selection phase, expanding the search tree. The statistics information in the node is initialized with 0 wins out of 0 simulations.

```js
  ...

  /** Phase 2, Expansion: Expand a random unexpanded child node */
  expand(node) {
    let plays = node.unexpandedPlays()
    let index = Math.floor(Math.random() * plays.length)
    let play = plays[index]

    let childState = this.game.nextState(node.state, play)
    let childUnexpandedPlays = this.game.legalPlays(childState)
    let childNode = node.expand(play, childState, childUnexpandedPlays)
    this.nodes.set(childState.hash(), childNode)

    return childNode
  }

  ...
```

Take another look at MonteCarlo.runSearch(). Expansion is done within a check if (node.isLeaf() === false && winner === null). Obviously, it’s impossible to expand if there are no children possible in the game tree — for example, when the board is full. We also don’t want to expand if there’s a winner — this is as obvious as saying you should stop playing the game when your opponent wins.
So what happens if the node is a leaf? We just backpropagate with whoever won at that node — be it player 1, player -1, or even 0 (a draw). Similarly, if there’s a non-null winner at any node, we just skip expansion and simulation, and immediately backpropagate with that winner (1, -1, or 0).
What does it mean to backpropagate with a 0 winner? Does it really work okay with MCTS? More on this later. Spoiler: it works okay.

Implement MCTS Phase 3: Simulation

[Diagram: Phase 3, Simulation]

Continuing from the newly-created node in the expansion phase, moves are selected randomly and the game state is repeatedly advanced. This repeats until the game is finished and a winner emerges. No new nodes are created in this phase.

```js
  ...

  /** Phase 3, Simulation: Play game to terminal state, return winner */
  simulate(node) {
    let state = node.state
    let winner = this.game.winner(state)

    while (winner === null) {
      let plays = this.game.legalPlays(state)
      let play = plays[Math.floor(Math.random() * plays.length)]
      state = this.game.nextState(state, play)
      winner = this.game.winner(state)
    }
    return winner
  }

  ...
```

Because nothing is saved here, this mostly involves Game and not much of MonteCarloNode.
Looking at MonteCarlo.runSearch() again, simulation is done within the same check if (node.isLeaf() === false && winner === null) as expansion. The reason: if either condition fails (the node is a leaf, or it already has a winner), then the final winner is simply whoever won at the current node, and we use that winner for backpropagation.

Implement MCTS Phase 4: Backpropagation

[Diagram: Phase 4, Backpropagation; the visited nodes are bolded]

After the simulation phase, the statistics on all the visited nodes (bolded in the diagram) are updated. Each visited node has its simulation count incremented. Depending on which player wins, its win count may also be incremented. In the diagram, blue wins, so each visited red node’s win count is incremented. This flip is due to the fact that each node’s statistics are used for its parent node’s choice, not its own.

```js
  ...

  /** Phase 4, Backpropagation: Update ancestor statistics */
  backpropagate(node, winner) {
    while (node !== null) {
      node.n_plays += 1
      // Parent's choice
      if (node.state.isPlayer(-winner)) {
        node.n_wins += 1
      }
      node = node.parent
    }
  }
}

module.exports = MonteCarlo
```

This is the part that affects the selection phase in the next iteration of the search. Note that this assumes a two-player game, allowing the flip in node.state.isPlayer(-winner). You can probably generalize this function for an n-player game by doing node.parent.state.isPlayer(winner) or something.
Think for a while about what it means to backpropagate with a 0 winner. This corresponds to a drawn game: every visited node’s n_plays statistic gets incremented, while neither player 1’s nor player -1’s n_wins statistic does. The update behaves like a lost game for both players, pushing selection towards other plays. In the end, games that end in a draw are as likely to be under-explored as games that end in a loss. This doesn’t break anything, but it results in suboptimal play when forcing a draw is preferable to losing. A quick fix would be to increment both players’ n_wins by half on draws.
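That quick fix is a two-line change in backpropagate(); a sketch, assuming the game reports a draw as winner 0:

```js
/** Phase 4 with the draw tweak: a draw counts as half a win for everyone. */
backpropagate(node, winner) {
  while (node !== null) {
    node.n_plays += 1
    if (winner === 0) {
      node.n_wins += 0.5 // draw: half credit for both players
    } else if (node.state.isPlayer(-winner)) {
      node.n_wins += 1 // parent's choice won
    }
    node = node.parent
  }
}
```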

Implement Best Play Selection

[Diagram: best play selection]

The beauty of MCTS (UCT) is that, due to its asymmetrical nature, tree selection and growth gradually converge towards better moves. At the end, you get the child node with the highest number of simulations, and that’s your best move according to MCTS.

```js
  ...

  /** Get the best move from available statistics. */
  bestPlay(state) {
    this.makeNode(state)

    // If not all children are expanded, not enough information
    if (this.nodes.get(state.hash()).isFullyExpanded() === false)
      throw new Error("Not enough information!")

    let node = this.nodes.get(state.hash())
    let allPlays = node.allPlays()
    let bestPlay
    let max = -Infinity

    for (let play of allPlays) {
      let childNode = node.childNode(play)
      if (childNode.n_plays > max) {
        bestPlay = play
        max = childNode.n_plays
      }
    }
    return bestPlay
  }

  ...
```

Note that there are different ways to choose the “best” play. The one here is called robust child in the literature, choosing the highest n_plays. Another is max child, which chooses the highest winrate n_wins/n_plays.
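For comparison, switching the loop in bestPlay() from robust child to max child only changes the quantity being maximized; a sketch:

```js
// Max child: choose the play whose child node has the highest winrate.
let bestPlay
let bestRatio = -Infinity
for (let play of node.allPlays()) {
  let childNode = node.childNode(play)
  let ratio = childNode.n_wins / childNode.n_plays
  if (ratio > bestRatio) {
    bestPlay = play
    bestRatio = ratio
  }
}
return bestPlay
```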

Implement Statistics Introspection and Display

Right now, you should be able to run node index.js on the current version [index-v1.js](https://github.com/quasimik/medium-mcts/blob/master/index-v1.js); however, you won’t see very much. To see what’s happening inside, we need to do a bit more.
In monte-carlo.js:

```js
  ...

  // Utility Methods

  /** Return MCTS statistics for this node and children nodes */
  getStats(state) {
    let node = this.nodes.get(state.hash())
    let stats = { n_plays: node.n_plays,
                  n_wins: node.n_wins,
                  children: [] }
    for (let child of node.children.values()) {
      if (child.node === null)
        stats.children.push({ play: child.play,
                              n_plays: null,
                              n_wins: null })
      else
        stats.children.push({ play: child.play,
                              n_plays: child.node.n_plays,
                              n_wins: child.node.n_wins })
    }
    return stats
  }
}

module.exports = MonteCarlo
```

This lets us query the statistics of a node and its direct children. With this done, we have completed MonteCarlo. You can run with what you have, or optionally grab my completed [monte-carlo.js](https://github.com/quasimik/medium-mcts/blob/master/monte-carlo.js). Note that in my completed version, there’s an additional parameter on bestPlay() to control the best-play policy used.
Now, incorporate MonteCarlo.getStats() into index.js yourself, or instead grab my complete version of [index.js](https://github.com/quasimik/medium-mcts/blob/master/index.js).
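If you wire it up yourself, the main loop ends up looking something like this sketch. The state.player and state.board property names are assumptions about State_C4’s layout, and the actual index.js also remaps -1 to 2 when printing boards, which this sketch skips:

```js
// Sketch of the instrumented game loop in index.js
while (winner === null) {
  console.log('player:', state.player === -1 ? 2 : state.player)
  console.log(state.board)

  mcts.runSearch(state, 1)
  console.log(mcts.getStats(state))

  let play = mcts.bestPlay(state)
  console.log('chosen play:', play)

  state = game.nextState(state, play)
  winner = game.winner(state)
}
console.log('winner:', winner === -1 ? 2 : winner)
console.log(state.board)
```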
Then, run node index.js:

```
$ node index.js
player: 1
[ [ 0, 0, 0, 0, 0, 0, 0 ],
  [ 0, 0, 0, 0, 0, 0, 0 ],
  [ 0, 0, 0, 0, 0, 0, 0 ],
  [ 0, 0, 0, 0, 0, 0, 0 ],
  [ 0, 0, 0, 0, 0, 0, 0 ],
  [ 0, 0, 0, 0, 0, 0, 0 ] ]
{ n_plays: 3996,
  n_wins: 1664,
  children:
   [ { play: Play_C4 { row: 5, col: 0 }, n_plays: 191, n_wins: 85 },
     { play: Play_C4 { row: 5, col: 1 }, n_plays: 513, n_wins: 287 },
     { play: Play_C4 { row: 5, col: 2 }, n_plays: 563, n_wins: 320 },
     { play: Play_C4 { row: 5, col: 3 }, n_plays: 1705, n_wins: 1094 },
     { play: Play_C4 { row: 5, col: 4 }, n_plays: 494, n_wins: 275 },
     { play: Play_C4 { row: 5, col: 5 }, n_plays: 211, n_wins: 97 },
     { play: Play_C4 { row: 5, col: 6 }, n_plays: 319, n_wins: 163 } ] }
chosen play: Play_C4 { row: 5, col: 3 }
player: 2
[ [ 0, 0, 0, 0, 0, 0, 0 ],
  [ 0, 0, 0, 0, 0, 0, 0 ],
  [ 0, 0, 0, 0, 0, 0, 0 ],
  [ 0, 0, 0, 0, 0, 0, 0 ],
  [ 0, 0, 0, 0, 0, 0, 0 ],
  [ 0, 0, 0, 1, 0, 0, 0 ] ]
{ n_plays: 6682,
  n_wins: 4239,
  children:
   [ { play: Play_C4 { row: 5, col: 0 }, n_plays: 577, n_wins: 185 },
     { play: Play_C4 { row: 5, col: 1 }, n_plays: 799, n_wins: 277 },
     { play: Play_C4 { row: 5, col: 2 }, n_plays: 1303, n_wins: 495 },
     { play: Play_C4 { row: 4, col: 3 }, n_plays: 1508, n_wins: 584 },
     { play: Play_C4 { row: 5, col: 4 }, n_plays: 1110, n_wins: 410 },
     { play: Play_C4 { row: 5, col: 5 }, n_plays: 770, n_wins: 265 },
     { play: Play_C4 { row: 5, col: 6 }, n_plays: 614, n_wins: 200 } ] }
chosen play: Play_C4 { row: 4, col: 3 }
...
winner: 2
[ [ 0, 0, 2, 2, 2, 0, 0 ],
  [ 1, 0, 2, 2, 1, 0, 1 ],
  [ 2, 0, 2, 1, 1, 2, 2 ],
  [ 1, 0, 1, 1, 2, 1, 1 ],
  [ 2, 0, 2, 2, 1, 2, 1 ],
  [ 1, 0, 2, 1, 1, 2, 1 ] ]
```
Beautiful.

Parting Words

It’s been a wonderful journey, and I hope you’ve enjoyed it. The next post will be about optimization, and the current state of the art in MCTS.
I’ll see you then.
