
NNGo9x9 : EVOLVING NEURAL NETWORK TO PLAY GO

USING SELECTIVE MINIMAX SEARCH

A Project

Presented to the faculty of the Department of Computer Science

California State University, Sacramento

Submitted in partial satisfaction of

the requirements for the degree of

MASTER OF SCIENCE

in

Computer Science

by

Sonal Sharma

FALL

2013


© 2013

Sonal Sharma

ALL RIGHTS RESERVED


NNGo9x9 : EVOLVING NEURAL NETWORK TO PLAY GO

USING SELECTIVE MINIMAX SEARCH

A Project

by

Sonal Sharma

Approved by:

__________________________________, Committee Chair

Dr. V. Scott Gordon

__________________________________, Second Reader

Dr. Cui Zhang

____________________________

Date


Student: Sonal Sharma

I certify that this student has met the requirements for format contained in the University

format manual, and that this project is suitable for shelving in the Library and credit is to

be awarded for the project.

__________________________, Graduate Coordinator ___________________

Dr. Nikrouz Faroughi Date

Department of Computer Science


Abstract

of

NNGo9x9 : EVOLVING NEURAL NETWORK TO PLAY GO

USING SELECTIVE MINIMAX SEARCH

by

Sonal Sharma

Games are an interesting field of study in Artificial Intelligence. Minimax search is a game-theoretic decision rule applied to game playing, and it has proved successful for many games, but not all. One of them is the game of Go. Go is an ancient Chinese game usually played on a 19x19 board. It is a two-player game in which the objective is to capture the opponent's stones and control as much territory as possible in order to win the game. Go poses its own set of challenges that make it difficult for Minimax to be efficient: Computer Go has a huge branching factor, and it is difficult to come up with an optimal evaluation function for the leaf nodes.

The objective of this project is to implement a system that combines Selective Minimax search with a Neural Network to evolve a Computer Go player. The limitations of the Minimax approach are addressed by using the concepts of a Selective Minimax tree and Alpha-Beta pruning. Selective Minimax provides a focused search space and faster evaluation of the computer's moves: the idea is to search and evaluate only those legal moves, or empty intersections, that look promising. The Neural Network is trained using supervised learning, with Resilient Backpropagation as the learning method; Resilient Backpropagation proves to be faster and more efficient than standard Backpropagation for the game of Go. The trained Neural Network, combined with an evaluation function, is used to evaluate the leaf nodes of the Selective Minimax tree. For Minimax alone to suggest an optimal move, it would need to search at greater depths to look ahead, which would be very costly. For the Neural Network alone to suggest an optimal move, it would need a large amount of data and game records to recognize patterns on the board, which might not be feasible.

Using the hybrid method, the Neural Network helps Minimax look ahead, and the static evaluation function helps the Neural Network suggest a sensible move when it encounters unseen patterns. This hybrid method provides an advantage over plain Minimax search: the learning and experience of the Neural Network allow look-ahead without actually searching at greater depths and help avoid falling into local maxima. NNGo9x9 uses this hybrid of a Neural Network and Selective Minimax and proves to be an effective player that minimizes the drawbacks of each method.

_______________________, Committee Chair

Dr. V. Scott Gordon

_______________________

Date


ACKNOWLEDGEMENTS

I am very grateful for the opportunity to work on this Master's project; it has been a very enriching experience for me. I would like to express my gratitude to Dr. Scott Gordon for his guidance, support and motivation throughout the project. Artificial Intelligence was the first course I took in my Master's program, and I would like to thank Dr. Scott Gordon for sharing his knowledge, sparking my interest in this field and encouraging me.

I would also like to thank Dr. Cui Zhang for being my second reader and taking the time to review the work. I am thankful to Dr. Cui Zhang for always having faith in me and inspiring me to work hard.

This would not have been possible without the love and support of my family. I want to thank my husband, Siddharth Gadkari, for always being with me on this journey. I would like to thank my parents, Rajendra Kumar Sharma and Archana Sharma, for my upbringing; I credit them for all the achievements in my life. I also want to thank my in-laws, Ravindra Gadkari and Gauri Gadkari, for their love and encouragement to pursue my dreams.


TABLE OF CONTENTS

Page

Acknowledgements .................................................................................................... vii

List of Tables ................................................................................................................ x

List of Figures ............................................................................................................. xi

Chapter

1. INTRODUCTION .......................................................................................... 1

2. BACKGROUND ............................................................................................. 4

   2.1 Go Game ................................................................................................. 4

   2.2 Minimax with Alpha Beta Pruning and Selective Minimax .................... 8

   2.3 Supervised Learning .............................................................................. 10

   2.4 Artificial Neural Networks .................................................................... 11

   2.5 Encog Framework for Neural Networks ............................................... 15

3. PREVIOUS RELATED WORK .................................................................... 17

4. TRAINING DATA ........................................................................................ 19

5. ALGORITHM ................................................................................................ 21

   5.1 Graphical User Interface of the NNGo9x9 ............................................ 21

   5.2 Description of Methods ......................................................................... 22

   5.3 Data Collected ....................................................................................... 26

   5.4 Flow of Control ..................................................................................... 27

6. SYSTEM SPECIFICATIONS ....................................................................... 29

7. RESULTS ...................................................................................................... 31

   7.1 Comparison of Minimax and NNGo9x9 ............................................... 31

   7.2 Performance of Neural Network ........................................................... 35

8. CONCLUSION .............................................................................................. 37

9. FUTURE WORK ........................................................................................... 38

Appendix A Perl Script for 9x9 board in SGF Type I ........................................ 40

Appendix B Perl Script for mapping 19x19 to 9x9 board in SGF Type II ......... 42

Appendix C Java Code for Neural Network using Encog Framework .............. 46

Appendix D Java code for Encoding Training Data .......................................... 48

Appendix E Java Code for NNGo9x9 ................................................................ 50

References .......................................................................................................... 98


LIST OF TABLES

Tables Page

1. Performance Comparison of Backpropagation and Resilient Backpropagation .......... 36


LIST OF FIGURES

Figures Page

1. 9x9 Go Board explaining Liberties, Group and Eye ....................................... 5

2. Go Board explaining Capture and Self-Kill ..................................................... 6

3. Go Board explaining Ko ................................................................................... 7

4. Minimax Search Tree ...................................................................................... 10

5. Perceptron and Functionality .......................................................................... 12

6. Feedforward Neural Network ......................................................................... 13

7. Encoding Training Data .................................................................................. 20

8. Graphical User Interface of NNGo9x9 ........................................................... 21

9. Diagrammatic Representation of NNGo9x9 ................................................... 27

10. Flowchart of the Algorithm ........................................................................... 28

11. Go Board showing Minimax moves .............................................................. 32

12. Go Board showing moves from NNGo9x9 ................................................... 33

13. Comparison of Minimax and NNGo9x9 ....................................................... 34

14. Weight Distribution Histogram of Neural Network ...................................... 36


Chapter 1

INTRODUCTION

Go is an ancient and popular Chinese two-player board game. The Go board is a grid, and there are stones of two colors. The players play alternately by placing their stones at the empty intersections of the grid lines. The aim is to occupy more territory than the opponent and to capture the opponent's stones as prisoners, following the rules of Go. The aim of this project is to present an approach to Computer Go that combines Artificial Intelligence concepts into a system that plays Go.

The Wikipedia entry on Computer Go describes the field as follows: “Computer Go is the field of Artificial Intelligence which is dedicated to creating computer programs that can play Go” [1]. While machines have conquered many games, such as Chess and Checkers, there is a lot of potential left in the field of Computer Go: unlike Chess and Checkers programs, Computer Go programs have not yet been very challenging opponents for humans. A combination of problems is faced when programming Computer Go. The regular Go board size is 19x19, which corresponds to 361 intersections on the board. The search space of the game tree and the branching factor are huge, which makes it difficult to perform a deep search; the search process is costly and increases the time and space complexity. Even though the rules of Go are simple, it is challenging to formulate them. The number of allowed moves is large, and it is difficult to come up with an optimal evaluation function that would suggest the best move. There are many dependencies to consider before selecting the best move: the worth of a move depends on factors such as the number of opponent's stones captured, the number of the player's stones that get captured, the territory gained or lost by the opponent, the territory gained or lost by the player, the number of connecting groups, and the liberties lost or gained by the player and the opponent. The extent to which each of these factors determines the worth of a move is variable and changes during different stages of the game. Another factor is the variable number of stones on the board: in chess, the number of pieces on the board decreases as the game progresses, whereas in Go the number of stones might increase or decrease, adding more complexity.

Many patterns of stones are encountered in the game of Go. Since the pieces do not move around, a human player has the advantage of being able to speculate by observing the patterns of stones. Humans are good at recognizing these patterns, whereas it is difficult for a computer program to do the same. The program can be trained using supervised learning, but pattern matching then depends heavily on data: the program might recognize a pattern it saw in one section of a game but fail to relate it to the same pattern appearing in another section of the game, or together with other stones on the board.

In this project, a Computer Go program, NNGo9x9, is implemented. The limitations of the Minimax approach are addressed by using the concepts of a Selective Minimax tree and Neural Networks. The Selective Minimax tree helps reduce the search space and limit it to potential moves. The Neural Network is trained on data and game records from Go databases; since it is challenging to come up with an optimal evaluation function, the Neural Network is used, via supervised learning, to evaluate board states. Using Selective Minimax search, the search tree is generated to a certain depth. After the possible and potential moves that can be played by the computer are found and the search has reached that depth, the leaf nodes, which are game board states, are passed as input to the trained Neural Network. The trained Neural Network takes the board as input and estimates the winner of the game. The output of the Neural Network is combined with a static evaluation function, and this combined measure is used as the evaluation function for the Selective Minimax tree: it determines the value of the leaf nodes and suggests a move for the computer to play.


Chapter 2

BACKGROUND

2.1 Go Game

Go is an ancient Chinese game. It is a two-player game, usually played on a 19x19 grid board; other board sizes include 13x13, 9x9 and 7x7. The game requires a Go board along with stones of two colors, black and white. The two players choose their color and play alternately, with the player holding the black stones starting the game. The game ends either when there are no moves left to be played or when both players choose to pass their turn. A player places only one stone per turn, at any empty intersection of the grid. The game is scored using either the Chinese or the Japanese scoring system. The Chinese system counts the number of the player's stones on the board plus the empty area, known as territory, surrounded by the player's stones. The Japanese system counts the territory surrounded by the player's stones plus the prisoners captured. The scoring method used in this project is the Japanese scoring method.

Players can form chains of stones by placing stones of the same color next to each other; the liberties of the whole group are then counted. If the opponent surrounds the whole group and there are no remaining liberties, that group dies and is captured by the opponent, and the empty area, also known as territory, now belongs to the opponent. It is important to keep the liberties of a group greater than one to keep it alive. An eye is a pattern in Go in which there is an empty space inside a group that the opponent cannot play; an eye is formed when stones of the same color surround an empty intersection. A group can also have two eyes, in which case the group will never die, because it can never be captured. Some tips for the game are to connect groups of the same color to increase their liberties, so that it is difficult for the opponent to capture them, to target the opponent's smaller groups, which are easier to capture, and to create eyes in a group to keep it alive.

Figure 1 – 9x9 Go Board explaining Liberties, Group and Eye

In Figure 1, the number of liberties of the stones at A1 and I1 is two, at C1 and G1 it is three, and at C3 and G3 it is four. The number of liberties of a group is calculated by taking into account the liberties of all the stones in the group. If the opponent captures the group, all the stones of the group become prisoners and the territory belongs to the opponent. In Figure 1, the group starting at C5 has five black stones, and the number of liberties of this group is twelve. The number of liberties of the group at D8 is eight.
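To illustrate how the liberties of a whole group can be counted, the sketch below walks over connected stones of one color and collects their distinct empty neighbours. It is only an illustration under the board encoding used later in this report (0 empty, 1 black, 2 white); it is not the NNGo9x9 implementation itself.

import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Set;

// Counts the liberties of the group containing 'start' on a 9x9 board stored in
// row-major order. Assumes 'start' holds a stone. Illustrative sketch only.
final class GroupLiberties {
    static int count(int[] board, int start) {
        int color = board[start];
        Set<Integer> group = new HashSet<>();
        Set<Integer> liberties = new HashSet<>();
        ArrayDeque<Integer> queue = new ArrayDeque<>();
        queue.add(start);
        group.add(start);
        while (!queue.isEmpty()) {
            int p = queue.poll();
            int row = p / 9, col = p % 9;
            int[] neighbours = {p - 9, p + 9, p - 1, p + 1};
            boolean[] valid = {row > 0, row < 8, col > 0, col < 8};
            for (int k = 0; k < 4; k++) {
                if (!valid[k]) continue;
                int n = neighbours[k];
                if (board[n] == 0) liberties.add(n);           // empty point: a liberty
                else if (board[n] == color && group.add(n)) {  // same colour: extend the group
                    queue.add(n);
                }
            }
        }
        return liberties.size();
    }
}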


Figure 2 – Go Board explaining Capture and Self-kill

In Figure 2(a), the group starting at C3 consists of two white stones that are about to be captured: if a black stone is placed at C5, the white group loses all its liberties and is captured. In the group at C7, a black group is surrounded by a white group, but the black group still survives because it has two eyes, which make it a living group that can never be captured; a group with two eyes will never run out of liberties and hence will never die. Looking at the group at G1, the black group is surrounded by a white group and has only one eye. The black group can be captured in this case if it is surrounded from all sides: if a white stone is placed at I1, the liberties of the black group become zero and it is captured. A suicide move is one where a player plays at an eye in the opponent's group while the group to which the eye belongs has more than one liberty. Figure 2(b) explains the concept of self-kill: if black plays at A4, it would kill its own group, and therefore this move is not allowed.

Figure 3 – Go Board explaining Ko

Figure 3 explains the Ko condition, which occurs in Go. Ko is a situation where the game returns to a previous board state, and a player is not allowed to play a move if it results in Ko. The first board in Figure 3 shows a board state; the second shows that black played at the grey highlighted point and captured the white stone. In the third board state, white plays at the white highlighted point and captures the black stone. This brings the game back to its previous state and hence results in Ko, so the white player must be prevented from playing the move shown in the third board state. For more information about the game of Go and its strategies, see KGS [2] and the Sensei Library [3].


2.2 Minimax with Alpha Beta Pruning and Selective Minimax

The standard algorithm for two-player perfect-information games such as chess and checkers is Minimax search with heuristic static evaluation. The Wikipedia entry on Minimax describes the method as: “Minimax is a decision rule used in decision theory, game theory, statistics and philosophy for minimizing the possible loss for a worst case scenario” [4].

Alpha-Beta pruning is an adversarial search technique that is combined with Minimax search to prune branches of the tree that will not be chosen. It helps make the Minimax search faster and more efficient.

The Minimax algorithm is a recursive algorithm used to generate the game tree up to a certain depth or to the end of the game. The root of the Minimax tree represents the move to be played by the computer. The levels, or tiers, of the Minimax tree are assigned alternately to the maximizing and the minimizing player; in general, the computer is the maximizing player and the opponent is the minimizing player. When the algorithm reaches its base case (the depth limit) at the leaf nodes, the value or worth of each leaf node is evaluated. The leaf nodes are board states, and their values are propagated to the root of the tree. The maximizing player takes the maximum-valued move offered by the minimizing player, while the minimizing player takes the minimum-valued move offered by the maximizing player. This process continues for every level of the tree until the root is reached, and the move whose value was assigned to the root is the move suggested by Minimax for the computer. It would be very expensive to generate the tree to the end of the game: the branching factor of the tree is the average number of children per node, and the number of nodes searched grows with depth as a power of the branching factor. Therefore, the tree is not searched to the end of the game; instead, the non-final game states are evaluated using an evaluation function.

In the case of the game of Go, the branching factor can be as large as 361, which is huge, so the search would be expensive. Hence the concept of Selective Minimax is used, which makes the search efficient and focused: only the moves that are considered potential moves are selected to be part of the search process. In 9x9 Go, the moves played are generally concentrated around the already placed stones, so the empty intersections in the vicinity of the existing stones on the board are selected for the search process. It is assumed that a player will want to place a stone near already placed stones, for example to increase liberties, expand a group, occupy territory, capture a prisoner, or save a stone. A focused search space is thus generated for the Minimax algorithm. This version of Minimax is known as Selective Minimax search. The search becomes even faster when Selective Minimax is combined with Alpha-Beta pruning.
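As a compact sketch of this idea (not the NNGo9x9 code itself), the following depth-limited Minimax searches only a focused list of candidate moves and applies Alpha-Beta pruning. The Board interface and its methods legalFocusedMoves(), play() and evaluate() are hypothetical placeholders for the structures described later in this report.

import java.util.List;

// Sketch of Selective Minimax with Alpha-Beta pruning over a focused move list.
interface Board {
    List<Integer> legalFocusedMoves(int toMove); // empty points near existing stones
    Board play(int move, int toMove);            // board resulting from the move
    double evaluate(int computer);               // leaf value from the computer's point of view
}

final class SelectiveMinimax {
    // Returns the value of the best line for the computer, searching 'depth' plies.
    static double search(Board b, int depth, double alpha, double beta,
                         int toMove, int computer) {
        List<Integer> moves = b.legalFocusedMoves(toMove);
        if (depth == 0 || moves.isEmpty()) {
            return b.evaluate(computer);         // evaluate only the selected leaf nodes
        }
        int next = (toMove == 1) ? 2 : 1;        // 1 = black, 2 = white
        boolean maximizing = (toMove == computer);
        double best = maximizing ? Double.NEGATIVE_INFINITY : Double.POSITIVE_INFINITY;
        for (int m : moves) {
            double v = search(b.play(m, toMove), depth - 1, alpha, beta, next, computer);
            if (maximizing) { best = Math.max(best, v); alpha = Math.max(alpha, best); }
            else            { best = Math.min(best, v); beta  = Math.min(beta,  best); }
            if (beta <= alpha) break;            // Alpha-Beta cut-off: prune remaining siblings
        }
        return best;
    }
}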

Figure 4 shows a Minimax tree, which is generated for every legal move that can be played by the computer; the example shows the tree for one of the moves. The depth is three, and the values of the leaf nodes are evaluated and propagated towards the root. The MAX layer chooses the child node with the maximum value and the MIN layer chooses the child node with the minimum value. This process continues until it reaches the root of the tree, and the resulting value defines the worth of the move. All the legal moves are evaluated in this way, and the move with the maximum value is chosen as the move to be played by the computer.

Figure 4 – Minimax Search Tree

2.3 Supervised Learning

Supervised learning is a machine learning method in which the target function is learned from a set of training examples. The training examples are in the form of input-output pairs. The system searches for patterns in the inputs and tries to learn a function that can be used to predict the output for unseen cases.


2.4 Artificial Neural Networks

An Artificial Neural Network is a computational model inspired by the way biological neurons learn. It is a structure of interconnected neurons that learns complex target functions by looking for patterns in the input and predicting the output; it can approximate the target function without knowledge of the algorithm required to solve the problem. The structure of a Neural Network consists of three or more layers. The first layer is the input layer, which represents the data and passes the inputs to the next layer; the number of neurons in the input layer depends on the number of inputs in the data set. Next come the hidden layers, of which there can be zero or more; they receive their input from the input layer, and the number of neurons in each hidden layer is variable. The last layer is the output layer, which is connected to the hidden layers; the number of neurons in it depends on the number of outputs in the data set. The neurons of the network are connected to each other in either a cyclic or an acyclic manner, and there is a weight associated with each connection, which is updated according to the error. When the neurons are connected acyclically, the network is known as a feedforward network; when they are connected cyclically, it is known as a feedback network.

The behavior of a neuron depends on the input from the previous layer, the weight associated with each input, and the activation function. The weighted inputs and the activation function are applied to the hidden layers and the output layer, but not to the input layer. The activation function is of three common types: the step function, the linear function and the sigmoid function. With a step function there is a threshold, and the weighted sum of all the inputs to the neuron must be greater than or equal to the threshold for the neuron to fire: if the weighted sum reaches the threshold, the output of the neuron is one, otherwise it is zero. The output of a linear activation function is directly proportional to the weighted sum of the inputs to the neuron. The sigmoid function is a nonlinear function that is useful for classifying data that is not linearly separable.

A perceptron is the simplest form of Neural Network, with a set of inputs and a single output with a step activation function. Assuming the number of inputs to the neuron is n, the weights are represented by w and the input values by x, the activation function is applied to the weighted sum of the inputs, as shown in Figure 5.

Figure 5 – Perceptron and Functionality
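A minimal sketch of the perceptron computation described above is given below; the class name, method name and threshold parameter are illustrative and not taken from the project.

// Sketch of a perceptron with a step activation function.
final class Perceptron {
    // Fires (returns 1) when the weighted sum of the inputs reaches the threshold.
    static int output(double[] w, double[] x, double threshold) {
        double sum = 0.0;
        for (int i = 0; i < w.length; i++) {
            sum += w[i] * x[i];                  // weighted sum of the n inputs
        }
        return (sum >= threshold) ? 1 : 0;       // step activation
    }
}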


Figure 6 – Feedforward Neural Network

A Multilayer Perceptron is a network of multiple layers of neurons connected as a feedforward network, as shown in Figure 6. Each neuron has a sigmoidal activation function. The network is trained using a learning process known as Backpropagation. To train the network, the input is passed through the input layer and the output of the network is compared to the target output. The idea of Backpropagation is to propagate the error back through the network and adjust the weights of the connections to reduce the error incurred [5]. This process continues until either the error has reached an acceptable level or the number of training iterations has reached its maximum limit. The reason for limiting the error value and the number of iterations is to avoid overfitting: if the network is trained on the training data for too long, its performance on the training data might be good, but it might not generalize well to unseen cases. The two important parameters of Backpropagation are the learning rate and the momentum. The learning rate is the factor that determines how much the weights are changed on each update; if it is not chosen well, the network can converge poorly or become trapped in a local minimum. Momentum is used to help the network learn quickly in cases where large weight changes are required to learn the training set.

For a neural network to learn a target function, the following procedure is followed. The data is normalized and encoded for the network. The number of nodes in the input layer and the output layer is fixed according to the number of inputs and outputs in the data set, respectively. The number of hidden layers and the number of nodes in each hidden layer are determined, the activation function of each neuron is set, the nodes of the network are connected, and the initial (random) weights of each connection are set. The input is then passed to the network and the network output is compared to the target output. If there is an error, the error is propagated back through the network and the weights are adjusted to minimize it. The learning process continues until the error has reached an acceptable value or the number of iterations has reached a maximum limit. One of the most widely used metrics for performance evaluation is the Mean Squared Error, which measures the average of the squared difference between the target output and the network's output, as shown in Equation 1. Neural Networks are widely used for regression, data classification, robotics and control; other applications include pattern recognition, face recognition and stock prediction.

Equation 1 – Mean Squared Error
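Written out for n training examples, with target output t_i and network output o_i, Equation 1 is the standard Mean Squared Error:

\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( t_i - o_i \right)^2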


2.5 Encog Framework for Neural Networks

Encog is a machine learning framework that supports many algorithms, such as Neural Networks and Support Vector Machines. The framework provides Java classes that can be used to develop Neural Networks [6]. The Encog Workbench provides the capability to convert an Excel file into a binary file (.egb), which makes it easy to load the training data into the Neural Network. The trained network, together with its updated weights, can be saved and loaded easily. The training data consists of two sets, the input set and the ideal set; the ideal set contains the expected outputs for the given inputs.

The class BasicNetwork is used to create the structure of the Neural Network, and layers are added using its addLayer() method. The class used for loading the training data is MLDataSet. EncogDirectoryPersistence.saveObject and EncogDirectoryPersistence.loadObject are used to save and load the Neural Network together with its weights. The Resilient Backpropagation learning method was used for training the neural network. Unlike Backpropagation, Resilient Backpropagation does not require the learning rate and momentum to be set by the user; a learning rate and momentum chosen by hand might not be optimal. Resilient Backpropagation is used to train feedforward Neural Networks. The performance observed for Resilient Backpropagation is better than that of Backpropagation, and the training is faster.

The Wikipedia entry for Resilient Backpropagation describes the algorithm as

follows: “Resilient Backpropagation takes into account only the sign of the partial


derivative and not the magnitude and it acts independently on each weight. For each

weight, if there was a sign change of the partial derivative of the total error function

compared to the last iteration, the update value for that weight is multiplied by a factor

that is less than 1. If the last iteration produced the same sign, the update value is

multiplied by a factor greater than 1. The update value are calculated for each weight in

the same manner and each weight are then changed by its own update value, in the

opposite direction of that weight’s partial derivative, so as to minimize the total error

function. Resilient Backpropagation algorithm is a fast weight update mechanism.” [7]

Refer to Appendix C for the code that creates the structure of the Neural Network and performs its training and testing.
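As a brief sketch of how the saved network might be loaded and queried at play time (the file name follows Appendix C; the class name and the empty example board are purely illustrative):

import java.io.File;
import org.encog.ml.data.MLData;
import org.encog.ml.data.basic.BasicMLData;
import org.encog.neural.networks.BasicNetwork;
import org.encog.persist.EncogDirectoryPersistence;

public class LoadGoNet {
    public static void main(String[] args) {
        // Load the previously saved network together with its trained weights.
        BasicNetwork network = (BasicNetwork)
                EncogDirectoryPersistence.loadObject(new File("C:\\network12.eg"));

        // Encode one 9x9 board state: -1 = black, +1 = white, 0 = empty.
        double[] board = new double[81];   // an all-empty board, for illustration only
        MLData input = new BasicMLData(board);

        // The single output is the predicted winner, in the range -1 (black) to +1 (white).
        MLData output = network.compute(input);
        System.out.println("Predicted winner value: " + output.getData(0));
    }
}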


Chapter 3

PREVIOUS RELATED WORK

Evolving Neural Network to Focus Minimax search (David E. Moriarty and Risto Miikkulainen) [8]: This paper compares an evolved Neural Network approach against a full-width Minimax tree. The method trains the network to look only for promising moves: the Neural Network suggests promising moves based on its training, and the Minimax algorithm generates a tree from those moves, which results in a focused search and faster computation.

Evaluation in Go by Neural Network using Soft Segmentation (M. Enzenberger) [9]: This paper describes a Neural Network architecture built on the concept of soft segmentation. Board positions are evaluated by dividing them into different segments. Since a static evaluation function is not the optimal way to evaluate, the approach uses segmentation for local and global search. This approach is also used by the NeuroGo program.

Evolutionary Swarm Neural Network Game Engine for Capture Go (M. Enzenberger) [10]: This paper uses a hybrid approach of Neural Networks, particle swarm optimization and an evolutionary algorithm to evaluate the leaf nodes of a game tree.

A Hybrid Neural Network and Minimax for zero sum games (Mathys C. du Plessis) [11]: This paper describes a combined Neural Network and Minimax approach applied to the game of Tic-Tac-Toe. The approach trains the Neural Network to limit the size of the tree searched by Minimax, reducing the processing time.

Evolving Neural Network to play Checkers without relying on Expert knowledge (Kumar Chellapilla and David B. Fogel) [12]: This paper describes an approach used to evolve Neural Networks without any pre-programmed instructions or knowledge about the game of Checkers. Using an evolutionary algorithm, a population of Neural Networks is generated and made to compete in each generation, and the networks that perform best survive into the next generation.

Sensei Library and Gnu Go: Strong Computer Go programs include Zen, Crazy Stone, MoGo and Many Faces of Go [3][13].


Chapter 4

TRAINING DATA

The training data for NNGo9x9 is generated from games played on various sources such as KGS (Kiseido Go Server) [2]. The training data also includes exercises from the Sensei Library [3]. It is a combination of full games, opening moves, some strategy moves, and sections of boards. The data collected is in SGF (Smart Game Format). To make it compatible with the system, the SGF files were converted to the desired format using Perl scripts, as shown in Figure 7 [14]. Refer to Appendix A and Appendix B for the Perl scripts, and to Appendix D for the data encoding.

The game records collected are converted into a format that can be interpreted by the Neural Network. The entire data set is a two-dimensional array in which each row is one game record and one input to the Neural Network. Each input is a one-dimensional array of length 82, where 81 elements represent the 9x9 board and the last element is the expected output. The value of each board element is either "1" (white), "-1" (black) or "0" (empty). The input represents the board state and the output represents the winner of the game described by the input, either "1" for white or "-1" for black.
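A simplified sketch of this encoding is shown below; the class name, method name and parameters are illustrative, and the encoding code actually used by the project is in Appendix D.

final class TrainingEncoder {
    // Encodes one game record as a row of length 82: 81 board values followed by
    // the expected output. 'stones[i]' is 1 (white), -1 (black) or 0 (empty), in
    // row-major order, and 'winner' is 1 for white or -1 for black. Sketch only.
    static double[] encodeRecord(int[] stones, int winner) {
        double[] row = new double[82];
        for (int i = 0; i < 81; i++) {
            row[i] = stones[i];    // elements 0..80: the 9x9 board state
        }
        row[81] = winner;          // element 81: the winner of the game
        return row;
    }
}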


Figure 7 – Encoding Training Data


Chapter 5

ALGORITHM

5.1 Graphical User Interface of NNGo9x9

Figure 8 – Graphical User Interface of NNGo9x9

The user interface of the NNGo9x9 system is shown in Figure 8. It gives the option to play either Player vs. Player or Player vs. Computer, and the user can choose which player gets to play first. Once the options are set, the user can begin playing by pressing the "Play" button. A detailed description of each player is given in the sections labeled "Black Stone" and "White Stone", and the details are updated after every move. The description lists the number of stones on the board, the number of prisoners, the territory occupied by the player, the number of single eyes and two-eyes, and the number of liberties left. The players can pass their turn using the "PASS" button; when both players pass, the game ends and the game results are displayed. The user can also get the score at any stage of the game using the "Get Score!" button. The game board displayed is a 9x9 board with indices, and intersections are referenced by column first and then row: for example, the highlighted black stone is referenced as E5 and the highlighted white stone as F5. The highlight shows the last move played; the dark grey highlight marks the last move played with a black stone, and the white highlight marks the last move played with a white stone [15].

5.2 Description of Methods

The system is implemented in Java using the NetBeans Environment 7.0.1 [16]. The primary data structure is a one-dimensional array of length 81 for the 9x9 board. Each index of the array represents one intersection of the game board: a value of "0" represents an empty intersection, "1" an intersection occupied by a black stone, and "2" an intersection occupied by a white stone.

In the Neural Network, the black stone is represented as "-1", the white stone as "+1" and an empty intersection as "0".

The array set[] represents the status of the intersections. The status can be "0" (empty), "1" (black stone) or "2" (white stone). By default, the set[] array is set to 0.

The array root[] records which group a placed stone (black or white) belongs to. By default, the root is 100, which indicates that the intersection does not belong to any group.

The array liberties[] records the number of liberties left for each intersection. For example, if the value at index 18 of liberties[] is two, the 18th intersection on the board has two liberties. By default, the number of liberties is set to two, three or four depending on the location of the intersection.
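For illustration, the default liberty counts could be assigned as in the sketch below, with two liberties at a corner, three on an edge and four in the interior; this mirrors the description of init_liberties() but is not the project's exact code.

final class BoardDefaults {
    // Assigns the default liberties of each of the 81 intersections of a 9x9 board.
    static void initLiberties(int[] liberties) {
        for (int i = 0; i < 81; i++) {
            int row = i / 9, col = i % 9;
            boolean rowEdge = (row == 0 || row == 8);
            boolean colEdge = (col == 0 || col == 8);
            if (rowEdge && colEdge)      liberties[i] = 2;  // corner point
            else if (rowEdge || colEdge) liberties[i] = 3;  // edge point
            else                         liberties[i] = 4;  // interior point
        }
    }
}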

When a stone is set, the groups and their roots are updated, the liberties are updated, and a check is made to see whether any stones or groups of stones are captured. After these updates, each player's game description is updated.

The init_set() method sets all the intersections to empty, i.e. zero. The init_liberties() method sets the number of liberties of each intersection to its default value. The init_root() method places all the empty intersections under root 100 (the default group number).

The set() method is called after every move is played on the board. This method checks whether the intersection is empty and the move played is a valid move. If these conditions are satisfied, the empty intersection is set to either a black or a white stone, depending on which player played the move. The liberties of the intersection and the surrounding intersections are updated. If the placed stone becomes part of a larger group of the same color, the root of the intersection is set to the root of the larger group. The counter for the number of stones on the board is updated.

The capture() method is called after a stone is set on the board. This method checks whether any prisoners are captured after the move is played: the liberties of all the groups on the board are checked, and any group whose liberties are zero is captured.

The captured() method removes the captured stones from the board, sets the value of their intersections back to empty, and resets their root to the default. The method then updates the counts of black/white stones, the counts of captured black/white stones, and the liberties of the captured stones' intersections and the surrounding intersections.

The grp() and territory() methods are used to assign the empty intersections as black territory, white territory or no man's land. They find all the groups of empty intersections and check which player surrounds each group. If stones of a single color surround a group of empty intersections, the territory belongs to that color; if both players surround the group, the territory does not belong to either player.

The eye() method checks whether a move played on an empty intersection is played into a single eye. If the eye belongs to the opponent and the number of liberties of the opponent's group would still be greater than one after the move, it would be a suicide move; otherwise the player is allowed to play the move.

The self_kill() method checks for moves where a player would kill its own group. When a player plays such a move, the liberties of the player's group become zero and the group dies. This method checks for such conditions and prevents the player from playing the self-kill move.

The ko() method checks for Ko conditions and prevents them from taking place: it checks that the board does not return to its previous state.
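A minimal sketch of such a check (names are illustrative) is to keep a copy of the board as it was before the opponent's last move and reject any move whose resulting board matches it exactly:

import java.util.Arrays;

final class KoCheck {
    // Returns true if the board that would result from a move is identical to the
    // stored previous board state, i.e. the move would recreate the Ko position.
    static boolean violatesKo(int[] resultingBoard, int[] previousBoard) {
        return Arrays.equals(resultingBoard, previousBoard);
    }
}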

The makeMove(), min() and max() methods are used to implement the Minimax tree. These methods are called when the user chooses to play against the computer. A Minimax tree is generated up to the specified depth, and eval_func() is called to evaluate the board states at the leaf nodes. The eval_func() method encodes each leaf node and passes the input to the trained Neural Network, which returns the player the board state favors. The output of the Neural Network is combined with the static evaluation function to calculate the worth of the board state.
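A sketch of how such a combined leaf value could be computed for the white-stone computer is shown below. The static terms follow the evaluation function listed in Chapter 6; the network output term and the simple additive weighting are illustrative assumptions, not the exact NNGo9x9 code.

final class LeafEvaluation {
    // Combined worth of a leaf board state for the white player. The static part
    // mirrors the Chapter 6 evaluation function; adding the raw network output
    // (-1 favors black, +1 favors white) is an assumption made for illustration.
    static double evalForWhite(double territoryBlack, double territoryWhite,
                               int blackCaptured, int whiteCaptured,
                               int blackStones, int whiteStones,
                               double networkOutput) {
        double bscore = 0.5 * territoryBlack + 2 * whiteCaptured + 0.5 * blackStones;
        double wscore = 0.5 * territoryWhite + 2 * blackCaptured + 0.5 * whiteStones;
        double staticEval = wscore - bscore;   // static evaluation for the white stone
        return staticEval + networkOutput;     // combined measure used at the leaf
    }
}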

The focused_search() method is used to create the search space for the Minimax algorithm. It is assumed that the empty intersections surrounding existing stones are the ones most often chosen, and hence appear to be the promising moves. The Minimax algorithm generates the tree from the moves returned by focused_search(), resulting in faster execution.
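The neighbourhood idea can be sketched as below; collecting the empty points within one intersection of any existing stone is an illustrative assumption, and the project's own focused_search() appears in Appendix E.

import java.util.LinkedHashSet;
import java.util.Set;

// Sketch of a focused move generator: collect empty intersections adjacent
// (including diagonals) to stones already on the board.
final class FocusedSearch {
    static Set<Integer> candidateMoves(int[] board) {   // board[i]: 0 empty, 1 black, 2 white
        Set<Integer> moves = new LinkedHashSet<>();
        for (int p = 0; p < 81; p++) {
            if (board[p] == 0) continue;                // only look around existing stones
            int row = p / 9, col = p % 9;
            for (int dr = -1; dr <= 1; dr++) {
                for (int dc = -1; dc <= 1; dc++) {
                    int r = row + dr, c = col + dc;
                    if (r < 0 || r > 8 || c < 0 || c > 8) continue;
                    int n = r * 9 + c;
                    if (board[n] == 0) moves.add(n);    // empty neighbour: candidate move
                }
            }
        }
        return moves;
    }
}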

Refer to Appendix E for the NNGo9x9 program.


5.3 Data Collected

The following information is collected after each move:

1. Number of Black stones

2. Number of White stones

3. Number of captured Black stones

4. Number of captured White stones

5. Number of territories occupied by Black stones.

6. Number of territories occupied by White stones.

7. Number of eyes of Black stones.

8. Number of eyes of White stones.

9. Number of two eyes for Black stones.

10. Number of two eyes for White stones.

11. Number of Black stone groups.

12. Number of White stone groups.

13. Number of liberties of each intersection of the board.

14. Status of each intersection of the board (black, white or empty).

15. Previous board state for checking Ko.


5.4 Flow of Control

Figure 9 – Diagrammatic Representation of NNGo9x9

The algorithm of the NNGo9x9 system is described by the flowchart shown in Figure 10. It describes how the move to be played by the computer is evaluated. When it is the computer's turn, a focused search space is created and the tree for each move in that search space is generated by Selective Minimax. The tree is generated to the specified depth, and the leaf nodes are sent as input to the trained Neural Network, as shown in Figure 9. The value of each leaf node is the combined output of the Neural Network and the static evaluation function. The value of each legal move from the focused search space is computed in this way, and the move with the maximum worth is chosen as the computer's move.


Figure 10 – Flowchart of the Algorithm


Chapter 6

SYSTEM SPECIFICATIONS

Board size: 9x9

Data structure to represent the board: one dimensional array of size 81. In the

system, “1” represents black stone, “2” represents white stone and “0” for empty

intersection. In Neural Network, “-1” represents black stone, “+1” represents white

stone and “0” for empty intersection.

Komi*: 6.5

Depth of Minimax tree: 4

Language and Environment: Java using NetBeans Environment 7.0.1 [16].

Other Libraries used: Encog Library and POI library.

Evaluation function:

bscore = 0.5*territory_black + 2*cnt_white_cpt + 0.5*cnt_black;

wscore = 0.5*territory_white + 2*cnt_black_cpt + 0.5*cnt_white;

Evaluation function (white stone) = wscore - bscore;

Evaluation function (black stone) = bscore - wscore;

Neural Network Structure:

Input Layer: 1 layer with 81 nodes. Each node represents one intersection of

the 9x9 board.


Hidden Layer: 4 layers. The first, second, third and fourth hidden layer have

71, 51, 41 and 21 nodes respectively.

Output Layer: 1 layer with 1 node representing the winner of the game for the

given board state.

Activation Function: TANH

Bias used**: Yes.

Learning method: Resilient Backpropagation.

Error Rate: 0.0001 (MSE). This is the acceptable amount of residual aggregate

error after training.

Iterations: 1000

* Komi: The player with black stone gets the advantage of playing first.

Therefore, Komi is added to the score of the player with white stone to

compensate for playing second.

** Bias: A bias node is connected to all the neuron nodes of the network except

for the input layer. It helps to predict the output for unseen cases.


Chapter 7

RESULTS

7.1 Comparison of Minimax and NNGo9x9

The performance of play using only the Minimax algorithm depends on the evaluation function and the search depth. Many factors, such as the number of stones on the board, the number of prisoners and the amount of territory, are used in an evaluation function, and the optimal weight of each factor is hard to find. If the evaluation function has territory as the dominant factor, the search is directed towards those moves that would yield the maximum amount of territory; due to this greedy search, the computer does not look at other aspects and ends up falling for local maxima. Another issue with Minimax arises during the opening moves. Since there is not much information during the opening and both players are usually at the same level, most of the leaf nodes end up with the same value, and the node that was evaluated first is saved as the move with the maximum value. This leads to the computer playing a move far away from the majority of the stones on the board and wasting its move. In Figure 11, Minimax plays the white stones. In the board state displayed in Figure 11(a), if territory is part of the evaluation function, the Minimax search greedily fills out the whole board, covering as much territory as possible. In Figure 11(b), the Minimax search expands its territory by playing at C3 instead of playing at H2, which would have saved the three white stones that can be captured by the opponent on the next move if a black stone is placed at H2.

Figure 11 – Go Board showing Minimax moves

When a Neural Network is trained with patterns from games, covering opening moves, middle games and endgames, it recognizes those patterns when it comes across them while playing. This is an advantage, especially during the opening moves when information is scarce. The Neural Network predicts which player the board state favors, which helps greatly in situations where it is difficult to suggest a move using a static evaluation function. It becomes a drawback when the Neural Network has not seen a pattern before and predicts poorly, resulting in an unfavorable move for the computer. One solution would be to supply as many patterns as possible, covering every strategy in every subsection of the board, but that might not be feasible.

NNGo9x9's solution is to use a hybrid method of Minimax and Neural Network: the Neural Network predicts based on its experience from training, and the search is guided by the static evaluation function to provide reasonable moves for the computer. The hybrid method is effective because it minimizes the drawbacks of each method. In Figure 12 below, the Neural Network plays the white stones. The Neural Network player saves the white stone group by playing at H2 in Figure 12(a) and at E4 in Figure 12(b).


Figure 12 – Go Board showing moves from NNGo9x9

Figure 13 shows the comparison of Minimax and the hybrid system of Neural Network with Selective Minimax. Figure 13(a) shows a board state in which it is the white stone player's turn. Figure 13(b) shows the output from Minimax: the Minimax player falls for a local maximum and chooses F6 to capture the territory at F5, but it does not look ahead to see that on the next move black can play at F5 and kill the white stone at E5, and white will not be able to replay at E5 because that would lead to a Ko condition. NNGo9x9, on the other hand, looks ahead, chooses F5, and saves its white stones. Figure 13(c) shows the output from the NNGo9x9 system.


Figure 13 – Comparison of Minimax and NNGo9x9


7.2 Performance of Neural Network

The structure of the Neural Network consists of one input layer with 81 neurons, four hidden layers with 71, 51, 41 and 21 neurons respectively, and an output layer with 1 neuron. Each hidden layer and the output layer has a bias connected to it. The initial weights were randomized, and the activation function is set to TANH, which produces output in the range -1 to +1. Since the training data represents the board with -1 (black), 0 (empty) and 1 (white), and the output is the winner of the game represented by either -1 or 1, the output of the Neural Network needs to be in this range. Training terminates either when the error falls below the acceptable rate or when the number of training cycles exceeds 1000 iterations. The acceptable error in MSE is 0.0001.

Training with Backpropagation: The learning rate set for the Neural Network is 0.2 and the momentum is 0.7. The Neural Network was trained on a training dataset of 542 game records using Backpropagation and tested on a test dataset of 138 game records. The network predicted 58% of the training dataset correctly and 50% of the test dataset correctly. When the network was trained on the combined dataset of 680 game records, the prediction accuracy on the training dataset was 57%. The Neural Network took 1000 epochs to complete the training, with an error of approximately 1.7.

Training with Resilient Backpropagation: The Neural Network was trained on the training dataset of 542 game records using Resilient Backpropagation and tested on the test dataset of 138 game records. The network predicted 100% of the training dataset correctly and 65% of the test dataset correctly, i.e. 87 out of 138. When the network was trained on the combined dataset of 680 game records, the prediction accuracy on the training dataset was 100%. It recognized all the learnt patterns correctly and the training period was fast: the Neural Network took 42 epochs to complete the training, with an error of less than 0.0001, as shown in Table 1. Figure 14 shows the weight distribution histogram of the trained neural network.

Figure 14 – Weight Distribution Histogram of Neural Network

                          Backpropagation        Resilient Backpropagation
Training Data (542)       58%  (316/542)         100% (542/542)
Test Data (138)           50%  (70/138)          65%  (87/138)
Error                     ~1.7                   <0.0001
Number of Epochs          1000                   45

                          Backpropagation        Resilient Backpropagation
Data Set (680)            57%  (387/680)         100% (680/680)
Error                     ~1.7                   <0.0001
Number of Epochs          1000                   42

Table 1 – Performance Comparison of Backpropagation and Resilient Backpropagation


Chapter 8

CONCLUSION

The Neural Network performs better as more patterns are provided to it as training data. The performance of Minimax improves when the Neural Network is used to evaluate the leaf nodes rather than a standard static evaluation function: the trained Neural Network helps the search look deeper without actually searching at greater depths. Selective Minimax is faster due to the contracted, focused search space; the evaluation of moves is faster and results in the computer playing near the stones already on the board.

Resilient Backpropagation learns faster than standard Backpropagation. The Neural Network trains very quickly with Resilient Backpropagation and learns the training data with 100% accuracy. The number of epochs and the error rate are considerably smaller than with the standard Backpropagation method.

NNGo9x9 plays well against full-width Minimax: its evaluation is faster and it wins almost every time against Minimax. NNGo9x9 plays at a beginner's level when competing against other Computer Go programs. The hybrid system of Neural Network and Selective Minimax has proved to be successful and better in performance than Minimax alone. The idea of using the hybrid method was to minimize the drawbacks of each method and utilize their strengths.


Chapter 9

FUTURE WORK

Future work would include extending the application to play on a 19x19 board and checking the performance of the implemented system there. Since there are more game records and tournaments played on 19x19 boards, there would be a large and substantial database for training the Neural Network.

Another direction would be to implement a hybrid system of Selective Minimax and a Support Vector Machine and compare its performance with that of Selective Minimax and the Neural Network (trained using Resilient Backpropagation). Since the performance of SVMs is considered better than that of Neural Networks in some cases, it would be interesting to observe the performance of an SVM for Go. An SVM is also applicable here because of the small number of distinct possible outputs in the training data.

It would be very useful to find an efficient way to replicate a game pattern (in the form of a training set) across all subsections of the board, so that the Neural Network recognizes the pattern irrespective of its orientation, location, or the presence or absence of other stones on the board.

Another feature that could be added to the system is to train the Neural Network to create the focused search space for the Minimax algorithm. The system could also be further optimized for faster execution.


One feature not included in the project is passing by the computer: it is legal and sometimes advantageous for a player to choose to "pass", that is, not place a stone, and the computer currently never does that.


Appendix A: Perl Script for 9x9 board in SGF Type I

Smart Game Format Type I

Go1.pl

use strict;

my $tp = "C:\\tgo.sgf";

my $outp = "C:\\resgo.sgf";

open(my $outflh,'>>', $outp);

my $path = "C:\\Data";

opendir(my $dirh, $path);

my @file = readdir($dirh);

my @fsgf = grep(/.*\.sgf/,@file);

foreach my $files (@fsgf)

{

my $temp_path = "$path\\$files";

open(my $flh,'<', $temp_path);

while(my $str1 = <$flh>)

{

my $t;

chomp $str1;

if($str1 =~ /.*RE\[(B|W)\+.*/)

{

$str1 =~ s/.*RE\[(B|W)\+.*/$1/;

if( $str1 eq "B")

{

$t =-1;

}

elsif($str1 eq "W")

{

$t =1;

}

print $outflh "File $t"."\n";

}

if($str1 =~ /^;(B|W)\[/)

{

$str1 =~ s/\];/\]\n;/g;

$str1 =~ s/;([B|W])\[([a-j])([a-j])\]/$1#$2#$3/ig;

open(my $tempflh,'>', $tp);

print $tempflh "$str1"."\n";

close($tempflh);

open(my $tempflh,'<', $tp);

while(my $str2 = <$tempflh>)

{


if($str2 =~ /([B|W])#([a-j])#([a-j])/) {

my $i;

chomp $str2;

my @arr = split(/#/,$str2);

for($i=0;$i<scalar(@arr);$i++)

{

if( $arr[$i] eq "B")

{

$arr[$i]=-1;

}

elsif($arr[$i] eq "W")

{

$arr[$i]=1;

}

}

for($i=0;$i<scalar(@arr);$i++)

{

if( $arr[$i] eq "a")

{

$arr[$i]=1;

}

elsif($arr[$i] eq "b")

{

$arr[$i]=2;

}

elsif($arr[$i] eq "c")

{

$arr[$i]=3;

}

elsif( $arr[$i] eq "d")

{

$arr[$i]=4;

}

elsif($arr[$i] eq "e")

{

$arr[$i]=5;

}

elsif($arr[$i] eq "f")

{

$arr[$i]=6;

}

elsif( $arr[$i] eq "g")

{

$arr[$i]=7;

}

elsif($arr[$i] eq "h")

{

$arr[$i]=8;

}

elsif($arr[$i] eq "i")

{

$arr[$i]=9;

}

elsif($arr[$i] eq "j")

{

$arr[$i]=9;

}

}

my $tvar = (($arr[2]-1)*9)+($arr[1]-1);

print $outflh "$arr[0] $tvar"."\n";

}

}

}

}

}

print "Done";

exit(0);


Appendix B: Perl Script for mapping 19x19 to 9x9 board in SGF Type II

Smart Game Format Type II

Go2.pl

use strict;

my $tp1 = "C:\\tgo1.sgf";

my $tp = "C:\\tgo.sgf";

my $outp = "C:\\resgo.sgf";

open(my $outflh,'>>', $outp);

my $path = "C:\\Data";

opendir(my $dirh, $path);

my @file = readdir($dirh);

my @fsgf = grep(/.*\.sgf/,@file);

my $min0=20; my $min1=20; my $max0=-1; my $max1=-1;

foreach my $files (@fsgf)

{

my $temp_path = "$path\\$files";

open(my $flh,'<', $temp_path);

while(my $str1 = <$flh>)

{

chomp $str1;

if($str1 =~ /^(PL)\[/)

{

my $t;

my $ts = $str1;

$ts =~ s/^(PL)\[(B|W)\]/$2/;

if( $ts eq "B")

{

$t =-1;

}

elsif($ts eq "W")

{

$t =1;

}

print $outflh "File $t"."\n";

}

if($str1 =~ /^((AB)|(AW))/)

{

my $t = $str1;

$t =~ s/^A(B|W).*/$1/;

$str1 =~ s/^A(B|W)/$1/;

$str1 =~ s/\]\[/\]\n$t\[/g;

$str1 =~ s/([B|W])\[([a-s])([a-s])\]/$1#$2#$3/ig;

open(my $tempflh,'>', $tp);

print $tempflh "$str1"."\n";

close($tempflh);

open(my $tempflh,'<', $tp);

while(my $str2 = <$tempflh>)

{


my $i;

chomp $str2;

my @arr = split(/#/,$str2);

for($i=0;$i<scalar(@arr);$i++)

{

if( $arr[$i] eq "B")

{

$arr[$i]=-1;

}

elsif($arr[$i] eq "W")

{

$arr[$i]=1;

}

}

for($i=0;$i<scalar(@arr);$i++)

{

if( $arr[$i] eq "a")

{

$arr[$i]=1;

}

elsif($arr[$i] eq "b")

{

$arr[$i]=2;

}

elsif($arr[$i] eq "c")

{

$arr[$i]=3;

}

elsif( $arr[$i] eq "d")

{

$arr[$i]=4;

}

elsif($arr[$i] eq "e")

{

$arr[$i]=5;

}

elsif($arr[$i] eq "f")

{

$arr[$i]=6;

}

elsif( $arr[$i] eq "g")

{

$arr[$i]=7;

}

elsif($arr[$i] eq "h")

{

$arr[$i]=8;

}

elsif($arr[$i] eq "i")

{

$arr[$i]=9;

}

elsif($arr[$i] eq "j")

{

$arr[$i]=10;

}

elsif($arr[$i] eq "k")

{

$arr[$i]=11;

}

elsif( $arr[$i] eq "l")

{

$arr[$i]=12;

}

elsif($arr[$i] eq "m")

{

$arr[$i]=13;


}

elsif($arr[$i] eq "n")

{

$arr[$i]=14;

}

elsif( $arr[$i] eq "o")

{

$arr[$i]=15;

}

elsif($arr[$i] eq "p")

{

$arr[$i]=16;

}

elsif($arr[$i] eq "q")

{

$arr[$i]=17;

}

elsif($arr[$i] eq "r")

{

$arr[$i]=18;

}

elsif($arr[$i] eq "s")

{

$arr[$i]=19;

}

}

open(my $tpflh,'>>', $tp1);

print $tpflh "@arr"."\n";

close($tpflh);

if($arr[1]<$min0)

{

$min0 = $arr[1];

}

if($arr[1]>$max0)

{

$max0 = $arr[1];

}

if($arr[2]<$min1)

{

$min1 = $arr[2];

}

if($arr[2]>$max1)

{

$max1 = $arr[2];

}

}

}

}

open(my $tpflh,'<', $tp1);

while(my $str3 = <$tpflh>)

{

chomp $str3;

my @arr1 = split(/ /,$str3);

if((($max0-$min0)<=9) && $max0>9)

{

$arr1[1] = $arr1[1] - ( $max0 - 9);

}

if((($max1-$min1)<=9) && $max1>9)

{

$arr1[2] = $arr1[2] - ( $max1 - 9);

}

my $tvar = (($arr1[2]-1)*9)+($arr1[1]-1);

print $outflh "$arr1[0] $tvar"."\n";

}

close($tpflh);

open(my $tpflh,'>', $tp1);

print $tpflh "";


close($tpflh);

$min0 = 20;

$min1 = 20;

$max0 = -1;

$max1 = -1;

}

print "done";

exit(0);


Appendix C: Java Code for Neural Network using Encog Framework

TrainGo.java: Creates the structure of the Neural Network, then performs training and testing.

package traingo;

import java.io.File;

import org.encog.Encog;

import org.encog.engine.network.activation.ActivationTANH;

import org.encog.mathutil.randomize.ConsistentRandomizer;

import org.encog.ml.data.MLData;

import org.encog.ml.data.MLDataPair;

import org.encog.ml.data.MLDataSet;

import org.encog.neural.networks.BasicNetwork;

import org.encog.neural.networks.layers.BasicLayer;

import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

import org.encog.persist.EncogDirectoryPersistence;

import org.encog.util.simple.EncogUtility;

/* * @author Sonal */
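/* The network takes the 81 board points as inputs (-1 Black stone, 1 White
 * stone, 0 empty), feeds them through TANH layers of 81, 71, 51, 41 and 21
 * neurons, and produces a single TANH output approximating the recorded game
 * result. Resilient Propagation trains on gotraindata.egb until the error
 * drops below 0.0001 or 1000 epochs have been run. */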

public class TrainGo {

public static void main(String[] args) {

int correct=0,wrong=0;

BasicNetwork network = new BasicNetwork();

//Input Layer

network.addLayer(new BasicLayer(null,true,81));

network.addLayer(new BasicLayer(new ActivationTANH(),true,81));

//Hidden Layers

network.addLayer(new BasicLayer(new ActivationTANH(),true,71));

network.addLayer(new BasicLayer(new ActivationTANH(),true,51));

network.addLayer(new BasicLayer(new ActivationTANH(),true,41));

network.addLayer(new BasicLayer(new ActivationTANH(),true,21));

//Output Layer

network.addLayer(new BasicLayer(new ActivationTANH(),false,1));

network.getStructure().finalizeStructure();

network.reset();

new ConsistentRandomizer(-1,1,100).randomize(network);

// create training data

MLDataSet result = EncogUtility.loadEGB2Memory(new File("C:\\gotraindata.egb"));

// train the neural network

final ResilientPropagation train = new ResilientPropagation

(network, result);

//final Backpropagation train = new Backpropagation(network, result, 0.7, 0.2);

train.fixFlatSpot(false);

int epoch = 1;

do

{

train.iteration();

System.out.println("Epoch #" + epoch + " Error:" +

train.getError());

epoch++;

}

while(train.getError() > 0.0001 && epoch < 1000);

train.finishTraining();

EncogDirectoryPersistence.saveObject(new File("C:\\network12.eg"), network);

MLDataSet testresult = EncogUtility.loadEGB2Memory(new

File("C:\\gotestdata.egb"));

// test the neural network

System.out.println("Neural Network Results:");

int ct = 0;


for(MLDataPair pair: testresult ) {

final MLData output = network.compute(pair.getInput());

ct++;
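// A prediction is counted as correct when the network output and the ideal
// value share the same sign, i.e. both point to the same winner.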

if(output.getData(0)>0 && pair.getIdeal().getData(0)>0 )

correct++;

else if(output.getData(0)<0 && pair.getIdeal().getData(0)<0 )

correct++;

else

wrong++;

System.out.println(ct+": actual=" + output.getData(0)+ ", ideal=" + pair.getIdeal().getData(0));

}

System.out.println("Correct: "+correct+" Wrong: "+ wrong);

Encog.getInstance().shutdown();

}

}


Appendix D: Java Code for Encoding Training Data

ExcelNN.java

package excelnn;

import java.io.IOException;

import java.io.File;

import java.io.FileInputStream;

import java.io.FileNotFoundException;

import java.io.FileOutputStream;

import java.util.Iterator;

import org.apache.poi.xssf.usermodel.XSSFSheet;

import org.apache.poi.xssf.usermodel.XSSFWorkbook;

import org.apache.poi.ss.usermodel.Cell;

import org.apache.poi.ss.usermodel.Row;

/**

* @author Sonal

*/
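// ExcelNN reads GoData.xlsx, in which each game begins with a ("File", result)
// row followed by (colour value, board index) rows, and flattens every game
// into one 82-column row (81 board cells plus the result) written to
// GoData2.xlsx.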

public class ExcelNN {

private double[][] matrix = new double[1000][82];

private int i=-1;

public void read() throws IOException{

try {

FileInputStream file = new FileInputStream(new File("C:\\GoData.xlsx"));

XSSFWorkbook workbook = new XSSFWorkbook(file);

XSSFSheet sheet = workbook.getSheetAt(0);

Iterator<Row> rowIterator = sheet.iterator();

while(rowIterator.hasNext()) {

Row row = rowIterator.next();

if(row.getCell(0).getCellType()==Cell.CELL_TYPE_STRING)

{

if("File".equals(row.getCell(0).getStringCellValue()))

{

i=i+1;

matrix[i][81]=row.getCell(1).getNumericCellValue();

}

}

else

{

for(int x=0;x<81;x++)

{

if(row.getCell(1).getNumericCellValue()==x)

{

matrix[i][x]=row.getCell(0).getNumericCellValue();

}

}

}

}

file.close();

}

catch(FileNotFoundException e) {

System.out.println("Error1");

e.printStackTrace();

}

catch(IOException e) {

System.out.println("Error2");

e.printStackTrace();

}

}

public void write() throws IOException{


try {

FileOutputStream file = new FileOutputStream(new File("C:\\GoData2.xlsx"));

XSSFWorkbook workbook = new XSSFWorkbook();

XSSFSheet sheet = workbook.createSheet("1");

for(int xi=0;xi<=i;xi++)

{

Row row = sheet.createRow(xi);

for(int j=0;j<82;j++)

{

row.createCell(j).setCellValue(matrix[xi][j]);

}

}

workbook.write(file);

file.close();

System.out.println("Done");

}

catch(FileNotFoundException e) {

System.out.println("Error1");

e.printStackTrace();

}

catch(IOException e) {

System.out.println("Error2");

e.printStackTrace();

}

}

public static void main(String[] args) throws IOException {

ExcelNN test = new ExcelNN();

test.read();

System.out.println(test.i);

test.write();

}

}


Appendix E: Java Code for NNGo9x9

Go.java

package gamego;

import java.awt.*;

import java.util.ArrayList;

import javax.swing.*;

import java.awt.event.ActionListener;

import java.io.File;

import java.util.Arrays;

import org.encog.ml.data.MLData;

import org.encog.ml.data.MLDataPair;

import org.encog.ml.data.MLDataSet;

import org.encog.ml.data.basic.BasicMLDataSet;

import org.encog.neural.networks.BasicNetwork;

import org.encog.persist.EncogDirectoryPersistence;

/**

* @author Sonal

*/

public class Go extends javax.swing.JFrame implements ActionListener{

private static int turn;

private static ImageIcon img1;

private static ImageIcon img2;

private JButton[] point = new JButton[81];

private int[] prev_board = new int[81];

private int[][] set = new int[7][81];

private int[][] liberties = new int[7][81];

private int[][] root = new int[7][81];

private int[] NO_LEFT = {9,18,27,36,45,54,63};

private int[] NO_RIGHT = {17,26,35,44,53,62,71};

private int[] NO_TOP = {1,2,3,4,5,6,7};

private int[] NO_BOTTOM = {73,74,75,76,77,78,79};

private int TOP_LEFT = 0;

private int TOP_RIGHT = 8;

private int BOTTOM_LEFT = 72;

private int BOTTOM_RIGHT = 80;

private int[] cnt_black = new int[7];

private int[] cnt_white = new int[7];

private int[] cnt_black_cpt = new int[7];

private int[] cnt_white_cpt = new int[7];

private int[] territory_black = new int[7];

private int[] territory_white = new int[7];

private int[] single_eye_black = new int[7];

private int[] single_eye_white = new int[7];

private int[] double_eye_black = new int[7];

private int[] double_eye_white = new int[7];

int[] group = new int[50];

int node = 101,cnt=0,last_move=100,pass_cnt=0;

boolean key = false,eye_flag=false;

private ArrayList<Integer> search = new ArrayList<Integer>();

ArrayList<ArrayList<Integer>> search_tier = new

ArrayList<ArrayList<Integer>>();

BasicNetwork network = (BasicNetwork)EncogDirectoryPersistence

.loadObject(new File("C:\\network12.eg"));

int prev_move1=100,prev_move2=100;

int PP=1,PC=0,B1=1,B2=0;
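// Game-mode flags: PP = two human players, PC = human vs. computer; B1/B2
// record which side was chosen to hold the Black stones. set[ply][81] keeps
// one board copy per search ply (0 empty, 1 Black, 2 White), root[ply][i] the
// representative point of the group containing i (100 = none), and
// liberties[ply][i] the number of empty points orthogonally adjacent to i.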

/* Creates new form Go */

public Go() {

initComponents();

turn = 1;

img1 = new ImageIcon("C:\\black-32x32.png");

img2 = new ImageIcon("C:\\white-32x32.png");

check();


init_set();

init_liberties();

init_root();

}

public void check()

{

for(int i=0;i<81;i++)

point[i].addActionListener(this);

}

//0:null 1:black 2:white

public void init_set()

{

for(int i=0; i<81;i++)

{

set[0][i] = 0;

prev_board[i] = -2;

}

for(int i=0; i<7;i++)

{

territory_black[i] = 0;

territory_white[i] = 0;

single_eye_black[i] = 0;

single_eye_white[i] = 0;

double_eye_black[i] = 0;

double_eye_white[i] = 0;

}

}

public void init_root()

{

for(int i=0; i<81;i++)

{

root[0][i] = 100;

}

}

public void init_liberties()

{

for(int i=0; i<81;i++)

{

liberties[0][i] = 4;

}

for(int i=0; i<7;i++)

{

liberties[0][NO_LEFT[i]] = 3;

liberties[0][NO_RIGHT[i]] = 3;

liberties[0][NO_TOP[i]] = 3;

liberties[0][NO_BOTTOM[i]] = 3;

}

liberties[0][TOP_LEFT] = 2;

liberties[0][TOP_RIGHT] = 2;

liberties[0][BOTTOM_RIGHT] = 2;

liberties[0][BOTTOM_LEFT] = 2;

}
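// set() assumes the caller has already written the new stone's colour into
// set[level][index]; it merges each adjacent friendly group by rewriting its
// root to the new point, subtracts one liberty from every orthogonal
// neighbour, and returns the number of same-coloured neighbours joined.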

public int set(int index, int level)

{

int temp_root;

last_move = index;

int cnt_root = 0;

if(no_left(index))

{

if(set[level][index-9]==set[level][index])

{

cnt_root++;

temp_root = root[level][index-9];

for(int x=0;x<81;x++)


{

if(root[level][x]==temp_root)

root[level][x]=index;

}

}

if(set[level][index+9]==set[level][index])

{

cnt_root++;

temp_root = root[level][index+9];

for(int x=0;x<81;x++)

{

if(root[level][x]==temp_root)

root[level][x]=index;

}

}

if(set[level][index+1]==set[level][index])

{

cnt_root++;

temp_root = root[level][index+1];

for(int x=0;x<81;x++)

{

if(root[level][x]==temp_root)

root[level][x]=index;

}

}

liberties[level][index-9]-=1; //UP

liberties[level][index+9]-=1; //DOWN

liberties[level][index+1]-=1; //RIGHT

}

else if(no_right(index))

{

if(set[level][index-9]==set[level][index])

{

cnt_root++;

temp_root = root[level][index-9];

for(int x=0;x<81;x++)

{

if(root[level][x]==temp_root)

root[level][x]=index;

}

}

if(set[level][index+9]==set[level][index])

{

cnt_root++;

temp_root = root[level][index+9];

for(int x=0;x<81;x++)

{

if(root[level][x]==temp_root)

root[level][x]=index;

}

}

if(set[level][index-1]==set[level][index])

{

cnt_root++;

temp_root = root[level][index-1];

for(int x=0;x<81;x++)

{

if(root[level][x]==temp_root)

root[level][x]=index;

}

}

liberties[level][index-9]-=1; //UP

liberties[level][index+9]-=1; //DOWN

liberties[level][index-1]-=1; //LEFT

}

else if(no_top(index))

{


if(set[level][index+9]==set[level][index])

{

cnt_root++;

temp_root = root[level][index+9];

for(int x=0;x<81;x++)

{

if(root[level][x]==temp_root)

root[level][x]=index;

}

}

if(set[level][index-1]==set[level][index])

{

cnt_root++;

temp_root = root[level][index-1];

for(int x=0;x<81;x++)

{

if(root[level][x]==temp_root)

root[level][x]=index;

}

}

if(set[level][index+1]==set[level][index])

{

cnt_root++;

temp_root = root[level][index+1];

for(int x=0;x<81;x++)

{

if(root[level][x]==temp_root)

root[level][x]=index;

}

}

liberties[level][index-1]-=1; //LEFT

liberties[level][index+9]-=1; //DOWN

liberties[level][index+1]-=1; //RIGHT

}

else if(no_bottom(index))

{

if(set[level][index-9]==set[level][index])

{

cnt_root++;

temp_root = root[level][index-9];

for(int x=0;x<81;x++)

{

if(root[level][x]==temp_root)

root[level][x]=index;

}

}

if(set[level][index-1]==set[level][index])

{

cnt_root++;

temp_root = root[level][index-1];

for(int x=0;x<81;x++)

{

if(root[level][x]==temp_root)

root[level][x]=index;

}

}

if(set[level][index+1]==set[level][index])

{

cnt_root++;

temp_root = root[level][index+1];

for(int x=0;x<81;x++)

{

if(root[level][x]==temp_root)

root[level][x]=index;

}

}

liberties[level][index-9]-=1; //UP


liberties[level][index-1]-=1; //LEFT

liberties[level][index+1]-=1; //RIGHT

}

else if(index == TOP_LEFT)

{

if(set[level][index+9]==set[level][index])

{

cnt_root++;

temp_root = root[level][index+9];

for(int x=0;x<81;x++)

{

if(root[level][x]==temp_root)

root[level][x]=index;

}

}

if(set[level][index+1]==set[level][index])

{

cnt_root++;

temp_root = root[level][index+1];

for(int x=0;x<81;x++)

{

if(root[level][x]==temp_root)

root[level][x]=index;

}

}

liberties[level][index+9]-=1; //DOWN

liberties[level][index+1]-=1; //RIGHT

}

else if(index == TOP_RIGHT)

{

if(set[level][index+9]==set[level][index])

{

cnt_root++;

temp_root = root[level][index+9];

for(int x=0;x<81;x++)

{

if(root[level][x]==temp_root)

root[level][x]=index;

}

}

if(set[level][index-1]==set[level][index])

{

cnt_root++;

temp_root = root[level][index-1];

for(int x=0;x<81;x++)

{

if(root[level][x]==temp_root)

root[level][x]=index;

}

}

liberties[level][index+9]-=1; //DOWN

liberties[level][index-1]-=1; //LEFT

}

else if(index == BOTTOM_LEFT)

{

if(set[level][index-9]==set[level][index])

{

cnt_root++;

temp_root = root[level][index-9];

for(int x=0;x<81;x++)

{

if(root[level][x]==temp_root)

root[level][x]=index;

}

}

if(set[level][index+1]==set[level][index])

{


cnt_root++;

temp_root = root[level][index+1];

for(int x=0;x<81;x++)

{

if(root[level][x]==temp_root)

root[level][x]=index;

}

}

liberties[level][index-9]-=1; //UP

liberties[level][index+1]-=1; //RIGHT

}

else if(index == BOTTOM_RIGHT)

{

if(set[level][index-9]==set[level][index])

{

cnt_root++;

temp_root = root[level][index-9];

for(int x=0;x<81;x++)

{

if(root[level][x]==temp_root)

root[level][x]=index;

}

}

if(set[level][index-1]==set[level][index])

{

cnt_root++;

temp_root = root[level][index-1];

for(int x=0;x<81;x++)

{

if(root[level][x]==temp_root)

root[level][x]=index;

}

}

liberties[level][index-9]-=1; //UP

liberties[level][index-1]-=1; //LEFT

}

else

{

if(set[level][index-9]==set[level][index])

{

cnt_root++;

temp_root = root[level][index-9];

for(int x=0;x<81;x++)

{

if(root[level][x]==temp_root)

root[level][x]=index;

}

}

if(set[level][index+9]==set[level][index])

{

cnt_root++;

temp_root = root[level][index+9];

for(int x=0;x<81;x++)

{

if(root[level][x]==temp_root)

root[level][x]=index;

}

}

if(set[level][index-1]==set[level][index])

{

cnt_root++;

temp_root = root[level][index-1];

for(int x=0;x<81;x++)

{

if(root[level][x]==temp_root)

root[level][x]=index;

}


}

if(set[level][index+1]==set[level][index])

{

cnt_root++;

temp_root = root[level][index+1];

for(int x=0;x<81;x++)

{

if(root[level][x]==temp_root)

root[level][x]=index;

}

}

liberties[level][index-9]-=1; //UP

liberties[level][index+9]-=1; //DOWN

liberties[level][index-1]-=1; //LEFT

liberties[level][index+1]-=1; //RIGHT

}

return cnt_root;

}
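// capture() gathers the distinct group roots on the board, checking the group
// that contains the last move only after all the others, sums each group's
// liberties, and removes any group whose total has fallen to zero.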

public void capture(int level)

{

int[] temp_array = new int[81];

int[] root_array = new int[81];

int len_r = 0,len_t=0, sum=0;

for(int x=0;x<81;x++)

{

if(root[level][x]!=100 &&

root[level][x]!=root[level][last_move])

{

boolean avail = false;

for(int y=0;y<len_r;y++)

{

if(root[level][x]==root_array[y])

avail = true;

}

if(avail == false)

{

root_array[len_r] = root[level][x];

len_r++;

}

}

}

// to add last move at the end of array

if(root[level][last_move]!=100)

{

boolean avail = false;

for(int y=0;y<len_r;y++)

{

if(root[level][last_move]==root_array[y])

avail = true;

}

if(avail == false)

{

root_array[len_r] = root[level][last_move];

len_r++;

}

}

sum=0;

for(int i=0;i<len_r;i++)

{

sum=0;

len_t = 0;

for(int j=0;j<81;j++)

{

if(root_array[i] == root[level][j])

{


temp_array[len_t] = j;

len_t++;

sum+=liberties[level][j];

}

}

if(sum<=0)

{

for(int k=0;k<len_t;k++)

{

root[level][temp_array[k]] = 100;

captured(temp_array[k],level);

}

}

}

}
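// captured() clears one captured point, updates the stone and capture
// counters, removes its icon on the real board (level 0), and returns one
// liberty to each orthogonal neighbour.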

public void captured(int i,int level)

{

if(set[level][i]==1)

{

cnt_black[level]--;

cnt_black_cpt[level]++;

}

else if(set[level][i]==2)

{

cnt_white[level]--;

cnt_white_cpt[level]++;

}

set[level][i] = 0;

if(level==0)

point[i].setIcon(null);

if(no_left(i))

{

liberties[level][i-9]+=1; //UP

liberties[level][i+9]+=1; //DOWN

liberties[level][i+1]+=1; //RIGHT

}

else if(no_right(i))

{

liberties[level][i-9]+=1; //UP

liberties[level][i+9]+=1; //DOWN

liberties[level][i-1]+=1; //LEFT

}

else if(no_top(i))

{

liberties[level][i-1]+=1; //LEFT

liberties[level][i+9]+=1; //DOWN

liberties[level][i+1]+=1; //RIGHT

}

else if(no_bottom(i))

{

liberties[level][i-9]+=1; //UP

liberties[level][i-1]+=1; //LEFT

liberties[level][i+1]+=1; //RIGHT

}

else if(i == TOP_LEFT)

{

liberties[level][i+9]+=1; //DOWN

liberties[level][i+1]+=1; //RIGHT

}

else if(i == TOP_RIGHT)

{

liberties[level][i-1]+=1; //LEFT

liberties[level][i+9]+=1; //DOWN

}

else if(i == BOTTOM_LEFT)

{

liberties[level][i-9]+=1; //UP


liberties[level][i+1]+=1; //RIGHT

}

else if(i == BOTTOM_RIGHT)

{

liberties[level][i-9]+=1; //UP

liberties[level][i-1]+=1; //LEFT

}

else

{

liberties[level][i-9]+=1; //UP

liberties[level][i+9]+=1; //DOWN

liberties[level][i+1]+=1; //RIGHT

liberties[level][i-1]+=1; //LEFT

}

}
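// grp() flood-fills a region of connected empty points, giving every point in
// the region the same temporary root id so that territory() can measure it.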

public void grp(int i,int level)

{

if(set[level][i]==0 && root[level][i]==100)

{

node++;

root[level][i] = node;

group[cnt]=node;

cnt++;

}

if(set[level][i]==0 && root[level][i]!=100)

{

if(no_left(i))

{

if(set[level][i-9]==0 && root[level][i-9]==100)

{

root[level][i-9]=root[level][i];

grp(i-9,level);

}

if(set[level][i+9]==0 && root[level][i+9]==100)

{

root[level][i+9]=root[level][i];

grp(i+9,level);

}

if(set[level][i+1]==0 && root[level][i+1]==100)

{

root[level][i+1]=root[level][i];

grp(i+1,level);

}

}

else if(no_right(i))

{

if(set[level][i-9]==0 && root[level][i-9]==100)

{

root[level][i-9]=root[level][i];

grp(i-9,level);

}

if(set[level][i+9]==0 && root[level][i+9]==100)

{

root[level][i+9]=root[level][i];

grp(i+9,level);

}

if(set[level][i-1]==0 && root[level][i-1]==100)

{

root[level][i-1]=root[level][i];

grp(i-1,level);

}

}

else if(no_top(i))

{

if(set[level][i-1]==0 && root[level][i-1]==100)

{

root[level][i-1]=root[level][i];


grp(i-1,level);

}

if(set[level][i+9]==0 && root[level][i+9]==100)

{

root[level][i+9]=root[level][i];

grp(i+9,level);

}

if(set[level][i+1]==0 && root[level][i+1]==100)

{

root[level][i+1]=root[level][i];

grp(i+1,level);

}

}

else if(no_bottom(i))

{

if(set[level][i-9]==0 && root[level][i-9]==100)

{

root[level][i-9]=root[level][i];

grp(i-9,level);

}

if(set[level][i-1]==0 && root[level][i-1]==100)

{

root[level][i-1]=root[level][i];

grp(i-1,level);

}

if(set[level][i+1]==0 && root[level][i+1]==100)

{

root[level][i+1]=root[level][i];

grp(i+1,level);

}

}

else if(i==TOP_LEFT)

{

if(set[level][i+9]==0 && root[level][i+9]==100)

{

root[level][i+9]=root[level][i];

grp(i+9,level);

}

if(set[level][i+1]==0 && root[level][i+1]==100)

{

root[level][i+1]=root[level][i];

grp(i+1,level);

}

}

else if(i==TOP_RIGHT)

{

if(set[level][i+9]==0 && root[level][i+9]==100)

{

root[level][i+9]=root[level][i];

grp(i+9,level);

}

if(set[level][i-1]==0 && root[level][i-1]==100)

{

root[level][i-1]=root[level][i];

grp(i-1,level);

}

}

else if(i==BOTTOM_LEFT)

{

if(set[level][i-9]==0 && root[level][i-9]==100)

{

root[level][i-9]=root[level][i];

grp(i-9,level);

}

if(set[level][i+1]==0 && root[level][i+1]==100)

{

root[level][i+1]=root[level][i];


grp(i+1,level);

}

}

else if(i==BOTTOM_RIGHT)

{

if(set[level][i-9]==0 && root[level][i-9]==100)

{

root[level][i-9]=root[level][i];

grp(i-9,level);

}

if(set[level][i-1]==0 && root[level][i-1]==100)

{

root[level][i-1]=root[level][i];

grp(i-1,level);

}

}

else

{

if(set[level][i+9]==0 && root[level][i+9]==100)

{

root[level][i+9]=root[level][i];

grp(i+9,level);

}

if(set[level][i-9]==0 && root[level][i-9]==100)

{

root[level][i-9]=root[level][i];

grp(i-9,level);

}

if(set[level][i-1]==0 && root[level][i-1]==100)

{

root[level][i-1]=root[level][i];

grp(i-1,level);

}

if(set[level][i+1]==0 && root[level][i+1]==100)

{

root[level][i+1]=root[level][i];

grp(i+1,level);

}

}

}

}
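// territory() labels the empty regions with grp(), counts regions bordered
// only by Black or only by White as that side's territory, tallies regions of
// exactly one or two points as single and double eyes, and finally resets the
// temporary root ids of the empty points.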

public void territory(int level)

{

node = 101;cnt=0;

single_eye_black[level] = 0;

double_eye_black[level] = 0;

single_eye_white[level] = 0;

double_eye_white[level] = 0;

for(int i=0;i<81;i++)

{

if(root[level][i]==100 && set[level][i]==0)

{

root[level][i]=node;

group[cnt]=node;

cnt++;

break;

}

}

for(int i=0;i<81;i++)

{

grp(i,level);

}

int sum_black = 0,sum_white = 0,group_cnt=0;

boolean black=false,white=false;

for(int j=0;j<cnt;j++)

{


group_cnt=0;

black=false; white=false;

for(int i=0;i<81;i++)

{

if(set[level][i]==0)

{

if(root[level][i]==group[j])

{

group_cnt++;

if(no_left(i))

{

if(set[level][i-9]==1 || set[level][i+9]==1

|| set[level][i+1]==1 )

{

black=true;

}

if(set[level][i-9]==2 || set[level][i+9]==2

|| set[level][i+1]==2 )

{

white=true;

}

}

else if(no_right(i))

{

if(set[level][i-9]==1 || set[level][i+9]==1

|| set[level][i-1]==1 )

{

black=true;

}

if(set[level][i-9]==2 || set[level][i+9]==2

|| set[level][i-1]==2 )

{

white=true;

}

}

else if(no_top(i))

{

if(set[level][i-1]==1 || set[level][i+9]==1

|| set[level][i+1]==1 )

{

black=true;

}

if(set[level][i-1]==2 || set[level][i+9]==2

|| set[level][i+1]==2 )

{

white=true;

}

}

else if(no_bottom(i))

{

if(set[level][i-9]==1 || set[level][i-1]==1

|| set[level][i+1]==1 )

{

black=true;

}

if(set[level][i-9]==2 || set[level][i-1]==2

|| set[level][i+1]==2 )

{

white=true;

}

}

else if(i==TOP_LEFT)

{

if(set[level][i+9]==1 || set[level][i+1]==1)

{

black=true;

}


if(set[level][i+9]==2 || set[level][i+1]==2)

{

white=true;

}

}

else if(i==TOP_RIGHT)

{

if(set[level][i+9]==1 || set[level][i-1]==1)

{

black=true;

}

if(set[level][i+9]==2 || set[level][i-1]==2)

{

white=true;

}

}

else if(i==BOTTOM_LEFT)

{

if(set[level][i-9]==1 || set[level][i+1]==1)

{

black=true;

}

if(set[level][i-9]==2 || set[level][i+1]==2)

{

white=true;

}

}

else if(i==BOTTOM_RIGHT)

{

if(set[level][i-9]==1 || set[level][i-1]==1)

{

black=true;

}

if(set[level][i-9]==2 || set[level][i-1]==2)

{

white=true;

}

}

else

{

if(set[level][i+9]==1 || set[level][i-9]==1

|| set[level][i-1]==1 || set[level][i+1]==1)

{

black=true;

}

if(set[level][i+9]==2 || set[level][i-9]==2

|| set[level][i-1]==2 || set[level][i+1]==2)

{

white=true;

}

}

}

}

}

if(black==true && white==false)

{

sum_black+=group_cnt;

if(group_cnt == 1)

single_eye_black[level]+=1;

if(group_cnt == 2)

double_eye_black[level]+=1;

}

else if(black==false && white==true)

{

sum_white+=group_cnt;

if(group_cnt == 1)


single_eye_white[level]+=1;

if(group_cnt == 2)

double_eye_white[level]+=1;

}

}

territory_black[level] = sum_black;

territory_white[level] = sum_white;

for(int i=0;i<81;i++)

{

if(set[level][i]==0)

root[level][i]=100;

}

}
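// eyes(i, level, p) is true when every orthogonal neighbour of point i is a
// stone of colour p and none of the adjoining p groups would be left without
// liberties by filling the point, i.e. i is a secure eye of p; the search
// uses it to skip hopeless moves inside the opponent's eyes.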

private boolean eyes(int i, int level, int p)

{

int[] temp_grp = new int[5];

int[] members = new int[5];

int count=0,sum_lib=0;

if(no_left(i))

{

if(set[level][i-9]==p && set[level][i+9]==p &&

set[level][i+1]==p )

{

count = 0;

temp_grp[count]=root[level][i-9];

members[count]++;

count++;

if(root[level][i-9] != root[level][i+9])

{

temp_grp[count]=root[level][i+9];

members[count]++;

count++;

}

else

members[0]++;

if(root[level][i+9] != root[level][i+1] &&

root[level][i-9] != root[level][i+1])

{

temp_grp[count]=root[level][i+1];

members[count]++;

count++;

}

else if(root[level][i-9] == root[level][i+1])

{

members[0]++;

}

else if(root[level][i+9] == root[level][i+1])

{

members[1]++;

}

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

sum_lib=0;

for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

sum_lib = sum_lib - members[xi];

if(sum_lib<=0)

return false;


}

return true;

}

}

else if(no_right(i))

{

if(set[level][i-9]==p && set[level][i+9]==p

&& set[level][i-1]==p )

{

count = 0;

temp_grp[count]=root[level][i-9];

members[count]++;

count++;

if(root[level][i-9] != root[level][i+9])

{

temp_grp[count]=root[level][i+9];

members[count]++;

count++;

}

else

members[0]++;

if(root[level][i+9] != root[level][i-1] &&

root[level][i-9] != root[level][i-1])

{

temp_grp[count]=root[level][i-1];

members[count]++;

count++;

}

else if(root[level][i-9] == root[level][i-1])

{

members[0]++;

}

else if(root[level][i+9] == root[level][i-1])

{

members[1]++;

}

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

sum_lib=0;

for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

sum_lib = sum_lib - members[xi];

if(sum_lib<=0)

return false;

}

return true;

}

}

else if(no_top(i))

{

if(set[level][i-1]==p && set[level][i+9]==p &&

set[level][i+1]==p )

{

count = 0;

temp_grp[count]=root[level][i-1];

members[count]++;

count++;

if(root[level][i-1] != root[level][i+9])

{

temp_grp[count]=root[level][i+9];


members[count]++;

count++;

}

else

members[0]++;

if(root[level][i+9] != root[level][i+1] &&

root[level][i-1] != root[level][i+1])

{

temp_grp[count]=root[level][i+1];

members[count]++;

count++;

}

else if(root[level][i-1] == root[level][i+1])

{

members[0]++;

}

else if(root[level][i+9] == root[level][i+1])

{

members[1]++;

}

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

sum_lib=0;

for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

sum_lib = sum_lib - members[xi];

if(sum_lib<=0)

return false;

}

return true;

}

}

else if(no_bottom(i))

{

if(set[level][i-9]==p && set[level][i-1]==p &&

set[level][i+1]==p )

{

count = 0;

temp_grp[count]=root[level][i-9];

members[count]++;

count++;

if(root[level][i-9] != root[level][i-1])

{

temp_grp[count]=root[level][i-1];

members[count]++;

count++;

}

else

members[0]++;

if(root[level][i-1] != root[level][i+1] &&

root[level][i-9] != root[level][i+1])

{

temp_grp[count]=root[level][i+1];

members[count]++;

count++;

}

else if(root[level][i-9] == root[level][i+1])

{

members[0]++;


}

else if(root[level][i-1] == root[level][i+1])

{

members[1]++;

}

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

sum_lib=0;

for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

sum_lib = sum_lib - members[xi];

if(sum_lib<=0)

return false;

}

return true;

}

}

else if(i==TOP_LEFT)

{

if(set[level][i+9]==p && set[level][i+1]==p )

{

count = 0;

temp_grp[count]=root[level][i+9];

members[count]++;

count++;

if(root[level][i+9] != root[level][i+1])

{

temp_grp[count]=root[level][i+1];

members[count]++;

count++;

}

else

members[0]++;

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

sum_lib=0;

for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

sum_lib = sum_lib - members[xi];

if(sum_lib<=0)

return false;

}

return true;

}

}

else if(i==TOP_RIGHT)

{

if(set[level][i+9]==p && set[level][i-1]==p )

{

count = 0;

temp_grp[count]=root[level][i+9];

members[count]++;

count++;

if(root[level][i+9] != root[level][i-1])


{

temp_grp[count]=root[level][i-1];

members[count]++;

count++;

}

else

members[0]++;

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

sum_lib=0;

for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

sum_lib = sum_lib - members[xi];

if(sum_lib<=0)

return false;

}

return true;

}

}

else if(i==BOTTOM_LEFT)

{

if(set[level][i-9]==p && set[level][i+1]==p )

{

count = 0;

temp_grp[count]=root[level][i-9];

members[count]++;

count++;

if(root[level][i-9] != root[level][i+1])

{

temp_grp[count]=root[level][i+1];

members[count]++;

count++;

}

else

members[0]++;

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

sum_lib=0;

for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

sum_lib = sum_lib - members[xi];

if(sum_lib<=0)

return false;

}

return true;

}

}

else if(i==BOTTOM_RIGHT)

{

if(set[level][i-9]==p && set[level][i-1]==p )

{

count = 0;


temp_grp[count]=root[level][i-9];

members[count]++;

count++;

if(root[level][i-9] != root[level][i-1])

{

temp_grp[count]=root[level][i-1];

members[count]++;

count++;

}

else

members[0]++;

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

sum_lib=0;

for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

sum_lib = sum_lib - members[xi];

if(sum_lib<=0)

return false;

}

return true;

}

}

else

{

if(set[level][i+9]==p && set[level][i-9]==p &&

set[level][i-1]==p && set[level][i+1]==p )

{

count = 0;

temp_grp[count]=root[level][i-9];

members[count]++;

count++;

if(root[level][i-9] != root[level][i+9])

{

temp_grp[count]=root[level][i+9];

members[count]++;

count++;

}

else

members[0]++;

if(root[level][i+9] != root[level][i+1] &&

root[level][i-9] != root[level][i+1])

{

temp_grp[count]=root[level][i+1];

members[count]++;

count++;

}

else if(root[level][i-9] == root[level][i+1])

{

members[0]++;

}

else if(root[level][i+9] == root[level][i+1])

{

members[1]++;

}

if(root[level][i+1] != root[level][i-1] &&


root[level][i+9] != root[level][i-1] && root[level][i-9] !=

root[level][i-1])

{

temp_grp[count]=root[level][i-1];

members[count]++;

count++;

}

else if(root[level][i-9] == root[level][i-1])

{

members[0]++;

}

else if(root[level][i+9] == root[level][i-1])

{

members[1]++;

}

else if(root[level][i+1] == root[level][i-1])

{

members[2]++;

}

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

sum_lib=0;

for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

sum_lib = sum_lib - members[xi];

if(sum_lib<=0)

return false;

}

return true;

}

}

return false;

}
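// self_kill(i, level, p) is true when placing a p stone at i would leave the
// surrounding p stones, together with the new stone, with no liberties while
// capturing no adjacent opposing group, i.e. the move would be suicide.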

private boolean self_kill(int i, int level, int p)

{

int[] temp_grp = new int[5];

int [] members_cnt = new int[5];

int count=0,sum_lib=0,members=0;

if(no_left(i))

{

count = 0;

if(set[level][i-9]==p)

{

members++;

temp_grp[count]=root[level][i-9];

count++;

}

if(set[level][i+9]==p)

{

members++;

if(root[level][i-9] != root[level][i+9])

{

temp_grp[count]=root[level][i+9];

count++;

}

}

if(set[level][i+1]==p)

{


members++;

if(root[level][i+9] != root[level][i+1] &&

root[level][i-9] != root[level][i+1])

{

temp_grp[count]=root[level][i+1];

count++;

}

}

if(count != 0)

{

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

}

sum_lib = sum_lib - members + liberties[level][i];

if(sum_lib<=0)

{

count = 0;

if(set[level][i-9]!=p)

{

members_cnt[count]++;

temp_grp[count]=root[level][i-9];

count++;

}

if(set[level][i+9]!=p)

{

if(root[level][i-9] != root[level][i+9])

{

temp_grp[count]=root[level][i+9];

members_cnt[count]++;

count++;

}

else

members_cnt[0]++;

}

if(set[level][i+1]!=p)

{

if(root[level][i+9] != root[level][i+1] &&

root[level][i-9] != root[level][i+1])

{

temp_grp[count]=root[level][i+1];

members_cnt[count]++;

count++;

}

else if(root[level][i-9] == root[level][i+1])

{

members_cnt[0]++;

}

else if(root[level][i+9] == root[level][i+1])

{

members_cnt[1]++;

}

}

if(count != 0)

{

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

sum_lib=0;


for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

sum_lib = sum_lib - members_cnt[xi];

if(sum_lib<=0)

{

return false;

}

}

}

return true;

}

else

return false;

}

}

else if(no_right(i))

{

count = 0;

members = 0;

if(set[level][i-9]==p)

{

members++;

temp_grp[count]=root[level][i-9];

count++;

}

if(set[level][i+9]==p)

{

members++;

if(root[level][i-9] != root[level][i+9])

{

temp_grp[count]=root[level][i+9];

count++;

}

}

if(set[level][i-1]==p)

{

members++;

if(root[level][i+9] != root[level][i-1] &&

root[level][i-9] != root[level][i-1])

{

temp_grp[count]=root[level][i-1];

count++;

}

}

if(count != 0)

{

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

}

sum_lib = sum_lib - members + liberties[level][i];

if(sum_lib<=0)

{

count = 0;

if(set[level][i-9]!=p)


{

members_cnt[count]++;

temp_grp[count]=root[level][i-9];

count++;

}

if(set[level][i+9]!=p)

{

if(root[level][i-9] != root[level][i+9])

{

temp_grp[count]=root[level][i+9];

members_cnt[count]++;

count++;

}

else

members_cnt[0]++;

}

if(set[level][i-1]!=p)

{

if(root[level][i+9] != root[level][i-1] &&

root[level][i-9] != root[level][i-1])

{

temp_grp[count]=root[level][i-1];

members_cnt[count]++;

count++;

}

else if(root[level][i-9] == root[level][i-1])

{

members_cnt[0]++;

}

else if(root[level][i+9] == root[level][i-1])

{

members_cnt[1]++;

}

}

if(count != 0)

{

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

sum_lib=0;

for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

sum_lib = sum_lib - members_cnt[xi];

if(sum_lib<=0)

{

return false;

}

}

}

return true;

}

else

return false;

}

}

else if(no_top(i))

{

count = 0;

members = 0;

if(set[level][i-1]==p)

{

members++;


temp_grp[count]=root[level][i-1];

count++;

}

if(set[level][i+9]==p)

{

members++;

if(root[level][i-1] != root[level][i+9])

{

temp_grp[count]=root[level][i+9];

count++;

}

}

if(set[level][i+1]==p)

{

members++;

if(root[level][i+9] != root[level][i+1] &&

root[level][i-1] != root[level][i+1])

{

temp_grp[count]=root[level][i+1];

count++;

}

}

if(count != 0)

{

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

}

sum_lib = sum_lib - members + liberties[level][i];

if(sum_lib<=0)

{

count = 0;

if(set[level][i-1]!=p)

{

members_cnt[count]++;

temp_grp[count]=root[level][i-1];

count++;

}

if(set[level][i+9]!=p)

{

if(root[level][i-1] != root[level][i+9])

{

temp_grp[count]=root[level][i+9];

members_cnt[count]++;

count++;

}

else

members_cnt[0]++;

}

if(set[level][i+1]!=p)

{

if(root[level][i+9] != root[level][i+1] &&

root[level][i-1] != root[level][i+1])

{

temp_grp[count]=root[level][i+1];

members_cnt[count]++;

count++;

}

else if(root[level][i-1] == root[level][i+1])


{

members_cnt[0]++;

}

else if(root[level][i+9] == root[level][i+1])

{

members_cnt[1]++;

}

}

if(count != 0)

{

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

sum_lib=0;

for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

sum_lib = sum_lib - members_cnt[xi];

if(sum_lib<=0)

{

return false;

}

}

}

return true;

}

else

return false;

}

}

else if(no_bottom(i))

{

count = 0;

members = 0;

if(set[level][i-9]==p)

{

members++;

temp_grp[count]=root[level][i-9];

count++;

}

if(set[level][i-1]==p)

{

members++;

if(root[level][i-9] != root[level][i-1])

{

temp_grp[count]=root[level][i-1];

count++;

}

}

if(set[level][i+1]==p)

{

members++;

if(root[level][i-1] != root[level][i+1] &&

root[level][i-9] != root[level][i+1])

{

temp_grp[count]=root[level][i+1];

count++;

}

}

if(count != 0)

{

sum_lib=0;

for(int xi=0;xi<count;xi++)


{

for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

}

sum_lib = sum_lib - members + liberties[level][i];

if(sum_lib<=0)

{

count = 0;

if(set[level][i-9]!=p)

{

members_cnt[count]++;

temp_grp[count]=root[level][i-9];

count++;

}

if(set[level][i-1]!=p)

{

if(root[level][i-1] != root[level][i-9])

{

temp_grp[count]=root[level][i-1];

members_cnt[count]++;

count++;

}

else

members_cnt[0]++;

}

if(set[level][i+1]!=p)

{

if(root[level][i-9] != root[level][i+1] &&

root[level][i-1] != root[level][i+1])

{

temp_grp[count]=root[level][i+1];

members_cnt[count]++;

count++;

}

else if(root[level][i-9] == root[level][i+1])

{

members_cnt[0]++;

}

else if(root[level][i-1] == root[level][i+1])

{

members_cnt[1]++;

}

}

if(count != 0)

{

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

sum_lib=0;

for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

sum_lib = sum_lib - members_cnt[xi];

if(sum_lib<=0)

{

return false;

}

}


}

return true;

}

else

return false;

}

}

else if(i==TOP_LEFT)

{

count = 0;

members = 0;

if(set[level][i+9]==p)

{

members++;

temp_grp[count]=root[level][i+9];

count++;

}

if(set[level][i+1]==p)

{

members++;

if(root[level][i+9] != root[level][i+1])

{

temp_grp[count]=root[level][i+1];

count++;

}

}

if(count != 0)

{

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

}

sum_lib = sum_lib - members + liberties[level][i];

if(sum_lib<=0)

{

count = 0;

if(set[level][i+9]!=p)

{

members_cnt[count]++;

temp_grp[count]=root[level][i+9];

count++;

}

if(set[level][i+1]!=p)

{

if(root[level][i+1] != root[level][i+9])

{

temp_grp[count]=root[level][i+1];

members_cnt[count]++;

count++;

}

else

members_cnt[0]++;

}

if(count != 0)

{

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

sum_lib=0;

for(int xj=0;xj<81;xj++)


{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

sum_lib = sum_lib - members_cnt[xi];

if(sum_lib<=0)

{

return false;

}

}

}

return true;

}

else

return false;

}

}

else if(i==TOP_RIGHT)

{

count = 0;

members = 0;

if(set[level][i+9]==p)

{

members++;

temp_grp[count]=root[level][i+9];

count++;

}

if(set[level][i-1]==p)

{

members++;

if(root[level][i+9] != root[level][i-1])

{

temp_grp[count]=root[level][i-1];

count++;

}

}

if(count != 0)

{

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

}

sum_lib = sum_lib - members + liberties[level][i];

if(sum_lib<=0)

{

count = 0;

if(set[level][i+9]!=p)

{

members_cnt[count]++;

temp_grp[count]=root[level][i+9];

count++;

}

if(set[level][i-1]!=p)

{

if(root[level][i-1] != root[level][i+9])

{

temp_grp[count]=root[level][i-1];

members_cnt[count]++;


count++;

}

else

members_cnt[0]++;

}

if(count != 0)

{

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

sum_lib=0;

for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

sum_lib = sum_lib - members_cnt[xi];

if(sum_lib<=0)

{

return false;

}

}

}

return true;

}

else

return false;

}

}

else if(i==BOTTOM_LEFT)

{

count = 0;

members = 0;

if(set[level][i-9]==p)

{

members++;

temp_grp[count]=root[level][i-9];

count++;

}

if(set[level][i+1]==p)

{

members++;

if(root[level][i-9] != root[level][i+1])

{

temp_grp[count]=root[level][i+1];

count++;

}

}

if(count != 0)

{

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

}

sum_lib = sum_lib - members + liberties[level][i];

if(sum_lib<=0)

{

count = 0;


if(set[level][i-9]!=p)

{

members_cnt[count]++;

temp_grp[count]=root[level][i-9];

count++;

}

if(set[level][i+1]!=p)

{

if(root[level][i-9] != root[level][i+1])

{

temp_grp[count]=root[level][i+1];

members_cnt[count]++;

count++;

}

else

members_cnt[0]++;

}

if(count != 0)

{

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

sum_lib=0;

for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

sum_lib = sum_lib - members_cnt[xi];

if(sum_lib<=0)

{

return false;

}

}

}

return true;

}

else

return false;

}

}

else if(i==BOTTOM_RIGHT)

{

count = 0;

members = 0;

if(set[level][i-9]==p)

{

members++;

temp_grp[count]=root[level][i-9];

count++;

}

if(set[level][i-1]==p)

{

members++;

if(root[level][i-9] != root[level][i-1])

{

temp_grp[count]=root[level][i-1];

count++;

}

}

if(count != 0)

{

sum_lib=0;

for(int xi=0;xi<count;xi++)

{


for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

}

sum_lib = sum_lib - members + liberties[level][i];

if(sum_lib<=0)

{

count = 0;

if(set[level][i-9]!=p)

{

members_cnt[count]++;

temp_grp[count]=root[level][i-9];

count++;

}

if(set[level][i-1]!=p)

{

if(root[level][i-9] != root[level][i-1])

{

temp_grp[count]=root[level][i-1];

members_cnt[count]++;

count++;

}

else

members_cnt[count]++;

}

if(count != 0)

{

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

sum_lib=0;

for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

sum_lib = sum_lib - members_cnt[xi];

if(sum_lib<=0)

{

return false;

}

}

}

return true;

}

else

return false;

}

}

else

{

count = 0;

members = 0;

if(set[level][i+9]==p)

{

members++;

temp_grp[count]=root[level][i+9];

count++;

}

if(set[level][i-9]==p)


{

members++;

if(root[level][i-9] != root[level][i+9])

{

temp_grp[count]=root[level][i-9];

count++;

}

}

if(set[level][i+1]==p)

{

members++;

if(root[level][i+9] != root[level][i+1] &&

root[level][i-9] != root[level][i+1])

{

temp_grp[count]=root[level][i+1];

count++;

}

}

if(set[level][i-1]==p)

{

members++;

if(root[level][i+1] != root[level][i-1] &&

root[level][i+9] != root[level][i-1] &&

root[level][i-9] != root[level][i-1])

{

temp_grp[count]=root[level][i-1];

count++;

}

}

if(count != 0)

{

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

}

sum_lib = sum_lib - members + liberties[level][i];

if(sum_lib<=0)

{

count = 0;

if(set[level][i+9]!=p)

{

members_cnt[count]++;

temp_grp[count]=root[level][i+9];

count++;

}

if(set[level][i-9]!=p)

{

if(root[level][i-9] != root[level][i+9])

{

temp_grp[count]=root[level][i-9];

members_cnt[count]++;

count++;

}

else

members_cnt[0]++;

}

if(set[level][i+1]!=p)

{

if(root[level][i+9] != root[level][i+1] &&

root[level][i-9] != root[level][i+1])


{

temp_grp[count]=root[level][i+1];

members_cnt[count]++;

count++;

}

else if(root[level][i+9] == root[level][i+1])

{

members_cnt[0]++;

}

else if(root[level][i-9] == root[level][i+1])

{

members_cnt[1]++;

}

}

if(set[level][i-1]!=p)

{

if(root[level][i+1] != root[level][i-1] &&

root[level][i+9] != root[level][i-1] &&

root[level][i-9] != root[level][i-1])

{

temp_grp[count]=root[level][i-1];

members_cnt[count]++;

count++;

}

else if(root[level][i+9] == root[level][i-1])

{

members_cnt[0]++;

}

else if(root[level][i-9] == root[level][i-1])

{

members_cnt[1]++;

}

else if(root[level][i+1] == root[level][i-1])

{

members_cnt[2]++;

}

}

if(count != 0)

{

sum_lib=0;

for(int xi=0;xi<count;xi++)

{

sum_lib=0;

for(int xj=0;xj<81;xj++)

{

if(temp_grp[xi] == root[level][xj])

{

sum_lib+=liberties[level][xj];

}

}

sum_lib = sum_lib - members_cnt[xi];

if(sum_lib<=0)

{

return false;

}

}

}

return true;

}

else

return false;

}

}

return false;

}
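// routines() copies the board arrays and the per-ply counters from ply
// 'level' into ply 'depth', so that a candidate move can be simulated on the
// copy without disturbing the parent position.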

public void routines(int level, int depth)


{

System.arraycopy(set[level], 0, set[depth], 0, 81);

System.arraycopy(root[level], 0, root[depth], 0, 81);

System.arraycopy(liberties[level], 0, liberties[depth], 0, 81);

System.arraycopy(cnt_black, level, cnt_black,depth, 1);

System.arraycopy(cnt_black_cpt, level, cnt_black_cpt,depth, 1);

System.arraycopy(cnt_white, level, cnt_white,depth, 1);

System.arraycopy(cnt_white_cpt, level, cnt_white_cpt,depth, 1);

System.arraycopy(territory_black,level,territory_black,depth,1);

System.arraycopy(territory_white,level,territory_white,depth,1);

System.arraycopy(single_eye_black,level,single_eye_black,depth,1);

System.arraycopy(single_eye_white,level,single_eye_white,depth,1);

System.arraycopy(double_eye_black,level,double_eye_black,depth,1);

System.arraycopy(double_eye_white,level,double_eye_white,depth,1);

}

//Methods for deciding the move for the computer
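// makeMove() is the maximising root of the Selective Minimax search: for each
// empty point in the focused search space it skips eye-filling and suicide
// moves, rejects a move that would recreate the previous board position (a
// simple ko check against prev_board), simulates the move at ply 1, calls
// min() with alpha-beta bounds, and finally plays the best-scoring point on
// the real board and refreshes the display.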

public void makeMove(int level)

{

key = false;

int depth=1,mi=0;

double score,alpha=-30000,beta=30000;

routines(level,depth);

focused_searchspace(level);

ArrayList<Integer> space_makeMove = new

ArrayList<Integer>(search_tier.get(level));

for (int i=0;i<space_makeMove.size();i++)

{

if (set[level][space_makeMove.get(i)]==0)

{

if(PC==1 && B1==1 && B2==0)

{

if ((eyes(space_makeMove.get(i),level,1)) ||

(self_kill(space_makeMove.get(i),level,2)))

{

continue;

}

}

else if(PC==1 && B1==0 && B2==1)

{

if (eyes(space_makeMove.get(i),level,2) ||

self_kill(space_makeMove.get(i),level,1))

{

continue;

}

}

if(PC==1 && B1==1 && B2==0)

set[depth][space_makeMove.get(i)]=2;

else if(PC==1 && B1==0 && B2==1)

set[depth][space_makeMove.get(i)]=1;

if(Arrays.equals(prev_board, set[depth]))

{

set[depth][space_makeMove.get(i)] = 0;

continue;

}

root[depth][space_makeMove.get(i)] =

space_makeMove.get(i);

if(PC==1 && B1==1 && B2==0)

cnt_white[depth]++;

else if(PC==1 && B1==0 && B2==1)


cnt_black[depth]++;

int grp_join = set(space_makeMove.get(i),depth);

capture(depth);

territory(depth);

score = min(depth+1,alpha,beta,depth);

score = score + grp_join/10;

if (score > alpha)

{

mi=space_makeMove.get(i);

alpha=score;

}

routines(level,depth);

}

}

key = true;

//make a move

if(PC==1 && B1==1 && B2==0)

{

set[0][mi] = 2; // do all settings

System.arraycopy(set[0], 0, prev_board, 0, 81);

point[mi].setIcon(img2);

if(prev_move2 != 100)

{

point[prev_move2].setOpaque(false);

point[prev_move2].setBackground(Color.gray);

}

point[mi].setOpaque(true);

point[mi].setBackground(Color.white);

prev_move2 = mi;

root[0][mi] = mi;

cnt_white[0]++;

}

else if(PC==1 && B1==0 && B2==1)

{

set[0][mi] = 1; // do all settings

System.arraycopy(set[0], 0, prev_board, 0, 81);

point[mi].setIcon(img1);

if(prev_move1 != 100)

{

point[prev_move1].setOpaque(false);

point[prev_move1].setBackground(Color.white);

}

point[mi].setOpaque(true);

point[mi].setBackground(Color.gray);

prev_move1 = mi;

root[0][mi] = mi;

cnt_black[0]++;

}

turn++;

set(mi,0);

capture(0);

score_system();

}
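// min() and max() are the alternating plies of the alpha-beta search; the
// lookahead is cut off at depth 4, where territory() is refreshed and the
// leaf position is scored by evaluation_func().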

public double min(int depth,double alpha,double beta, int level)

{

double score;

key = false;

routines(level,depth);

if (depth == 4)

{

territory(level);


if((PC==1 && B1==1 && B2==0))

return (evaluation_func(level,2));

else if((PC==1 && B1==0 && B2==1))

return (evaluation_func(level,1));

}

focused_searchspace(level);

ArrayList<Integer> space_makeMove = new

ArrayList<Integer>(search_tier.get(level));

for (int i=0;i<space_makeMove.size();i++)

{

if (set[level][space_makeMove.get(i)]==0)

{

if(PC==1 && B1==1 && B2==0)

{

if (eyes(space_makeMove.get(i),level,2) ||

self_kill(space_makeMove.get(i),level,1))

{

continue;

}

}

else if(PC==1 && B1==0 && B2==1)

{

if (eyes(space_makeMove.get(i),level,1) ||

self_kill(space_makeMove.get(i),level,2))

{

continue;

}

}

if((PC==1 && B1==1 && B2==0))

set[depth][space_makeMove.get(i)]=1;

else if((PC==1 && B1==0 && B2==1))

set[depth][space_makeMove.get(i)]=2;

root[depth][space_makeMove.get(i)] =

space_makeMove.get(i);

if((PC==1 && B1==1 && B2==0))

cnt_black[depth]++;

else if((PC==1 && B1==0 && B2==1))

cnt_white[depth]++;

set(space_makeMove.get(i),depth);

capture(depth);

territory(depth);

score = max(depth+1,alpha,beta,depth);

if (score < beta) beta=score;

if(alpha>=beta)

{

return(beta);

}

routines(level,depth);

}

}

return(beta);

}

public double max(int depth,double alpha,double beta, int level)

{

key = false;

double score;

routines(level,depth);


if (depth == 4)

{

territory(level);

if((PC==1 && B1==1 && B2==0))

return (evaluation_func(level,1));

else if((PC==1 && B1==0 && B2==1))

return (evaluation_func(level,2));

}

focused_searchspace(level);

ArrayList<Integer> space_makeMove = new

ArrayList<Integer>(search_tier.get(level));

for (int i=0;i<space_makeMove.size();i++)

{

if (set[level][space_makeMove.get(i)]==0)

{

if(PC==1 && B1==1 && B2==0)

{

if (eyes(space_makeMove.get(i),level,1) ||

self_kill(space_makeMove.get(i),level,2))

{

continue;

}

}

else if(PC==1 && B1==0 && B2==1)

{

if (eyes(space_makeMove.get(i),level,2) ||

self_kill(space_makeMove.get(i),level,1))

{

continue;

}

}

if((PC==1 && B1==1 && B2==0))

set[depth][space_makeMove.get(i)]=2;

else if((PC==1 && B1==0 && B2==1))

set[depth][space_makeMove.get(i)]=1;

root[depth][space_makeMove.get(i)] =

space_makeMove.get(i);

if((PC==1 && B1==1 && B2==0))

cnt_white[depth]++;

else if((PC==1 && B1==0 && B2==1))

cnt_black[depth]++;

set(space_makeMove.get(i),depth);

capture(depth);

territory(depth);

score = min(depth+1,alpha,beta,depth);

if (score > alpha) alpha=score;

if(alpha>=beta)

{

return(alpha);

}

routines(level,depth);

}

}

return(alpha);

}

public boolean no_left(int id)

{

for(int i=0;i<7;i++)

{

if(id == NO_LEFT[i])


{

return true;

}

}

return false;

}

public boolean no_right(int id)

{

for(int i=0;i<7;i++)

{

if(id == NO_RIGHT[i])

{

return true;

}

}

return false;

}

public boolean no_top(int id)

{

for(int i=0;i<7;i++)

{

if(id == NO_TOP[i])

{

return true;

}

}

return false;

}

public boolean no_bottom(int id)

{

for(int i=0;i<7;i++)

{

if(id == NO_BOTTOM[i])

{

return true;

}

}

return false;

}
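// evaluation_func() scores a leaf by combining a handcrafted estimate
// (0.5 x territory + 2 x opponent stones captured + 0.5 x own stones on the
// board, own side minus opponent) with the trained network's output for the
// 81-point board encoding (-1 Black, 1 White, 0 empty); the two values are
// then blended, multiplied or added depending on their signs.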

private double evaluation_func(int level, int player)

{

double bscore = 0.5*territory_black[level] +

2*cnt_white_cpt[level] + 0.5*cnt_black[level] ;

double wscore = 0.5*territory_white[level] +

2*cnt_black_cpt[level] + 0.5*cnt_white[level];

double sc= wscore - bscore;

if(player == 1)

sc = sc*(-1);

double[][] input = new double[1][81];

for(int xx=0;xx<81;xx++)

{

if(set[level][xx]==1)

input[0][xx]=-1;

else if(set[level][xx]==2)

input[0][xx]=1;

else if(set[level][xx]==0)

input[0][xx]=0;

}

double[][] ideal = new double[1][1];

MLDataSet result = new BasicMLDataSet(input, ideal);

MLData output = null;


for(MLDataPair pair: result ) {

output = network.compute(pair.getInput());

}

double var = output.getData(0);

if(sc == 0.0)

{

sc = var;

}

else

{

if((player==2 && sc>0 && var > 0))

sc = sc*var;

else if((player==1 && sc<0 && var < 0))

sc = sc*var*(-1);

else

sc = sc+var;

}

return (sc);

}

private void score_system()

{

int blib=0,wlib=0;

territory(0);

black_stone_txt.setText(Integer.toString(cnt_black[0]));

white_stone_txt.setText(Integer.toString(cnt_white[0]));

black_cpt_txt.setText(Integer.toString(cnt_white_cpt[0]));

white_cpt_txt.setText(Integer.toString(cnt_black_cpt[0]));

black_territory_txt.setText(Integer.toString

(territory_black[0]));

white_territory_txt.setText(Integer.toString

(territory_white[0]));

black_eye_txt.setText(Integer.toString(single_eye_black[0]));

white_eye_txt.setText(Integer.toString(single_eye_white[0]));

black_twoeye_txt.setText(Integer.toString

(double_eye_black[0]));

white_twoeye_txt.setText(Integer.toString

(double_eye_white[0]));

for(int xx=0;xx<81;xx++)

{

if(set[0][xx]==1)

{

blib+=liberties[0][xx];

}

if(set[0][xx]==2)

{

wlib+=liberties[0][xx];

}

}

black_lib_txt.setText(Integer.toString(blib));

white_lib_txt.setText(Integer.toString(wlib));

}
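// The Score button reports Black's margin as (Black territory + captured
// White stones) minus (White territory + captured Black stones + 6.5 komi).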

private void ScoreActionPerformed(java.awt.event.ActionEvent evt) {

double sc=0;

score_system();

sc = (territory_black[0] + cnt_white_cpt[0]) -

(territory_white[0] + cnt_black_cpt[0] + 6.5);

if(sc>0)

{

winnertxt.setText("B+"+sc);

}

else if(sc<0)

{

winnertxt.setText("W+"+Math.abs(sc));

}

else


winnertxt.setText("Draw");

}
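// focused_searchspace() builds the selective move list for one ply: every
// empty point among the eight neighbours of an existing stone, or two lines
// away from it horizontally or vertically, is added as a candidate; if the
// board holds no stones yet, all 81 points become candidates.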

private void focused_searchspace(int level)

{

int tval;

search.clear();

for(int x=0;x<81;x++)

{

if(set[level][x]!=0)

{

if(no_left(x))

{

tval = x-9;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+9;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+1;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+2;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+10;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x-8;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x-18;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+18;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

}

else if(no_right(x))

{

tval = x-9;

if(tval>=0 && tval<81 && !search.contains(tval))


{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+9;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x-1;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x-10;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+8;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x-18;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+18;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x-2;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

}

else if(no_top(x))

{

tval = x+9;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x-1;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+1;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}


tval = x-2;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+2;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+10;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+8;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+18;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

}

else if(no_bottom(x))

{

tval = x-9;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x-1;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+1;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x-2;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+2;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x-8;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)


search.add(tval);

}

tval = x-10;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x-18;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

}

else if(x==TOP_LEFT)

{

tval = x+9;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+1;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+2;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+10;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+18;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

}

else if(x==TOP_RIGHT)

{

tval = x+9;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x-1;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x-2;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);


}

tval = x+8;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+18;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

}

else if(x==BOTTOM_LEFT)

{

tval = x-9;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+1;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+2;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x-8;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x-18;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

}

else if(x==BOTTOM_RIGHT)

{
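// Bottom-right corner: candidates extend only to the left and upward
// (x-1, x-2, x-9, x-10, x-18).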

tval = x-9;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x-1;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x-2;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x-10;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x-18;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

}

else

{
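// Interior point: all twelve surrounding candidates are considered: the four
// orthogonal neighbors (x-1, x+1, x-9, x+9), the four diagonals (x-8, x+8,
// x-10, x+10), and the four two-point jumps (x-2, x+2, x-18, x+18).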

tval = x-9;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+9;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x-1;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+1;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x-8;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+10;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+8;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x-10;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x-2;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+2;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x-18;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

tval = x+18;

if(tval>=0 && tval<81 && !search.contains(tval))

{

if(set[level][tval]==0)

search.add(tval);

}

}

}

}

if(search.isEmpty())

{
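// Fallback: if no candidates were collected (for example, on an empty board),
// every intersection becomes a candidate move.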

for(int ii=0;ii<81;ii++)

{

search.add(ii);

}

}
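
// Register the candidate-move list for this level of the selective minimax tree.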

search_tier.add(level, search);

}

private void passActionPerformed(java.awt.event.ActionEvent evt) {
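// Pass button: advance the turn and the pass counter; two consecutive passes
// end the game. In player-vs-computer mode the computer replies on its turn.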

turn++;

pass_cnt++;

if(pass_cnt == 2)

JOptionPane.showMessageDialog(null, "Game Over");

if((turn%2 == 0) && PC==1)

makeMove(0);

}

private void Rd_PPMouseClicked(java.awt.event.MouseEvent evt) {

Rd_B1.setText("Player 1 is Black Stone");

Rd_B2.setText("Player 2 is Black Stone");

PP = 1;

PC = 0;

}

private void Rd_PCMouseClicked(java.awt.event.MouseEvent evt) {

Rd_B1.setText("Player is Black Stone");

Rd_B2.setText("Computer is Black Stone");

PP = 0;

PC = 1;

}

private void playMouseClicked(java.awt.event.MouseEvent evt) {

Rd_PP.setEnabled(false);

Rd_PC.setEnabled(false);

Rd_B1.setEnabled(false);

Rd_B2.setEnabled(false);

if(PC==1 && B1==0 && B2==1)

makeMove(0);

}

private void Rd_B1MouseClicked(java.awt.event.MouseEvent evt) {

B1 = 1;

B2 = 0;

}

private void Rd_B2MouseClicked(java.awt.event.MouseEvent evt) {

B1 = 0;

B2 = 1;

}

@Override

public void actionPerformed(java.awt.event.ActionEvent e)

{
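// Board-click handler: validates the attempted move (suicide, self-kill, and
// ko checks), places the stone, updates the stone counts, captures, and score,
// and in player-vs-computer mode triggers the computer's reply.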

ImageIcon ig;

key = true;

if(turn%2==0)

ig = img1;

else

ig = img2;

for(int i = 0;i<81;i++)

{

if(e.getSource()==point[i])

{

if( set[0][i] == 0)

{

if(turn%2 != 0 && (PP==1 || ( PC==1 && B1==1 && B2==0)))

{

if ((eyes(i,0,2)))

{

JOptionPane.showMessageDialog(null, "Suicide Move...Please try again");

break;

}

if (self_kill(i,0,1))

{

JOptionPane.showMessageDialog(null, "Self Kill...Please try again");

break;

}

set[0][i] = 1;

if(Arrays.equals(prev_board, set[0]))

{
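// Simple ko rule: the move would recreate the previous board position, so it
// is rejected and the stone is removed.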

JOptionPane.showMessageDialog(null, "Ko move...Please try again");

set[0][i] = 0;

break;

}

System.arraycopy(set[0], 0, prev_board, 0, 81);

point[i].setIcon(img1);

if(prev_move1 != 100)

{

point[prev_move1].setOpaque(false);

point[prev_move1].setBackground(Color.white);

}

point[i].setOpaque(true);

point[i].setBackground(Color.gray);

prev_move1 = i;

root[0][i] = i;

cnt_black[0]++;

turn++;

pass_cnt = 0;

set(i,0);

capture(0);

score_system();

}

else if(turn%2 == 0 && ((PP==1) || (PC==1 && B1==0 && B2==1)))

{
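// White-stone branch: mirrors the black-stone branch above, with the same
// suicide, self-kill, and ko checks.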

if ((eyes(i,0,1)))

{

JOptionPane.showMessageDialog(null, "Suicide Move...Please try again");

break;

}

if (self_kill(i,0,2))

{

JOptionPane.showMessageDialog(null, "Self Kill...Please try again");

break;

}

set[0][i] = 2;

if(Arrays.equals(prev_board, set[0]))

{

JOptionPane.showMessageDialog(null, "Ko move...Please try again");

set[0][i] = 0;

break;

}

System.arraycopy(set[0], 0, prev_board, 0, 81);

point[i].setIcon(img2);

if(prev_move2 != 100)

{

point[prev_move2].setOpaque(false);

point[prev_move2].setBackground(Color.black);

}

point[i].setOpaque(true);

point[i].setBackground(Color.white);

prev_move2 = i;

root[0][i] = i;

cnt_white[0]++;

turn++;

pass_cnt = 0;

set(i,0);

capture(0);

score_system();

}
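
// In player-vs-computer mode, let the computer reply after the human's move.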

if(turn%2 != 0 && (PC==1 && B1==0 && B2==1))

{

makeMove(0);

}

if(turn%2 == 0 && (PC==1 && B1==1 && B2==0))

{

makeMove(0);

}

}

}

}

}

public static void main(String args[]) {

/* Create and display the form */

java.awt.EventQueue.invokeLater(new Runnable() {

public void run() {

new Go().setVisible(true);

}

});

}

}

References

[1] Wikipedia. (2013, Nov 20). Computer Go [Online]. Available: http://en.wikipedia.org/wiki/Computer_Go

[2] KGS. (2013, Nov 20). The KGS Go Server [Online]. Available: http://www.gokgs.com/

[3] Sensei's Library. (2013, Nov 20). Sensei's Library [Online]. Available: http://senseis.xmp.net/?ComputerGo

[4] Wikipedia. (2013, Nov 20). Minimax Search Algorithm [Online]. Available: http://en.wikipedia.org/wiki/Minimax

[5] David E. Rumelhart, Geoffrey E. Hinton and Ronald J. Williams. (1986). Backpropagation [Online]. Available: http://www.nature.com/nature/journal/v323/n6088/abs/323533a0.html

[6] J. Heaton. (2013, Nov 20). Encog Framework [Online]. Available: http://www.heatonresearch.com/encog

[7] Wikipedia. (2013, Nov 20). Resilient Backpropagation [Online]. Available: http://en.wikipedia.org/wiki/Rprop

[8] David E. Moriarty and Risto Miikkulainen. (1994). Evolving Neural Networks to Focus Minimax Search [Online]. Available: http://nn.cs.utexas.edu/?moriarty:aaai94

[9] M. Enzenberger. (2003). Evaluation in Go by a Neural Network Using Soft Segmentation [Online]. Available: http://webdocs.cs.ualberta.ca/~emarkus/neurogo/neurogo3/

[10] X. Cai, G. K. Venayagamoorthy and D. C. Wunsch II. (2009). Evolutionary Swarm Neural Network Game Engine for Capture Go [Online]. Available: http://www.ncbi.nlm.nih.gov/pubmed/20005671

[11] Mathys C. du Plessis. (2009). A Hybrid Neural Network and Minimax for Zero-Sum Games [Online]. Available: http://dl.acm.org/citation.cfm?id=1632158

[12] Kumar Chellapilla and David B. Fogel. (1999). Evolving Neural Networks to Play Checkers Without Relying on Expert Knowledge [Online]. Available: http://www.cs.nott.ac.uk/~gxk/courses/g5baim/papers/checkers-002/TNNKChellapillaAndDBFogelText.pdf

[13] GNU Go. (2013, Nov 20). GNU Go [Online]. Available: http://www.gnu.org/software/gnugo/

[14] Perl. (2013, Nov 20). Perl [Online]. Available: http://strawberryperl.com/

[15] Gogui. (2013, Nov 20). Board and stone images from Gogui [Online]. Available: http://senseis.xmp.net/?Gogui

[16] Java NetBeans. (2013, Nov 20). Java NetBeans [Online]. Available: https://netbeans.org/

[17] Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach, 1995.

