
Commit b36230f

committed: documentation
1 parent b14b13f commit b36230f

File tree

4 files changed: +70 −6 lines changed


MiniMaxPlayer.py

Lines changed: 8 additions & 5 deletions
@@ -21,15 +21,15 @@ class MinimaxPlayer_00(PlayerRandom):
 
     def __init__(self, depth = 1):
         self.depth = depth
+        self.point_map = { 'P':1, 'N':3, 'B':3, 'R':5, 'Q':9 }
 
     def value(self, game: Chess) -> int:
         '''Provides the value of a game state by evaluating the number of takes possible.'''
-        point_map = { 'P':1, 'N':3, 'B':3, 'R':5, 'Q':9 }
         attackTargets = [move[1] for move in game.getMoves() if game.isAttackMove(move[0], move[1])]
         targetPieces = [game.pieceAt(cell) for cell in attackTargets]
         points = 0
         for piece in targetPieces:
-            pieceValue = point_map[piece[1]]
+            pieceValue = self.point_map[piece[1]]
             if piece[0] == 'w':
                 points -= pieceValue
             elif piece[0] == 'b':
@@ -153,8 +153,11 @@ def choosePromotion(self, game: Chess) -> str:
         return 'Q'
 
 class MinimaxPlayer_04(MinimaxPlayer_03):
-    '''trying to implement an alpha-beta pruning this time: performance improved
-    also added a value estimation that's unbiased on which side is playing.'''
+    '''
+    trying to implement an alpha-beta pruning this time: performance improved
+    also added a value estimation that's unbiased on which side is playing.
+    Not afraid of getting taken
+    '''
 
     def __init__(self, depth):
         self.depth = depth
@@ -209,7 +212,7 @@ def minimax(self, game: Chess, depth: int, alphabeta: list[int|float] = [float('
     def chooseMove(self, game: Chess) -> list:
         moves = game.getMoves()
         n = len(moves)
-        value_map = dict()
+        value_map: dict[list[list[int]], int] = dict()
         for i, move in enumerate(moves):
             print(f'thinking {(i+1)*100//n}%')
             newGame = game.makeMove(*move)
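The relocated `point_map` drives a simple material count over the pieces currently under attack: negative for white pieces, positive for black. A standalone sketch of that evaluation (a hypothetical helper, not the repo's actual class; `target_pieces` stands in for the pieces found at the attack targets):

```python
# Assumed piece encoding, mirroring the diff above: colour letter then piece
# letter, e.g. 'wQ' is the white queen, 'bN' a black knight.
POINT_MAP = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9}

def value(target_pieces):
    """Sum attacked material: white pieces count against, black in favour."""
    points = 0
    for piece in target_pieces:
        piece_value = POINT_MAP[piece[1]]
        if piece[0] == 'w':
            points -= piece_value
        elif piece[0] == 'b':
            points += piece_value
    return points

print(value(['wQ', 'bN', 'bP']))  # → -9 + 3 + 1 = -5
```

Note the asymmetry this bakes in: the score measures attacked material regardless of whose turn it is, which is the side-bias the `MinimaxPlayer_04` docstring says it later corrects.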

README.md

Lines changed: 53 additions & 1 deletion
@@ -50,7 +50,7 @@ game = Chess.loadFrom(r'file')
 * ```Test_Chess.py``` is for use in the command line. It can load a .save file and provide an interactive console from an instance of the game, giving control for testing.
 * The assets folder contains sample games for the tests to load and also images of the pieces for the game to use.
 
-## Documentation
+## History
 
 I began the project with the intention of implementing Regressive Learning on the Chess board.
 
@@ -64,3 +64,55 @@ I began the project with the intention of implementing Regressive Learning on th
 UI code controlled the game menus and the controls of the game.
 
 > This I later realized to be a bad practice, as game logic and UI logic mixed throughout the code and separate concerns got entangled.
+* I had architected the game in such a way that control started from the UI.
+The UI code handled UI logic and instantiated the player and game object classes. It was responsible for displaying the game state, handling user interaction, and delegating the choices made by the players to the game.
+* A lot of effort had already been spent on UI and architecture design, perhaps the aftermath of poor project planning and a lack of personal consistency.
+I decided to focus more of my efforts on implementing Player classes and tapping into the intelligence part of the project.
+
+## The rise of Intelligence
+
+Here I record the journey of developing better-performing Chess players.
+The observations and improvements made are also summarized.
+
+* Random and Greedy Players
+I had already started out with a `PlayerRandom` and a `GreedyPlayer`.
+As the names suggest, the random player picks a random move out of the available moves, and the greedy player additionally tries to take pieces in a greedy fashion whenever there are pieces to take.
+This helped test out scenarios involving non-human players.
+It didn't provide any real intelligence, though, and the satisfaction waned really fast.
+* Minimax Players
+Here is [the Minimax algorithm on Wikipedia](https://en.wikipedia.org/wiki/Minimax).
+The idea is to choose a move, but for each possible move we also calculate the move the opponent would take in response, and so forth, until N levels of depth are reached in the search tree.
+
+```txt
+                (X)          minplayer
+       move 1  /   \  move 2
+            (Y)     (Z)      maxplayer
+           /   \   /   \
+        (A)  (B) (C)  (D)
+        -5    9   2    1
+
+                (X)          minplayer
+               /   \
+            (Y)     (Z)      maxplayer chooses the move with the max value
+             9       2
+
+                (X)          takes the move with the minimum value
+                 2
+```
+
+Without a limit on the depth, the algorithm might end up exploring every possible continuation of the game. That is not practical for a game like Chess, which has far too many possibilities to evaluate. So at the last depth level, an estimated value of the game is used instead.
+
+Implementing the Minimax algorithm improved gameplay, but there were edge cases, partly due to errors in my implementation and partly due to the value estimation I had set up.
+* `MinimaxPlayer_00` only has a value function that maximized the "opportunity" of taking pieces by counting the pieces available for capture in a game position.
+It only estimated a single move and didn't really perform a Minimax search.
+* `MinimaxPlayer_01`, `MinimaxPlayer_02` and `MinimaxPlayer_03` experimented with minor improvements that helped set up the minimax algorithm.
+Evaluating just 3 successive game positions started to take more than a minute to complete.
+Another notable issue was the UI blocking while the player was making its calculations.
+**Threads** helped solve this by running the player's search alongside the UI.
+
+The value function still optimized only the "possibility" of a take.
+Unsurprisingly, it didn't prioritize takes, only the opportunity for them.
+The player moved pieces into good positions but it didn't attack.
+o___0
+* `MinimaxPlayer_04` solved the long calculation times of the older versions by implementing **alpha-beta pruning** on the search tree.
+Alpha-beta pruning maintains two values, alpha and beta, across the search algorithm.
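The UI-blocking fix mentioned above can be sketched with Python's standard `threading` and `queue` modules. The names here (`think_async`, the lambda "search", the list standing in for a game) are illustrative stand-ins, not the repo's actual API:

```python
import threading
import queue

def think_async(choose_move, game):
    """Run a slow move search on a worker thread; the caller polls the queue."""
    result = queue.Queue(maxsize=1)

    def worker():
        result.put(choose_move(game))

    threading.Thread(target=worker, daemon=True).start()
    return result

# The UI loop keeps redrawing while the player "thinks":
res = think_async(lambda g: max(g), [3, 1, 4])  # trivial stand-in for a search
while True:
    try:
        move = res.get_nowait()
        break
    except queue.Empty:
        pass  # redraw the board / handle events here instead of spinning
print(move)  # → 4
```

CPython threads don't actually speed up a pure-Python search because of the GIL, but they do keep the UI loop responsive, which is the blocking problem described above.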
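The pruning that `MinimaxPlayer_04` introduces can be sketched over the same toy tree as in the diagram above. The `Node` class and its `getMoves`/`makeMove`/`value` methods are hypothetical stand-ins echoing the names in the diff, not the repo's actual Chess API:

```python
import math

class Node:
    """Toy game state: leaves carry a value, interior nodes carry children."""
    def __init__(self, value=None, children=()):
        self._value = value
        self.children = list(children)

    def getMoves(self):
        return list(range(len(self.children)))

    def makeMove(self, move):
        return self.children[move]

    def value(self):
        return self._value

def alphabeta(node, depth, alpha=-math.inf, beta=math.inf, maximizing=True):
    """Depth-limited minimax with alpha-beta pruning."""
    moves = node.getMoves()
    if depth == 0 or not moves:
        return node.value()  # estimate at the depth limit (or a terminal node)
    if maximizing:
        best = -math.inf
        for m in moves:
            best = max(best, alphabeta(node.makeMove(m), depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:  # remaining siblings cannot change the outcome
                break
        return best
    else:
        best = math.inf
        for m in moves:
            best = min(best, alphabeta(node.makeMove(m), depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

# The tree from the diagram: min at the root (X), max below (Y, Z).
tree = Node(children=[Node(children=[Node(-5), Node(9)]),
                      Node(children=[Node(2), Node(1)])])
print(alphabeta(tree, depth=2, maximizing=False))  # → 2
```

With the `break` statements removed this is plain minimax; the break is what lets whole subtrees be skipped once a sibling already guarantees a better result for the parent, which is where the speed-up over the earlier players comes from.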
Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
+Pe2-e4 Pe7-e5 Ng1-f3 Pf7-f5 Nf3xPe5 Qd8-f6 Pd2-d4 Pd7-d5 Bf1-d3 Ng8-h6 Nb1-c3 Pc7-c6 Ph2-h3 Qf6-f7 Ne5xQf7 Bf8-e7 Nf7xRh8 Be7-g5 Bc1xBg5 Nh6-g4 Ph3xNg4 Ph7-h6 Bg5-f4 O-O Pg4xPf5 Bc8xPf5 Pe4xBf5 Rf8-e8 Bf4-e5 Re8xBe5 Pd4xRe5 Kg8-f8 Qd1-h5 Pg7-g6 Qh5xPg6 Kf8-e7 Pf5-f6 Ke7-f8 Qg6-g7 Kf8-e8 Qg7-e7#
+White Wins
+white: PlayerUI
+black: MinimaxPlayer_04(depth:10)
+Not afraid of getting taken
Lines changed: 4 additions & 0 deletions
@@ -0,0 +1,4 @@
+Pg2-g4 Pf7-f5 Pe2-e4 Pf5xPe4 Pf2-f4 Pe4xPf3 Bf1-b5 Pf3-f2 Pf2-f1=Q Ke2-e3 Qf1-e1 Ng1-e2 Qe1xPd2 Qd2xPc2 Nb1-a3 Qc2-b3 Bc1-e3 Pe7-e5 Ne2-d4 Nb8-c6 Ra1-c1 Ng8-f6 Rc1-c3 Bf8-b4 Nd4-e2 Nf6-d5 Ph2-h4 Qd8-f6 Kf3-g3 Qf6-f1 Ne2-c1 Qf1-e1 Be3-f2 Qb3-c2 Qc2xBf2#
+Black Wins
+white: MinimaxPlayer_04(depth:20)
+black: MinimaxPlayer_04(depth:10)
