From f646f1b039aaaeaf02fb4b686c22bb30b7f9a184 Mon Sep 17 00:00:00 2001
From: "Foh, Chuan Dr (Elec Electronic Eng)" <c.foh@surrey.ac.uk>
Date: Sat, 30 Oct 2021 10:54:08 +0000
Subject: [PATCH] Update README.md

---
 README.md | 65 ++++---------------------------------------------------
 1 file changed, 4 insertions(+), 61 deletions(-)

diff --git a/README.md b/README.md
index ac15a74..d4277cc 100644
--- a/README.md
+++ b/README.md
@@ -1,64 +1,7 @@
-# Deep Reinforcement Learning
-## Project: Train AI to play Snake
+# Deep Reinforcement Learning - Train AI to play Snake
 
-## Introduction
-The goal of this project is to develop an AI Bot that learns to play the popular game Snake from scratch. The implementation supports play by a human player as well as rule-based, Q-learning, and Deep Reinforcement Learning algorithms. For Q-learning and Deep Reinforcement Learning, no rules about the game are given; initially the Bot has to explore all options to learn what to do to earn a good reward.
+## Note
+I have revised and migrated this project to GitHub. Please check the project on GitHub, where I may have included more algorithms and information:
 
-The code follows this tutorial: https://towardsdatascience.com/how-to-teach-an-ai-to-play-games-deep-reinforcement-learning-28f9b920440a
+https://github.com/cfoh/snake-game
 
-
-## Install
-This project requires Python 3.8 with the pygame library installed, as well as Keras with a TensorFlow backend.
-```bash
-# clone via SSH
-git clone gitlab@gitlab.eps.surrey.ac.uk:cf0014/snake.git
-# or clone via HTTPS
-git clone https://gitlab.eps.surrey.ac.uk/cf0014/snake.git
-```
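-
-The exact dependency versions are not pinned here; as a rough sketch (assuming pip is available and the packages are published as pygame, tensorflow, and keras), the requirements can be installed with:
-```bash
-pip install pygame tensorflow keras
-```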
-
-## Run
-To run the game, execute the following in the project folder:
-```bash
-python main.py
-```
-
-The program is set to a human player by default. Type the following to see the available options:
-```bash
-python main.py --help
-```
-
-To let a Bot play instead, modify main.py by uncommenting the appropriate algorithm:
-```python
-## AI selector, pick one:
-algo = AI_Player0() # do nothing, let human player control
-#algo = AI_RuleBased() # rule-based algorithm
-#algo = AI_RLQ()       # Q-learning - training mode
-#algo = AI_RLQ(False)  # Q-learning - testing mode, no exploration
-#algo = AI_DQN()       # DQN - training mode
-#algo = AI_DQN(False)  # DQN - testing mode, no exploration
-```
-
-## Trained Data (for Q-Learning)
-The trained data (i.e. Q-table) will be stored in the following file. If one already exists, it will be overwritten.
-```
-q-table.json
-```
-
-When running, the program will read the Q-table from the following file. To use previously trained data, simply rename the trained file to this filename.
-```
-q-table-learned.json
-```
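-
-For example, on a Unix-like shell (adjust the command for your platform), the freshly trained table can be renamed with:
-```bash
-mv q-table.json q-table-learned.json
-```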
-
-## Trained Data (for DQN)
-The trained data (i.e. weights) will be stored in the following file. If one already exists, it will be overwritten.
-```
-weights.hdf5
-```
-
-When running, the program will read the weights from the following file. To use previously trained data, simply rename the trained weight file to this filename.
-```
-weights-learned.hdf5
-```
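-
-Similarly (again assuming a Unix-like shell), the freshly trained weights can be renamed with:
-```bash
-mv weights.hdf5 weights-learned.hdf5
-```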
-
-## Can DQN play a perfect game?
-This is an interesting question. The following YouTube video seems to have the answer. The video explains the design of the system state and demonstrates the AI playing a perfect game with this design:
-https://www.youtube.com/watch?v=vhiO4WsHA6c
-- 
GitLab