Agent Genesis is an interactive artificial intelligence simulator that implements reinforcement learning (Q-Learning) in a maze navigation environment. Watch how an autonomous agent learns to navigate from a starting point to the goal, improving its strategy with each episode.
- Q-Learning Algorithm: Complete implementation with Q-table
- Exploration vs Exploitation: Adaptive epsilon-greedy strategy
- Real-Time Visualization: Watch Q-values as the agent learns
- Amnesia Mode: Agent resets its knowledge each session
- Intuitive Controls: Pause, speed up, restart, and manual navigation
- Customizable HUD: Draggable and minimizable information panel
- Advanced Visualizations: Agent trail, color-coded Q-values
- Notification System: Visual feedback through toast notifications
- Real-Time Statistics: Episodes, steps, rewards, and FPS
- Performance History: Average rewards and best episode
- Status Indicators: Training/observation mode, simulation speed
- Visual Effects: Celebration particles and screen effects
- Python 3.10 or higher
- Pygame-compatible operating system (Windows, macOS, Linux)
- Emoji font recommended for better visual experience
```bash
# Clone the repository
git clone https://github.com/josefdc/agente-genesis.git
cd agente-genesis

# Install dependencies with uv (recommended)
uv sync

# Or with pip
pip install pygame-ce numpy
```
```bash
# Method 1: Direct launch script
python run.py

# Method 2: Main module
python -m src.main

# Method 3: With uv
uv run python run.py
```
- SPACE: Pause/Resume simulation
- R: Reset agent to initial position
- ESC: Return to main menu
- H: Show/Hide help panel
- G: Toggle environment grid
- Q: Show/Hide color-coded Q-values
- T: Toggle agent trail with fade
- +/=: Increase simulation speed
- -: Decrease simulation speed
- P: Pause simulation
- Arrow Keys: Move agent manually
```
agente-genesis/
├── src/                      # Main source code
│   ├── game.py               # Main game controller
│   ├── agent.py              # Q-Learning agent implementation
│   ├── environment.py        # Navigation environment and physics
│   ├── emoji_fallback.py     # Emoji compatibility system
│   ├── components/           # Reusable UI components
│   │   └── button.py         # Interactive buttons
│   └── scenes/               # Game scenes
│       ├── base_scene.py     # Base class for scenes
│       ├── menu_scene.py     # Main menu
│       └── simulation_scene.py  # Main simulation scene
├── levels/                   # Maze levels
│   ├── 01_basic.txt          # Basic level
│   └── 02_intermediate.txt   # Intermediate level
├── config/                   # Game configuration
│   └── settings.json         # Visual and game settings
├── assets/                   # Multimedia resources
│   ├── fonts/                # Fonts
│   └── sounds/               # Sound effects and music
├── saved_models/             # Saved models (not used in amnesia mode)
├── run.py                    # Launch script
└── pyproject.toml            # Python project configuration
```
The agent uses a classic Q-Learning implementation with the following characteristics (a sketch of the update step follows the list):
- Learning Rate (α): 0.1
- Discount Factor (γ): 0.99
- Initial Epsilon: 1.0 (full exploration)
- Minimum Epsilon: 0.05
- Epsilon Decay: 0.9995
- States: Position (x, y) in the maze grid
- Actions: 4 cardinal directions (up, down, left, right)
- Rewards:
  - +100 for reaching the goal
  - -10 for wall collision
  - Small penalty for each step
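Concretely, each transition nudges one Q-table entry toward a bootstrapped target. Below is a minimal sketch of that update using the parameters above; the array shape and function names are illustrative assumptions, not the exact code in `src/agent.py`:

```python
import numpy as np

ALPHA, GAMMA = 0.1, 0.99      # learning rate and discount factor from above
N_ACTIONS = 4                 # up, down, left, right

# Q[x, y, action]; the 10x10 grid size is an illustrative assumption.
q_table = np.zeros((10, 10, N_ACTIONS))

def update_q(state, action, reward, next_state, done):
    """One Q-Learning step: Q(s,a) += alpha * (target - Q(s,a))."""
    x, y = state
    nx, ny = next_state
    # Terminal states have no future value to bootstrap from.
    best_next = 0.0 if done else np.max(q_table[nx, ny])
    target = reward + GAMMA * best_next
    q_table[x, y, action] += ALPHA * (target - q_table[x, y, action])
```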
The agent uses an epsilon-greedy strategy (see the sketch below):
- With probability `epsilon`: exploration (random action)
- With probability `1 - epsilon`: exploitation (best known action)
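A minimal sketch of that policy, including the decay schedule listed above and reusing `q_table` and `N_ACTIONS` from the previous sketch (names are illustrative, not the exact API of `src/agent.py`):

```python
import random
import numpy as np

EPSILON_MIN, EPSILON_DECAY = 0.05, 0.9995
epsilon = 1.0  # start with full exploration

def choose_action(state):
    """Epsilon-greedy: explore with probability epsilon, otherwise exploit."""
    global epsilon
    x, y = state
    if random.random() < epsilon:
        action = random.randrange(N_ACTIONS)     # explore: random move
    else:
        action = int(np.argmax(q_table[x, y]))   # exploit: best known move
    # Decay toward the floor so exploitation gradually takes over.
    epsilon = max(EPSILON_MIN, epsilon * EPSILON_DECAY)
    return action
```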
The `config/settings.json` file allows customization of the following (a loading sketch follows the list):
- Screen resolution and FPS
- Complete color scheme
- Animation speeds
- UI settings
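As a sketch, the configuration could be read at startup like this; the key names shown are assumptions about the schema, not the file's actual contents:

```python
import json

with open("config/settings.json") as f:
    settings = json.load(f)

# Hypothetical keys -- check the shipped file for the real schema.
width = settings.get("screen", {}).get("width", 1280)
fps = settings.get("screen", {}).get("fps", 60)
```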
Levels are defined in a simple text format:
- `#`: Walls
- `.`: Free spaces
- `S`: Agent starting position
- `G`: Goal to reach
Custom level example:
```
##########
#S.......#
#.#####.##
#.....#.G#
##########
```
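A level in this format can be parsed in a few lines; the sketch below (not the project's actual loader) shows how each symbol maps to walls, start, and goal:

```python
def load_level(path):
    """Parse a text maze: '#' = wall, '.' = free, 'S' = start, 'G' = goal."""
    walls, start, goal = set(), None, None
    with open(path) as f:
        for y, line in enumerate(f.read().splitlines()):
            for x, ch in enumerate(line):
                if ch == "#":
                    walls.add((x, y))
                elif ch == "S":
                    start = (x, y)
                elif ch == "G":
                    goal = (x, y)
    return walls, start, goal
```

For the example level above, `load_level` would return the border and interior walls plus `start = (1, 1)` and `goal = (8, 3)`.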
- `src/agent.py`: Implements the Q-Learning algorithm, handles epsilon-greedy decision making, and updates the Q-table based on experiences
- `src/environment.py`: Manages maze logic, calculates rewards and state transitions, and renders the environment with Q-value visualization
- `src/game.py`: Orchestrates the complete simulation, handles user input and visualization, and implements the interactive HUD and notification system
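Putting the pieces together, one simulation tick plausibly flows agent → environment → agent. The method names in this sketch are assumptions about the interfaces, not the actual code:

```python
def training_step(agent, env, state):
    """One hypothetical tick: decide, act, learn, and reset on episode end."""
    action = agent.choose_action(state)                    # epsilon-greedy decision
    next_state, reward, done = env.step(action)            # maze physics + reward
    agent.update(state, action, reward, next_state, done)  # Q-table update
    return env.reset() if done else next_state
```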
- Fork the repository
- Create a branch for your feature: `git checkout -b feature/new-feature`
- Commit your changes: `git commit -m 'Add new feature'`
- Push to the branch: `git push origin feature/new-feature`
- Open a Pull Request
- Run the simulator
- Select a level
- Watch how the agent initially explores randomly
- Notice how it gradually improves its navigation strategy
- Enable Q-value visualization to see the agent's "mental map"
Modify parameters in `src/agent.py` to experiment (an illustrative sketch follows the list):
- Increase learning rate for faster convergence
- Adjust epsilon to change exploration/exploitation balance
- Modify rewards for different behaviors
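For instance, continuing the update sketch from earlier, a few illustrative tweaks (the real attribute names in `src/agent.py` may differ):

```python
ALPHA = 0.3            # higher learning rate: faster but noisier convergence
EPSILON_DECAY = 0.999  # slower decay: the agent keeps exploring longer
GOAL_REWARD = 200      # stronger goal signal relative to the step penalty
```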
- Verify Pygame-CE is installed: `pip install pygame-ce`
- Confirm you have Python 3.10 or higher
- Reduce the resolution in `config/settings.json`
- Decrease the simulation speed with the `-` key
- The game includes automatic ASCII fallbacks
- For a better experience, install an emoji font such as Noto Color Emoji
This project is under the MIT License. See the LICENSE file for more details.
Jose Felipe Duarte (@josefdc)
- Software developer specialized in AI and interactive visualization
- Passionate about machine learning and educational interfaces
- Pygame community for the excellent game development library
- Reinforcement Learning researchers for the theoretical foundations
- Open Source community for inspiration and tools
Like the project? ⭐ Give it a star on GitHub and share it with other AI enthusiasts!
Have suggestions? 💡 Open an issue or contribute directly to the code.
"Artificial intelligence is not about replacing human intelligence, but amplifying it." - Agent Genesis