I built an agent for a competitive Tron-style game using Minimaxwith Alpha-Beta Pruning and Voronoi territory analysis. The agent features **iterative deepening* to maximize search depth (reaching 6-7 plies), a transposition table for 2-3x speedup through position caching, and dynamic weight adaptation that adjusts strategy based on game phase. The evaluation function uses boost-aware distance calculations and weighted scoring across territory control, trail length, remaining boosts, and contested space to make optimal strategic decisions. With these optimizations, the agent consistently outperforms simple pathfinding algorithms and is tournament-ready for competitive play against other undergraduate teams.

Log in or sign up for Devpost to join the conversation.