Week 1 of SoK 2025
Project Description
In SoK 2025, I will be working on adding Pallanguzhi, a traditional Indian Mancala variant, to the Mankala Engine. I will be collaborating with Srisharan V S, and my work has two key goals:
- Developing a computerized opponent to enhance player engagement and ensure a seamless gameplay experience.
- Creating a Text-Based User Interface (TUI) for gameplay.
What I Did This Week
The first step in my journey was setting up the Mankala Engine repository. I forked the repository, cloned it to my local system, built it successfully, and resolved some warnings that came up during the build. Afterward, I delved into the codebase, analyzing the existing algorithms and understanding how they are used for the other Mancala variants.
Research on Implementing a Computerized Opponent
To create a robust computerized opponent for Pallanguzhi, I began researching potential algorithms that could best fit the game mechanics. Here are the three techniques I explored.
1. Reinforcement Learning
Reinforcement Learning (RL) is an exciting approach in which an agent learns a strategy by interacting with its environment and improving over time from reward feedback. For Pallanguzhi, RL could enable the computerized opponent to adapt and improve its gameplay dynamically. However, as I am new to RL, implementing and training models for this variant will take some time and effort. Despite its challenges, RL remains a promising option for advanced gameplay enhancement.
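To make the idea a little more concrete, here is a minimal sketch of a tabular Q-learning update, one of the simplest RL methods. Everything in it is hypothetical: the string-based state key, the per-pit action array, and the hyperparameters are placeholders I chose for illustration, not a committed design for Pallanguzhi.

```cpp
// Minimal tabular Q-learning sketch (illustrative only; the state encoding,
// reward scheme, and hyperparameters for Pallanguzhi are still open questions).
#include <algorithm>
#include <array>
#include <string>
#include <unordered_map>

using StateKey = std::string;  // hypothetical serialized board state
// Hypothetical action space: one value per pit on the current player's side.
using QTable = std::unordered_map<StateKey, std::array<double, 7>>;

// Standard Q-learning update:
// Q(s,a) += alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))
void updateQ(QTable& q, const StateKey& s, int action,
             double reward, const StateKey& next, bool terminal,
             double alpha = 0.1, double gamma = 0.99)
{
    double bestNext = 0.0;
    if (!terminal) {
        for (double v : q[next])
            bestNext = std::max(bestNext, v);
    }
    double& value = q[s][action];
    value += alpha * (reward + gamma * bestNext - value);
}
```

Even this tiny update rule hints at the real work: deciding how to encode a Pallanguzhi position, what reward to give, and how long to train, which is why I am treating RL as a later experiment rather than the first implementation.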
2. Monte Carlo Tree Search (MCTS)
MCTS is a powerful algorithm widely used for decision-making in games. It works by running many simulated playouts to grow a search tree and then selecting the move that performed best statistically. For Pallanguzhi, MCTS could efficiently explore the vast game state space and make informed decisions. By carefully tuning the number of simulations and the exploration parameter, this algorithm can provide a balanced and competitive computerized opponent.
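As a rough illustration of what that exploration parameter controls, here is a sketch of the UCT child-selection rule that MCTS implementations typically use. The Node layout and the constant c are placeholders I made up for this post, not anything from the Mankala Engine.

```cpp
// Sketch of the UCT selection rule at the heart of MCTS (illustrative only;
// the Node struct here is a hypothetical stand-in, not the engine's types).
#include <cmath>
#include <limits>
#include <vector>

struct Node {
    int visits = 0;
    double totalReward = 0.0;  // sum of simulation results seen through this node
    std::vector<Node> children;
};

// Pick the child that maximizes UCT = exploitation term + exploration term.
// 'c' is the exploration constant; tuning it (and the number of simulations)
// is what balances playing strength against time per move.
Node* selectChild(Node& parent, double c = 1.41)
{
    Node* best = nullptr;
    double bestScore = -std::numeric_limits<double>::infinity();
    for (Node& child : parent.children) {
        // Unvisited children are tried first.
        double score = (child.visits == 0)
            ? std::numeric_limits<double>::infinity()
            : child.totalReward / child.visits
              + c * std::sqrt(std::log(parent.visits) / child.visits);
        if (score > bestScore) {
            bestScore = score;
            best = &child;
        }
    }
    return best;
}
```

The full algorithm repeats selection, expansion, simulation, and backpropagation for as many iterations as the move budget allows, then plays the most-visited child of the root.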
3. Alpha-Beta Pruning with Iterative Deepening
Alpha-Beta Pruning with Iterative Deepening is a highly effective technique for searching game trees efficiently: alpha-beta pruning skips branches that cannot affect the final decision, while iterative deepening searches progressively deeper within a time budget. This method is already implemented in the Mankala Engine for other variants and has proven effective there. Leveraging the existing implementation for Pallanguzhi will allow us to quickly develop a working version of the game with a competent computerized opponent.
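For readers unfamiliar with the technique, here is a rough sketch of negamax alpha-beta wrapped in an iterative deepening loop. The Board interface is a stand-in I invented for this post, not the Mankala Engine's actual API, and the sketch assumes players strictly alternate turns.

```cpp
// Sketch of negamax alpha-beta with iterative deepening (illustrative only;
// 'Board' is a placeholder for whatever interface the engine exposes).
#include <algorithm>
#include <chrono>
#include <vector>

struct Board {
    // Placeholder stubs: a real Pallanguzhi board would implement these.
    std::vector<int> legalMoves() const { return {}; }
    Board play(int /*move*/) const { return *this; }
    bool gameOver() const { return true; }
    int evaluate() const { return 0; }  // score from the side to move's view
};

int alphaBeta(const Board& b, int depth, int alpha, int beta)
{
    if (depth == 0 || b.gameOver())
        return b.evaluate();
    for (int move : b.legalMoves()) {
        // Negamax: the opponent's best score is the negation of ours
        // (assumes strict turn alternation; extra-turn rules would need care).
        int score = -alphaBeta(b.play(move), depth - 1, -beta, -alpha);
        alpha = std::max(alpha, score);
        if (alpha >= beta)
            break;  // cut-off: this branch cannot change the final decision
    }
    return alpha;
}

// Iterative deepening: search depth 1, 2, 3, ... until the time budget runs
// out, keeping the best move found at the last completed depth.
int chooseMove(const Board& b, std::chrono::milliseconds budget)
{
    const auto deadline = std::chrono::steady_clock::now() + budget;
    int bestMove = -1;
    for (int depth = 1; std::chrono::steady_clock::now() < deadline; ++depth) {
        int bestScore = -1000000;
        for (int move : b.legalMoves()) {
            int score = -alphaBeta(b.play(move), depth - 1, -1000000, 1000000);
            if (score > bestScore) { bestScore = score; bestMove = move; }
        }
    }
    return bestMove;
}
```

The main Pallanguzhi-specific work will be the evaluation function and move generation; the search machinery itself already exists in the engine.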
Conclusion for Now
The immediate plan is to integrate the Pallanguzhi variant into the existing Alpha-Beta Pruning implementation. This ensures we have a functional version of the game ready for any further work. Once the TUI implementation is complete, I plan to revisit Reinforcement Learning for Pallanguzhi. Working with RL models and training them is a learning-intensive process, and I am excited to gain experience in this area. Even if RL proves too challenging, we will still have a polished Pallanguzhi variant running on the existing algorithm.
What’s Next
Next week, I will work on adding the longer version of Pallanguzhi, which consists of multiple rounds, while Srisharan V S focuses on completing the shorter version. Together, we aim to make significant progress toward integrating and refining this traditional game within the Mankala Engine.
Stay tuned for updates!