This paper explores the use of Generative Pre-trained Transformers (GPT) in strategic game experiments, specifically the ultimatum game and the prisoner's dilemma. I designed prompts and architectures that enable GPT to understand the game rules and to generate both its choices and the reasoning behind its decisions. The key findings show that GPT exhibits behaviours similar to human responses, such as making positive offers and rejecting unfair ones in the ultimatum game, along with conditional cooperation in the prisoner's dilemma. The study also examines how prompting GPT with traits of fairness concern or selfishness influences its decisions. Notably, the "fair" GPT in the ultimatum game tends to make higher offers and to reject offers more frequently than the "selfish" GPT. In the prisoner's dilemma, high cooperation rates are maintained only when both GPT players are "fair". The reasoning statements GPT produces during gameplay reveal the logic underlying several intriguing patterns observed in the games. Overall, this research demonstrates the potential of GPT as a valuable tool in social science research, especially in experimental studies and social simulations.
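For readers unfamiliar with the two games, the payoff rules can be sketched as follows; this is a minimal illustration with standard textbook payoff values, not the paper's experimental setup or code.

```python
# Ultimatum game: a proposer offers a split of a fixed pie; if the
# responder rejects, both players receive nothing.
def ultimatum_payoffs(pie, offer, accepted):
    """Return (proposer payoff, responder payoff)."""
    if not accepted:
        return 0, 0
    return pie - offer, offer

# Prisoner's dilemma: simultaneous choice to cooperate ("C") or
# defect ("D"). Payoff values here are illustrative, satisfying the
# standard ordering T > R > P > S.
PD_PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation (R, R)
    ("C", "D"): (0, 5),  # sucker's payoff vs temptation (S, T)
    ("D", "C"): (5, 0),  # temptation vs sucker's payoff (T, S)
    ("D", "D"): (1, 1),  # mutual defection (P, P)
}
```

In the experiments described above, GPT plays one of these roles, so a "fair" proposer offering a larger share corresponds to a higher `offer`, and conditional cooperation corresponds to choosing "C" only when the other player is expected to do the same.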