Generative AI technologies such as large language models show novel potential to enhance educational research. For example, generative large language models have been shown to be capable of solving quantitative reasoning tasks in physics as well as concept tests such as the Force Concept Inventory (FCI). Given the importance of such concept inventories for physics education research, and the challenges in developing them, such as field testing with representative populations, this study examines to what extent a generative large language model could be utilized to generate a synthetic data set for the FCI that exhibits content-related variability in responses. We use the recently introduced ChatGPT, based on the GPT-4 generative large language model, and investigate to what extent ChatGPT could solve the FCI accurately (RQ1) and could be prompted to solve the FCI as if it were a student belonging to a different cohort (RQ2). Furthermore, we study to what extent ChatGPT could be prompted to solve the FCI as if it were a student holding a particular force- and mechanics-related misconception (RQ3). In alignment with other research, we found that ChatGPT could accurately solve the FCI. We furthermore found that prompting ChatGPT to respond to the inventory as if it belonged to a different cohort yielded no variance in responses; however, responding as if it held a certain misconception introduced substantial variance in responses that approximates real human responses on the FCI in some respects.
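To make the persona-conditioned prompting protocol concrete, the following is a minimal, hypothetical sketch of how such a query could be issued programmatically. The study itself used the ChatGPT interface; the use of the OpenAI Python API, the model identifier, and the prompt wording below are illustrative assumptions rather than the authors' exact procedure, and the copyrighted FCI item text is represented by a placeholder.

```python
# Illustrative sketch only: the API call, model name, and prompt wording
# are assumptions; the study used the ChatGPT interface directly.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY env variable

# Hypothetical persona prompt conditioning the model on a misconception.
SYSTEM_PROMPT = (
    "Answer the following multiple-choice physics question as if you were "
    "a student who believes that motion always implies a force acting in "
    "the direction of motion. Reply with a single letter (A-E)."
)

# Placeholder: FCI items are copyrighted and not reproduced here.
FCI_ITEM = "<FCI item text and answer options>"

response = client.chat.completions.create(
    model="gpt-4",    # assumed model identifier
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": FCI_ITEM},
    ],
    temperature=1.0,  # default sampling, so repeated calls can vary
)

print(response.choices[0].message.content)  # e.g., "C"
```

Under these assumptions, repeating such a call for each of the 30 FCI items and each cohort or misconception persona would yield one synthetic response set per simulated student.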