Generating Student Curriculum Plans with Local Large Language Models
Christopher Eng
Co-Presenters: Abijith Manikandan, Luis Velazquez Rodriguez, Joseph Tomasello
College: The Dorothy and George Hennings College of Science, Mathematics and Technology
Major: Information Technology
Faculty Research Mentor: Daehan Kwak
Abstract:
The increasing adoption of Large Language Models (LLMs) has prompted widespread exploration of their potential applications across many sectors of the labor market. Businesses and academic institutions are increasingly examining how integrating AI-driven conversational models into their workflows can address specific organizational needs, optimize resource allocation, and reduce the time and effort required for routine tasks. A particularly promising area of investigation is the use of the Natural Language Processing (NLP) capabilities of contemporary LLMs to automate labor-intensive administrative work. This project evaluates the efficacy of currently available, locally deployable LLMs in generating personalized academic curriculum plans for students enrolled in college degree programs. Central to this assessment was determining the capacity of LLMs, which vary in origin, technical specifications, and deployment environments, to accurately interpret and execute a set of interrelated instructions guiding the placement of multiple courses into an organized, semester-based schedule. Careful prompt engineering proved essential for improving the models' accuracy and ensuring the effective generation of individualized curriculum schedules. The generated schedules were then reviewed for adherence to specific constraints, such as course prerequisites and semester-specific course availability. The primary test configuration was a structured four-semester schedule of four courses per semester, with only one arrangement satisfying all prerequisite and availability requirements.
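To illustrate the kind of constraint checking described above, the sketch below shows how a generated four-semester plan could be validated against prerequisite ordering and semester availability. The course codes, prerequisite relationships, and availability data are invented placeholders, not the study's actual curriculum, and the validator is a minimal example rather than the project's evaluation harness.

```python
# Hypothetical illustration: validate an LLM-generated four-semester plan
# against prerequisite and semester-availability constraints.
# All course codes and constraint data below are placeholders.

PREREQUISITES = {
    "CS2200": {"CS1100"},              # CS2200 requires CS1100 first
    "CS3300": {"CS2200"},
    "CS4400": {"CS3300", "MATH2000"},
}

# Semesters in which each course is offered (1 = first semester, etc.).
# Courses not listed are assumed to be offered every semester.
AVAILABILITY = {
    "CS1100": {1, 3}, "CS2200": {2, 4}, "CS3300": {3},
    "CS4400": {4}, "MATH2000": {1, 2},
}

def validate_plan(plan):
    """Check a plan (a list of semesters, each a list of course codes)
    for prerequisite ordering and semester availability violations."""
    errors = []
    completed = set()
    all_semesters = set(range(1, len(plan) + 1))
    for sem_index, courses in enumerate(plan, start=1):
        for course in courses:
            if sem_index not in AVAILABILITY.get(course, all_semesters):
                errors.append(f"{course} is not offered in semester {sem_index}")
            missing = PREREQUISITES.get(course, set()) - completed
            if missing:
                errors.append(
                    f"{course} in semester {sem_index} is missing prerequisites: {sorted(missing)}"
                )
        # Courses count as completed once their semester ends.
        completed.update(courses)
    return errors

# Example: a plan parsed from a model's response would be passed in as
# four lists of four course codes each.
sample_plan = [
    ["CS1100", "MATH2000", "ENG1000", "HIST1000"],
    ["CS2200", "MATH2100", "PHYS1000", "COMM1000"],
    ["CS3300", "STAT2000", "ELEC3000", "GEN2000"],
    ["CS4400", "CAP4000", "ELEC3100", "GEN2100"],
]
print(validate_plan(sample_plan) or "Plan satisfies all modeled constraints")
```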