A Fine-Tuned Language Model Framework for Human-Centered Adaptive Learning
Matthew Ngoy
Co-Presenters: Individual Presentation
College: Hennings College of Science Mathematics and Technology
Major: BS.COMPSCI/CYBERS
Faculty Research Mentor: Adenuga, Iyadunni
Abstract:
This project is centered on a human-centered, AI-powered educational application designed to enhance learning by providing adaptive, personalized support for academic topics and career coaching. While large language models (LLMs) such as ChatGPT and Gemini can assist students with assignments, they often deliver direct answers without fostering deeper conceptual understanding. This leads to performance gaps on assessments when students are required to solve problems without the aid of AI.

To address this, we fine-tuned OpenAI's GPT-3.5-Turbo model using Google Research's Education Dialogue Dataset, a curated, multi-turn teacher–student dialogue corpus. We enriched the dataset with metadata, including learning stage, major, and preferred learning style, to support adaptive responses. The backend integrates three core components: (1) a fine-tuning pipeline built in Node.js to deliver context-aware, subject-specific responses; (2) real-time data logging with MongoDB and Mongoose schemas for tracking chats, quizzes, flashcards, and study sessions; and (3) secure authentication and user management using PostgreSQL, Express, and bcrypt hashing. The frontend, built with React and JavaScript, provides an interactive, chat-based AI tutoring interface for tasks such as step-by-step problem solving, debugging, and adaptive quizzes.

This human-centered AI application has the potential to transform the educational technology landscape by combining AI-driven personalization with robust session tracking, enabling data-informed feedback for both learners and educators. Its design prioritizes human control, transparency, and trust, ensuring that students receive meaningful support without replacing authentic engagement with learning materials. Future work will focus on scaling the dataset to over 30,000 curated interactions, expanding adaptive learning pathways, and exploring reinforcement learning techniques to further refine the educational experience.