Drone Detection via Skeletonization and Small-Object-Aware DETR

Gissell Torres

Co-Presenters: Delio Rincon

College: Hennings College of Science Mathematics and Technology

Major: BS.COMPSCI/CYBERS

Faculty Research Mentors: Kumar, Yulia; Li, J. Jenny

Abstract:

Detecting drones in streaming video remains challenging in computer vision due to scale variation and background complexity. Building on prior work in real-time detection and skeletonization for streaming environments, this study aims to improve small-object detection through skeletonization and a Small-Object-Aware Detection Transformer (DETR) framework, as a foundational step toward reliable motion prediction in dynamic aerial scenes. A transformer-based detection model was trained on a drone dataset converted to COCO format and evaluated with standard COCO metrics, including AP, AP50, and AP_small. Initial testing revealed low-confidence predictions, suggesting limitations in the backbone-freezing and training configuration. This research addresses these limitations by retraining the model with full fine-tuning and safer learning rates to enhance feature learning and improve small-object detection performance (AP_small). Preliminary analysis demonstrates the difficulty of small-object detection within transformer-based architectures and highlights the importance of training-strategy optimization. This work establishes a foundation for future extensions involving cross-frame object tracking and motion prediction, ultimately contributing to more robust drone monitoring systems. Videos were collected with the help of Rutgers ECE students.

Keywords: Motion Detection, Generative AI, Deep Learning, Skeletonization, Computer Vision
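The retraining strategy described above (unfreezing the backbone and using a smaller, "safer" learning rate for it than for the detection head) can be sketched in PyTorch. This is a minimal illustration with a toy stand-in model, not the project's actual code: `TinyDetector` and its module names are hypothetical, and the learning-rate values follow the common DETR fine-tuning recipe (lower LR for the pretrained backbone).

```python
import torch
from torch import nn

# Hypothetical stand-in for a DETR-style detector: a convolutional
# "backbone" plus a detection "head". Names are illustrative only.
class TinyDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
        self.head = nn.Linear(8, 4)  # e.g., predicted box coordinates

model = TinyDetector()

# Full fine-tuning: ensure no backbone parameters are frozen,
# so low-level features can adapt to small drone targets.
for p in model.backbone.parameters():
    p.requires_grad = True

# Per-parameter-group learning rates: a safer (smaller) LR for the
# pretrained backbone than for the randomly initialized head.
optimizer = torch.optim.AdamW(
    [
        {"params": model.backbone.parameters(), "lr": 1e-5},
        {"params": model.head.parameters(), "lr": 1e-4},
    ],
    weight_decay=1e-4,
)

print([g["lr"] for g in optimizer.param_groups])  # [1e-05, 0.0001]
```

The key design choice is the optimizer's parameter groups: freezing the backbone entirely (the configuration suspected of causing the low-confidence predictions) is replaced by training it at a reduced learning rate, which preserves pretrained features while still letting them adapt.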
