Abstract
Detecting icons in Graphical User Interfaces (GUIs) is essential for effective application automation. This study examines how different annotation methods affect the performance of object detection models for icon detection in GUIs. We compared manual, automated, and hybrid annotations using three models: Faster R-CNN, YOLOv8, and YOLOv9. The results show that manual annotations achieve the highest accuracy, with YOLOv9 reaching an Average Precision (AP) of 68.23% and Faster R-CNN achieving 61.82%. Hybrid methods that combine automated annotations with manual corrections also yield significant improvements, though they do not match manual annotations alone. These findings underscore the importance of high-quality, consistent annotations for training effective detection models. Although we derived automated annotations from HTML code to simplify the process, the resulting inconsistencies degraded model performance. This highlights the need for better hybrid methods tailored to specific tasks, ensuring both efficiency and accuracy in data annotation.
Citation
@inproceedings{Dicu2024TheIO,
  author    = {Madalina Dicu and Enol García González and Camelia Chira and J. Villar},
  booktitle = {Hybrid Artificial Intelligence Systems},
  title     = {The Impact of Data Annotations on the Performance of Object Detection Models in Icon Detection for GUI Images},
  year      = {2024}
}
