{"id":1228,"date":"2026-01-25T19:32:13","date_gmt":"2026-01-25T19:32:13","guid":{"rendered":"https:\/\/www.cs.ubbcluj.ro\/~meco\/the-impact-of-data-annotations-on-the-performance-of-object-detection-models-in-icon-detection-for-gui-images-2024\/"},"modified":"2026-02-01T12:07:48","modified_gmt":"2026-02-01T12:07:48","slug":"the-impact-of-data-annotations-on-the-performance-of-object-detection-models-in-icon-detection-for-gui-images-2024","status":"publish","type":"post","link":"https:\/\/www.cs.ubbcluj.ro\/~meco\/the-impact-of-data-annotations-on-the-performance-of-object-detection-models-in-icon-detection-for-gui-images-2024\/","title":{"rendered":"The Impact of Data Annotations on the Performance of Object Detection Models in Icon Detection for GUI Images (2024)"},"content":{"rendered":"<div class=\"entry-content\">\n<p>Hybrid Artificial Intelligence Systems<\/p>\n<h2>Authors<\/h2>\n<p>Madalina Dicu, Enol Garc\u00eda Gonz\u00e1lez, Camelia Chira, J. Villar<\/p>\n<h2>Abstract<\/h2>\n<p>Detecting icons in Graphical User Interfaces (GUIs) is essential for effective application automation. This study examines the impact of different annotation methods on the performance of object detection models for icon detection in GUIs. We compared manual, automated, and hybrid annotations using three models: Faster R-CNN, YOLOv8, and YOLOv9. The results show that manual annotations achieve the highest accuracy, with YOLOv9 reaching an Average Precision (AP) of 68.23% and Faster R-CNN achieving 61.82%. Hybrid methods that combine automated annotations with manual corrections also show significant improvements, though they do not perform as well as manual annotations alone. These findings underscore the importance of high-quality, consistent annotations for training effective detection models. While we used HTML code for automated annotations to simplify the process, we encountered inconsistencies that affected model performance. This highlights the need to develop better hybrid methods tailored to specific tasks, ensuring efficiency and accuracy in data annotation.<\/p>\n<h2>Citation<\/h2>\n<pre class=\"wp-block-preformatted\">@inproceedings{Dicu2024TheIO,\n author = {Madalina Dicu and Enol Garc\u00eda Gonz\u00e1lez and Camelia Chira and J. Villar},\n booktitle = {Hybrid Artificial Intelligence Systems},\n title = {The Impact of Data Annotations on the Performance of Object Detection Models in Icon Detection for GUI Images},\n year = {2024}\n}<\/pre>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Detecting icons in Graphical User Interfaces (GUIs) is essential for effective application automation. This study examines the impact of different annotation methods on the performance of object detection models for icon detection in GUIs. We compared manual, automated, and hybrid annotations using three models: Faster R-CNN, YOLOv8, and YOLOv9. The results show that manual annotations achieve the highest accuracy, with YOLOv9 reaching an Average Precision (AP) of 68.23% and Faster R-CNN achieving 61.82%. Hybrid methods that combine automated annotations with manual corrections also show significant improvements, though they do not perform as well as manual annotations alone. These findings underscore the importance of high-quality, consistent annotations for training effective detection models. While we used HTML code for automated annotations to simplify the process, we encountered inconsistencies that affected model performance. This highlights the need to develop better hybrid methods tailored to specific tasks, ensuring efficiency and accuracy in data annotation.<\/p>\n","protected":false},"author":6,"featured_media":0,"comment_status":"closed","ping_status":"","sticky":false,"template":"","format":"standard","meta":[],"categories":[4],"tags":[77,73,11,76],"_links":{"self":[{"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/posts\/1228"}],"collection":[{"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/comments?post=1228"}],"version-history":[{"count":1,"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/posts\/1228\/revisions"}],"predecessor-version":[{"id":1451,"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/posts\/1228\/revisions\/1451"}],"wp:attachment":[{"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/media?parent=1228"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/categories?post=1228"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/tags?post=1228"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}