{"id":1249,"date":"2026-01-25T19:33:28","date_gmt":"2026-01-25T19:33:28","guid":{"rendered":"https:\/\/www.cs.ubbcluj.ro\/~meco\/evaluating-large-language-models-for-diacritic-restoration-in-romanian-texts-a-comparative-study-2025\/"},"modified":"2026-02-01T12:07:30","modified_gmt":"2026-02-01T12:07:30","slug":"evaluating-large-language-models-for-diacritic-restoration-in-romanian-texts-a-comparative-study-2025","status":"publish","type":"post","link":"https:\/\/www.cs.ubbcluj.ro\/~meco\/evaluating-large-language-models-for-diacritic-restoration-in-romanian-texts-a-comparative-study-2025\/","title":{"rendered":"Evaluating Large Language Models for Diacritic Restoration in Romanian Texts: A Comparative Study (2025)"},"content":{"rendered":"<div class=\"entry-content\">\n<h2>Authors<\/h2>\n<p>Mihai Nad\u0103\u0219, Laura Dio\u0219an<\/p>\n<h2>Abstract<\/h2>\n<p>Automatic diacritic restoration is crucial for text processing in languages rich in diacritical marks, such as Romanian. This study evaluates the performance of several large language models (LLMs) in restoring diacritics in Romanian texts. Using a comprehensive corpus, we tested models including OpenAI&#8217;s GPT-3.5, GPT-4, GPT-4o, Google&#8217;s Gemini 1.0 Pro, Meta&#8217;s Llama 2 and Llama 3, MistralAI&#8217;s Mixtral 8x7B Instruct, airoboros 70B, and OpenLLM-Ro&#8217;s RoLlama 2 7B, under multiple prompt templates ranging from zero-shot to complex multi-shot instructions. Results show that models such as GPT-4o achieve high diacritic restoration accuracy, consistently surpassing a neutral echo baseline, while others, including Meta&#8217;s Llama family, exhibit wider variability.
These findings highlight the impact of model architecture, training data, and prompt design on diacritic restoration performance and outline promising directions for improving NLP tools for diacritic-rich languages.<\/p>\n<h2>Citation<\/h2>\n<pre class=\"wp-block-preformatted\">@Inproceedings{Nadas2025EvaluatingLL,\n author = {Mihai Nad\u0103\u0219 and Laura Dio\u0219an},\n title = {Evaluating Large Language Models for Diacritic Restoration in Romanian Texts: A Comparative Study},\n year = {2025}\n}<\/pre>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Automatic diacritic restoration is crucial for text processing in languages rich in diacritical marks, such as Romanian. This study evaluates the performance of several large language models (LLMs) in restoring diacritics in Romanian texts. Using a comprehensive corpus, we tested models including OpenAI\u2019s GPT-3.5, GPT-4, GPT-4o, Google\u2019s Gemini 1.0 Pro, Meta\u2019s Llama 2 and Llama 3, MistralAI\u2019s Mixtral 8x7B Instruct, airoboros 70B, and OpenLLM-Ro\u2019s RoLlama 2 7B, under multiple prompt templates ranging from zero-shot to complex multi-shot instructions. Results show that models such as GPT-4o achieve high diacritic restoration accuracy, consistently surpassing a neutral echo baseline, while others, including Meta\u2019s Llama family, exhibit wider variability.
These findings highlight the impact of model architecture, training data, and prompt design on diacritic restoration performance and outline promising directions for improving NLP tools for diacritic-rich languages.<\/p>\n","protected":false},"author":6,"featured_media":1032,"comment_status":"closed","ping_status":"","sticky":false,"template":"","format":"standard","meta":[],"categories":[4],"tags":[71,11,31,87],"_links":{"self":[{"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/posts\/1249"}],"collection":[{"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/comments?post=1249"}],"version-history":[{"count":2,"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/posts\/1249\/revisions"}],"predecessor-version":[{"id":1432,"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/posts\/1249\/revisions\/1432"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/media\/1032"}],"wp:attachment":[{"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/media?parent=1249"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/categories?post=1249"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/tags?post=1249"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}