LLM Output Compliance with Handcrafted Linguistic Features: An Experiment (2025)

Abstract

Can we control the writing style of large language models (LLMs) by specifying desired linguistic features? We address this question by investigating the impact of handcrafted linguistic feature (HLF) instructions on LLM-generated text. Our experiment evaluates several state-of-the-art LLMs using prompts that incorporate HLF statistics derived from corpora of CNN articles and Yelp reviews. We find that LLMs are sensitive to these instructions, particularly when asked to conform to concrete features such as word count. Compliance with abstract features, such as lexical variation, proves more challenging, and instructions targeting them can even reduce overall compliance. Our findings highlight both the potential and the limitations of using HLFs to guide LLM text generation, and underscore the need for further research into prompt design and feature selection.
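To make the distinction between concrete and abstract features more tangible, here is a minimal sketch of how two such HLFs might be computed from a text: word count (concrete) and lexical variation, approximated by the type-token ratio (abstract). The function name, tokenization scheme, and the choice of type-token ratio as a proxy for lexical variation are our illustrative assumptions, not the paper's actual feature definitions.

```python
import re

def handcrafted_features(text: str) -> dict:
    """Compute two illustrative handcrafted linguistic features (HLFs).

    word_count is a concrete feature; type_token_ratio (distinct words
    divided by total words) is a common proxy for the abstract feature
    of lexical variation. Both are assumptions for illustration only.
    """
    # Simple word tokenization: runs of letters and apostrophes.
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    word_count = len(tokens)
    ttr = len(set(tokens)) / word_count if word_count else 0.0
    return {"word_count": word_count, "type_token_ratio": ttr}
```

Statistics like these, computed over a reference corpus, could then be inserted into a prompt as target values for the model to match.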

Citation

@inproceedings{Olar2025LLMOC,
  author    = {Andrei Olar},
  booktitle = {International Conference on Agents and Artificial Intelligence},
  title     = {LLM Output Compliance with Handcrafted Linguistic Features: An Experiment},
  year      = {2025}
}
