Uniform Complexity for Text Generation

Research output: Working paper / Preprint



Large pre-trained language models have shown promising results in a wide array of tasks such as narrative generation, question answering, and machine translation. Likewise, current literature has focused heavily on controlling salient properties of generated text, including sentiment, topic, and coherence, to produce more human-like outputs. In this work, we introduce Uniform Complexity for Text Generation (UCTG), a challenge that requires existing models to generate uniformly complex text with respect to the inputs or prompts used. For example, if the reading level of an input prompt is appropriate for lower-level learners (e.g., A2 in the CEFR), then the text generated by an NLG system should also assume this level for increased readability. In a controlled narrative generation task, we surveyed over 160 linguistic and cognitively motivated features for evaluating text readability and found that GPT-2 models, and even humans, struggle to preserve the linguistic complexity of the input prompts. Finally, we lay out potential methods and approaches that can be incorporated into the general framework of steering language models towards addressing this challenge.
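To illustrate the kind of check UCTG calls for, the sketch below compares the readability of a prompt against a model's continuation using the Flesch-Kincaid grade level, one classic readability formula (the paper itself surveys over 160 features, so this single metric and the helper names here are illustrative assumptions, not the authors' implementation):

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count contiguous vowel groups (at least 1 per word).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    # Flesch-Kincaid grade level:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

def complexity_drift(prompt: str, generation: str) -> float:
    # UCTG-style check: how far does the generation's estimated
    # reading level drift from that of the prompt?
    return abs(fk_grade(generation) - fk_grade(prompt))

prompt = "The cat sat on the mat. It was warm and soft."
generation = ("Subsequently, thermoregulatory considerations "
              "necessitated comprehensive relocation procedures.")
print(complexity_drift(prompt, generation))  # large drift: levels differ
```

A uniformly complex generation would keep this drift close to zero; in practice, richer feature sets (syntactic, lexical, discourse-level) would replace the single formula used here.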
Original language: Undefined/Unknown
Publication status: Published - 11 Apr 2022


  • cs.CL
  • cs.LG
