The advent of generative text AI systems opens up many opportunities to improve efficiency in technical documentation. Today the writing process is often supported by Large Language Models, with humans proofreading the generated material. Turning this around and using a Large Language Model with a human in the loop to proofread texts and check them against the required standards and the often unwritten rules can improve the efficiency of the creation process even further.
Good prompt engineering is key to generating new texts and making them fit a company’s language. When working with LLMs for proofreading, the quality of the prompt and of the additional material provided becomes even more critical to achieving the expected results. In the presentation, Thomas will outline what makes input data good and how to use the available data to turn a Large Language Model into an efficient proofreader that delivers valuable insights. With a human-in-the-loop approach, the LLM provides both increased efficiency and the required quality.
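As a rough illustration of what such a setup could look like, the sketch below sends a draft and a style guide to an LLM and asks it to report deviations rather than rewrite the text, leaving the decision to a human reviewer. It is a minimal sketch, assuming the OpenAI Python SDK; the file names, model name, and prompt wording are placeholders, not part of the presented method.

```python
# Minimal sketch: an LLM proofreads a draft against a company style guide,
# and a human reviews each finding before it is accepted.
# Assumptions: the OpenAI Python SDK is installed, OPENAI_API_KEY is set,
# and "style_guide.md" / "draft.md" are placeholder file names.
from openai import OpenAI

client = OpenAI()

style_guide = open("style_guide.md", encoding="utf-8").read()  # company rules, incl. the "unwritten" ones written down
draft = open("draft.md", encoding="utf-8").read()              # text to be proofread

prompt = (
    "You are a proofreader for technical documentation.\n"
    "Check the draft against the style guide below. For each deviation, list:\n"
    "- the original wording\n"
    "- a suggested correction\n"
    "- the rule it violates\n\n"
    f"STYLE GUIDE:\n{style_guide}\n\nDRAFT:\n{draft}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0,   # favor consistent, rule-based checking over creativity
)

# Human in the loop: the findings are printed for review, not applied automatically.
print(response.choices[0].message.content)
```

The design choice in this sketch is deliberate: the model only reports deviations and cites the rule behind each one, so the human reviewer stays in control of which suggestions are applied.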