Part of the Center for Engineering Education project

Ethics and Technology
TU Delft, TU Eindhoven, University of Twente, Wageningen University
+31 (0)6 48 27 55 61

Project introduction and background information

In its Strategy 2030 document (Executive Board TU/e, 2018: 31), TU/e stresses the importance of digitization to allow learning at any place and any time, and to support adaptive and personalized instruction and feedback. Intelligent systems could fulfil this need by providing personalized, automated, and timely feedback. One area where this might be especially useful is writing.

Writing is an important skill for engineering students: it is used to build scientific knowledge and to report research findings. However, engineering students (in general and at TU/e) often struggle with writing, owing to a broad range of underlying difficulties, such as formulating ideas, structuring arguments, and critical reflection. For courses that use essays or other text-rich written assignments, this poses classical challenges:

  • Scaffolding instruction and feedback for students is difficult because of the broad spread in students' acquired competences.
  • Providing instruction and feedback on writing is time-consuming and often not a core learning objective of a course.
  • Correct and consistent grading is difficult: one needs to distinguish a strong idea from a student who struggles with writing from a weak idea from a strong writer. Current solutions, such as oral exams, are difficult to scale.

Fortunately, several tools are currently available that can assist students in their writing (for an overview, see Allen et al., 2015). The most recent and increasingly popular are generative AI tools that automatically generate essays from user input. Examples include Hyperwrite, Quillbot, and the more recent Microsoft Copilot powered by OpenAI's latest GPT model (GPT-4 in May 2024). Given the current advances in natural language processing and natural language generation, these tools have become increasingly accurate, to the point where humans' ability to detect whether a text was written by an AI is reduced to chance level (Uchendu et al., 2021). Accordingly, a natural (and common) reaction to these tools is to restrict or control their use and find ways to assess the actual authorship of essays, or to redefine and reflect on teachers' and students' definitions of authorship and plagiarism (see e.g., Fyfe, 2022; Sharples, 2022).

However, we argue that these automated essay generation tools can also be used as a pedagogical method to scaffold student learning. Specifically, allowing students to co-write essays with an AI might also address the classical challenges outlined above. Accordingly, in the current project we aim to answer the following research questions:

  1. How can automated essay generation tools be used to scaffold students in acquiring writing skills? 
  2. What is the effect of automated essay generation tools on students' writing skills?
  3. What is the effect of automated essay generation tools on students' understanding of the topic they write about?
  4. Which factors influence the effects of automated essay generation tools?

In addition, we aim to evaluate the use of automated essay generation tools in specific courses, to determine their effects on students' writing skills as well as on students' understanding of the topic. The latter is especially useful in courses that use writing to support understanding, a pedagogical approach known as writing-to-learn (see Klein & Boscolo, 2016).

Finally, we are interested in whether specific factors influence the effects of automated essay generation tools. These include personal characteristics (such as perceived accuracy of the tool and trust in the tool), tool characteristics, and the pedagogical context (such as the genre of the assignment and the type of course).


References

  • Allen, L. K., Jacovina, M. E., & McNamara, D. S. (2015). Computer-based writing instruction. Handbook of Writing Research, 316–329.
  • Fyfe, P. (2022). How to cheat on your final paper: Assigning AI for student writing. AI & Society, 1–11.
  • Klein, P. D., & Boscolo, P. (2016). Trends in research on writing as a learning activity. Journal of Writing Research, 7(3), 311–350.
  • Sharples, M. (2022). Automated essay writing: An AIED opinion. International Journal of Artificial Intelligence in Education, 1–8.
  • Uchendu, A., Ma, Z., Le, T., Zhang, R., & Lee, D. (2021). TURINGBENCH: A benchmark environment for Turing test in the age of neural text generation. Findings of the Association for Computational Linguistics: EMNLP 2021, 2001–2016.

Objective and expected outcomes

The main objectives and expected outcomes of the project are manifold. First, the application of an automated essay generation system to scaffold students' writing will have direct educational value in the courses in which it is implemented:

1. Increased objectivity in grading: Traditional essay-writing assignments are notoriously difficult to grade in an objective manner. The essay-improvement assignment is more constrained than a traditional essay-writing assignment, and thus, more easily assessed in an objective way. For one, students’ overall contribution (in terms of number of words) is likely to be less than in a traditional essay-writing assignment, and graders can focus on students’ contributions at specific points of the essay. For another, students’ contributions can be tracked with the word processor’s “track changes” function and compared to a known baseline (i.e., the pre-written essay).

2. Students can focus on quality rather than quantity (scaffolding writing): Much of the preliminary work for the essay (e.g., coming up with a basic thesis and a general line of argument) will already have been performed by the AI language model, allowing students to concentrate on increasing the quality of the final product. Indeed, whereas in a traditional essay-writing assignment the aim is to produce a convincing argument, with the help of an AI the aim is to improve a poor essay into a good one. Accordingly, the automated essay generation tool will help scaffold writing and will help students focus on the attributes that distinguish excellent essay writing. Notably, this focus on quality rather than quantity reflects future human-machine collaboration, in which machines are likely to take on simpler tasks, whereas humans will increasingly dedicate themselves to tasks that require high levels of creativity and ingenuity.

3. Reinforcing course content: The reflective component of the assignment requires students to critically reflect on the quality, sophistication, and originality of the AI-generated essay. By carefully selecting the courses in which we implement and evaluate the assignment, the assignment will also reinforce the course content. For example, the course Thinking and Deciding focuses on cognitive processes (including writing) and decision making, and applies these to technology. The assignment directly shows how an AI system can affect people's cognitive processes, and how this influences the decisions they make based on the system's output. Similarly, the course Philosophy and Ethics of AI addresses philosophical questions about machines' abilities to behave in creative and original ways. The essay-improvement assignment transforms abstract notions such as creativity and originality into concrete examples with which students can interact in a hands-on manner.

More generally, the project will improve our understanding of the effects of automated essay generation tools on student learning and of the factors that influence these effects. These insights will be used to provide clear recommendations, including a generic educational activity, on how to use automated essay generation systems in our teaching at TU/e.