Sending feedback emails to a large student group is a common task for many teachers. Often, it is desirable to send personalized variants of an email to each student based on the student’s personal data or study performance. The advantage of personalized feedback over one-size-fits-all feedback in educational settings is intuitive and has been supported by empirical studies (e.g., Gallien & Oomen-Early, 2008). Students are likely to perceive such feedback as more personally relevant, richer in information, and useful for social comparison (cf. Huguet, Dumas, Monteil, & Genestoux, 2001). Despite the potential benefits, personalized feedback emails are rarely used by university teachers because manually adjusting email contents is not practically feasible: with the number of students easily exceeding 200 at the TU/e, a manual approach would take many hours. Indeed, personalized education is under pressure from continuously growing student numbers. With the benefits of personalized feedback in mind, we explored possible solutions to this challenge. We found that the open source mailR package in the R programming language might be an ideal tool to automate the work. It allows us to generate a personalized report based on data (e.g., grades, answers), turn it into a Word document or PDF file, and e-mail this file to individual students. The mailR package is not widely known, and we have seen no applications of it in an educational context.
Although there are many technical solutions for automating emails, the real advantage of mailR is its natural integration with the data analysis power of R. As digital technologies in education develop, more and more student data are collected and stored in digital form. Moreover, student evaluation is now based on much more diverse information and criteria, including exam scores, professional skills, assignments, and peer reviews. Using mailR and other functions of R, a teacher can loop through a dataset with student names, email addresses, and performance data, generate personalized results, and send those results to students in personalized emails. This is an efficient way to raise the quality of feedback for students without increasing the workload of teachers. We believe this approach with mailR is worth examining in detail, testing in specific use cases, and sharing with other teachers at the TU/e and 4TU. Some exemplar applications are listed below:
1) Personalized feedback on exam results: As teachers now receive MC exam results in an Excel spreadsheet, it is possible to provide personalized feedback on exam results to motivate students, potentially using the average performance as a reference. In this pilot project, we used mailR to merge Clicker quiz results and send students their individual results.
2) Personalized feedback on experiment data: In many courses in the social and behavioral sciences at TU/e (e.g., Introduction to Psychology, 0HV10), participating in studies and experiments is a common way for students to learn. It is desirable to provide feedback based on students’ personal data from these experiments. With R and mailR, teachers can automatically analyze the data and send students individualized feedback.
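The looping approach described above can be sketched in a few lines of R. The sketch below is illustrative only: the file name, column names, addresses, and SMTP settings are placeholders and are not the configuration we used, so they would need to be adapted to a specific course and mail server.

```r
# Sketch: loop over a student roster and send each student a personalized
# email with mailR. File name, column names, addresses, and SMTP settings
# below are hypothetical placeholders.
library(mailR)

# Assumed columns: name, email, grade
students <- read.csv("students.csv", stringsAsFactors = FALSE)

for (i in seq_len(nrow(students))) {
  body <- sprintf(
    "Dear %s,\n\nYour grade for the exam is %.1f.\nThe course average was %.1f.\n\nBest regards,\nThe teaching team",
    students$name[i], students$grade[i], mean(students$grade)
  )
  send.mail(
    from = "teacher@university.example",
    to = students$email[i],
    subject = "Your personalized exam feedback",
    body = body,
    smtp = list(host.name = "smtp.university.example", port = 587,
                user.name = "teacher", passwd = "********", tls = TRUE),
    authenticate = TRUE,
    send = TRUE
  )
}
```

Because a loop like this sends real emails, it is prudent to first run it with `send = FALSE` (or with one's own address substituted for the students') to verify the generated messages.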
We focused on the two implementations of mailR where we saw the most potential. The first was a way to organize quiz results from the clickers used at the TU/e. The second was a way to give students automated feedback on data they collected themselves (a common task in several courses we teach at Psychology and Technology).
Automated feedback on clicker quiz results. Students in the course 0HV10 perform a Flow clicker quiz at the beginning of every lecture. Good performance on these quizzes provides a bonus (0.5 or 1 point) on the assignment grade for the course. The quiz results can be stored as separate spreadsheet files. Based on conversations with educators at the TU/e, we realized that combining quiz grades across lectures is a challenge (not all students participate every week, which leads to quiz result spreadsheets that cannot be merged easily). Because R is excellent at merging different data files, this was seen as a perfect use case.
After all 12 quizzes were performed in the course, we created R code to merge all quiz results automatically, combine this information with students’ email addresses, and send them a personalized report on what their scores were and whether they were entitled to the bonus grade. The code and a detailed ‘how to’ are available in Appendix 1.
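The merging step can be sketched as follows; the full, tested code is in Appendix 1. In this sketch the directory, column names, and bonus thresholds are hypothetical placeholders, and missing weeks are treated as a score of 0, which reflects the problem described above (not all students appear in every week's spreadsheet).

```r
# Sketch: merge per-lecture quiz spreadsheets on student ID with a full
# outer join, fill in missing weeks as 0, and compute totals. Directory,
# column names, and bonus thresholds are hypothetical placeholders.
files <- list.files("quiz_results", pattern = "\\.csv$", full.names = TRUE)

# Each file is assumed to have columns: student_id, score
quizzes <- lapply(files, read.csv, stringsAsFactors = FALSE)

# all = TRUE keeps students who missed a quiz (a full outer join)
merged <- Reduce(function(x, y) merge(x, y, by = "student_id", all = TRUE),
                 quizzes)
merged[is.na(merged)] <- 0  # a skipped quiz counts as 0

# Sum all score columns (every column except student_id)
merged$total <- rowSums(merged[, -1])

# Illustrative bonus rule: 1 point above 80% of the maximum total,
# 0.5 above 60%; the actual course rule may differ
max_total <- max(merged$total)
merged$bonus <- ifelse(merged$total >= 0.8 * max_total, 1,
                ifelse(merged$total >= 0.6 * max_total, 0.5, 0))
```

The resulting data frame can then be merged with the roster of email addresses and fed into the mailR loop to send each student their scores and bonus.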
This solution can easily be generalized to other types of data (e.g., we have already used it to provide feedback on multiple choice exams, where students are able to see which questions they got correct, without sharing their grades with all other students).
We also tried to upload these quiz grades to the Canvas gradebook, but this process was too cumbersome to be worth the effort. We found it easier to email students their grades than to upload them into the gradebook. However, we do not believe this workaround is desirable in the long run (even though it points to a benefit of our mailR solution). Ideally, future projects would examine how educators can more easily organize grades from different tests and add these to the gradebook in the LMS.
Automated feedback on assignments. In each of the last three years, students in the course 0HV10 performed an experience sampling experiment as a course assignment, in which they answered a set of questions three times a day about their daily emotions, feelings, and behaviors over a period of two weeks. Both for educational purposes and in line with standard practice when running experiments, we decided to provide the students with personalized feedback based on their individual data collected during the experiment. Students eventually received personalized PDF reports in their mailboxes. The reports summarized the patterns in their daily emotions, feelings, and behaviors. For example, students could see how their self-reported stress levels changed over the day during the weeks of the experiment, and how their own pattern compared to the average pattern of all the students in the course.
Experience sampling experiments have often been used as course assignments by our colleagues (e.g., courses in Psychology & Technology, the HTI Master program, and to our knowledge also at the Department of Industrial Design). To make educators’ lives easier, we have created an R function that can read typical experience sampling datasets, analyze and visualize the data, generate personalized reports in PDF, and finally send the reports to individual students. In the simplest use case (no substantial change to the report structure), the whole process takes under 30 minutes to send reports to over 150 students by email. If an educator is familiar with R, she can also manually change the code for advanced use cases, for example to adapt it to a different data structure or to change the structure of the reports. A step-by-step tutorial is available in Appendix 2.
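The report-and-email pipeline can be sketched with a parameterized R Markdown template rendered once per student; the tested version is in Appendix 2. In this sketch the template name ("feedback_template.Rmd"), file names, column names, and SMTP settings are hypothetical placeholders.

```r
# Sketch: knit a personalized PDF report per student from a parameterized
# R Markdown template, then email it as an attachment with mailR. Template
# name, file names, column names, and SMTP settings are placeholders.
library(rmarkdown)
library(mailR)

students <- read.csv("students.csv")   # assumed: student_id, name, email
esm_data <- read.csv("esm_data.csv")   # assumed long format with student_id

for (i in seq_len(nrow(students))) {
  id <- students$student_id[i]
  pdf_file <- sprintf("report_%s.pdf", id)

  # The template is assumed to declare a 'data' parameter in its YAML
  # header and to build plots/summaries from params$data
  render("feedback_template.Rmd",
         output_file = pdf_file,
         params = list(data = esm_data[esm_data$student_id == id, ]))

  send.mail(from = "teacher@university.example",
            to = students$email[i],
            subject = "Your personalized experience sampling report",
            body = sprintf("Dear %s,\n\nPlease find your personal report attached.",
                           students$name[i]),
            smtp = list(host.name = "smtp.university.example", port = 587,
                        user.name = "teacher", passwd = "********",
                        tls = TRUE),
            authenticate = TRUE, send = TRUE,
            attach.files = pdf_file)
}
```

Rendering the template once for a single test student before running the full loop is a cheap way to catch data or layout problems early.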
It should be noted that the current solution assumes that an educator has the experience sampling data in a structured format (e.g., csv). Pre-processing raw data is not within the scope of this project; it depends heavily on the experience sampling application used (e.g., commercial tools such as Metricwire or open source tools such as Experience Sampler) and on the data storage format (e.g., a MySQL database, JSON, or flat files). As behavioral data collection in daily life becomes even more popular in education and research, we see a possibility that future projects may seek a more systematic and automated solution, from data collection to personalized feedback, at the university level. Furthermore, this solution can easily be generalized to other types of data, such as data exported from a Learning Management System on a student’s daily behavior over the course of the semester. As such, this use case demonstrates the feasibility of personalized feedback that can be applied to other relevant cases in education.
Students greatly appreciated the personal feedback on the data they collected as part of the assignment in the course.
The organization of the course in which the automated feedback was implemented was appreciated by the students and evaluated with a 4.3.
More importantly, we evaluated the automated feedback system by providing students with their answers on multiple choice tests for 2 courses. While Canvas can give students their final grade, it does not provide room for a detailed overview of their score on each question of an exam. For example, in our Human Factors course, students get 25 multiple choice questions, 10 short open questions, and 2 larger essay questions. Instead of only being told they received a 7.4 for the exam, students received, through the automated feedback, detailed information about their score on all 37 individual questions. Based on our informal evaluation, we believe students find this helpful. One student had initially registered to look at his exam results after receiving his grade, but when we sent through the detailed individual report, replied: “I will not be attending the meeting as this information is sufficient for me. Thank you!” Furthermore, the students who did make an appointment to look at the exam knew which questions they had scored well or poorly on, and this made their visit to the teacher shorter and more focused.
We strongly believe that automating the merging of different types of data files deserves attention in education, and the benefits for teachers in reducing errors and saving time are substantial. We also believe that providing students with individual feedback about their performance on assignments or quizzes is appreciated. More attention should be given to how teachers store and merge grades, and to how easy it is to provide individual feedback. Our current solution works well for individuals with decent programming experience, but in the end, our project mainly highlights the difficulties of merging grades across teaching tools and of providing individual feedback to students. If some programming skills are available, these limitations can be overcome with solutions such as the one we have implemented in this project.
Limitations of the current solution
We chose a challenging time for our educational improvement. The recent switch to OSIRIS made it somewhat difficult to get the information we needed. For both implementations above, teachers need to manually request a list of students, their student IDs, and their e-mail addresses from the education administration (this information is available neither in Osiris nor in Canvas). Furthermore, for the implementation of grading quizzes, teachers need to manually request a list matching clicker IDs to student IDs (from Peet van Leeuwen or Floris Verhagen). It would be much more efficient for teachers if this information were more readily available. Moreover, the information we requested from the education administration is not standardized, which means the columns containing the information differ between requests. Because of the limited availability of standardized files, we explain how to use mailR in two detailed ‘how-to’ documents, but we also believe it will be challenging for educators without programming experience to get the mailR solution to work. If the information were provided in a more standardized form, the process would be easier. Nevertheless, we still believe that our automated feedback solution can be useful for educators (and we will keep using it in the future).
Although we believe the mailR solution worked well given our expertise in R and the goals we had, we have to admit that our approach is difficult to generalize across courses and teachers. First of all, some programming experience in R is required. There is no way around manual adjustments (sometimes minor) and testing the code before sending out the emails. One of our colleagues has used the code and, with some help, was able to use it for her course, but it required some time investment. This is even more strongly the case for the personalized feedback on an assignment.
For the automated emails sharing quiz results, we envisioned a system where educators could simply upload their Flow quiz results and standardized files containing clicker IDs and email addresses, and the system would organize these files and send personal feedback. We did not accomplish this goal, but we believe our mailR solution demonstrates the feasibility of such an approach. We also believe our project has revealed the difficulty teachers currently have in merging quiz results from Flow quizzes, and that a general solution is desirable.
For the automated emails sharing feedback on data students collected, we believe the solution provided added value by giving students personalized feedback about their results, which they found interesting. The code we provide is ready to use without much R experience when the standard data analysis in the example is sufficient. When other analyses and summaries are needed, however, it requires that researchers already use R to analyze the data, which is perhaps a limitation; it is possible, though, to export individual-level results from alternative analysis software and read these in when sending emails with mailR.
A final limitation was getting external grades (such as the quizzes) into Canvas. Canvas seems to have the ability to import grades, but it is extremely cumbersome and, in our opinion, was more effort than it was worth.