
Project introduction and background information

A challenge in the domain of multidisciplinary education is the assessment of students’ work. Students have to apply theories and concepts from different academic disciplines, but safeguarding educational and assessment quality is difficult because the assessors are typically specialists, not generalists. The goal of this project was to examine the boundary conditions for and optimal design of multidisciplinary assessment. The course Industrial Engineering (IE) Quick Scan was chosen as a case study. Insight into the effectiveness of multidisciplinary assessment and its boundary conditions was gained through a literature study. Interviews with assessors and students then generated insights into the current assessment process in the IE Quick Scan. Subsequently, a field experiment was conducted to examine whether lower expertise on a subject impairs an assessor’s ability to reliably assess students’ work and make grading decisions. The results of this project identify features of effective multidisciplinary assessment and provide information on the accuracy of grading procedures when specialist assessors assess multidisciplinary assignments. This yields clear guidelines for arriving at reliable and valid assessments of such assignments, irrespective of the assessors’ expertise. Further research is needed to explore whether holistic or analytic approaches are preferred within the multidisciplinary course IE Quick Scan.

Objective and expected outcomes

The aim of this study is to investigate whether the current assessment procedure in the IE Quick Scan meets the minimum requirements for educational quality, and whether and how this procedure can be redesigned to improve its validity and reliability. To do so, this project combines theory and empirics. First, a literature study on multidisciplinary assessment outlines the boundary conditions for effective assessment of multidisciplinary assignments. Second, interviews and a field experiment provide more insight into the experiences of assessors, the validity and reliability of the current assessment procedures, and potential improvement measures in the IE Quick Scan course.

More specifically, the following research questions were formulated:

  1. What is multidisciplinary assessment and how can it support student learning?
  2. What criteria and standards are required to assess multidisciplinary assignments?
  3. To what extent does the current assessment procedure for the IE Quick Scan meet the requirements of reliability and validity of grading?

Assessing multidisciplinary student work is challenging. Students have to apply theories and concepts from different academic disciplines, but are generally assessed by specialists in certain fields, not generalists. Through a literature study and a field experiment, this study aims to contribute recommendations for lecturers and program managers who want to apply multidisciplinary assessment in their courses. Within the fuzzy concept of multidisciplinary learning, standards from the different disciplines are the starting point for assessing a multidisciplinary approach; integration occurs through the lens of a common theme (Drake, 2007). The IE major program has found that common lens in the Quick Scan, which represents theories and standards from several disciplines. Our literature study offers several suggestions to improve the assessment of the Quick Scan and increase its reliability and validity.

One suggestion is to clarify and share assessment criteria and standards between faculty and students. According to Ben-David (2000), understanding of the criteria involved is crucial for producing agreement between assessors and for providing an accurate evaluation of the student’s overall proficiency. Our interviews show that several lecturers are unaware of the importance of sharing and clarifying assessment criteria at the start of the course. This implies that training in assessment might be useful to equip assessors with knowledge and skills regarding multidisciplinary assessment.

In determining the grade, the lecturer ideally has an advantage over the students in terms of superior knowledge and extensive experience. The interviewees stressed the advantages of superior knowledge and extensive experience in a particular domain for the quality of the feedback they give their students. However, several lecturers mentioned feeling less certain when assessing subassignments outside their domain of expertise. According to Van Berkel (2012), assessors need to be supported in assessing the quality of a task. Although several lecturers seek to improve their knowledge of domains outside their expertise by delving into research, they also express the need for extensive answer models and the possibility to consult colleagues. Elaborating the answer models can therefore be helpful, and assessors should be facilitated to calibrate their assessment of the Quick Scan.

To support the assessor in judging the quality of a task, assessment models are often used in which the quality of a product is assessed on the basis of criteria, standards and rating scales (Van Berkel, 2012). Since an assessment form is used in the Quick Scan, criteria, standards and rating scales are provided. However, according to our interviews, the answer models do not always clearly indicate what type of answer should be graded “insufficient”, “sufficient”, or “good”. This implies that a more extensive grading guideline needs to be included.

Because learning and instruction are increasingly competence-based, holistic scoring is emphasized as a means to adequately determine competence (Van der Vleuten & Schuwirth, 2005; Oosterheert, Eldik, & Kral, 2007; Van Berkel, 2012). However, many courses, including the IE Quick Scan, currently apply analytic judgment: raters assign scores to each of the criteria assessed in the product, after which the final score is obtained by summing the separate scores.
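To make the difference between the two scoring approaches concrete, the following minimal Python sketch contrasts them. The criteria, weights and scale are hypothetical illustrations, not the actual Quick Scan assessment form.

```python
# A minimal sketch contrasting analytic and holistic scoring.
# Criteria, weights and the 1-10 scale are invented for illustration;
# they are not taken from the IE Quick Scan assessment form.

# Analytic scoring: each criterion is scored separately, and the final
# grade is the weighted sum of the partial scores.
criterion_scores = {"problem analysis": 7, "use of theory": 6,
                    "integration of disciplines": 8, "reporting": 7}
weights = {"problem analysis": 0.30, "use of theory": 0.30,
           "integration of disciplines": 0.25, "reporting": 0.15}
analytic_grade = sum(weights[c] * score for c, score in criterion_scores.items())

# Holistic scoring: the assessor forms a single overall judgment of the
# integrated product, guided (but not bound) by the same criteria.
holistic_grade = 7.0  # one overall score instead of a sum of parts

print(f"analytic: {analytic_grade:.1f}, holistic: {holistic_grade:.1f}")
```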
The criteria act as primary reference points against which student submissions are judged, and then serve as the basis for communication and feedback. Analytic scoring is well suited to this difficult task. However, when knowledge, skills and attitudes need to be assessed both separately and integratively, holistic scoring might be preferred for grading explicit and implicit expertise. Further discussion of the main goals and the integrative grounding of this course is required to determine whether analytic judgment should be emphasized.

In multidisciplinary assessment, variability in assessment scores can appear. Besides disagreement caused by differences in experience, factors such as lecturers’ attitudes regarding students and content have been reported to influence the rating of students’ work (Davidson, Howell, & Hoekema, 2000). In the IE Quick Scan, too, the major threat to reliability is the lack of consistency of the individual assessors. Developing assessment skills is therefore necessary to improve the quality of individual assessors and to decrease inconsistency among them.

Multidisciplinary assessments tend to be more effective when consistent and constructive feedback on students’ progress, processes, results and limitations is provided. Students report large variation in the quality of feedback in the IE Quick Scan. These results converge with findings that lecturers differ in the frequency and amount of feedback they tend to give on students’ progress, processes and results. A common understanding of what constitutes high-quality feedback is necessary to ensure that each student receives consistent and constructive feedback on their learning process.

In an ideal situation, the assessment and its results are independent of those who score. Our experiment revealed that such homogeneity and reliability are difficult to achieve. Having multiple assessors assess each report can enhance quality assurance, and our empirical study shows that assessment by experts prevents grade inflation. Assigning assessors to specific subassignments within their domain of expertise can improve reliability.
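Because the lack of assessor consistency is identified above as the major threat to reliability, it can help to make agreement measurable. As a minimal sketch, the Python snippet below computes Cohen’s kappa for two assessors grading the same reports on the categorical scale mentioned earlier (“insufficient”, “sufficient”, “good”); the grade data are invented for illustration and are not results from this project.

```python
# A minimal sketch of quantifying interrater agreement with Cohen's kappa.
# The grades below are invented examples, not IE Quick Scan data.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters grading the same set of reports."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of reports with identical grades.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in labels)
    if expected == 1:
        return 1.0  # both raters used a single identical label throughout
    return (observed - expected) / (1 - expected)

grades_a = ["good", "sufficient", "insufficient", "sufficient", "good"]
grades_b = ["good", "insufficient", "insufficient", "sufficient", "sufficient"]
print(f"kappa = {cohens_kappa(grades_a, grades_b):.2f}")  # approx. 0.41
```

A kappa near 1 indicates agreement well beyond chance, while a value near 0 signals the kind of inconsistency among assessors that the recommendations below aim to reduce.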

Recommendations

Improving assessment quality

Prior

  • Criteria are crucial; they should establish the level of achievement that is required for a student to pass the course and should be directly related to the course learning outcomes.
  • Clarifying and sharing criteria between assessors and students is required prior to the assessment, preferably at the start of the course or project.
  • The understanding of the criteria involved is crucial for producing agreement between assessors and for providing an accurate evaluation of the student’s overall proficiency.
  • Criteria for multidisciplinary assessment should include validity within and beyond the disciplines.
  • The assessed task needs to be consistent with the theory, and the scoring structure (such as criteria or rubric) must follow rationally from the domain structure.
  • Training in assessment may be useful to equip assessors with knowledge and skills regarding multidisciplinary assessment.
  • To prevent variability in assessment scores caused by differences in experience and by differences in lecturers’ attitudes regarding students or content, assessment skills need to be developed; this improves the quality of individual assessors and decreases inconsistency among them.

During

  • The assessment must rest on valid indicators of what counts as accomplished student work.
  • Evidence of learning should be authentic and demonstrate valid learning.
  • Depending on whether knowledge, skills and attitudes need to be assessed separately or integratively, either holistic or analytic scoring may be preferred. Further discussion of the main goals and the integrative grounding of a course is required to determine whether analytic or holistic judgment should be emphasized.
  • Interrater reliability improves when clear standards are defined and the interpretation of criteria is discussed. To improve intrarater reliability, a rubric can be useful.
  • Extensive grading guidelines need to be included to clarify criteria, standards and rating scales.
  • Assessors need support in assessing the quality of a task: through improving their knowledge of domains outside their expertise by delving into research, through extensive answer models, and through the possibility to consult colleagues.

After

  • Multidisciplinary assessments tend to be more effective when consistent and constructive feedback on students’ progress, processes, results and limitations is provided.
  • A common understanding of what constitutes high-quality feedback is necessary to ensure that each student receives consistent and constructive feedback on their learning process.
  • In an ideal situation, the assessment and its results are independent of those who score. Having multiple assessors assess each report can enhance quality assurance, and assigning assessors to specific subassignments within their domain of expertise can improve reliability.