VARIABLE ASSESSMENT AND ITS RELEVANCE TO STUDENT ENGAGEMENT
Michael Chidubem (November 2024) | Module: LT765 Enhancing Academic Practice (PGCAP)
ABSTRACT
Assessment is important to all stakeholders in higher education. However, in an ever-changing educational landscape, relying solely on traditional assessment methods to gauge knowledge acquisition and retention could be misleading. It is important to explore strategies that are not only dynamic but also effective in enhancing the learning experience of students. The purpose of this study is to critically discuss variable assessment methods in the context of students' feedback on how they might impact their education in the animation course at the University of Brighton. The research and data collection process was approved by the ethics committee through the BREAM process to ensure fairness and responsibility in carrying out the project.
A total of 27 undergraduate animation students participated anonymously in an online survey. Findings suggest a range of sentiments about variable assessment and the effectiveness of current assessment methods, lending themselves to further research on how a range of assessment methods could potentially enhance students’ learning experience.
INTRODUCTION
Higher education plays a pivotal role in shaping the intellectual and professional futures of students. One way educators measure students’ learning progress is through assessment, making it an essential component of higher education. However, while traditional assessment methods, such as exams and standardized tests, have long been the primary means of evaluating student performance, there is an ongoing debate over whether these conventional methods effectively accommodate the diverse learning styles, backgrounds, and needs of students.
Newstead (2002) opines that current methods of assessment do not facilitate the development of conceptual understanding or a deep approach to learning. As the landscape of education evolves, there is a growing need to explore a variety of assessment techniques that cater to students’ unique requirements while fostering deeper engagement with course content.
Variable assessment methods, within the context of this research, can be defined as diversifying and providing alternative assessment strategies to better serve the student population through methods commensurate with students’ uniqueness. This approach recognizes that each student is distinct and gains knowledge in different ways and at a different pace. This pedagogic research, undertaken as part of the requirements for completing the Enhancing Academic Practice module, seeks to advocate for innovative approaches to assessment types for theory and practical modules within the animation course at the University of Brighton.
AIMS AND OBJECTIVES
The aim of this research is to identify, discuss and critically evaluate basic concepts of variable assessment methods, situating them against the backdrop of students’ perceptions and feedback on how they affect their learning experience in the animation course at the University of Brighton. The research aims to contribute to ongoing discussions around alternative assessment types as well as to provide opportunities for further research. Students’ feedback from a survey conducted in the animation course provides data that not only assists in understanding current assessment methods within the course but also helps identify opportunities for co-designing assessment types that are inclusive and reflective of students’ core strengths. This process could help students achieve their best design outputs while meeting the prescribed learning outcomes for each module.
In recent years, there has been a call for a paradigm shift that highlights the crucial role of students as active stakeholders and active learners, with implications for how learning outcomes and training goals are designed and developed (Flores & Simão, 2007; Simão, Santos, & Costa, 2003). This research seeks to contribute to these discussions.
To achieve this, the research seeks to answer the following research questions:
RESEARCH QUESTIONS
- What are students’ understanding and sentiments around the concept of variable assessments in the animation course? (This would be answered via primary research)
- What possible variable assessment types could be explored, adopted or made available to animation students and why? (This would be answered via primary research with secondary research providing context)
- How can variable assessment be used as a tool for supporting deep learning and students’ engagement with course? (This would be answered via both primary and secondary research)
- How does variable assessment currently sit within the higher education space? Is there previous or current usage of these assessment types in other higher education institutions? How effective or ineffective has their usage been? (This would be answered via the existing body of knowledge from secondary research and predominantly discussed in the literature review)
LITERATURE REVIEW
A BRIEF COMMENTARY ON ASSESSMENT
Assessment is at the core of students’ learning experience (Knight & Brown, 1994, 1) and plays a major part in the educational process, essential for informing and enhancing continuous learning (Cowie & Bell, 1999). Taras (2005) argues that assessment, as one of the most important aspects of teaching, allows teachers to ascertain the knowledge and skill levels of their students. According to Kırmızı and Kömeç (2016), assessment is an essential component of all teaching and learning activities because it not only informs instructional decisions but can also help identify students’ strengths and shortcomings in relation to classroom instruction, as well as give students targeted feedback to enhance their learning.
Additionally, assessment goes beyond mere grading; it is a window into the depth of students' comprehension and the effectiveness of the teaching methods employed. Biggs, Tang and Kennedy (2022) emphasize that well-designed assessments ensure constructive alignment between teaching objectives and learning outcomes, thus promoting deeper learning. With the right assessment, teachers can categorize and grade their students, provide feedback, and plan their lessons accordingly (Tosuncuoglu, 2018).
However, finding the right assessment lends itself to some debate because the distinction between deep and surface learning is significant. Surface learning is characterized by memorization and regurgitation of facts, often with the intent of merely passing exams, while deep learning involves critical thinking, understanding, and application of knowledge to situations (Entwistle & Ramsden, 2015). Furthermore, deep learning is indicated by students’ ability to connect new knowledge with prior experiences, apply concepts in novel contexts, and engage in metacognitive reflection (Marton & Säljö, 1976). Assessments such as essays, research projects, and open-ended exams are more effective at promoting deep learning because they require students to synthesize information, rather than recall isolated facts (McDowell, 2017). In this sense, assessments serve not only as a grading tool but also as a pedagogical strategy that aligns with the aim of fostering deep learning (Knight, 2002).
Students acquire knowledge in different ways. Thus, it follows that a range of testing methods is required to ensure fairness in individual student assessment. Belle (1999) argues that while traditional methods of assessment were generally accepted in the past, many educators now believe that alternative assessments are the most reliable means of gauging students' higher-order thinking abilities.
The Limitations of Traditional Assessment Methods
Traditional assessment methods, primarily in the form of exams and standardized tests, have been widely criticized for their inability to capture the full spectrum of student learning. According to Brown, Bull, and Pendlebury (2013), these assessments often prioritize rote memorization over critical thinking and creativity, thereby failing to reflect a student's true understanding and capability. Moreover, standardized tests can be biased against students from different cultural or linguistic backgrounds, as noted by Gipps and Murphy (1994), thus perpetuating inequities in educational outcomes.
The Role of Diverse Learning Styles
Gardner's (1983) theory of multiple intelligences underscores the need for varied assessment methods to cater to different learning styles. Gardner posits that intelligence is multifaceted, encompassing areas such as linguistic, logical-mathematical, spatial, and interpersonal intelligences. Traditional assessments typically favour linguistic and logical-mathematical intelligences, marginalizing students who excel in other areas. Armstrong (2009) argues that by recognizing and assessing multiple intelligences, educators can provide a more holistic and inclusive evaluation of student capabilities.
Alternative Assessment Methods
Several alternative assessment methods have been proposed and implemented with varying degrees of success. Formative assessments, such as peer reviews and self-assessments, allow students to engage in reflective learning and receive continuous feedback (Black & Wiliam, 1998). Project-based assessments and portfolios enable students to demonstrate their knowledge and skills through practical applications and sustained work (Barrett, 2007). Additionally, digital assessments and e-portfolios offer innovative ways to track and evaluate student progress, particularly in an increasingly digital world (JISC, 2008).
Engagement and Motivation
Research indicates that diverse assessment methods can significantly enhance student engagement and motivation. Hattie and Timperley (2007) highlight the importance of feedback in the learning process, suggesting that varied assessments provide more opportunities for meaningful feedback. Furthermore, the inclusion of creative and practical assessments can increase student interest and investment in their coursework (Ryan & Deci, 2000). This, in turn, can lead to improved learning outcomes and greater overall satisfaction with the educational experience.
The next section looks at the methodology employed in this research.
METHODOLOGY
This research was originally designed as a mixed-methods study, combining qualitative methods (focus groups, observation, interviews, etc.) with quantitative methods (questionnaires and surveys). However, as the scope of the research was revisited, a predominantly quantitative approach was adopted. A few elements of qualitative data can be identified, such as additional free-text feedback from participants, but this does not qualify the study as mixed methods, since no formal interviews or focus groups were conducted. Qualitative research typically explores the depth and complexity of human experiences and behaviours, providing rich, descriptive data, while quantitative research focuses on numerical data, enabling the measurement of variables and the establishment of patterns or relationships (Creswell & Creswell, 2017).
Quantitative methods offer breadth through numerical data and statistical analysis as opposed to qualitative methods which provide depth by capturing the intricacies of human experiences (Tashakkori, 2010). The semblance of qualitative data in this research is partly because in many cases, quantitative data alone cannot fully explain certain behaviours or phenomena, which is where qualitative insights are valuable, and corroboration of findings is needed to minimize the biases that might arise (Denzin, 2012). Additionally, qualitative findings may inform the development of a quantitative survey, ensuring that the questions are grounded in real-world experiences (Bryman, 2006) and this is particularly valuable in fields such as education, healthcare, and social sciences, where both numerical trends and personal experiences are important (Johnson, Onwuegbuzie, & Turner, 2007).
The survey data from this research were analysed using content analysis and thematic analysis methods. Kondracki, Wellman, and Amundson (2002) note that the foundation of content analysis is the idea that texts are a rich source of data with great potential to disclose important details about specific phenomena. This process involves sorting text into groups of related categories, taking the participant and context into account, thereby finding patterns, associations, similarities and differences that are both overt and implicit in the text. Similarly, thematic content analysis (TCA) was chosen for understanding the data in this research because, according to Anderson (2007), identifying and studying recurring themes in the texts submitted for analysis is fundamental to gaining insight into participants’ perceptions.
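To illustrate the frequency-counting step that follows qualitative coding in content analysis, the short Python sketch below tallies how often each theme occurs across a set of coded free-text responses. The theme labels and responses here are hypothetical placeholders, not the actual survey data; this is a minimal sketch of the mechanics only, not the coding judgment itself, which remains a manual, interpretive task.

```python
from collections import Counter

# Hypothetical coded responses: during qualitative coding, each free-text
# answer was manually assigned one or more theme labels (invented here).
coded_responses = [
    ["choice", "fairness"],
    ["choice"],
    ["feedback", "choice"],
    ["fairness"],
]

# Tally how often each theme occurs across all responses.
theme_counts = Counter(code for response in coded_responses for code in response)

# Report themes from most to least frequent.
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

Running this prints each theme with its frequency, ordered from most to least common, which is the kind of overt pattern that content analysis surfaces before interpretation begins.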
DATA RESULT
A total of 27 students across levels 4 and 6 of the animation course at the University of Brighton took part in the survey. The diagram below shows how many students from each level participated.
24 students indicated that they are familiar with the concept of formative and summative assessment, while only 17 students said they were familiar with the learning outcomes of their current modules. 15 students said they were familiar with the marking criteria for their current modules, while 19 students noted that they are familiar with how they are currently assessed, expressing differing views on how impactful the process is to their learning. 21 students indicated that current assessment methods provide them with good feedback for improvement, although only 16 students believed that current assessment types provide opportunities to give their very best. On the main subject of the research, as indicated in the diagram below, only 2 students had an idea of what the term variable assessment meant.
On whether they preferred the option of determining how they would be assessed, 15 students answered in the affirmative while 12 students were passive.
The remaining set of questions required students to input additional text as well as indicate which options were most applicable to them in view of the current assessment types, possible solutions and perceptions about feedback. This section contains elements of qualitative data. The diagram below details these responses.
Given the data in the diagram above, there is an overall impression that students were generally content with the current assessment approach and feedback within the course, with participants strongly agreeing with the prompts. An attempt will now be made to discuss the research findings from the data.
RESEARCH FINDINGS
Data from the survey provided informed insight into the research hypothesis. Firstly, it indicated that while the researcher opined that variable assessment would enhance students’ experience and retention, many of the students had never heard of the term before. Some of the responses are outlined below:
- “I think it means assessing someone not by the regular academic methods but by including the person’s actual skills, interests and personality…”
- “… assessment criteria varying based on a student”
- “Different assessments and marking criteria for different individuals”
- “Multiple ways to be assessed. You can pick out of different assessments”
- “Assessment from many different people”
Most students believe that current assessment methods have been beneficial and equally agreed that they enjoy good feedback. However, over half of the participants were open to the idea of having a say in how they are assessed. Some responses are listed below:
- “I don't mind how we do it already, but choosing how we are assessed could benefit some peoples’ skillsets than having to stick to one assessment type. Having a choice lets people tailor and show off what they know best.”
- “I would like to be able to choose between a few options of things to work on for my submission.”
- “… it allows more freedom and for me to get the best mark I can get”
- “Freedom of choice is good”
- “It would be nice to be able to control how you’re assessed in some way”
- “It could give more personalized feeling to the assessment”
- “It may improve the studying experience”
- “… everyone should be assessed by their unique and individual work process”
- “Having a direct line with the tutors will not only help us understand the marking criteria but it will also provide tutors with feedback as to how the students are understanding the learning objectives.”
Some students were passive on whether variable assessment would benefit them but were open to further enlightenment on the topic. Regarding assessment type, most participants appreciated the blend of theory and practice, but the majority of them wanted a more practical approach to the modules, as they believe it fortifies them with the skillset needed to excel in the industry.
- “I personally prefer practical projects...”
- “I would prefer practical as I think animation is more practical than speculative.”
- “I would prefer a mix of practice projects and written essays as it showcases different skills, and different things can be determined by the different forms.”
- “I believe both are crucial to the development of a student and should both be integrated into the courses and modules”
- “I prefer to do practical projects as they may contribute to an end of year showreel/portfolio”
In general, the overall sentiment leaned towards the need to explore this research further by engaging in intentional discourse and re-evaluating the current assessment methods in the course.
CRITICAL REFLECTION/RECOMMENDATION
Students view assessment as the most significant component of their course, yet the UK National Student Survey in recent years shows that the assessment process rates as the least favourable aspect of students’ learning experience (Price, Carroll, O’Donovan, & Rust, 2011). It is imperative that assessment be handled in a way that recognizes the variety of interconnected issues and concerns, given its significance.
Lessons from this research show that in creative courses like animation, where divergent thinking, technical skills, and artistic expression are key, diverse assessment methods are critical. By incorporating various assessment methods such as portfolios, peer reviews, presentations, and reflective essays, educators can better evaluate not only technical skills but also creativity and problem-solving abilities (Biggs, Tang, & Kennedy, 2022). For instance, a final animation project can showcase technical prowess, while a reflective essay allows students to articulate their creative journey and thought processes (Race, 2001). Similarly, peer assessments can further develop students' critical thinking and collaborative skills, essential in professional creative environments (Boud & Falchikov, 2007).
It is important to state that despite the researcher’s hypothesis about the effectiveness of variable assessment in the animation course, current data from the research indicate that the concept would benefit from further research and discourse. Notwithstanding, it is the opinion of the researcher that a multi-modal assessment approach could ensure a more comprehensive evaluation of skills and support students' individual strengths and learning preferences.
CONCLUSION
Assessment in higher education is a multi-faceted tool that, when used effectively, supports deep learning and provides valuable feedback for both tutors and students. However, the process of finding the best assessment type for each student is complex. Primary and secondary research were conducted, and the researcher opines the need for continued discussion on best assessment practices, as this may not only support student learning and retention but could also enhance growth by promoting critical thinking, reflection, and application of knowledge. This research is far from conclusive; rather, it lends itself to further discourse.
BIBLIOGRAPHY
Anderson, R., 2007. Thematic content analysis (TCA). Descriptive presentation of qualitative data, 3, pp.1-4.
Belle, D., 1999. Traditional Assessment versus Alternative Assessment.
Biggs, J., Tang, C. and Kennedy, G., 2022. Teaching for quality learning at university 5e. McGraw-hill education (UK).
Boud, D. and Falchikov, N., 2007. Developing assessment for informing judgement. In Rethinking assessment in higher education (pp. 191-207). Routledge.
Broadfoot, P. and Black, P., 2004. Redefining assessment? The first ten years of assessment in education. Assessment in education: Principles, policy & practice, 11(1), pp.7-26.
Brown, S., Rust, C. and Gibbs, G., 1994. Strategies for diversifying assessment in higher education. Oxford Centre for Staff Development.
Bryman, A., 2006. Integrating quantitative and qualitative research: how is it done?. Qualitative research, 6(1), pp.97-113.
Buhagiar, M. A. 2007. “Classroom Assessment within the Alternative Assessment Paradigm: Revisiting the Territory.” The Curriculum Journal 18 (1): 39–56. doi:10.1080/09585170701292174.
Burnard, P. 2011. “Creativity, Performativity and Educational Standards: Conflicting or Productive Tensions on Music Education in England.” In Re-thinking Standards for the Twenty-First Century: New Realities, New Challenges, New Proposition, edited by P. G. Woodford, 21–44. Ontario: University of Western Ontario.
Charlton, N., Weir, K. and Newsham-West, R. 2022. Assessment planning at the program-level: a higher education policy review in Australia. Assessment & Evaluation in Higher Education.
Cowie, B., and Bell, B. (1999). A Model of Formative Assessment in Science Education. Assessment in Education: Principles, Policy & Practice, 6(1), 101-116. https://doi.org/10.1080/09695949993026
Creswell, J.W. and Creswell, J.D., 2017. Research design: Qualitative, quantitative, and mixed methods approaches. Sage publications.
Denzin, N.K., 2012. Triangulation 2.0. Journal of mixed methods research, 6(2), pp.80-88.
Entwistle, N.J. and Entwistle, A., 1991. Contrasting forms of understanding for degree examinations: The student experience and its implications. Higher education, 22(3), pp.205-227.
Entwistle, N.J. and Ramsden, P., 1983. Understanding Student Learning. Croom Helm.
Flores, M.A. and Simão, A.M.V., 2007. Competências desenvolvidas no contexto do Ensino Superior: a perspectiva dos diplomados. In V Jornades de Xarxes d'Investigació en Docència Universitària: La construcció col· legiada del model docent universitari del segle XXI (p. 51). Instituto de Ciencias de la Educación.
Flores, M. A. et al. (2014) ‘Perceptions of effectiveness, fairness and feedback of assessment methods: a study in higher education’, Studies in Higher Education, 40(9), pp. 1523–1534. doi: 10.1080/03075079.2014.881348.
Gibbs, G., 2007. Part-time effort for full-time degrees. Times Higher Educational Supplement, 28(September), p.6.
Gibbs, G. (2010). Using assessment to support student learning. Leeds Metropolitan University.
Gibbs, G. & Simpson, C. 2005. Conditions Under Which Assessment Supports Students’ Learning. Learning and Teaching in Higher Education (1). pp. 3-31.
Jessop, T., El Hakim, Y. & Gibbs, G. 2014. The whole is greater than the sum of its parts: a large-scale study of students' learning in response to different programme assessment patterns. Assessment and evaluation in higher education, 39, 73-88.
Johnson, R. B., Onwuegbuzie, A. J., & Turner, L. A. (2007). Toward a definition of mixed methods research. Journal of Mixed Methods Research, 1(2), 112-133.
Kırmızı, Ö. and Kömeç, F., 2016. An Investigation of Performance-Based Assessment at High Schools. Üniversitepark Bülten, 5 (1-2), 53-65.
Kleinheksel, A.J., Rockich-Winston, N., Tawfik, H. and Wyatt, T.R., 2020. Demystifying content analysis. American journal of pharmaceutical education, 84(1), p.7113.
Knight, P.T. 2002a. The Achilles’ heel of quality: The assessment of student learning. Quality in Higher Education 8, no. 1: 107–15
Knight, P.T., 2002. Summative assessment in higher education: practices in disarray. Studies in higher Education, 27(3), pp.275-286.
Kondracki, N.L., Wellman, N.S. and Amundson, D.R., 2002. Content analysis: Review of methods and their applications in nutrition education. Journal of nutrition education and behavior, 34(4), pp.224-230.
Lima, L., 2006. Bolonha à portuguesa. A Pagina da Educação.
Marton, F. and Säljö, R., 1976. On qualitative differences in learning: I—Outcome and process. British journal of educational psychology, 46(1), pp.4-11.
Newstead, S., 2002. Examining the examiners: Why are we so bad at assessing students?. Psychology Learning & Teaching, 2(2), pp.70-75.
Perrenoud, P., 1999. Avaliação: da excelência à regulação das aprendizagens-entre duas lógicas. In Avaliação: da excelência à regulação das aprendizagens-entre duas lógicas (pp. 183-183).
Price, M., Carroll, J., O’Donovan, B. & Rust, C., 2011. If I was going there I wouldn’t start from here: A critical commentary on current assessment practice. Assessment & Evaluation in Higher Education, 36(4), pp.479-492.
Race, P., 2001. The lecturer's toolkit: a practical guide to learning, teaching & assessment. Psychology Press.
Ramsden, P. 1992. Learning to teach in higher education. London: Routledge.
Sambell, K., McDowell, L. and Brown, S., 1997. “But is it fair?”: An exploratory study of student perceptions of the consequential validity of assessment. Studies in educational evaluation, 23(4), pp.349-371.
Simão, V., dos Santos, S.M., e Costa, A.D.A. & de Almeida, J.S., 2002. Ensino Superior: uma visão para a próxima década.
Struyven, K., Dochy, F. and Janssens, S., 2005. Students’ perceptions about evaluation and assessment in higher education: A review. Assessment & evaluation in higher education, 30(4), pp.325-341.
Taras, M. (2005). Assessment- Summative and Formative-Some Theoretical Reflections. British Journal of Educational Studies, 53(4), 466-478. https://doi.org/10.1111/j.1467-8527.2005.00307.x
Tashakkori, A., 2010. SAGE handbook of mixed methods in social & behavioral research. Sage Publications.
Tosuncuoglu, I., 2018. Importance of Assessment in ELT. Journal of education and training studies, 6(9), pp.163-167.
Twigg, C. A. (2003). Improving learning and reducing costs: New models for online learning. EDUCAUSE Review, 38(5), 28-38. Retrieved from http://www.educause.edu/ir/library/pdf/erm0352.pdf
Twigg, C. A. (2013). Improving learning and reducing costs: Outcomes from changing the equation. Change: The Magazine of Higher Learning. Philadelphia, PA: Taylor & Francis Group. Retrieved from http://www.changemag.org/Archives/Back%20Issues/2013/July-August%202013/improving_full.html