Engaging Science Students in the Design & Enactment of Assessment
Introduction

Two central principles underpin this project:
  1. The planning of the undergraduate science curriculum needed to give specific consideration to students’ learning outcomes and to the development of student capabilities such as critical thinking, problem solving, self-managed learning, and interpersonal and communication skills. The planning of teaching needed to address explicitly how these capabilities were developed.
  2. Assessment was an integral part of developing these important capabilities. Students work more diligently on tasks for which they are rewarded; in a marks-oriented culture such as Hong Kong’s, this is especially important.
The table below outlines some links between desired graduate capabilities, possible teaching and learning activities, and implications for assessment. For consistency with existing policy of The Chinese University of Hong Kong, the list of capabilities used in the Student Engagement Questionnaire was adopted.

Table: Links between desired graduate capabilities, learning activities and assessment
Desired graduate capability | Possible teaching and learning activities to support the development of the capability | Some possible implications for assessment
Critical thinking | Presentation & discussion of open-ended problems where there is no one ‘correct’ answer | Higher order questions in examinations and the use of a wider range of assessment methods
Creative thinking | Inclusion of real-life or inter-disciplinary cases or problems | Design projects and project-assessing criteria that encourage creative thinking
Self-managed learning | Responsibility given to individuals or groups to prepare and teach some topics to peers | Involvement of students in designing assessment tasks and/or marking criteria
Adaptability | Inclusion of topics where the accepted theory has evolved to a new mode | Tasks where students examine the credibility of evidence for a proposition; research papers can be used here
Problem solving | Case-based teaching, of which several examples have been developed by colleagues in the Faculty of Science | A range of assessment tasks, from group presentations to higher order questions in examinations
Communication skills | The need for students to actively discuss and write (in both Chinese and English) | Presentations, essays, project reports
Interpersonal skills & groupwork | Class time spent observing and supporting students in group situations | Self and peer assessment of contribution to groups
Computer literacy | The use of eLearning where appropriate; exposure to software commonly used in the discipline | The use of computers in examinations and other assessment methods


This is not an exhaustive list, nor does it imply that conventional lectures are not valuable. What it does suggest is that other teaching and learning activities are needed besides lectures, and that other assessment strategies are needed besides conventional examinations. What matters for course and programme design is the overall ‘mix’ of strategies, so as to produce a curriculum in which the desired learning outcomes, learning activities and assessment are aligned (Biggs, 1996, 2003).

Suitable assessment strategies may include consulting students about assessment. Suitable assessment tasks may take many forms (Anderson, 1998): authentic tasks (Gulikers et al., 2004); self and peer assessment (Topping & Ehly, 1998; Johnson & Johnson, 2004); case-based activities (Shulman, 1996); problem-solving and experimental project work (Barnard et al., 2001); and the development of portfolios (Banta, 1996).

The project aimed to investigate the applicability of the above-mentioned methods in the teaching of science at The Chinese University of Hong Kong. This could lead to the framing of system-level changes to the assessment methods used in the Faculty of Science. A progressive, multi-stage strategy was suggested: conducting a literature review; facilitating ongoing discussion among teachers in the Faculty; designing a number of studies in which students were actively engaged in the design and enactment of assessment; closely monitoring and evaluating this series of studies, conducted by motivated teachers; extracting general principles for effective assessment from the results; and then promoting the methods to a wider audience in the Faculty.

Pilot case studies were conducted in a number of science courses (see Case Studies for pilot case studies completed to date). Learning activities, such as group projects or experiments, were specifically designed for various science courses. Students were invited to participate in the decision-making process, determining both the tasks to be assessed in these activities and the criteria to be used in the assessment process. They were encouraged to consider criteria relating both to the specific subject area and to general capabilities. The process engaged students in self-assessment and reflection, helping them to realize which aspects they had not yet mastered and should therefore become areas of focus.

Students were also asked to conduct peer and self-evaluations, both within groups and between groups, based on the chosen criteria. Statistical methods were used to develop a user-friendly procedure for analyzing the peer and self-evaluation results, with a view to arriving at a credible performance indicator of students’ performance levels. This general model of engaging students in the design and enactment of assessment could be applied to courses in a wide range of disciplines.

Each case study was carefully monitored with an agreed evaluation plan. Upon completion of these case studies, general principles for effective assessment and user-friendly procedures for developing assessment performance indicators were made available for further application by a wider audience.
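The project's own statistical procedure is not detailed here, but one widely used way to turn within-group peer ratings into an individual performance indicator is a peer-assessment weighting factor: each student's mean received rating is divided by the group's overall mean rating, and the resulting factor scales the shared group mark. The sketch below illustrates the idea; the function names and the uncapped scaling are illustrative assumptions, not the project's actual procedure.

```python
def peer_factors(ratings):
    """Peer-assessment weighting factors (illustrative sketch).

    ratings: dict mapping each student to the list of ratings they
    received from group members (self-ratings may be included).
    Returns a dict mapping each student to a factor, where 1.0 means
    'average contributor': the student's mean received rating divided
    by the mean of all group members' mean ratings.
    """
    means = {s: sum(r) / len(r) for s, r in ratings.items()}
    group_mean = sum(means.values()) / len(means)
    return {s: m / group_mean for s, m in means.items()}


def individual_marks(group_mark, ratings):
    """Scale a shared group mark by each student's peer factor."""
    return {s: group_mark * f for s, f in peer_factors(ratings).items()}
```

For example, with ratings {'A': [9, 9], 'B': [6, 6]} and a group mark of 70, A's factor is 1.2 and B's is 0.8, giving individual marks of 84 and 56. In practice the factor is often capped (for example at 1.0 or 1.1) so that no individual mark exceeds the maximum available.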


References
Anderson, R. S. (1998). Why talk about different ways to grade? The shift from traditional assessment to alternative assessment. New Directions for Teaching and Learning, 74, 5–16.
Banta, T. W. (1996). Assessment in practice: Putting principles to work on college campuses. San Francisco, CA: Jossey-Bass.
Barnard, C., Gilbert, F., & McGregor, P. (2001). Asking questions in biology: Key skills for practical assessments and project work (2nd ed.). New York: Prentice Hall.
Biggs, J. (1996). Enhancing teaching through constructive alignment. Higher Education, 32(3), 347–365.
Biggs, J. (2003). Teaching for quality learning at university (2nd ed.). Buckingham, UK: SRHE & Open University Press.
Gulikers, J. T. M., Bastiaens, T. J., & Kirschner, P. A. (2004). A five-dimensional framework for authentic assessment. Educational Technology, Research and Development, 52(3), 67–87.
Johnson, D. W., & Johnson, R. T. (2004). Assessing students in groups: Promoting group responsibility and individual accountability. Thousand Oaks, CA: Corwin Press.
Shulman, L. S. (1996). Just in case: Reflections on learning from experience. In J. A. Colbert, K. Trumble & P. Desberg (Eds.), The case for education: Contemporary approaches for using case methods (pp. 197–217). Boston, MA: Allyn and Bacon.
Topping, K., & Ehly, S. (Eds.). (1998). Peer-assisted learning. Mahwah, NJ: Lawrence Erlbaum Associates.
