Asia-Pacific Forum on Science Learning and Teaching, Volume 2, Issue 2, Article 2 (Dec., 2001)
Describing and supporting effective science teaching and learning in Australian schools - validation issues
Validation of the Component Map as a monitoring instrument
Even if the SiS Components are accepted as a valid description of science teaching and learning, and as effective in supporting improvement in practice, a further question remains: does the Component Mapping process provide a valid measure of the practice of individual teachers? The questions below focus on this validation issue.
- Do Coordinators believe the interview process provides a valid description of teachers' practice?
Given that the Component Map measure is based on an interview between colleagues, there is a possibility that teachers may judge their practice superficially, and either under- or overrate themselves on particular components. There is also an issue with learning to use the language of the Component Map. We suspect, for instance, that the November component mapping result is more reliable than the April result, since in the first year of the project it took some time for coordinators and teachers to come to terms with the meaning of the components and arrive at an agreed language.
In February of the second year each coordinator was interviewed as part of a verification exercise, to elicit their opinion on the validity of the scores they had negotiated with each teacher. Table 2 gives the percentage of scores judged by SiS Coordinators to be high, appropriate, or low; it shows a high degree of confidence in the results. The component mapping process, like many monitoring instruments, needs to be learnt and understood, and this takes time. We believe it will become more reliable as coordinators and teachers grow more familiar with it and develop a shared language and experience surrounding it. For the third year of the project we are designing a training program for coordinators that will focus on component mapping, amongst other things, and a PD program for teachers to clarify the meaning of the components.
Table 2: Validity of the March 2001 Component Map
Judgment of validity.

Were the scores:   Primary teachers (N=230)   Secondary teachers (N=203)
High?              6.5%                       9.9%
Appropriate?       80.0%                      76.8%
Low?               13.5%                      13.3%
- Do the Component Map results align with student views of the classroom?
A student attitude survey, administered in April and November, has items that relate to each component, so that teacher and student judgments about how well each component is represented can be aligned. This analysis is currently under way.
- Do differences in the Component Map results reflect reported differences in the practice of primary and secondary teachers?
An interesting outcome of the component mapping process is the comparison it allows between the classroom practice of primary and secondary teachers. Figure 2 shows this comparison based on the November 2000 mapping exercise. A score of 3 or more on any component is an indication of good practice on that component.
Figure 2: Comparison of primary and secondary teachers' component map profiles
Primary teachers, who were found in a state-wide survey (Gough et al. 1998) to exhibit a wider range of pedagogical practices, and who for reasons of organisation tend to develop closer personal relations with their students, scored higher on student engagement, catering to student lives and interests, catering for individual differences, and community links. Secondary teachers scored higher on meaningful understandings, denoting a greater emphasis on science concepts, on use of ICT, and on aspects of the nature of science. The latter difference is possibly due to a perceived inappropriateness of this component for primary school children, and to primary teachers' limited experience of the different ways science can be represented in primary school classrooms.
- Do the results align with other evidence of changes in classroom practice?
The change scores for each school, on each component, were scrutinised by members of the research team who had close knowledge of the schools, and were judged to reflect the different commitments of each school. The picture presented by the component mapping was reasonably consistent with that emerging from the school reports and anecdotal evidence.
- Do the Component Map scores align with differences in student achievement and attitude outcomes?
At the beginning and end of the year for schools entering the project, and at the end of subsequent years, all students in selected year levels undertake multiple choice achievement tests and an attitude survey. The component mapping exercise took place in the 27 Phase 1 schools in April and November 2000, and in Phase 2 schools in March 2001. Each teacher was identified by a code, which was matched against their classes so that links with student attitudes and outcomes could be made. If we can demonstrate statistically significant links between component mapping scores and student attitude and achievement outcomes, this will demonstrate the validity of both the Components and the Mapping process.
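The linking step described above can be sketched as a simple join on the teacher code. This is an illustrative sketch only: the field names, codes, and scores below are invented for the example and do not reflect the project's actual data schema.

```python
# Hypothetical sketch: attaching each teacher's Component Map score to the
# students in that teacher's class, via the teacher code. All data and
# field names are illustrative assumptions, not the project's real records.

# Teacher code -> mean Component Map score (e.g. from the November mapping)
teacher_scores = {"T014": 3.4, "T027": 1.9, "T103": 2.8}

# Each student record carries the teacher code of their class
students = [
    {"id": "s1", "teacher": "T014", "achievement": 62, "attitude": 4.1},
    {"id": "s2", "teacher": "T027", "achievement": 55, "attitude": 3.2},
    {"id": "s3", "teacher": "T014", "achievement": 70, "attitude": 4.5},
]

# Join: copy the teacher's component score onto each matching student record,
# so student outcomes can later be analysed against teacher mapping scores
linked = [
    {**s, "teacher_sis_score": teacher_scores[s["teacher"]]}
    for s in students
    if s["teacher"] in teacher_scores
]

for row in linked:
    print(row["id"], row["teacher_sis_score"])
```

Once the records are linked in this way, each student outcome can be analysed against the mapping score of that student's teacher.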
Teacher component map scores were linked to the November student achievement testing results. Based on the mean scores from the November component mapping exercise, students were separated into two groups: those in a class whose teacher was measured to be high on the SiS components ('high-SiS' classes), and those in a class whose teacher was measured to be low on the SiS components ('low-SiS' classes). Three broad patterns emerged from the analysis.
- Early years (Prep-2) students in high-SiS classes grew at a faster rate than students in low-SiS classes.
- In the middle and later years of primary schooling (Years 3-6) and the first year of secondary schooling (Year 7), students in high-SiS classes were already outperforming students in low-SiS classes as early as April. Both groups then demonstrated growth, with students in the high-SiS classes either showing slightly faster growth than students in low-SiS classes, or at least maintaining the differential.
- In Years 8-10 the picture became very complex, with no discernible pattern of advantage of high-SiS over low-SiS classes, and with results in general showing no consistent growth between April and November. This was due, we believe, to difficulties in secondary schools with the web-based test regime we put in place in November. We expect to generate more reliable analyses from the November 2001 testing.
The testing of new schools in March-April 2001, which was intended to produce a baseline for comparison of results, again showed considerable influence of the teacher on achievement scores. Students in high-SiS classes achieved at a level 8-12 months in advance of students in low-SiS classes across the primary school years, and to a lesser extent in Year 7. There was no discernible evidence by March-April of an effect of high-SiS teachers in Years 8-10.
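The high-SiS / low-SiS grouping described above can be sketched as follows. The cut-off of 3 echoes the paper's statement that a score of 3 or more on a component indicates good practice, but applying it to the teacher's mean score, and all the data, are assumptions made for illustration only.

```python
# Illustrative sketch of splitting classes into 'high-SiS' and 'low-SiS'
# groups by the teacher's mean Component Map score, then comparing mean
# student growth. Cut-off, scores, and growth figures are assumptions.

HIGH_SIS_CUTOFF = 3.0  # assumed: mean component score marking 'high-SiS'

def mean(xs):
    return sum(xs) / len(xs)

# Teacher code -> scores on each component (hypothetical)
component_maps = {
    "T014": [3, 4, 3, 3, 4],   # mean 3.4 -> high-SiS
    "T027": [2, 2, 1, 2, 2],   # mean 1.8 -> low-SiS
}

# Student growth = November score minus April score (hypothetical units)
students = [
    {"teacher": "T014", "growth": 9},
    {"teacher": "T014", "growth": 11},
    {"teacher": "T027", "growth": 5},
]

# Assign each student to a group based on their teacher's mean score
groups = {"high": [], "low": []}
for s in students:
    score = mean(component_maps[s["teacher"]])
    label = "high" if score >= HIGH_SIS_CUTOFF else "low"
    groups[label].append(s["growth"])

print("high-SiS mean growth:", mean(groups["high"]))  # 10.0
print("low-SiS mean growth:", mean(groups["low"]))    # 5.0
```

A real analysis would of course test whether such differences are statistically significant rather than simply comparing group means.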
Copyright (C) 2001 HKIEd APFSLT. Volume 2, Issue 2, Article 2 (Dec., 2001)