Asia-Pacific Forum on Science Learning and Teaching, Volume 4, Issue 1, Article 1 (Jun., 2003)
John LOUGHRAN, Amanda BERRY, Pamela MULHALL and Dick GUNSTONE
Teaching and testing about the Nature of Science: problems in attempting to determine students' perceptions

Method

Susan's unit was conducted in the second term of a four-term school year. In the following year (approximately one year after the unit had been taught), we developed a pencil and paper test (see Appendix) and administered it to all year 11 students at the school. The test was designed to gauge the purpose and impact of the unit across the whole year level, so that the understanding of students from Susan's former year 10 class could be compared with that of students who had not been involved in her unit.

The students from Susan's class (n = 28), together with the rest of the students at that year level (n = 65), sat a 45-minute test. Part 1 of the test was based on a newspaper article (see Appendix) that we believed illustrated an interesting finding in a way not dissimilar to the intent of Susan's unit. This section of the paper, once completed, was collected before Part 2 was distributed. The two sections were separated so that students could not revisit and alter their initial responses in light of their answers to the second section, which might otherwise have informed those responses.

From the total cohort (N = 93), we had anticipated reconnecting students' sections for analysis so that we could track those students from Susan's class (n = 28) and compare their results to the remainder of the year level.

Methodological problems

Unfortunately, not all students wrote their names on both sections of the paper, so when we came to 'reconnect' the papers, Parts 1 and 2 did not always match and tracking the students became difficult. This created a major methodological problem: tracking Susan's class was no longer as straightforward as we had initially envisaged. In an attempt to address this problem, two of the researchers analysed the papers for handwriting styles and for the colour and type of pen or pencil used in completing the test. This eventually allowed us to match 87 of the papers, while 6 could not be appropriately matched. We could therefore be confident about the students' overall views as expressed through the pencil and paper test. However, of the 28 students from Susan's class that we had hoped to track, only 9 papers could be positively identified as coming from that cohort. We were thus in the unenviable position of having interesting data on the whole year level's views of the nature of science, while any comparison between Susan's class and the rest of the year level would necessarily be weak.

Despite these difficulties, we decided to analyse the data. The findings that emerged were very interesting, and we therefore decided to analyse them fully and communicate them to others, as we thought a pilot study of this kind could be equally interesting for others involved in similar work. We believed the pencil and paper test to be a useful instrument for investigating students' views of the nature of science.

Each paper was numbered and the papers were divided among the four researchers. At an initial meeting, each researcher analysed three papers. We then discussed our approaches to analysis in order to develop a consistent method for completing the task. After further discussion we constructed a proforma to document the results of the papers by question number. The analysis of the findings (which follows) presents these results sequentially, moving through the paper question by question.

Although we had initiated this project to see how Susan's unit influenced her students' thinking, we believed the big picture (i.e., all students' views) was still worth pursuing. We therefore offer the following analysis, referring to Susan's class only where it is reasonable and helpful for exploring aspects of students' views that are of particular interest. Otherwise, it is the big picture — the views of the overall student cohort — that matters most in this analysis.


Copyright (C) 2003 HKIEd APFSLT. Volume 4, Issue 1, Article 1 (Jun., 2003). All Rights Reserved.