Asia-Pacific Forum on Science Learning and Teaching, Volume 9, Issue 1, Article 2 (Jun., 2008)
Feral OGAN-BEKIROGLU
Utilization of attitude maps in evaluating teachers' attitudes towards assessment
 



Methodology

Participants

The participants of the study were ten volunteer pre-service physics teachers enrolled in a teacher education program. Anonymity was preserved by using codes for the participants (e.g. P-1 represents Pre-Service Teacher One). The participants had completed the assessment and evaluation course successfully in the previous semester, and therefore had knowledge of various assessment methods including alternative forms of assessment.

Instrument and Data Collection

The instrument developed by the researcher was composed of 14 open-ended questions. Twelve of these 14 questions were designed to determine the pre-service teachers’ attitudes towards assessment and categorized under the following four dimensions: instruction, determination of assessment, assessment methods, and evaluation criteria.

The purpose of the first dimension was to determine if the participants perceived any relationship between assessment and instruction. The questions under the instruction dimension were: Which teaching methods do you use?; Which factors do you consider when you plan your instruction?; and Do you think there is a relationship between instruction and assessment? How?

The intention behind the determination of assessment dimension was to elicit how the participants define assessment, and to obtain their ideas about the purpose of assessment. Thus, the participants were asked the following questions during the interview: What is assessment?; Why do you need to do assessment?; How can you elicit students’ different ideas and skills?; When do you assess your students?; and How often do you assess your students?

The aim of the assessment methods dimension was to detect the participants’ thoughts about assessment methods by asking these questions: Which assessment methods do you use when you assess your students' learning?; What is (are) the most effective assessment method(s)? Why?; and Do you think alternative forms of assessment are necessary? Why?

Finally, the participants’ views about evaluation criteria, in addition to academic performance, were determined with the help of the question in the evaluation dimension (i.e., Are there any criteria besides academic performance that you consider when you evaluate your students’ performance?).

The remaining two questions in the instrument were related to knowledge of subject matter and external difficulties. The rationale behind these questions was to find any obstacles that the pre-service teachers might encounter during assessment. The participants’ attitudes were mainly dichotomized as constructivist and traditional.

The instrument was used within a semi-structured interview protocol. The audiotaped interviews were conducted by the researcher in her university office, and each interview lasted 30 to 45 minutes. The participants were the researcher's former students, but they were not her current students when the data were collected.

Data Analysis and Construction of Attitude Maps

Qualitative data analysis was based on verbatim transcripts of the audiotapes. The collected data were analyzed inductively to identify themes that described the participants' attitudes. One attitude map was then constructed for each pre-service teacher, based on these themes, to evaluate and represent her/his attitude towards assessment.

Attitude maps can be considered analogous to cognitive maps. Liebman and Paulston (1993) used cognitive maps to enhance their research in social discourse by developing and including them in their research findings. Cognitive maps represent how people derive meaning from the world around them, specifically how individuals encode, process and decode meanings (Heun, 1975). Heun therefore designed cognitive maps to measure the gain or growth of an individual's knowledge, learning skills, or abilities. Dochy and Gorissen (1992) also constructed cognitive maps for students in order to study the development and use of domain-specific prior knowledge. Cognitive maps are also images of beliefs and values (Schwartz, 1978). Simmons (1986) utilized cognitive maps to examine university supervisors' beliefs concerning the purposes of student teaching and supervision, and to identify their criteria for effective student teacher performance. In addition, Irez (2006) generated cognitive maps to display an overall picture of pre-service science teacher educators' beliefs about the nature of science.

According to Miles and Huberman (1994), cognitive maps have a way of looking more organized and systematic than they probably are in the person’s mind.  They can also be drawn from a particular text, such as an interview transcription (Miles & Huberman, 1994).

The following procedure was followed to construct the attitude maps: First, an ellipse was drawn for each dimension in the instrument. Second, the ellipses were labelled with the names of the dimensions (i.e., instruction, determination of assessment, assessment methods, and evaluation criteria). Third, one more ellipse was drawn for the obstacles that the pre-service teachers encountered due to external factors and their subject matter knowledge. Fourth, the ellipses were filled in according to the sentences and themes derived from the transcripts. Fifth, each participant's definition of assessment was written in a box next to the ellipse drawn for the determination of assessment dimension. Finally, arrows were drawn between the ellipses to illustrate how the participants established relationships between the assessment dimensions and how obstacles affected their assessment.
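Although the maps were drawn by hand, their structure can be summarized as a small labelled graph: one node per dimension, a note holding the participant's definition of assessment, and directed arrows for the relationships between dimensions and obstacles. The following Python sketch is offered only as a compact restatement of the six steps above; all class and attribute names (AttitudeMap, Dimension, add_arrow, etc.) are hypothetical and are not part of the original study.

    # Illustrative sketch only (hypothetical names, not the author's tooling):
    # an attitude map represented as a small labelled graph, mirroring the six steps above.
    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    DIMENSIONS = [
        "instruction",
        "determination of assessment",
        "assessment methods",
        "evaluation criteria",
        "obstacles",  # external factors and subject matter knowledge
    ]

    @dataclass
    class Dimension:
        name: str                                        # one ellipse per dimension
        themes: List[str] = field(default_factory=list)  # themes taken from the transcript

    @dataclass
    class AttitudeMap:
        participant: str                                 # e.g. "P-1"
        definition_of_assessment: str = ""               # the box next to "determination of assessment"
        dimensions: Dict[str, Dimension] = field(
            default_factory=lambda: {name: Dimension(name) for name in DIMENSIONS})
        arrows: List[Tuple[str, str]] = field(default_factory=list)  # directed relationships

        def add_theme(self, dimension: str, theme: str) -> None:
            self.dimensions[dimension].themes.append(theme)

        def add_arrow(self, source: str, target: str) -> None:
            self.arrows.append((source, target))

    # Filling in a fictitious map for one participant:
    p1 = AttitudeMap("P-1", definition_of_assessment="determining whether students achieved the objectives")
    p1.add_theme("assessment methods", "uses performance assessment")
    p1.add_arrow("instruction", "assessment methods")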

Ten attitude maps were constructed, one for each participant. The participants' attitudes towards assessment were determined and categorized as traditional, close to traditional, transitional, close to constructivist, or constructivist. These categorizations were based on both the structure and the content of the attitude maps.

For example, a participant's attitude towards assessment was considered constructivist if her/his map showed that the definition of assessment was consistent with constructivist epistemology, that s/he aimed to assess students continually using varied assessment methods, that s/he planned activities which enabled her/him to teach as well as assess, that s/he evaluated students' effort and growth, and that there were dialectical relationships between the four dimensions (i.e., instruction, determination of assessment, assessment methods, and evaluation criteria).

The other three descriptors, transitional, close to constructivist, and close to traditional, emerged during the data analysis. The notion of transition implied a movement from a "traditional" attitude towards a "constructivist" attitude (refer to the definitions in the introduction section). An attitude was classified as transitional if, for instance, the map showed that the participant defined assessment as determining the durability of the knowledge s/he gave to students; drew upon a variety of assessment methods at different times to bring out students' diverse skills; understood that some assessment methods could enhance learning, but neither matched the performance objectives with appropriate assessment methods nor considered her/his teaching when deciding which assessment methods to apply; graded not only effort but also respect; might change her/his assessment style depending on obstacles; and had three arrows between the four dimensions in the map, but no dialectical relationship between them.

A map in which constructivist themes and relationships were in the majority, but some traditional themes were also present, was categorized as close to constructivist. For example, a participant's attitude was determined as close to constructivist if the map demonstrated that s/he defined assessment as determining whether students achieved the performance objectives; set the performance objectives by considering both expectations for students and content; planned her/his teaching and assessment methods based on the performance objectives; considered prior knowledge in designing her/his teaching, but not in setting the performance objectives; applied informal assessment as well as performance assessment and gave short-term research assignments, yet neither believed that concept mapping was an effective method nor thought that portfolio assessment might be fun for students; evaluated both participation and curiosity; and had four arrows, or three arrows and one dialectical relationship, between the four dimensions in the map.

Conversely, a map in which traditional themes were in the majority, but some constructivist themes and relationships were also present, was categorized as close to traditional. An attitude was evaluated as close to traditional if, for instance, assessment was defined as determining student knowledge about a subject; prior knowledge was assessed before teaching began but was not taken into account in assessment, so that student growth was not an issue in evaluation; the assessment methods relied mostly on exams and very little on alternative forms of assessment; there was no relationship between assessment and instruction; and there were only two arrows between the four dimensions.

The dichotomization was based on the separate factors and dimensions so as not to oversimplify the complexity of each teacher's attitude, and the categorization was repeated several times to avoid miscategorization.
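Read procedurally, the categorization above weighs the balance of constructivist versus traditional themes against structural features of the map, namely the number of arrows and of dialectical relationships. The heuristic sketched below, in Python, is not the author's procedure; the thresholds are assumptions inferred from the worked examples in this section and are shown only to make that reading explicit.

    # Illustrative heuristic only; NOT the author's categorization procedure.
    # All thresholds are assumptions inferred from the worked examples above.
    def categorize(constructivist_themes: int, traditional_themes: int,
                   arrows: int, dialectical_relationships: int) -> str:
        if traditional_themes == 0 and dialectical_relationships > 0:
            return "constructivist"
        if constructivist_themes == 0:
            return "traditional"
        if constructivist_themes > traditional_themes and (
                arrows >= 4 or (arrows >= 3 and dialectical_relationships >= 1)):
            return "close to constructivist"
        if traditional_themes > constructivist_themes and arrows <= 2:
            return "close to traditional"
        return "transitional"

    # e.g. three arrows, no dialectical relationship, mixed themes -> "transitional"
    print(categorize(constructivist_themes=3, traditional_themes=3,
                     arrows=3, dialectical_relationships=0))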

Criterion Validity

In order to ensure the criterion validity of the results, the participants' attitudes towards assessment were also determined through survey research. The categories derived from the attitude maps were compared with the categories derived from the survey. The survey instrument was adapted, with a few small changes, from McMillan's (2001) questionnaire and comprised 44 Likert-type items distributed across the following four subscales: instructional practices, such as class discussion and lecture; cognitive level of assessments, such as recall of knowledge; types of assessment, such as informal assessment and multiple-choice exams; and evaluation criteria, such as participation and growth. These subscales were consistent with the dimensions in the interview instrument. The survey instrument was administered to 46 pre-service teachers, including the participants of this study, and its Cronbach's alpha reliability was found to be 0.74, indicating that the instrument had acceptable internal consistency.
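For reference, Cronbach's alpha for k items is computed as alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the total score). The short Python sketch below computes the statistic for an invented toy response matrix; the data shown are purely illustrative and are not the study's 46 x 44 responses.

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
        k = scores.shape[1]                              # number of items
        item_variances = scores.var(axis=0, ddof=1)      # variance of each item
        total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the summed scores
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Invented toy data (5 respondents x 4 items), not the study's actual responses:
    toy = np.array([[4, 5, 4, 4],
                    [3, 3, 4, 3],
                    [2, 2, 3, 2],
                    [5, 4, 5, 5],
                    [3, 4, 3, 3]])
    print(round(cronbach_alpha(toy), 2))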

