Article Information

  • Title: Analysis and comprehension of multimodal texts.
  • Authors: Daly, Ann; Unsworth, Len
  • Journal: Australian Journal of Language and Literacy
  • Print ISSN: 1038-1562
  • Year: 2011
  • Issue: February
  • Language: English
  • Publisher: Australian Literacy Educators' Association
  • Keywords: Language and languages; Learning; Literacy; Literacy programs; Reading comprehension

Analysis and comprehension of multimodal texts.


Daly, Ann ; Unsworth, Len


Introduction

Since educators increasingly acknowledge that reading comprehension of most contemporary texts necessarily entails the integrative construction of meaning from images and language (Unsworth, Thomas & Bush, 2004), measures of students' reading comprehension achievement must also take account of their negotiation of inter-semiotic meanings. The theoretical framework for this research sits within the realm of social semiotics. The research investigates student comprehension of image-language relations in multi-semiotic texts, using a framework developed from systemic functional linguistics. The site for the investigation is the NSW Basic Skills Test (BST), specifically the Aspects of Reading part of that test administered to students in 2005 and 2007 (NSW Department of Education and Training, 2005, 2007). The data considered in this investigation include textual data and state-wide comprehension data. This paper outlines how the textual analysis was conducted.

Context of the study

The specific context for the research was analysis of the 2005 Year 3 (age 8) and Year 5 (age 10) NSW Basic Skills Tests (BST) reading materials and questions. The reading materials were analysed using established linguistic and visual textual frameworks, namely Functional Grammar (Halliday, 1994) and Visual Grammar (Kress & van Leeuwen, 1996), and an emerging framework for image-language relations in texts (Unsworth, 2006a; Unsworth, 2008). Test item difficulties were obtained from Rasch analysis, which uses 'probabilistic equations' (Bond & Fox, 2001, p. 7) to place the difficulty of items (test questions) and the ability of students on the same linear scale across all year groups taking the tests. Early analysis of the 2005 reading materials suggested the draft model of image-language relations was appropriate. However, the number of items was limited, so reading materials and items from the 2007 BST and the Year 7 English Language and Literacy Assessment (ELLA) were later added to increase the data set for analysis. The Year 7 ELLA items are on the same common scale of difficulty as the Year 3 and Year 5 BST items.
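The logic of the Rasch model can be illustrated with a short sketch (not part of the original study; the function and the example values are purely illustrative). A student's probability of answering an item correctly depends only on the difference between the student's ability and the item's difficulty, both expressed in logits on the same scale:

import math

def rasch_probability(ability_logit, difficulty_logit):
    # Dichotomous Rasch model: the probability of a correct response depends only
    # on the gap between person ability and item difficulty (both in logits).
    return 1.0 / (1.0 + math.exp(-(ability_logit - difficulty_logit)))

# A student of average ability (0 logits) has roughly an 80% chance on an item
# of difficulty -1.4 logits (the mean reported below for complete-equivalence
# items), but only about a 20% chance on an item of difficulty 1.4 logits
# (the mean reported below for augmentation items).
print(round(rasch_probability(0.0, -1.4), 2))  # 0.8
print(round(rasch_probability(0.0, 1.4), 2))   # 0.2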

Model of image-language relations and text analysis

The model of image-language relations applied during the research was developed around the notions of 'concurrence' and 'complementarity' in representational or ideational meaning (Unsworth, 2006b, 2008; Chan, in press). The following definitions were used to identify the different types of image-language relations which are further elaborated and exemplified by Unsworth and Chan (2009) and Chan (in press).

Concurrence is a relationship where one mode elaborates on the meaning of the other by further specifying or describing it while no new element is introduced by the written text or image. The elaboration can take four forms:

* exemplification, where the image may be an example or instance of what is in the text, or the text may include an example of what is depicted more generally in the image e.g. when the words mention 'destructive behaviour of pets' and a photo shows an instance of a puppy chewing a shoe;

* exposition, which refers to the re-expression or reformulation of the meanings of the image or the text in the alternative semiotic resource with both the written text and image representing the same level of generality e.g. when the word 'weighs' is reinterpreted visually as a balance scale (see Figure 1);

* equivalence, where there is ideational redundancy since the ideational content corresponds (completely or partially) across semiotic resources e.g. when a label or caption heading appears next to its image;

* homospatiality, as discussed by Lim (2004), which refers to texts where two different semiotic resources co-occur in one spatially bonded homogenous entity e.g. when the letters of the word 'seaweed' are created using fluid images of strands of seaweed.

Complementarity is a relationship where a new element (participant or process) is introduced by either the written text or image. It can be in the form of extension, enhancement (temporally, spatially or causally) or projection (locution or idea). Extension of meaning in one semiotic resource (either written text or image) by another can be in a relation of augmentation, distribution or divergence as follows:

* augmentation is where a new participant or attribute is introduced through one semiotic resource e.g. a commentary about a painting names a participant represented only by a non-representational shape in the abstracted image (see sketch at Figure 3);

* distribution is where juxtaposed image and text jointly construct an activity sequence with a new process or action introduced by either the image or the text e.g. a label states that a part of a stationary image rises (see Figure 2);

* divergence is where the two semiotic resources convey different meanings e.g. a picture book about a shepherd and his sheep creates humour with secondary stories created only in the images by sheep reacting to the events and a tiny mouse carrying off objects on each double page spread (Charlie Needs a Cloak by Tomie dePaola, Scholastic Inc., 1973).

Relations of complementarity in the form of projection or enhancement were not the subject of assessment items in the targeted tests; therefore, those relations will not be discussed further.
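For readers who wish to code texts in the same way, the relation types described above can be recorded with a simple enumeration. The sketch below is purely illustrative (the identifier names are not the authors'); it lists the categories of concurrence and complementarity defined above, of which complete and partial equivalence, exposition, distribution and augmentation are the categories analysed in the tests discussed below:

from enum import Enum

class ImageLanguageRelation(Enum):
    # Concurrence: neither mode introduces a new ideational element
    EXEMPLIFICATION = "concurrence: exemplification"
    EXPOSITION = "concurrence: exposition"
    EQUIVALENCE_COMPLETE = "concurrence: complete equivalence"
    EQUIVALENCE_PARTIAL = "concurrence: partial equivalence"
    HOMOSPATIALITY = "concurrence: homospatiality"
    # Complementarity (extension): a new participant or process is introduced
    AUGMENTATION = "complementarity: augmentation"
    DISTRIBUTION = "complementarity: distribution"
    DIVERGENCE = "complementarity: divergence"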

Analysis of image-language relations in reading materials

Reading materials in the five tests were analysed, and the assessment questions (henceforth referred to as items) that assessed image-language relations were identified. In the 2007 Year 5 BST, these items were associated with six texts; in each of the other four tests (2005 Year 3 and Year 5 BST, 2007 Year 3 BST and 2007 Year 7 ELLA), there were five texts with one or more items assessing image-language relations. Amongst the items, assessments of equivalence, exposition, distribution and augmentation were identified, but no other relations of complementarity or concurrence were assessed.

The NSW Department of Education and Training routinely carried out statistical (Rasch) analysis of the state-wide student results, providing test item thresholds calculated from the differential performance of students on the test items. These thresholds, commonly known as item facility indicators or logits of difficulty, were supplied for each of the questions about image-language relations from all five tests. The mean difficulty of items assessing understanding of the different types of image-language relations was then calculated for the 63 items across the five tests, giving the following mean logits of difficulty:

* 12 items about concurrence--complete equivalence, -1.4

* 13 items about concurrence--partial equivalence, -0.9

* 14 items about concurrence--partial exposition, -0.1

* 15 items about complementarity--distribution, 0.5

* 9 items about complementarity--augmentation, 1.4

The negative mean logits for relations of concurrence indicate these items are easier than the items testing understanding of relations of complementarity. The positive mean logits for relations of complementarity show that fewer students answered these items correctly. However, for a new model of image-language relations it is important to determine whether the differences between the different types of image-language relations are significant. Accordingly, a univariate analysis of variance was carried out across the mean logits for items assessing the different types of image-language relations (see Appendix A). Augmentation was significantly more difficult than any of the other image-language relations (p < .05). Partial equivalence and complete equivalence were also significantly easier (p < .05) than all other image-language relations; however, they were not significantly different from each other. Nor were exposition and distribution significantly different from each other.
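A minimal sketch of this step, assuming the 63 item logits have been grouped by relation type (the function and the use of scipy are an illustration, not the authors' analysis software; post-hoc LSD comparisons such as those in Table A.4 would need to be computed separately):

from statistics import mean
from scipy.stats import f_oneway

def compare_relation_difficulties(logits_by_relation):
    # logits_by_relation maps a relation type (e.g. 'augmentation') to the list
    # of Rasch difficulty logits of the items coded with that relation.
    mean_logits = {relation: mean(values)
                   for relation, values in logits_by_relation.items()}
    # One-way (univariate) analysis of variance across the relation types.
    f_stat, p_value = f_oneway(*logits_by_relation.values())
    return mean_logits, f_stat, p_value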

These findings probably arise because meanings in image and language reinforce each other in relations of concurrence, whereas in relations of augmentation different participants or attributes must be connected and a relation inferred. The lack of significant difference between relations of distribution and exposition reflects a similarity in the range of difficulty of these items and the closeness of the two relations. The distribution of meaning across the text, Telling the Time Using Water, might be seen as exposition at similar levels of generality by a reader who easily infers the processes in the image (see the two questions and text at Figure 2). An example of an item assessing exposition is Water Animal Records (2005 Year 3 BST), which asked what the triangle represented in diagrams of weight balances. In order to select the correct answer, 'the centre of the balance', students had to recognise that the triangle images (see example in Figure 1) are part of balances. In categorising the image-language relation, it is a fine line whether this is exposition (a reformulation of the process from the word 'weighs' to the image of a balance) or distribution of processes across image and verbal text, with the caption using the process 'weighs' and the image showing the process 'balances'; however, as the same process is intended in the image and the words, exposition was identified.

[FIGURE 1 OMITTED]

It was noted that some items assessing augmentation also required comprehension of verbal text characterised by high structural (grammatical) complexity, for example, 'The sailfish is believed to be a cunning fish, able to feed amongst the various fish traps and nets shown by the dark areas, without being caught' (Year 5 BST, 2005). In view of this example of grammatical complexity associated with a test item, it was decided to ascertain the separate levels of complexity in the image and verbal text segments that students needed to comprehend to answer each test item involving image-language relations. An analysis of variance was then used to compare the mean reading logits for items associated with low, medium and high levels of complexity in images and verbal text.

Analysis of complexity in verbal segments of text

The verbal text segments were analysed to identify levels of lexical difficulty and grammatical complexity in order to establish an overall measure of verbal complexity. Lexical difficulty is usually associated with the density of lexical items, or content words, per clause. If verb forms such as 'detect' or 'concentrate' are nominalised (detection, concentration), this is referred to as grammatical metaphor, since what is actually a process is represented in language as a 'thing' or noun form (nominalisation). In this way lexical difficulty is increased through abstraction from the more iconic verb form. Nominalisation also facilitates the inclusion of more content words as qualifiers, such as 'a dangerous elevated concentration (of toxins)', hence increasing the density of content words. However, as the Basic Skills Tests are for primary school children, the reading stimulus texts are of relatively low lexical density; in fact, the whole 2005 Year 3 BST reading stimulus contains only one nominalisation. It was therefore decided to assess the relative lexical difficulty of the BST texts through the number of instances of non-core vocabulary (Carter, 1987, p. 33), as opposed to core vocabulary. Core vocabulary items are generally seen as the most basic or simple word choices. One test for distinguishing core from non-core vocabulary is substitution, for example,

'in the lexical set, gobble, dine, devour, eat, stuff, gormandise each of the words could be defined using 'eat' as a basic semantic feature, but, it would be inaccurate to define eat by reference to any other of the words in the set (i.e. dine entails eat but eat does not entail dine)' (Carter, 1987, p. 35).

The measure of lexical difficulty adopted in this study was the non-core vocabulary measure which included the few instances of nominalisation in the BST reading assessments.

Some forms of language use, especially informal spoken language and related uses of written language, are typically not as lexically dense as written narrative and informational texts, but they are more grammatically intricate in that they include more sentences made up of multiple clauses. In sentences with more than one clause, the additional clauses may be of equal status with the main clause, or they may be dependent on the main clause. It has been suggested that texts with more complex syntactic structures may be more difficult for children to read 'where the child's language skill is found to be limited' (Clay, 1971, p. 68), or where children have not developed a sound understanding of grammatical rules (Peverly & Kitzen, 1998), and that 'the ability to maintain strong grammatical relationships when reading may contribute more to reading than previously realised' (Adams, 1990, cited in Beatty & Care, 2009, pp. 239-240).

Other grammatical features that may contribute to text difficulty are the use of the passive voice and ellipsis. These have also been included as indicators of complexity because reversible passives and ellipsis of the verb or object in compound sentences are two of the many grammatical constructions not fully understood when a child starts school. Some types of ellipsis are not frequently produced in oral language until adolescence (Perera, 1984, p. 156). However, there were very few instances of ellipsis or passive voice in the texts targeted for this research, and in most tests, except the 2005 Year 5 BST, there was only one instance.

Accordingly, for the purposes of this study, grammatical complexity was measured by counting the proportion of dependent clauses, the use of the passive voice, and the ellipsis of redundant words in the text segments that were previously identified as being relevant to each question. Although there were actually very few instances of passive voice or ellipsis, these features were included so as not to discount the difficulty they created. The density of dependent clauses and non-core words might also affect reading difficulty, so when the text segments were analysed for the two aspects of linguistic complexity (grammatical and lexical), the proportion of dependent clauses and the proportion of non-core words were taken into account. For example, where there were three non-core words and two dependent clauses in one sentence, the scores were recorded as follows:
Number of dependent        Instances of          Number of non-core      Total verbal
clauses / sentences        passive voice /       words / clauses         complexity
x dep. clauses             ellipsis              x non-core words        for question

(2 / 1) x 2 = 4            1 + 2 = 3             3 / 2 = 1.5             8.5


Two independent coders identified and scored the text segments, thereby establishing inter-coder reliability.

Once the scores for dependent clauses, non-core words, passive voice and ellipsis were totalled, the level of verbal text complexity for each assessment item was determined as follows: a total of 0 was low, a total of 1 or 2 was medium, and a total of 3 or above was high.
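A sketch of this scoring is given below. It is an illustrative reconstruction based on the column headings of the worked example and the 'proportion' columns of Appendix B, in which each density term is the count divided by the number of containing units and then weighted again by the count; note that the worked example itself records the lexical term as a plain ratio (3 / 2 = 1.5), so the exact weighting of that term is open to interpretation, and the banding of non-integer totals between 0 and 3 as 'medium' is also an assumption.

def verbal_complexity(sentences, dependent_clauses, clauses,
                      non_core_words, passive_or_ellipsis):
    # Dependent-clause term: (dependent clauses / sentences) x dependent clauses.
    clause_term = (dependent_clauses / sentences) * dependent_clauses if sentences else 0
    # Non-core word term: (non-core words / clauses) x non-core words,
    # as in the 'Proportion non-core words' column of Appendix B.
    lexical_term = (non_core_words / clauses) * non_core_words if clauses else 0
    # Passive voice and ellipsis instances are added directly.
    return clause_term + passive_or_ellipsis + lexical_term

def verbal_complexity_level(total):
    if total == 0:
        return "low"
    if total >= 3:
        return "high"
    return "medium"          # totals above 0 and below 3 (assumption)

# Reproduces the total of 4 recorded in Appendix B for question 30 of the
# 2005 Year 3 BST: one sentence, one dependent clause, three clauses,
# three non-core words, no passive voice or ellipsis.
print(verbal_complexity(1, 1, 3, 3, 0))        # 4.0
print(verbal_complexity_level(4.0))            # high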

Analysis of complexity in image segments of text

To gauge how difficult it is to understand an image when answering the questions about image-language relations, three features were selected, each scored 0 for simple or 1 for complex, as follows:

* a score of 0 for naturalistic style, or a score of 1 for abstract style of representation;

* a score of 0 for commonsense everyday content and features, or a score of 1 for technological content or features;

* a score of 0 for a represented participant, process or circumstance, or a score of 1 for an inferred/implicit feature, process or circumstance not directly represented.

This scoring produced a possible total score of 0 to 3 for each image. For example, if the question involves one image that is abstract and requires an inference, then the total image complexity is 2. The levels of image complexity for each assessment item were determined on the same basis as the verbal text complexity, that is, a score of 0 was low, a total of 1 or 2 was medium, and a total of 3 or more was high.
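The image scoring can be sketched in the same way (again an illustrative reconstruction, not the authors' instrument; the function names are assumptions):

def image_complexity(abstract_style, technological_content, inference_required):
    # One point for each complex feature: abstract rather than naturalistic
    # style, technological rather than commonsense content, and an inferred
    # rather than directly represented participant, process or circumstance.
    return int(abstract_style) + int(technological_content) + int(inference_required)

def image_complexity_level(total):
    # Same banding as verbal complexity; questions involving several images
    # (e.g. those coded 'TITITI' in Appendix B) can total more than 3.
    if total == 0:
        return "low"
    if total >= 3:
        return "high"
    return "medium"

# The article's example: an abstract image that requires an inference scores 2.
print(image_complexity(abstract_style=True, technological_content=False,
                       inference_required=True))    # 2
print(image_complexity_level(2))                    # medium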

The abstract/naturalistic dichotomy for images is loosely based on two of Kress and van Leeuwen's (1996, p. 170) coding orientations. It must be acknowledged that coding orientations are concerned with the modality, reality or 'truthfulness' of representation in images. For the purposes of this research, the terms abstract and naturalistic have been adopted to represent complexity and lack of complexity. Abstraction has been selected to represent complexity in images because, as Kress and van Leeuwen (1996) state, 'the ability to produce and/or read texts grounded in this coding orientation is a mark of social distinction, of being an "educated person" or a "serious artist" ' (p. 170).

The naturalistic coding orientation, 'which remains, for the time being, the dominant one in our society' (Kress & van Leeuwen, 1996, p. 170), has been adopted as the opposite of abstract complexity in images. Kress and van Leeuwen (1996) refer to this as 'the one coding orientation all members of the culture share when they are being addressed as 'members of our culture', regardless of how much education or scientific-technological training they have received' (pp. 170-171).

The latter quote implies that the interpretation of 'technological' content also requires education and has a degree of complexity. However, Kress and van Leeuwen's 'technological coding orientation' is different from the meaning ascribed in this research because their reference is to a 'blueprint' style of pictorial coding. The term 'technological' in this research is concerned with complex technological (including scientific or mathematical) ideational content. It is also used to categorise some symbolic features, such as arrows that represent processes in technological, mathematical and scientific diagrams. The opposite of this type of complexity is 'commonsense', or everyday ideational content.

The third aspect chosen to measure visual complexity is the implicit or inferential aspect of images. This aspect has created difficulty for students in past reading assessments. For example, in 2003, a reading question in both the Year 3 and Year 5 BST, with a positive difficulty logit of 1.04, required students to identify whether a picture from The Deep (Winton, 1998) showed Alice 'on the jetty', 'under the water' or 'diving into the water'. The image does not directly represent the circumstance of Alice's location, so in order to interpret the image students had to infer that Alice is on the jetty: her hand is reaching into the water because that is where the fish are, and she is not diving because only one hand is going into the water. This inference was supported by the verbal text, but only 48% of Year 3 students selected the correct answer to this question.

Relating text complexity and image-language relations to item difficulty

Analyses of variance between the mean logits for items at low, medium and high levels of image complexity revealed no significant differences. There are a number of possible interpretations of this result. Whilst it could be that children comprehended equally well images varying along the parameters of naturalistic/abstract style, commonsense/technological content and explicit/implicit representation of participants or circumstances, it may also be the case that there was not enough variation within the images, or that the dichotomous coding of images on each of these parameters is not a sufficiently sensitive measure of the complexity involved in interpreting the meanings constructed visually in these images.

However, an analysis of variance between the mean logits of items associated with different levels of verbal text complexity indicated that the items requiring comprehension of sentences with high verbal complexity were significantly (p < .01) more difficult than items requiring comprehension of sentences with low verbal complexity, although neither was significantly different from items associated with medium verbal text complexity. The data for scores of image and verbal text complexity are in Appendix B and the results from the Analysis of Variance are in Appendix C. The differences in verbal complexity and item difficulty were evident within the range of items assessing comprehension of image-language relations of complementarity.

The difficulty for questions involving the image-language relation of distribution ranged from -1.38 to 1.55 (easy to moderately difficult). The easiest question about distribution asked students 'What happened when the boys went fishing?' Students were required to relate a commonsense naturalistic illustration of two boys in a dam with a capsized boat to information written in simple sentences in the narrative text, Two Summers: 'We tried fishing, but Rick wanted to see how far we could rock the boat before it tipped' (Year 5 BST, 2005). The distribution of processes is a slight change from 'rock ... before it tipped' in the verbal text to a later event in the image, where the boat has been capsized. By contrast, the hardest of the questions about distribution, which appeared in both of the 2005 BST assessments (question 31 in Year 3 and question 37 in Year 5), required students to relate information in a complex sentence to a technological diagram, as did question 30 (see Figure 2). These questions assessed students' ability to connect the processes in a verbally complex caption to a static diagram and infer how the clepsydra worked, that is, to understand an image-language relation of distribution where material processes were distributed across image and caption in the following manner: the image shows dripping water, which matches the words 'Water trickles in' in the caption, but the process 'raises' in the caption was not shown in the image. Students who did not perceive the image-language relation did not know that the float rises. Those who could not maintain the grammatical relationship inherent in 'the float which is attached to a clock hand' did not understand how the Greek water clock worked, and therefore could not infer the correct answer to the questions.

The difficulty for questions about augmentation in texts ranged from 2.04 to 0.67 (very hard to moderate). The hardest of these questions involved inferring the existence of a character mentioned in the words but not shown in the images of a comic strip story. The second hardest question, with a difficulty of 2.01, indicated four coloured shapes in the abstract background of a painting (see sketch of Question 28 at Figure 3) and asked students to identify which shape showed 'a fish trap or net'. The 'fish traps and nets' are participants named in the caption but not apparent in the image without the information in the caption. To identify the correct shape, students had to first understand the complex sentence, 'The sailfish is believed to be a cunning fish, able to feed amongst the various fish traps and nets shown by the dark areas, without being caught' (Year 5 BST, 2005).

[FIGURE 2 OMITTED]

The easiest item assessing augmentation, in an extract from Zoo by Anthony Browne, stated, 'One of the pictures suggests that the chocolate was eaten by ...'. Students had to select Dad by inferring that an empty chocolate wrapper (not mentioned in the words) was beside Dad in a picture because he had eaten the chocolate. The dialogue in which the boys ask for the chocolate and Dad refuses to give it to them contains only independent clauses.

These examples suggest that complexity in both the verbal segments and the image-language relations of the BST reading materials was related to higher levels of difficulty of the assessment items. It therefore seems appropriate that teachers should focus their attention on assisting students to comprehend these aspects of multimodal texts. However, many teachers assume that pictures make texts easier to understand; they may not realise that some students need explicit teaching about interpreting images and that teachers may need to model how to link the images to information in the verbal text. It would also seem important for teachers to explicitly teach students about complex sentences and how to maintain grammatical relationships when reading them.

[FIGURE 3 OMITTED]

Conclusion

This research has shown that ideational meanings in images and in written language can interact in concurrent or complementary ways, and this affects ease of text comprehension. It should not be assumed that the inclusion of images makes written texts easy to comprehend or that they are incidental to reading. The implications from these findings are that teachers need to draw students' attention to images in texts, discuss how the images and verbal text relate to each other and identify any new meanings that might arise from the interaction. Complexity of the written language component within the image-language relations was also relevant to how easy it was to comprehend the texts in this study. It should not be assumed that failure to comprehend an image-language relation is only a failure to connect the two different parts of a text. Indeed, some students may also need support to deconstruct complex grammatical structures to assist them to comprehend the relationships between clauses. However, this investigation did not examine teaching, so further research is needed in classrooms to see how students might be assisted to talk about the meaning of written language and images and how these semiotic resources and their parts relate to each other.

Appendix A
Table A.1. Univariate Analysis of Variance
Between-subjects Factors

                               N

I-T RELATION   augmentation    9
               distribution   15
               equiv_comp     12
               equiv_part     13
               exposition     14

Table A.2. Tests of Between-Subjects Effects
Dependent Variable: Logit (d)

                  Type III Sum
Source             of Squares    df    Mean Square     F      Sig.

Corrected Model    53.367 (a)     4      13.342      18.182   .000
Intercept             .353        1       .353        .480    .491
ITRELATION           53.367       4      13.342      18.182   .000
Error                42.560      58       .734
Total                97.073      63
Corrected Total      95.928      62

(a.) R Squared = .556 (Adjusted R Squared = .526)

Table A.3. Estimated Marginal Means
I-T RELATION

Dependent Variable: Logit (d)

                                     95% Confidence Interval

I-T RELATION    Mean    Std. Error   Lower Bound   Upper Bound

augmentation   1.449       .286         .878          2.021
distribution    .482       .221         .040          .925
exposition     -.080       .229         -.538         .378
equiv_part     -.876       .238        -1.351         -.400
equiv_comp     -1.356      .247        -1.851         -.861

Table A.4 Post Hoc Tests
I-T RELATION
Multiple Comparisons
Dependent Variable: Logit (d)

LSD

(I)                (J)        (I-J) Mean    Std.     Sig.
I-T RELATION   I-T RELATION   Difference    Error

augmentation   distribution     .9669 *     .36118   .010
                equiv_comp     2.8050 *     .37773   .000
                equiv_part     2.3252 *     .37145   .000
                exposition     1.5295 *     .36599   .000

distribution   augmentation    -.9669 *     .36118   .010
                equiv_comp     1.8381 *     .33177   .000
                equiv_part     1.3583 *     .32460   .000
                exposition       .5627      .31833   .082

equiv_comp     augmentation    -2.8050 *    .37773   .000
               distribution    -1.8381 *    .33177   .000
                equiv_part      -.4798      .34292   .167
                exposition     -1.2755 *    .33699   .000

equiv_part     augmentation    -2.3252 *    .37145   .000
               distribution    -1.3583 *    .32460   .000
                equiv_comp       .4798      .34292   .167
                exposition     -.7956 *     .32994   .019

exposition     augmentation    -1.5295 *    .36599   .000
               distribution     -.5627      .31833   .082
                equiv_comp     1.2755 *     .33699   .000
                equiv_part      .7956 *     .32994   .019

                              95% Confidence Interval
(I)                (J)
I-T RELATION   I-T RELATION   Lower Bound   Upper Bound

augmentation   distribution      .2439        1.6899
                equiv_comp      2.0489        3.5611
                equiv_part      1.5816        3.0687
                exposition       .7969        2.2622

distribution   augmentation     -1.6899       -.2439
                equiv_comp      1.1740        2.5022
                equiv_part       .7086        2.0081
                exposition      -.0745        1.1999

equiv_comp     augmentation     -3.5611       -2.0489
               distribution     -2.5022       -1.1740
                equiv_part      -1.1663        .2066
                exposition      -1.9500       -.6009

equiv_part     augmentation     -3.0687       -1.5816
               distribution     -2.0081       -.7086
                equiv_comp      -.2066        1.1663
                exposition      -1.4561       -.1352

exposition     augmentation     -2.2622       -.7969
               distribution     -1.1999        .0745
                equiv_comp       .6009        1.9500
                equiv_part       .1352        1.4561

Based on observed means. * The mean difference
is significant at the .05 level.

Appendix B--Image and Verbal Text Segment Complexity Data

                                             Number of     Proportion
                Question      Number of      Dependent       Clause
Year & test      Number       Sentences       Clauses      Complexity

2005 BST 3         1              1              0              0
2005 BST 3         2              1              0              0
2005 BST 3         3              1              0              0
2005 BST 3         4              1              0              0
2005 BST 3         5              0              0              0
2005 BST 3         6              0              0              0
2005 BST 3        11              1              1              1
2005 BST 3        12              0              0              0
2005 BST 3        19              1              0              0
2005 BST 3        24              1              1              1
2005 BST 3        29              1              0              0
2005 BST 3        30              1              1              1
2005 BST 3        31              1              1              1
2005 BST 5         1              0              0              0
2005 BST 5         2              1              1              1
2005 BST 5         3              0              0              0
2005 BST 5         4              0              0              0
2005 BST 5         5              0              0              0
2005 BST 5        15              1              1              1
2005 BST 5        16              2              0              0
2005 BST 5        17              1              0              0
2005 BST 5        28              1              2              4
2005 BST 5        29              1              0              0
2005 BST 5        30              1              0              0
2005 BST 5        24              4              1           0.25
2005 BST 5        25              2              0              0
2005 BST 5        35              1              0              0
2005 BST 5        36              1              1              1
2005 BST 5        37              1              1              1
2007 BST 3         5              1              0              0
2007 BST 3         6              1              1              1
2007 BST 3         7              1              0              0
2007 BST 3         8              1              0              0
2007 BST 3        13              1              0              0
2007 BST 3        14              1              0              0
2007 BST 3        16              0              0              0
2007 BST 3        19              1              0              0
2007 BST 3        23              1              0              0
2007 BST 3        25              2              0              0
2007 BST 5         1              1              0              0
2007 BST 5         2              3              0              0
2007 BST 5         3              4              0              0
2007 BST 5         4              4              0              0
2007 BST 5         7              2              0              0
2007 BST 5         8              0              0              0
2007 BST 5         9              1              0              0
2007 BST 5        11              2              0              0
2007 BST 5        17              1              1              1
2007 BST 5        23              1              0              0
2007 BST 5        29              1              3              9
2007 BST 5        30              2              1            0.5
2007 BST 5        32              0              0              0
2007 ELLA          1              0              0              0
2007 ELLA          2              1              0              0
2007 ELLA          3              0              0              0
2007 ELLA          4              0              0              0
2007 ELLA          7              0              0              0
2007 ELLA          9              0              0              0
2007 ELLA         17              0              0              0
2007 ELLA         18              0              0              0
2007 ELLA         22              2              0              0
2007 ELLA         24              1              0              0

                              Number of      Proportion      passive
               Number of       non-core       non-core        voice/
Year & test     Clauses         words          words         ellipsis

2005 BST 3         1              0               1              0
2005 BST 3         1              0               0              0
2005 BST 3         1              0               0              0
2005 BST 3         2              0               1              0
2005 BST 3         0              0               0              1
2005 BST 3         0              0               0              0
2005 BST 3         3              2            1.33              0
2005 BST 3         0              0               0              0
2005 BST 3         2              0               0              2
2005 BST 3         2              1             0.5              2
2005 BST 3         1              0               0              0
2005 BST 3         3              3               3              0
2005 BST 3         3              3               3              0
2005 BST 5         0              0               0              0
2005 BST 5         2              0               0              1
2005 BST 5         0              0               0              0
2005 BST 5         0              0               0              0
2005 BST 5         0              0               0              0
2005 BST 5         3              1            0.33              0
2005 BST 5         2              1             0.5              0
2005 BST 5         1              0               0              0
2005 BST 5         3              2            1.33              3
2005 BST 5         1              1               1              0
2005 BST 5         1              1               1              0
2005 BST 5         5              1             0.2              0
2005 BST 5         2              1             0.5              0
2005 BST 5         1              0               0              0
2005 BST 5         3              3               3              0
2005 BST 5         3              3               3              0
2007 BST 3         1              0               0              0
2007 BST 3         2              0               0              0
2007 BST 3         1              0               0              0
2007 BST 3         1              0               0              0
2007 BST 3         1              0               0              0
2007 BST 3         1              1               1              0
2007 BST 3         0              0               0              0
2007 BST 3         2              1             0.5              0
2007 BST 3         1              2               4              0
2007 BST 3         3              0               0              2
2007 BST 5         1              0               0              0
2007 BST 5         4              0               0              0
2007 BST 5         4              0               0              0
2007 BST 5         5              0               0              0
2007 BST 5         2              0               0              0
2007 BST 5         0              0               0              0
2007 BST 5         1              1               1              0
2007 BST 5         2              0               0              0
2007 BST 5         2              0               0              0
2007 BST 5         1              2               4              0
2007 BST 5         5              2             0.8              0
2007 BST 5         3              3               3              1
2007 BST 5         0              0               0              0
2007 ELLA          0              0               0              0
2007 ELLA          2              3               0              0
2007 ELLA          0              0               0              0
2007 ELLA          0              2               0              0
2007 ELLA          0              2               0              0
2007 ELLA          0              2               0              0
2007 ELLA          0              3               0              0
2007 ELLA          0              0               0              0
2007 ELLA          2              4               8              0
2007 ELLA          1              1               1              0

                                Image
                              Complexity
                             T=technical    Total Image
              Total Verbal   A=abstract      Complexity
Year & test    Complexity    I=inference    for question

2005 BST 3            0             I               1
2005 BST 3            0             T               1
2005 BST 3            0             I               1
2005 BST 3            0            TI               2
2005 BST 3            0        TITITI               6
2005 BST 3            0           III               3
2005 BST 3         2.33             I               1
2005 BST 3            0           ITI               3
2005 BST 3            2             I               1
2005 BST 3          3.5             I               1
2005 BST 3            0             I               1
2005 BST 3            4            TI               2
2005 BST 3            4            TI               2
2005 BST 5            0             A               1
2005 BST 5            2             I               1
2005 BST 5            0             A               1
2005 BST 5            0             A               1
2005 BST 5            0           AAA               3
2005 BST 5         1.33             I               1
2005 BST 5          0.5            II               2
2005 BST 5            0            II               2
2005 BST 5         8.33            AI               2
2005 BST 5            1             A               1
2005 BST 5            1            AI               2
2005 BST 5         0.45                             0
2005 BST 5          0.5             I               1
2005 BST 5            0             I               1
2005 BST 5            4            TI               2
2005 BST 5            4            TI               2
2007 BST 3            0                             0
2007 BST 3            1                             0
2007 BST 3            0                             0
2007 BST 3            0                             0
2007 BST 3            0                             0
2007 BST 3            1                             0
2007 BST 3            0             A               1
2007 BST 3          0.5             I               1
2007 BST 3            4            TI               2
2007 BST 3            2            II               2
2007 BST 5            0                             0
2007 BST 5            0                             0
2007 BST 5            0             I               1
2007 BST 5            0                             0
2007 BST 5            0             I               1
2007 BST 5            0                             0
2007 BST 5            1                             0
2007 BST 5            0            II               2
2007 BST 5            1             I               1
2007 BST 5            4             T               1
2007 BST 5          9.8             I               1
2007 BST 5          4.5            TI               2
2007 BST 5            0             I               1
2007 ELLA             0             T               1
2007 ELLA             0             I               1
2007 ELLA             0                             0
2007 ELLA             0             T               1
2007 ELLA             0                             0
2007 ELLA             0                             0
2007 ELLA             0             I               1
2007 ELLA             0                             0
2007 ELLA             8             T               1
2007 ELLA             1             T               1


Appendix C--Univariate Analysis of Variance Verbal Complexity
Between-Subjects Factors

                      N

Verbal Complexity H   13
L                     29
M                     22

Tests of Between-Subjects Effects
Dependent Variable: Logit

                    Type III Sum
Source               of Squares    df   Mean Square     F     Sig.

Corrected Model        9.913a      2       4.956      3.412   .039
Intercept               .021       1       .021       .014    .905
Verbal_Complexity      9.913       2       4.956      3.412   .039
Error                  88.617      61      1.453
Total                  99.296      64
Corrected Total        98.530      63

R Squared = .101 (Adjusted R Squared = .071)

Estimated Marginal Means
Verbal_Complexity
Dependent Variable: Logit

Verbal_Complexity   Mean    Std. Error   95% Confidence Interval

                                         Lower Bound   Upper Bound

H                   .538       .334         -.130         1.207
L                   -.488      .224         -.936         -.041
M                   .007       .257         -.507         .521

Post Hoc Tests
Verbal_Complexity
Multiple Comparisons
Dependent Variable: Logit
LSD

(I) Verbal_Complexity (J)   Mean Difference   Std. Error   Sig.
Verbal_Complexity                (I-J)

H      L                       1.0266 *         .40230     .013
       M                         .5316          .42164     .212
L      H                       -1.0266 *        .40230     .013
       M                        -.4949          .34078     .152
M      H                        -.5316          .42164     .212
       L                         .4949          .34078     .152

Multiple Comparisons
Dependent Variable: Logit
LSD

                           95% Confidence Interval

(I) Verbal_Complexity (J)  Lower Bound   Upper Bound
Verbal_Complexity

H             L               .2221        1.8310
              M              -.3115        1.3748
L             H              -1.8310       -.2221
              M              -1.1763        .1865
M             H              -1.3748        .3115
              L              -.1865        1.1763

Based on observed means.

* The mean difference is significant at the .05 level

Univariate Analysis of Variance Image Complexity
Between-Subjects Factors

                       N

Image_Complexity  H    4
                  L    18
                  M    42

Tests of Between-Subjects Effects
Dependent Variable: Logit

Source             Type III Sum   df    Mean      F     Sig.
                    of Squares         Square

Corrected Model       3.754a      2    1.877    1.208   .306
Intercept              .003       1     .003    .002    .966
Image_Complexity      3.754       2    1.877    1.208   .306
Error                 94.776      61   1.554
Total                 99.296      64
Corrected Total       98.530      63

R Squared = .038 (Adjusted R Squared = .007)

Image_complexity
Dependent Variable: Logit

Image_Complexity   Mean    Std. Error   95% Confidence Interval

                                        Lower Bound   Upper Bound

H                  .438       .623         -.809         1.684
L                  -.456      .294        -1.044         .131
M                  -.013      .192         -.397         .372

Post Hoc Tests
Image_Complexity
Multiple comparisons
Dependent Variable: Logit
LSD

(I) Image_Complexity (J)   Mean Difference   Std. Error   Sig.
Image_Complexity                (I-J)

H        L                      .8938          .68901     .199
         M                      .4504          .65224     .492
L        H                     -.8938          .68901     .199
         M                     -.4435          .35115     .211
M        H                     -.4504          .65224     .492
         L                      .4435          .35115     .211

Based on observed means.


Acknowledgements

Permission kindly granted by Steve Moline of K-8 Visual to reproduce line drawings by Dorothy Dunphy from the Book of Animal Records, text [c] David Drew, Thomas Nelson Australia, 1992. www.k-8visual.info

Image of clepsydra from The Power of Water by Helen Chapman, Reed International Books Australia Pty Ltd (1966), as shown in 2005 BST, reproduced with permission from Pearson Australia Group.

Permission was sought to reproduce the image, Escaping the Nets by Moe Cunningham, Tobwabba Art website, as reproduced in the 2005 BST with permission from Tobwabba Art--Fine Art Gallery, www.tobwabba.com.au; however, no reply was received, so a sketch of the image is represented instead.

References

Beatty, L., & Care, E. (2009). Learning from their miscues: Differences across reading ability and text difficulty. Australian Journal of Language and Literacy, 32 (3), 226-244.

Bond, T.G., & Fox, C.M. (2001). Applying the Rasch model: Fundamental measurement in the human sciences. Mahwah, NJ: Lawrence Erlbaum.

Carter, R. (1987). Vocabulary: Applied linguistic perspectives. London: Allen & Unwin.

Chan, E. (in press). Integrating visual and verbal meaning in multimodal text comprehension: Towards a model of inter-modal relations. In S. Hood, S. Dreyfus & M. Stenglin (Eds.), Semiotic margins: Meaning in Multimodalities. Continuum (expected December, 2010).

Clay, M. (1971). Sentence repetition: Elicited imitation of a controlled set of syntactic structures by four language groups. Monographs of the Society for Research in Child Development, 36 (3, Serial No. 143).

dePaola, T. (1973). Charlie Needs a Cloak. New York: Scholastic Inc.

Halliday, M.A.K. (2004/1994). An Introduction to Functional Grammar (3rd ed.), revised by Christian Matthiessen. London: Arnold.

Kress, G., & van Leeuwen, T. (1996). Reading Images: The grammar of visual design. London: Routledge.

Lim, V.F. (2004). Developing an integrative multi-semiotic model. In K. O'Halloran (Ed.), Multimodal discourse analysis: Systemic functional perspectives. London and New York: Continuum.

New South Wales Department of Education and Training (2005-2007). Basic Skills Tests. Sydney: New South Wales Department of Education and Training.

Perera, K. (1984). Children's writing and reading: Analysing classroom language. Oxford: Basil Blackwell.

Peverly, S., & Kitzen, K. (1998). Curriculum-based assessment of reading skills: Considerations and caveats for school psychologists. Psychology in the Schools, 35 (1), 29-47.

Unsworth, L. (2006a). Image/text relations and intersemiosis: Towards multimodal text description for multiliteracies education. Plenary paper presented at the International Systemic Functional Linguistics Congress, July 2006, São Paulo, Brazil.

Unsworth, L. (2006b). Towards a metalanguage for multiliteracies education: Describing the meaning-making resources of language-image interaction. English Teaching: Practice and Critique, 5(1), 55-76. Retrieved from http://education.waikato.ac.nz/research/files/etpc/2006v5n1art4.pdf

Unsworth, L. (2008). Multiliteracies and metalanguage: Describing image/text relations as a resource for negotiating multimodal texts. In D. Leu, J. Corio, M. Knobel & C. Lankshear (Eds.), Handbook of research on new literacies. New Jersey: Lawrence Erlbaum.

Unsworth, L., & Chan, E. (2008). Assessing integrative reading of images and text in group reading comprehension tests. Curriculum Perspectives, 28 (3), 71-76.

Unsworth, L., & Chan, E. (2009). Bridging multimodal literacies and national assessment programs in literacy. Australian Journal of Language and Literacy, 32(3), 245-257.

Unsworth, L., Thomas, A., & Bush, R. (2004). The role of images and image-text relations in group 'basic skills tests' of literacy for children in the primary school years. Australian Journal of Language and Literacy, 27(1), 46-65.

Unsworth, L., Barnes, G., & O'Donnell, K. (2006-2008). New dimensions of group literacy tests for schools: Multimodal reading comprehension in conventional and computer-based formats. Australian Research Council funded project conducted by the University of New England and the NSW Department of Education and Training.

Winton, T. (1998). The Deep. Fremantle: Sandcastle Books.

Ann Daly & Len Unsworth

University of New England
