4. PISA 2018 Science Framework
This chapter defines “scientific literacy” as assessed in the Programme for International Student Assessment (PISA) in 2018. It describes the types of contexts, knowledge and competencies that are reflected in the tasks that PISA uses to measure scientific literacy. The chapter also discusses how student performance in science is measured and reported.
Introduction: Scientific literacy and why it matters
This document provides a description and rationale for the framework that forms the basis of the PISA assessment of scientific literacy – the major domain in PISA 2015 and a minor domain in PISA 2018. Previous PISA frameworks for the science assessment (OECD, 1999[1]; OECD, 2003[2]; OECD, 2006[3]) have used scientific literacy as their central construct. This framework for PISA 2015/2018 has refined and extended the previous construct, specifically the PISA 2006 framework that was used as the basis for assessment in 2006, 2009 and 2012.
Scientific literacy is developed through science education that is both broad and applied. Thus, within this framework, the concept of scientific literacy refers both to a knowledge of science and of science-based technology. However, science and technology differ in their purposes, processes and products. Technology seeks the optimal solution to a human problem and there may be more than one optimal solution. In contrast, science seeks the answer to a specific question about the natural material world.
Scientific literacy also requires not just knowledge of the concepts and theories of science but also a knowledge of the common procedures and practices associated with scientific enquiry and how these enable science to advance. Therefore, individuals who are scientifically literate understand the major conceptions and ideas that form the foundation of scientific and technological thought; how such knowledge has been derived; and the degree to which such knowledge is justified by evidence or theoretical explanations.
For all of these reasons, scientific literacy is perceived to be a key competency (Rychen and Salganik, 2001[4]) which is defined in terms of the ability to use knowledge and information interactively. In other words, scientific literacy includes “an understanding of how it [a knowledge of science] changes the way one can interact with the world and how it can be used to accomplish broader goals” (ibid.: 10).
The rest of this document defines scientific literacy and describes how PISA attempts to measure this concept.
Scientific literacy: Towards a definition
There is a widespread belief that an understanding of science is so important that it should be a feature of every young person’s education (American Association for the Advancement of Science, 1989[5]; COSCE, 2011[6]; Fensham, 1985[7]; Millar and Osborne, 1998[8]; National Research Council, 2012[9]; Ständige Konferenz der Kultusminister der Länder in der Bundesrepublik Deutschland, 2005[10]; Ministry of Education, Chinese Taipei, 1999[11]). Indeed, in many countries, science is an obligatory element of the school curriculum from kindergarten until the completion of compulsory education.
Three science-specific competencies are required in order to understand and engage in critical discussion about issues that involve science and technology. The first is the ability to provide explanatory accounts of natural phenomena, technical artefacts and technologies and their implications for society. The second is the competency to use one’s knowledge and understanding of scientific enquiry to identify questions that can be answered by scientific enquiry; propose ways in which such questions might be addressed; and identify whether appropriate procedures have been used. The third is the competency to interpret and evaluate data and evidence scientifically and evaluate whether the conclusions are warranted.
Thus, scientific literacy in PISA 2018 is defined by the three competencies of:
- Explaining phenomena scientifically;
- Evaluating and designing scientific enquiry; and
- Interpreting data and evidence scientifically.
All of these competencies require knowledge. Explaining scientific and technological phenomena, for instance, demands a knowledge of the content of science – referred to hereafter as content knowledge. The second and third competencies, however, require more than just content knowledge. They also depend on an understanding of how scientific knowledge is established and the degree of confidence with which it is held. Recognising and identifying the features that characterise scientific enquiry requires a knowledge of the standard procedures that underlie the diverse methods and practices used to establish scientific knowledge – referred to here as procedural knowledge. Finally, these competencies require epistemic knowledge, defined here as an understanding of the rationale for the common practices of scientific enquiry, the status of the claims that are generated, and the meaning of foundational terms such as theory, hypothesis and data. Box 4.1 provides more examples of each of these three types of knowledge, all of which are also further developed later in this framework.
Procedural and epistemic knowledge are necessary to identify questions that are amenable to scientific enquiry, to judge whether appropriate procedures have been used to ensure that claims are justified, and to distinguish scientific issues from matters of values or economic considerations. Procedural and epistemic knowledge are also essential to deciding whether the many claims that pervade contemporary media have been derived using appropriate procedures and are warranted; after all, over their lifetimes, individuals will need to acquire knowledge, not through scientific investigations, but through the use of resources such as libraries and the Internet, and will need to evaluate such knowledge.
This document is based upon the view that scientific knowledge consists of three distinguishable but related elements. The first of these and the most familiar is knowledge of the facts, concepts, ideas and theories about the natural world that science has established, such as how plants synthesise complex molecules using light and carbon dioxide or the particulate nature of matter. This kind of knowledge is referred to as “content knowledge” or “knowledge of the content of science”.
Knowledge of the procedures that scientists use to establish scientific knowledge is referred to as “procedural knowledge”. This is the knowledge of the practices and concepts on which empirical enquiry is based, such as repeating measurements to minimise error and reduce uncertainty, the control of variables, and standard procedures for representing and communicating data (Millar et al., 1994[12]). More recently, these have been elaborated as a set of “concepts of evidence” (Roberts, Gott and Glaesser, 2010[13]).
Furthermore, understanding science as a practice also requires “epistemic knowledge”, which refers to an understanding of the role of specific constructs and defining features essential to the process of building scientific knowledge (Duschl, 2008[14]). Epistemic knowledge includes an understanding of the function that questions, observations, theories, hypotheses, models and arguments play in science; a recognition of the variety of forms of scientific enquiry; and understanding the role that peer review plays in establishing knowledge that can be trusted.
A more detailed discussion of these three forms of knowledge is provided in the later section on scientific knowledge, in Table 4.6, Table 4.7 and Table 4.8.
Scientific literacy requires all three forms of scientific knowledge. Therefore, PISA 2015 focussed on the extent to which 15-year-olds are capable of displaying these three forms of knowledge appropriately within a range of personal, local, national and global contexts. This perspective is broader than that of many school science programmes, where content knowledge often dominates.
It is such considerations that have led to the following definition of scientific literacy for PISA 2015 and 2018:
Scientific literacy is the ability to engage with science-related issues, and with the ideas of science, as a reflective citizen.
A scientifically literate person, therefore, is willing to engage in reasoned discourse about science and technology which requires the competencies of:
- Explaining phenomena scientifically: Recognising, offering and evaluating explanations for a range of natural and technological phenomena.
- Evaluating and designing scientific enquiry: Describing and appraising scientific investigations and proposing ways of addressing questions scientifically.
- Interpreting data and evidence scientifically: Analysing and evaluating data, claims and arguments in a variety of representations and drawing appropriate scientific conclusions.
Explanatory Notes
The following remarks are offered to clarify the meaning and use of this definition of scientific literacy for the purposes of the PISA 2018 assessment.
- The use of the term “scientific literacy” rather than “science” underscores the importance that the PISA science assessment places on the application of scientific knowledge in the context of real-world situations.
- For the purposes of the PISA assessment, these competencies will only be tested using the content, procedural and epistemic knowledge of science that 15-year-old students can reasonably be expected to have.
- Finally, throughout this document, the term “natural world” is used to refer to phenomena taking place in or associated with any object in the living or the material world.
The competencies required for scientific literacy
Competency 1: Explaining phenomena scientifically
Science has managed to develop a set of explanatory theories that have transformed our understanding of the natural world. Moreover, such knowledge has enabled the development of technologies that support human life, such as treatments for various diseases and rapid communication across the globe. The competency to explain scientific and technological phenomena thus depends on a knowledge of these major explanatory ideas of science.
Explaining some scientific phenomena, however, requires more than just the ability to recall and use theories, explanatory ideas, information, and facts (content knowledge). Offering scientific explanations also requires an understanding of how such knowledge has been derived and the level of confidence one can hold about any scientific claims. Hence, individuals also require a knowledge of the standard forms and procedures used in scientific enquiry to obtain such knowledge (procedural knowledge) and an understanding of their own role and function in justifying the knowledge produced by science (epistemic knowledge).
Competency 2: Evaluating and designing scientific enquiry
Scientific literacy requires students to have some understanding of the goal of scientific enquiry, which is to generate reliable knowledge about the natural world (Ziman, 1978[15]). Data obtained by observation and experiment, either in the laboratory or in the field, lead to the development of models and explanatory hypotheses that enable predictions that can then be tested experimentally. New claims and hypotheses are always provisional and may not stand up when subjected to critical peer review (Longino, 1990[16]). Hence, scientists commit to publishing or reporting their findings and the methods used in obtaining the evidence that support these findings. Measurements, however, all contain a degree of error. Much of the work of the experimental scientist is, therefore, devoted to the resolution of uncertainty by repeating measurements, collecting larger samples, building instruments that are more accurate, and using statistical techniques that assess the degree of confidence in any result.
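The statistical side of this work can be sketched with a short example. Under the usual assumption of independent repeated readings, the standard error of the mean shrinks as measurements are repeated; the readings below are hypothetical, purely for illustration.

```python
import statistics

def standard_error(measurements):
    # Standard error of the mean: stdev / sqrt(n). It estimates the
    # uncertainty in the mean and shrinks as more readings are collected.
    return statistics.stdev(measurements) / len(measurements) ** 0.5

# Hypothetical repeated readings of the same quantity (e.g. a length in cm).
few = [10.2, 9.8, 10.1, 9.9]
many = few * 4  # same spread of values, four times as many readings

print(standard_error(few))
print(standard_error(many) < standard_error(few))  # True: more data, less uncertainty
```

This is one of the "statistical techniques that assess the degree of confidence in any result" the paragraph refers to; larger samples and more accurate instruments reduce the uncertainty in much the same way.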
This competency draws on content knowledge, a knowledge of the common procedures used in science (procedural knowledge) and the function of these procedures in justifying any claims advanced by science (epistemic knowledge). Procedural and epistemic knowledge serve two functions. First, such knowledge is required by individuals to appraise scientific investigations, thus deciding whether appropriate procedures have been followed and whether the conclusions are warranted. Second, such knowledge allows individuals to propose, at least in broad terms, how a scientific question might be investigated appropriately.
Competency 3: Interpreting data and evidence scientifically
Interpreting data is a core activity for all scientists. It typically begins by looking for patterns, perhaps through constructing simple tables or graphical visualisations. Any relationships or patterns in the data must then be read using a knowledge of standard patterns. The scientifically literate individual can also be expected to understand that uncertainty is an inherent feature of all measurement, and that one criterion for expressing our confidence in a finding is the probability that it might have occurred by chance. All of this draws on a body of procedural knowledge.
It is not sufficient, however, to understand the procedures that have been applied to obtain a data set. The scientifically literate individual needs to be able to judge whether these procedures are appropriate and whether the ensuing claims are justified (epistemic knowledge). For instance, many sets of data can be interpreted in multiple ways, and scientists must argue in support of their own interpretation while defending it from the critique of others. Resolution of which interpretation is the best requires a knowledge of science (content knowledge). A critical and sceptical disposition towards all empirical evidence is indeed the hallmark of the professional scientist.
Organisation of the domain
For the purposes of assessment, the PISA 2018 definition of scientific literacy can be characterised as consisting of three interrelated aspects (see Figure 4.1).
Each of these aspects is discussed further below.
Contexts for assessment items
PISA 2018 assesses scientific knowledge using contexts that raise pertinent issues, often ones relevant to the science education curricula of participating countries. However, assessment items are not limited to school science contexts. Items in the PISA 2018 science assessment may relate to the self, family and peer groups (personal), to the community (local and national) or to life across the world (global). The context may involve technology or, in some cases, a historical element that may be used to assess students’ understanding of the processes and practices involved in advancing scientific knowledge.
Contexts for items in the PISA science assessment have also been categorised into five applications of science and technology: health and disease, natural resources, environmental quality, hazards, and the frontiers of science and technology. The PISA science assessment, however, is not an assessment of contexts. Rather, it assesses competencies and knowledge in specific contexts. These contexts have been chosen in light of their relevance to students’ interests and lives and because they are the areas in which scientific literacy has particular value in enhancing and sustaining quality of life and in the development of public policy.
Table 4.2 shows how these five applications interact with the personal, local/national, and global contexts described above.
Scientific competencies
Table 4.3, Table 4.4 and Table 4.5 provide a detailed description of the tasks that make up each of the three competencies that comprise scientific literacy. This set of scientific competencies reflects a view that science is best seen as an ensemble of social and epistemic practices that are common across all of its subfields (National Research Council, 2012[9]). Hence, all of these competencies are framed as actions, conveying what the scientifically literate person both understands and is capable of doing.
Demonstrating the competency of explaining phenomena scientifically requires students to recall the appropriate content knowledge in a given situation and use it to interpret and provide an explanation for the phenomenon of interest. Such knowledge can also be used to generate tentative explanatory hypotheses for an observed phenomenon or when presented with data. A scientifically literate person is expected to be able to draw on standard scientific models to construct simple representations for everyday phenomena and then use these representations to make predictions. This competency includes the ability to describe or interpret phenomena and predict possible changes. In addition, it may involve recognising or identifying appropriate descriptions, explanations, and predictions.
The competency of evaluating and designing scientific enquiry is required to evaluate reports of scientific findings and investigations critically. It relies on the ability to discriminate scientific questions from other forms of enquiry, or in other words, to recognise questions that can be investigated scientifically. This competency requires a knowledge of the key features of a scientific investigation, such as what things should be measured, what variables should be changed or controlled, and what action should be taken so that accurate and precise data can be collected. It requires an ability to evaluate the quality of data, which in turn depends on recognising that data are not always completely accurate. It also requires the competency to identify if an investigation is driven by an underlying theoretical premise or, alternatively, whether it seeks to determine identifiable patterns.
A scientifically literate person should also be able to recognise the significance of previous research in judging the value of any given scientific enquiry. Moreover, students need to understand the importance of developing a sceptical disposition to all media reports in science, recognising that all research builds on previous work, that the findings of any one study are always subject to uncertainty, and that the study may be biased by its sources of funding. This competency requires students to possess both procedural and epistemic knowledge but may also draw on their content knowledge of science.
Students who can interpret data and evidence scientifically should be able to convey the meaning of a piece of scientific evidence and its implications to a specified audience in their own words, using diagrams or other representations as appropriate. This competency requires the use of mathematical tools to analyse or summarise data, and the ability to use standard methods to transform data to different representations.
This competency also includes accessing scientific information and producing and evaluating arguments and conclusions based on scientific evidence (Kuhn, 2010[17]; Osborne, 2010[18]). It may also involve evaluating alternative conclusions using evidence; giving reasons for or against a given conclusion; and identifying the assumptions made in reaching a conclusion. In short, the scientifically literate individual should be able to identify logical or flawed connections between evidence and conclusions.
Scientific knowledge
Content knowledge
Only a sample of the content domain of science can be assessed in the PISA 2018 science assessment. Hence, it is important that clear criteria are used to guide the selection of the knowledge that is assessed. The content knowledge that PISA assesses is selected from the major fields of physics, chemistry, biology, and earth and space sciences and:
- is relevant to real-life situations;
- represents an important scientific concept or major explanatory theory that has enduring utility; and
- is appropriate to the developmental level of 15-year-olds.
Table 4.6 presents the categories of content knowledge selected by applying the criteria above.
Procedural knowledge
A fundamental goal of science is to generate explanatory accounts of the material world. Tentative explanatory accounts are first developed and then tested through empirical enquiry. Empirical enquiry is reliant on certain well-established concepts and methods such as the notion of dependent and independent variables, the control of variables, various types of measurement and forms of error, methods for minimising error, a recognition of common patterns observed in data, and methods of presenting data. It is this knowledge of the standard concepts and procedures essential to scientific enquiry that underpins the collection, analysis and interpretation of scientific data. Such ideas form a body of procedural knowledge, which has also been called “concepts of evidence” (Roberts, Gott and Glaesser, 2010[13]; Millar et al., 1994[12]). Such knowledge is needed both to undertake scientific enquiry and to engage in a critical review of the evidence that might be used to support particular claims. Table 4.7 lists some examples of the procedural knowledge that may be tested.
Epistemic Knowledge
Epistemic knowledge is a knowledge of the constructs and defining features essential to the process of knowledge building in science (e.g. hypotheses, theories and observations) and their role in justifying the knowledge produced by science (Duschl, 2008[14]). Students use epistemic knowledge to explain, with examples, the difference between a scientific theory and a hypothesis or between a scientific fact and an observation. Epistemic knowledge includes the understanding that the construction of models, be they directly representational, abstract or mathematical, is a key feature of science and that such models are akin to maps rather than accurate pictures of the material world. Students should also recognise that the word “theory” is not used the same way in science as it is in everyday language, where it is a synonym for “guess” or “hunch”. Whereas procedural knowledge is required to explain what is meant by the control of variables strategy, epistemic knowledge is required to explain why the use of the control of variables strategy is central to establishing scientific knowledge.
Scientifically literate individuals will also understand that scientists draw on data to advance claims to knowledge and that argument is a commonplace feature of science. These students also understand the role and significance of peer review as the mechanism that the scientific community has established for testing new claims. Epistemic knowledge thus provides a rationale for the procedures and practices in which scientists engage and the basis for belief in the claims that science makes about the natural world.
Table 4.8 represents what are considered to be the major components of epistemic knowledge necessary for scientific literacy.
Epistemic knowledge is most likely to be tested in a pragmatic fashion: students will typically be required to interpret and answer a question that requires some epistemic knowledge rather than being directly asked about the points in Table 4.8. For instance, students may be asked to identify whether the conclusions are justified by the data or what piece of evidence best supports the hypothesis advanced in an item and explain why.
Assessment of the Domain
Cognitive Demand
A key feature of the 2018 PISA framework is the definition of levels of cognitive demand within the assessment of scientific literacy and across all three competencies of the framework. In assessment frameworks, item difficulty, which is empirically derived, is often confused with cognitive demand. Empirical item difficulty is estimated from the proportion of the test-taking population that is successful in solving the item correctly, while cognitive demand refers to the type of mental processing required (Davis and Buckendahl, 2011[19]). An item can have a high difficulty level because it tests knowledge that is unfamiliar to most students while at the same time requiring only low cognitive demand because students only need to recall a piece of information. Conversely, an item can be cognitively demanding because it requires the individual to relate and evaluate many items of knowledge, yet still be of a low difficulty level because each of the pieces of knowledge is easily recalled (Brookhart and Nitko, 2011[20]).
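The distinction can be made concrete: empirical item difficulty is a simple proportion, computed with no reference whatsoever to the mental processing an item demands. A minimal sketch, using a hypothetical scored-response matrix:

```python
# Hypothetical 0/1 response matrix: one row per student, one column per item.
responses = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
]

def item_difficulty(responses):
    # Empirical difficulty ("p-value") of each item: the proportion of
    # test-takers answering it correctly. A lower value means a harder
    # item, regardless of the cognitive demand the item makes.
    n_students = len(responses)
    return [sum(item) / n_students for item in zip(*responses)]

print(item_difficulty(responses))  # [0.75, 0.75, 0.25]
```

Here the third item is empirically hardest, yet nothing in the calculation reveals whether it demands complex reasoning or merely unfamiliar recall; classifying cognitive demand requires a separate, theory-driven judgement about the item.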
Various classifications of cognitive demand schemes have been developed and evaluated since Bloom's Taxonomy was first published (Bloom, 1956[21]). These have been largely based on categorisations of knowledge types and associated cognitive processes that are used to describe educational objectives or assessment tasks.
Webb’s Depth of Knowledge (1997[22]) was specifically developed to address the disparity between assessments and the expectations of student learning. Webb’s levels of depth are determined by the complexity of both the content and the task required. His framework consists of four levels: level 1 (recall), level 2 (using skills and/or conceptual knowledge), level 3 (strategic thinking) and level 4 (extended thinking). Each level is defined by a large number of verbs (some of which appear in more than one level) that describe cognitive processes. This framework offers a more holistic view of learning and assessment tasks and requires an analysis of both the content and cognitive process demanded by any task.
All the frameworks described above have helped to classify knowledge and competencies in the PISA 2018 science framework. In drawing up such a framework, it was recognised that there were challenges in developing test items based on a cognitive hierarchy. The three main challenges were that:
1. Too much effort would be made to fit test items into particular cognitive frameworks, which could lead to poorly developed items;
2. The intended and actual cognitive demand might not align, with frameworks defining rigorous, cognitively demanding goals but items operationalising the standard in a much less cognitively demanding way; and
3. Without a well-defined and understood cognitive framework, item writing and development might often focus on item difficulty and thus use only a limited range of cognitive processes and knowledge types. These would then only be described and interpreted post hoc, rather than being built from a theory of increasing competency.
The PISA 2018 science framework uses an adapted version of Webb’s Depth of Knowledge grid (Webb, 1997[22]) alongside the desired scientific knowledge and competencies. As the competencies are the central feature of the framework, the cognitive framework needs to assess and report on them across the range of student abilities. Webb’s Depth of Knowledge levels offer a taxonomy for cognitive demand that identifies both the cognitive demand from the verbal cues that are used (e.g., analyse, arrange or compare) and the expected depth of knowledge required.
The grid above (Figure 4.2) provides a framework for mapping items against the dimensions of knowledge and competencies. In addition, each item can also be mapped onto a third dimension based on depth of knowledge, which categorises cognitive demand into the following levels:
- Low (L): Carrying out a one-step procedure, such as recalling a fact, term, principle or concept or locating a single point of information from a graph or table.
- Medium (M): Using and applying conceptual knowledge to describe or explain phenomena; selecting appropriate procedures involving two or more steps; organising or displaying data; or interpreting or using simple data sets or graphs.
- High (H): Analysing complex information or data; synthesising or evaluating evidence; justifying; reasoning given various sources; developing a plan or sequence of steps to approach a problem.
Thus, items that merely require the recollection of one piece of information make low cognitive demands, even if the knowledge itself might be quite complex. In contrast, items that require recalling more than one piece of knowledge and comparing and evaluating the competing merits of their relevance would be seen as having high cognitive demand, even if the knowledge itself is relatively simple. The difficulty of any item is therefore a combination of both the complexity and the range of knowledge it requires and the cognitive operations that are required to process this knowledge and thus resolve the item.
Therefore, the major factors that determine the difficulty of items assessing science achievement are:
- The number and the degree of complexity of the elements of knowledge demanded by the item;
- The level of familiarity and prior knowledge that students may have of the content, procedural and epistemic knowledge involved;
- The cognitive operation required by the item (e.g., recall, analysis, evaluation); and
- The extent to which forming a response depends on models or abstract scientific ideas.
This four-factor approach allows for a broad measure of scientific literacy across a wide range of student abilities. Its relative simplicity should minimise the problems encountered in its application. This cognitive framework will also facilitate the development of an a priori definition of the descriptive parameters of the reporting proficiency scale (see Table 4.11).
Test Characteristics
Figure 4.3 relates the basic components of the PISA 2018 framework for the scientific literacy assessment to the structure and the content of assessment units (cf. Figure 4.1). As a starting point to construct assessment units, it shows the need to consider the contexts that will serve as stimulus material, the competencies required to respond to the questions or issues, the knowledge central to the units and the cognitive demand.
A test unit is introduced by specific stimulus material, which may be a brief written passage, or text accompanying a table, chart, graph or diagram. In units newly created for PISA 2015 (and reused in PISA 2018), the stimulus may also be non-static, such as an animation or an interactive simulation. The items within a unit are independently scored. Sample units can be found at www.oecd.org/pisa/test.
PISA groups items into units in order to use contexts that are as realistic as possible and that reflect the complexity of real-world situations, while making efficient use of testing time. Using situations about which several questions can be posed, rather than asking separate questions about a larger number of different situations, reduces the overall time required for a student to become familiar with the material in each question. However, score points (i.e. items) within a unit must remain independent of one another. Furthermore, because this approach reduces the number of different assessment contexts, it is important to ensure that there is an adequate range of contexts in order to minimise bias due to the choice of contexts.
PISA 2018 test units will require the use of all three scientific competencies and draw on all three forms of science knowledge. In most cases, each test unit will assess multiple competencies and knowledge categories. Individual items, however, will primarily assess only one form of knowledge and one scientific competency.
Students need to read the stimulus material and questions in the PISA 2018 science literacy assessment, raising the issue that a certain level of reading literacy is required to display science literacy. To address this concern, stimulus material and questions will use language that is as clear, simple and brief as possible, with straightforward syntax, while still conveying the appropriate meaning. The number of concepts introduced per paragraph will be limited. Questions within the domain of science that specifically assess reading or mathematical literacy will be avoided.
Item response formats
Three classes of items will be used to assess the competencies and scientific knowledge identified in the framework. The items will be divided approximately equally into these three classes:
Simple multiple-choice: Items calling for
- The selection of a single response from four options; or
- The selection of a “hot spot”, i.e. an answer that is a selectable element within a graphic or text.
Complex multiple-choice: Items calling for
- Responses to a series of related “Yes/No” questions that are treated as a single item for scoring purposes (the typical format in 2006);
- The selection of more than one response from a list;
- The completion of a sentence by selecting drop-down choices to fill multiple blanks; or
- “Drag-and-drop” responses, allowing students to move elements on screen to complete a task requiring matching, ordering or categorising.
Constructed response: Items calling for written or drawn responses. Constructed response items in the scientific literacy assessment typically call for a written response ranging from a phrase to a short paragraph (i.e. two to four sentences of explanation). A small number of constructed response items call for the drawing of, for example, a graph or diagram. In the computer-based assessment, any such items will be supported by simple drawing editors that are specific to the response required.
In PISA 2018, some responses will also be captured by interactive tasks, such as a student’s choices when manipulating variables in a simulated scientific enquiry. Responses to these interactive tasks will generally be scored as complex multiple-choice items; some, however, are sufficiently open-ended that they are considered constructed responses.
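The scoring rule for complex multiple-choice items described above (a series of related Yes/No responses treated as a single item) can be sketched in code. This is an illustration only: the all-or-nothing credit rule and the function name are assumptions for the sake of the example, not the official PISA rubric.

```python
# Hypothetical sketch: scoring a complex multiple-choice item in which a
# series of related Yes/No responses is treated as a single item.
# The all-or-nothing credit rule is an assumed simplification.

def score_complex_mc(responses, key):
    """Return 1 (full credit) only if every Yes/No response matches the key."""
    if len(responses) != len(key):
        raise ValueError("response vector must match key length")
    return 1 if all(r == k for r, k in zip(responses, key)) else 0

# Example: three related Yes/No statements about an investigation
key = ["Yes", "No", "Yes"]
print(score_complex_mc(["Yes", "No", "Yes"], key))   # 1 (full credit)
print(score_complex_mc(["Yes", "Yes", "Yes"], key))  # 0 (no credit)
```

The point of the sketch is that the sub-responses have no independent score: a single wrong selection forfeits the score point for the whole item.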
Assessment structure
Computer-based assessment will again be the primary mode of delivery for all domains, including scientific literacy, in PISA 2018. Science literacy items that were newly developed for the computer-based delivery of PISA 2015 will only be available in the computer-based assessment in PISA 2018. However, a paper-based assessment instrument (with a smaller selection of items) will be provided for countries choosing not to test their students on the computer.
PISA units are organised into 30-minute sections called “clusters.” Each cluster includes either only units new to PISA 2015 or only units that have been used in previous PISA cycles, known as “trend units”.
Each student will be assigned one two-hour test form. A test form is composed of four clusters, each designed to occupy 30 minutes of testing time. The clusters are placed in multiple computer-based test forms according to a rotated test design.
Each student will spend a total of one hour on two clusters of reading literacy, with the remaining time assigned to either one or two of the additional domains of science, mathematics, and global competence. While the paper-based assessment will be limited to trend items and will not include any newly developed material, the computer-based instrument will include both newly developed items and trend items. Care will be taken when transposing paper-based trend items to an on-screen format so that the presentation, response format and cognitive demand remain comparable.
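The form-assembly arithmetic described above (four 30-minute clusters per two-hour form, rotated across forms) can be illustrated with a toy example. The cluster labels and the tiny cluster pool are invented for illustration; the actual PISA rotation design involves many more clusters and forms.

```python
# Hypothetical sketch of assembling two-hour test forms from 30-minute
# clusters: two reading clusters plus two clusters from other domains,
# rotated so each cluster appears in different positions across forms.
# Cluster labels (R1, S1, ...) are invented for this example.
from itertools import permutations

reading = ["R1", "R2"]        # two reading clusters (one hour in total)
other = ["S1", "M1"]          # e.g. one science and one mathematics cluster

forms = []
for r_pair in permutations(reading, 2):      # vary reading cluster order
    for o_pair in permutations(other, 2):    # vary order of other domains
        forms.append(list(r_pair) + list(o_pair))

for form in forms:
    assert len(form) == 4  # four clusters = two hours of testing time
print(len(forms))  # 4 distinct forms from this tiny pool
```

Rotating cluster positions in this way helps separate item difficulty from position effects (e.g. fatigue late in the session), which is one motivation for a rotated design.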
The desired score-point balance between the three types of knowledge (content, procedural and epistemic) and the three content knowledge categories is shown in Table 4.9. These weightings are broadly consistent with the previous framework and reflect a consensus view amongst the experts consulted in the writing of this framework.
The target score-point balance for the scientific competencies is given in Table 4.10. These weightings have been chosen so that the assessment is evenly split between items which draw predominantly on content knowledge and items that draw predominantly on procedural or epistemic knowledge.
Item contexts will be spread across personal, local/national and global settings roughly in the ratio of 1:2:1, as was the case in 2006 when science was first the major domain of assessment. A wide variety of areas of application will be selected, subject to the constraints imposed by the distribution of score points shown in Table 4.9 and Table 4.10.
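The 1:2:1 spread of item contexts can be made concrete with a short calculation. The pool size of 100 score points below is an assumed round number for illustration, not a figure from the framework.

```python
# Illustrative only: splitting a hypothetical pool of 100 item score points
# across personal, local/national and global contexts in the 1:2:1 ratio
# described in the framework. The pool size is an assumption.

total_score_points = 100
ratio = {"personal": 1, "local/national": 2, "global": 1}
ratio_sum = sum(ratio.values())  # 4

allocation = {ctx: total_score_points * w // ratio_sum
              for ctx, w in ratio.items()}
print(allocation)  # {'personal': 25, 'local/national': 50, 'global': 25}
```

In other words, roughly half of the score points sit in local/national contexts, with the remainder split evenly between personal and global settings.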
Reporting scales
The development of scales of student achievement – descriptions of what students at different levels of attainment can do – is essential for reporting and comparing student achievement across the world. The 2015 framework (upon which this framework is largely based) explicitly defined the parameters of increasing competence and progression, allowing item developers to design items representing this growth in ability (Kane, 2006[23]; Mislevy and Haertel, 2006[24]). The scale has been extended down to Level “1b”, which specifically addresses and describes students at the lowest level of ability. These students demonstrate very minimal evidence of scientific literacy and would previously not have been included in the reporting scales.
References
[5] American Association for the Advancement of Science (1989), Science for All Americans, Oxford University Press, New York, http://www.project2061.org/publications/sfaa/online/sfaatoc.htm.
[21] Bloom, B. (ed.) (1956), Taxonomy of Educational Objectives, Book 1: Cognitive Domain, Longmans Publishing.
[23] Brennan, R. (ed.) (2006), Validation, Praeger Publishers and the American Council on Education.
[20] Brookhart, S. and A. Nitko (2011), “Strategies for constructing assessments of higher order thinking skills”, Assessment of Higher Order Thinking Skills, pp. 327-359.
[6] COSCE (2011), Informe ENCIENDE, Enseñanza de las Ciencias en la Didáctica Escolar para edades tempranas en España, Confederación de Sociedades Científicas de España, Madrid, https://www.cosce.org/pdf/Informe_ENCIENDE.pdf.
[19] Davis, S. and C. Buckendahl (2011), “Incorporating cognitive demand in credentialing examinations”, Assessment of Higher Order Thinking Skills, pp. 327-359.
[14] Duschl, R. (2008), “Science Education in Three-Part Harmony: Balancing Conceptual, Epistemic, and Social Learning Goals”, Review of Research in Education, Vol. 32/1, pp. 268-291, https://doi.org/10.3102/0091732x07309371.
[7] Fensham, P. (1985), “Science for all: A reflective essay”, Journal of Curriculum Studies, Vol. 17/4, pp. 415-435, https://doi.org/10.1080/0022027850170407.
[17] Kuhn, D. (2010), “Teaching and learning science as argument”, Science Education, Vol. 94/5, pp. 810-824, https://doi.org/10.1002/sce.20395.
[16] Longino, H. (1990), Science as Social Knowledge, Princeton University Press, Princeton.
[12] Millar, R. et al. (1994), “Investigating in the school science laboratory: conceptual and procedural knowledge and their influence on performance”, Research Papers in Education, Vol. 9/2, pp. 207-248, https://doi.org/10.1080/0267152940090205.
[8] Millar, R. and J. Osborne (eds.) (1998), Beyond 2000: Science Education for the Future, School of Education, King’s College London, http://www.nuffieldfoundation.org/sites/default/files/Beyond%202000.pdf.
[11] Ministry of Education, Chinese Taipei (1999), General Senior High School Curriculum, Ministry of Education website, http://www.k12ea.gov.tw.
[24] Mislevy, R. and G. Haertel (2006), “Implications of evidence-centered design for educational testing”, Educational Measurement: Issues and Practice, Vol. 25/4, pp. 6-20.
[9] National Research Council (2012), A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas, The National Academies Press, Washington, D.C., https://doi.org/10.17226/13165.
[3] OECD (2006), Assessing Scientific, Reading and Mathematical Literacy: A Framework for PISA 2006, PISA, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264026407-en.
[2] OECD (2003), The PISA 2003 Assessment Framework: Mathematics, Reading, Science and Problem Solving Knowledge and Skills, PISA, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264101739-en.
[1] OECD (1999), Measuring Student Knowledge and Skills: A New Framework for Assessment, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264173125-en.
[18] Osborne, J. (2010), “Arguing to Learn in Science: The Role of Collaborative, Critical Discourse”, Science, Vol. 328/5977, pp. 463-466, https://doi.org/10.1126/science.1183944.
[13] Roberts, R., R. Gott and J. Glaesser (2010), “Students’ approaches to open‐ended science investigation: the importance of substantive and procedural understanding”, Research Papers in Education, Vol. 25/4, pp. 377-407, https://doi.org/10.1080/02671520902980680.
[4] Rychen, D. and L. Salganik (eds.) (2001), The Definition and Selection of Key Competencies, OECD website, http://www.oecd.org/pisa/35070367.pdf.
[10] Ständige Konferenz der Kultusminister der Länder in der Bundesrepublik Deutschland (2005), Beschlüsse der Kultusministerkonferenz: Bildungsstandards im Fach Biologie für den Mittleren Schulabschluss (Jahrgangsstufe 10), Wolters Kluwer Deutschland GmbH, München, Neuwied, https://www.kmk.org/fileadmin/Dateien/veroeffentlichungen_beschluesse/2004/2004_12_16-Bildungsstandards-Biologie.pdf.
[22] Webb, N. (1997), Criteria for Alignment of Expectations and Assessments in Mathematics and Science Education, National Institute for Science Education, Washington, D.C.
[15] Ziman, J. (1978), Reliable Knowledge: An Exploration of the Grounds for Belief in Science, Cambridge University Press, Cambridge.