2. PIAAC Cycle 2 assessment framework: Literacy

Jean-François Rouet
Chair
Centre national de la recherche scientifique, University of Poitiers
Mary Anne Britt
Northern Illinois University
Egil Gabrielsen
University of Stavanger
Johanna Kaakinen
University of Turku
Tobias Richter
University of Würzburg
In cooperation with Marylou Lennon
Educational Testing Service

The term literacy (from the Latin "litera": letter, written sign) refers to one's ability to comprehend and use written sign systems. Literacy may be defined both as a set of generalised abilities [e.g., decoding words and comprehending sentences; (Perfetti, 1985[1])] and a set of cultural practices and values that vary across human groups and communities (Street and Street, 1984[2]). Thus, the literate individual is both a person who is able to make use of a broad diversity of written materials in the service of a wide range of activities, and a person who is knowledgeable of the cultural standards of their communities of practice (Rouet and Britt, 2017[3]).

Since the invention of written sign systems some five thousand years ago, written communication has played an increasing role in societies throughout the world. The percentage of humans who can read and write has increased steadily over the past centuries, even though an estimated 750 million adults still cannot read and write fluently, with the highest rates of illiteracy found in areas with the lowest levels of economic development (UNESCO, 2017[4]). In countries where people are given a chance to become literate, teenagers' and adults' actual levels of mastery vary to a remarkable extent. Furthermore, higher individual levels of literacy are usually associated with better living conditions, jobs and health (Morrisroe, 2014[5]; OECD, 2013[6]).

One reason why literacy has become so important is that, in the modern world, written communication pervades most aspects of people's lives, whether personal, social or professional. One study found that typical American adults read on nine occasions per day on average, slightly more on working days than on weekends and holidays, and mostly in relation to practical tasks (White, Chen and Forsyth, 2010[7]). Depending on the context and purpose, reading may take a wide diversity of forms. Adults sometimes read extended pieces of continuous text for the sake of enjoyment or just to comprehend an author's main points, but they more often scan pages to search for information that matches specific needs or questions. To serve these purposes, adults read a wide variety of texts ranging from e-mails to leaflets to timetables and instruction manuals. While doing so, they use a broad diversity of strategies and tactics, all of which belong to the construct of literacy (Alexander and The Disciplined Reading and Learning Research Laboratory, 2012[8]; Britt, Rouet and Durik, 2018[9]; Goldman, 2004[10]).

The spread of computers and Internet access over the past two decades has further heightened the importance of literacy skills in contemporary societies (Leu et al., 2017[11]). There is little that an illiterate person can do with a smartphone, a tablet or a laptop. Written signs are ubiquitous in most computer applications, including the most widely used video sharing platforms. Digital reading is increasingly important for people to access jobs, services and goods, and to participate in communities.

For these reasons, acquiring valid and reliable estimates of what adults can do with printed texts has become a prominent target for public institutions. Several rounds of studies have been conducted at an international level over the past decades.

Since the early 1990s, three large-scale cross-country assessments of literacy and basic skills of the adult population have taken place. The first was the International Adult Literacy Survey (IALS) (Murray, Kirsch and Jenkins, 1998[12]), which was conducted in 22 countries and regions over the period 1994-1998. The second, known as the Adult Literacy and Life Skills Survey (ALL) (OECD/Statistics Canada, 2005[13]; 2011[14]), was undertaken over 2002-2008 in 11 countries. A successor to IALS and ALL – the Programme for the International Assessment of Adult Competencies (PIAAC Cycle 1) (OECD, 2013[6]) – was administered in 39 countries and regions over the period 2011-2019 (National Center for Education Statistics (NCES), n.d.[15]).

IALS, ALL and PIAAC share a common conceptual framework and approach to the assessment of literacy skills, covering the conceptualisation of literacy, the approach to measurement, data quality and reporting of results (Kirsch and Lennon, 2017[16]).

One of the major areas in which there has been a change between the three assessments concerns the skill domains assessed. IALS included three separate domains of literacy: prose literacy, document literacy and quantitative literacy. The major change between IALS and ALL was that a new numeracy scale replaced the quantitative scale, while the prose and document scales were kept.

The measurement framework for literacy in PIAAC Cycle 1 was heavily based on those used in IALS and ALL, but in PIAAC literacy was assessed on a single scale rather than on two separate scales (prose and document literacy in ALL). PIAAC Cycle 1 also expanded the kinds of texts covered by including electronic texts in addition to the continuous (prose), non-continuous (document) and combined texts of the IALS and ALL frameworks. In addition, the assessment of literacy was extended to include a measure of reading component skills. This was designed for people with low levels of literacy competence and focused on assessment of the foundational skills needed to gain basic meaning from texts. The skills tested were print vocabulary, sentence processing and passage fluency.

PIAAC Cycle 1 also differed from IALS and ALL in that it was mainly an integrated computer-based assessment. The majority of respondents were assessed using a laptop computer. A pen-and-paper version of the literacy (and numeracy) assessment was available for respondents who had insufficient familiarity with computers or preferred the paper-and-pencil version for other reasons (26% of respondents).

During the past 10 years, the use of the internet has grown rapidly all over the world. According to a recent estimate (ITU, 2017[17]), more than half (53.6%) of the world’s households have internet access – a dramatic increase from just under 20% in 2005 and just over 30% in 2010. The number of individuals using the internet has naturally grown as internet access has become more common. It is estimated that there are 3.5 billion internet users today, representing almost half (48%) of the world’s population (ITU, 2017[17]).

The rapid growth of internet use means that in today’s world, reading often takes place in digital environments: people search for and read timetables, maps and calendars online, look for products and product reviews and make purchases on the internet, look up information in Wikipedia, read newspapers and blogs online, and participate in social media. The medium for accessing information is rapidly moving from print to screens to handheld devices, such as smartphones. As digital media afford different types of activities than traditional print media, reading in digital environments poses different cognitive demands and challenges to the reader than reading in print (Mangen and van der Weel, 2016[18]). While digital environments offer features that can support comprehension, recent evidence suggests that reading comprehension of informational texts may suffer when text material is presented in digital form in comparison to print (Delgado et al., 2018[19]).

One notable difference between print and digital media is that printed text is static and linear in nature, whereas digital texts are often hypertexts, which can include embedded hyperlinks to other sources, including multimedia. The ability to navigate within this interrelated network of documents, and the ability to locate relevant information amid potentially distracting information, are thus crucial aspects of skilful digital reading (Salmerón et al., 2018[20]).

The current framework aims to describe reading literacy in the present-day context, in which digital reading is a central aspect of active participation in society. Three core sets of abilities are required for skilful reading in the complex information environments readers interact with: 1) the ability to navigate within and between networked documents; 2) the ability to comprehend and integrate multiple and sometimes disparate sources of information; and 3) the ability to critically evaluate the information presented (Britt and Gabrys, 2001[21]; Rouet and Potocki, 2018[22]; Salmerón et al., 2018[20]).

As a consequence of the increasing uses of digital communication, there is a need to expand the construct of literacy to account for the advanced skills that enable people to interact with complex repositories of information. These include an ability to identify relevant items within sets of texts, and to scan the selected texts in order to locate information of interest. During their search for relevant information, readers use a range of criteria to discard irrelevant or inadequate information while identifying the most helpful resources. In addition, proficient readers need to comprehend information not just from one text, but also across multiple texts potentially containing fixed or animated graphs, still pictures and video segments in addition to written information. As evidenced in research studies, integrating information from multiple documents requires specific mental processes that come on top of the more traditional comprehension processes (Rouet, Britt and Potocki, 2019[23]). Finally, being literate increasingly requires readers to distance themselves from the information they are processing, questioning the accuracy, completeness and currency of the information, as well as the competence, perspective and potential biases of the authors and publishers. These validation processes (Britt, Richter and Rouet, 2014[24]; Singer, 2013[25]) rest on specific types of knowledge and heuristics to which any assessment of literacy should give due consideration.

As the domain expands to represent more sophisticated strategies, care must also be taken to describe the skills of those who only have a limited ability to comprehend and use written texts. Studies like PIAAC have found that in many countries a substantial proportion of adults still experience difficulties with the foundational processes that support any kind of literate activity: identifying written words or symbols, making sense of simple sentences, and drawing basic inferences. There have been calls to increase the precision of the assessment at the lower end of the proficiency scale. The PIAAC framework acknowledges the role of these foundational skills and aims to provide satisfactory coverage of their distribution in the population.

Finally, an assessment of literacy must also consider people's active engagement in literate activities both at work and in their daily life. Exposure to written texts has been found to be a factor in children's acquisition of literacy skills (Stanovich and West, 1989[26]). Likewise, adults who encounter frequent opportunities to use texts are likely to develop better skills and to maintain them over time. Therefore, information about individual exposure to and engagement with texts may help in understanding the links between skill use and proficiency.

PIAAC Cycle 2 uses a parsimonious definition of literacy that aims to highlight a set of core cognitive processes involved in most, if not all, literate activities. At the same time, the definition acknowledges that literate activities "do not happen in a vacuum" (Snow and the RAND Reading Study Group, 2002[27]). Instead, they are carried out in the service of one's goals, one's development and participation in society. These diverse purposes and contexts contribute to shaping the way individuals make use of written texts, hence their inclusion in the definition.

"Literacy is accessing, understanding, evaluating and reflecting on written texts in order to achieve one’s goals, to develop one’s knowledge and potential and to participate in society."  
        

We elaborate on each part of the definition below, emphasising some important theoretical advances in the domain, as well as evidence from the first PIAAC cycle and former research studies.

"Literacy…"  
        

Although the etymology of the word literacy directly points to written language, in past decades the term has been used to refer to an increasingly broad array of domains and interests, for instance in "health literacy", "financial literacy" or "computer literacy". In some definitions, the activities denoted by these phrases have only a remote and incidental connection to written language. In the present framework, the word is taken in its broadest but also most literal sense, to describe the proficient use of written language artefacts such as texts and documents, regardless of the type of activity or interest considered. This characterisation of literacy highlights both the universality of written language (i.e., its potential to serve an infinite number of purposes in an infinite number of domains) and the very high specificity of the core ability underlying all literate activities, that is, the ability to read written language. As demonstrated in neuroscience research, learning to read is a very special experience with consequences for the organisation of some areas of the brain (Dehaene, 2009[28]).

"is accessing…"  
        

Proficient readers are not just able to comprehend the texts they are faced with. They can also reach out to texts that are relevant to their purposes, and search for passages of interest within those texts (McCrudden and Schraw, 2007[29]; Rouet and Britt, 2011[30]). Searching text is cognitively distinct from reading for comprehension (Guthrie and Kirsch, 1987[31]). When searching, the proficient reader makes use of text organisers (such as tables of contents and headers) in order to inform relevance decisions; the proficient reader can also adjust the pace and depth of processing, alternating phases of quick skimming with phases of sustained, deep reading for comprehension. Finally, proficient readers are parsimonious: they may decide to quit a passage upon realising that it does not contain helpful information. In the PIAAC literacy framework, these processes are subsumed under the term "accessing".

"understanding…"  
        

Most definitions of literacy acknowledge that the primary goal of reading is for the reader to make sense of the contents of the text. This can range from something as basic as comprehending the meaning of the words to something as complex as comprehending the dispute between two authors making opposite claims on a social-scientific issue. Whatever the context, any literate activity (including accessing a piece of text or a passage within a text) requires some level of understanding. Theories of text comprehension (Kintsch, 1998[32]) usually distinguish the literal understanding of the message from a deeper level of understanding in which the reader integrates their prior knowledge with the text contents through the production of various types of inferences (i.e., a situation model). Prior knowledge of the domain has a strong (usually positive) impact on the deeper level of understanding.

"evaluating and reflecting…"  
        

Readers continually make judgements about a text they are approaching. They evaluate whether the text is appropriate for the task at hand and whether it will provide the information they need. Readers also make judgements about the accuracy and reliability of both the content and the source of the message (Bråten, Strømsø and Britt, 2009[33]; Richter, 2015[34]). They attempt to detect and explain any biases and gaps in the coherence or persuasiveness of the text. And, for some texts, they must make judgements about the quality of the text, both as a craft object and as a tool for acquiring information.

"on written text…"  
        

In the context of PIAAC Cycle 2, the phrase "written text" designates pieces of discourse primarily based on written language. Written texts may include non-verbal elements such as charts or illustrations. However, pictures, video and other visual media are not considered written texts per se.

A text typically includes two broad components: source and content. The source of the text is a set of parameters that identify the origin and dissemination of the text. The most typical source parameters are a description of the author (for instance, "Alfred Nobel, a Swedish chemist and businessman") and the publication medium and date of the text. But source information sometimes includes more specific details about the text, for instance "second edition" or "confidential". Although all texts have a source, source information is not always provided together with the content. In addition, emerging practices of online publishing and social media have tended to make it more challenging for the reader to identify the source of the text.

As in the first cycle of PIAAC (and in related studies such as PISA), the assessment of literacy will include a wide variety of text types, such as narrative, descriptive or argumentative. Texts in various formats, such as continuous, non-continuous or mixed will be included. Just as in the real world, some of these texts may be presented in a static way, meaning that the reader has only a limited opportunity to navigate through them,1 whereas others, especially in digital environments, contain interactive navigation tools such as interactive tables of contents, hyperlinks and other devices. The PIAAC definition of written texts encompasses both static and interactive materials.

"in order to achieve one’s goals,"  
        

Just as written languages were created to meet the needs of emergent civilisations, at an individual level, literacy is primarily a means for one to achieve their goals. Goals relate to personal activities but also to the workplace and to interaction with others. Literacy is increasingly important in meeting those needs, whether that means simply finding one’s way through a building or negotiating complex bureaucracies whose rules are commonly available only in written texts (and increasingly only in digital form). Literacy is also important in meeting adult needs for sociability, for entertainment and leisure, for developing one’s community and for work.

"to develop one’s knowledge and potential and to participate in society."  
        

Developing one's knowledge and potential highlights one of the most powerful consequences of being literate. Written texts may enable people to learn about topics of interest, but also to become skilled at doing things and to understand the rules of engagement with others.

Written communication is primarily and ultimately a consequence of humans being a sophisticated social species. Texts are communication artefacts: they serve the purpose of transmitting information, but also feelings and values, to others. As such, literacy contributes to building, nurturing and preserving social cohesion.

The PIAAC literacy assessment aims to provide a complete and accurate description of what adults can do with texts in a broad range of contexts and tasks. To that aim, the literacy domain is organised along a set of dimensions that ensure a broad coverage and a precise description of what people can do at each level of proficiency. In this section we describe the most important dimensions, which will be used to help define the proficiency levels for literacy.

Naturalistic reading is a complex and versatile process. Proficient readers can read extended passages of text systematically and intensively, but they can also quickly scan a page in search of a single keyword. How readers approach texts is primarily determined by their reading goals, which themselves are informed by the reader's understanding of the context and the task demands (Britt, Rouet and Durik, 2018[9]). PIAAC identifies three groups of processes that support most reading activities: accessing text, understanding, and evaluating (Figure 2.1).

The three processes correspond to those included in related assessments such as PIAAC Cycle 1 and PISA 2018. Table 2.1 shows the correspondence between the processes in these frameworks.

Accessing text encompasses a number of literacy processes whereby readers examine the text(s) available, select the most relevant text, scan contents in search of specific pieces of information and locate those pieces through various types of cues. In addition, accessing conveys the sense of navigating across various texts or passages within texts as a function of task demands and the reader's progress towards their goal.

The ability to access information within and across texts is a core component of skilful reading in print, and perhaps even more so in digital environments (Salmerón et al., 2018[20]). Successful navigation means that the reader is capable of searching for and locating relevant information within the texts, which is influenced by the type of question posed to the reader as well as the nature of the materials. When searching, the proficient reader also calibrates their depth of processing of the information, merely scanning task-irrelevant contents while pausing and engaging in deeper processing of passages they deem relevant to the task.

The task or the question the reader has in mind has a strong impact on how readers navigate within and between text documents (McCrudden and Schraw, 2007[29]). Identifying what information is relevant is only possible if the reader has formed an appropriate task model that provides specific criteria and guides the strategies used in searching for and locating relevant information (Britt, Rouet and Durik, 2018[9]). Theories of purposeful reading suggest that when reading with specific objectives in mind, incoming text information is constantly processed in the light of the task model (Britt, Rouet and Durik, 2018[9]). When task-relevant information is detected, attention narrows to meet the task demands (Kaakinen and Hyönä, 2014[35]). The complexity of the task model depends on the question posed to the reader: simple questions may only require the search for a match between the question item and information within the text, whereas forming an appropriate task model for a more complex question may require background knowledge and inferencing. Lack of related prior knowledge may thus make it harder to search for and locate relevant information (Kaakinen, Hyönä and Keenan, 2003[36]), as the reader’s task model might not specify what is relevant, forcing the reader to scrutinise all information in order to decide whether it is relevant or not.

The nature of the text materials obviously influences how easy or hard it is to access information from a text or set of texts. The PIAAC literacy framework distinguishes two types of search processes: identifying a relevant text from a set, and locating information within a single text.

Identifying a relevant text in a set. If the available material consists of multiple texts (for instance, several documents on the same topic), readers have to first search for and select the text that is expected to contain the most helpful information, disregarding the other items. Then readers need to search for and locate relevant information within that text (Britt, Rouet and Durik, 2018[9]). Searching for a relevant text in a set often involves using lists such as a table of contents (Dreher and Guthrie, 1990[37]) or the page showing the results of a query in a search engine. In selecting an item in this type of list, readers often use very simple heuristics such as the ranking of the items [priority given to the first items in the list, see (Fu and Pirolli, 2007[38]; Pan et al., 2007[39]; Wirth et al., 2007[40]) for evidence from search engine tasks] or the presence of highlighted information (Rouet et al., 2011[41]). However, in some tasks these simple heuristics may lead to suboptimal selections. For instance, in the Rouet et al. (2011[41]) study, 5th and 7th grade students were more likely to select irrelevant items when the items contained capitalised keywords. Moreover, if the materials contain a lot of distracting (irrelevant) information, the reader has to work harder to reject that information, which poses extra demands on their reasoning and working memory skills (Kaakinen and Hyönä, 2008[42]), and may cause them to forget the question (Rouet and Coutelet, 2008[43]).

Locating information within a text. When readers need to locate a relevant passage within a single text, signalling devices, such as headings and highlighting, can be used to facilitate the visual scanning and the identification of the relevant passage (Lemarié et al., 2008[44]). Knowing the function of text signals and using them while scanning a text are characteristics of proficient readers (Garner et al., 1986[45]; Potocki et al., 2017[46]).

Readers' search and locate processes pervade the whole reading cycle, from readers' initial decision of which text or passage they want to focus on, to their post-reading assessment of whether the passage contributes to reaching their goal (see below, "Evaluate and reflect").

A large number of reading activities involve the parsing and integration of one or several extended passage(s) of text in order to form a complete representation of what the text is about. Cognitive theories of text comprehension usually distinguish two levels of representation (Kintsch, 1998[32]): a representation of the literal content of the text (literal comprehension), and a representation integrating the literal content with the reader's prior knowledge through mapping and inference processes [inferential comprehension or "situation model"; (McNamara and Magliano, 2009[47]; Zwaan and Singer, 2003[48])]. In addition, theories of multiple text comprehension (Perfetti, Rouet and Britt, 1999[49]; Britt and Rouet, 2012[50]) consider that text comprehension sometimes includes a representation of source features together with the respective contents.

Literal comprehension requires readers to comprehend the meaning of written words (e.g., "the kitten") and semantic propositions (i.e., small groups of words usually containing a substantive and a verb, adverb or an adjective, such as "the kitten is sleeping"). Propositions are then organised into hierarchies corresponding to one or a few sentences (Kintsch and van Dijk, 1978[51]). Literal comprehension tasks involve a direct or paraphrase type of match between the question and target information within a passage (for instance, "what is the kitten doing?"). The reader may need to organise hierarchically or condense information at a local level in order to answer literal comprehension questions. Tasks requiring integration across entire text passages, such as identifying the main idea, summarising, or giving a title, are not considered literal, but rather inferential comprehension.

Inferential comprehension is the outcome of readers' integration of text information with their prior knowledge. The outcome is often labelled a "situation model" or "integrated text representation". Integrated text representations may be based on sentences but also on paragraphs or even on extended passages of text. As readers proceed through several sentences and paragraphs, they need to generate various types of inferences ranging from simple connecting inferences (such as the resolution of anaphora) to more complex coherence relationships (e.g. spatial, temporal, causal or claim-argument links). Sometimes the inference connects several portions of the text; in other cases, the inference is needed to connect the question and a text segment. Finally, the production of inferences is also needed in tasks requesting the reader to identify an implicit main idea, in order to produce a summary or a title for a given passage.

Multiple text inferential comprehension. When readers are faced with more than one text, integration and inference generation may be based on pieces of information located in different texts (Perfetti, Rouet and Britt, 1999[49]). Integration of information across texts poses a specific problem when the texts provide inconsistent or conflicting information. In those cases, readers must engage in evaluation processes in order to acknowledge and handle the conflict (Bråten, Strømsø and Britt, 2009[33]; Stadtler and Bromme, 2014[52]).

Competent readers can critically assess the quality of information in a text, even when the task does not explicitly require such an evaluation. The importance of evaluation as part of literacy has increased with the amount and heterogeneity of written information readers are faced with. Adult readers need to be able to evaluate to protect themselves from misinformation and propaganda and to make sense of conflicting information, such as political or scientific controversies. Evaluation can be based on attending to and assessing the accuracy, soundness, and task relevance of a text. The focus of these evaluations can be on the content or on the source of a text. Source evaluation plays a critical role when evaluating information from multiple texts, which sometimes provide discrepant or conflicting information (Bråten et al., 2011[53]; Leu et al., 2015[54]; Rouet and Britt, 2014[55]; Stadtler and Bromme, 2014[52]; Stadtler et al., 2013[56]). Handling conflict can require readers to assign discrepant claims to their respective sources and assess the credibility of the sources or believability of the claims (accuracy), to assess the relevance of the support or evidence provided for the discrepant claims (relevance), to evaluate the completeness of the provided perspectives and information from those possible (sufficiency), and to weigh these outcomes in order to reach a decision about the conflict.

Evaluating accuracy. The information conveyed in written texts can be more or less accurate, ranging from agreed upon facts to intentionally false information. Even websites conveying science information often contain inaccurate or misleading information (Allen et al., 1999[57]). The evaluation of the accuracy of claims and statements can be based on the content or on the source of the text. Content evaluation includes validation against one’s beliefs and knowledge (is the assertion true? Is it plausible? What information is presented to support the claim?) (Richter, Schroeder and Wöhrmann, 2009[58]). Readers can also assess accuracy indirectly, by identifying and assessing the source of the information (sourcing) (Britt and Aglinskas, 2002[59]; Wineburg, 1991[60]). For instance, the reader may ask whether the author is competent, well-informed and benevolent. When reading from web sources, readers may also check whether the information offered was submitted to any kind of editorial control prior to its publication (i.e., academic institutions, professional journalism vs. personal blogs or sites).

When dealing with conflicting information, readers have to be able to assign conflicting claims to different sources and use the credibility of the sources to assess the quality of information (Bråten, Strømsø and Britt, 2009[33]; Stadtler and Bromme, 2014[52]). Readers of multiple texts can also evaluate accuracy by comparing information across different sources (i.e., corroboration) (Britt and Aglinskas, 2002[59]; Wineburg, 1991[60]).

Evaluating soundness. The modern reader has to deal with texts that vary on a continuum of internal quality or soundness (Magliano et al., 2017[61]). In this framework, soundness encompasses two characteristics of discourse, namely completeness and internal consistency (Blair and Johnson, 1987[62]). Readers have to identify the completeness of the set of facts or evidence that is presented and to identify what is not accounted for or considered. Readers also have to identify perspectives presented in a text and assess whether all the important perspectives are represented. They may also have to account for any biases they find in the text. Evaluating bias may be based on language (does the text use neutral, factual language or colourful, evaluative language?) or on the source of the text (i.e., interpreting, explaining or resolving different author biases that may impact sufficiency).

When evaluating internal consistency, readers must identify the goal of a text (e.g., to persuade, to inform) and evaluate the quality of the information in achieving that goal (e.g., warranted and sound claim-reason connections or reasonable cause-effect relationships). Does the author provide the type of information that is expected given the structural organisation of the text, and what is the quality of that information for achieving the goal of the text? The evaluation of internal consistency can be especially challenging for argumentative texts (i.e., texts that attempt to convince readers to accept a proposition, or claim, by presenting supporting reasons; Galotti, 1989[63]) because consistency cannot be determined by formal logic (Toulmin, 1958[64]).

When facing multiple texts that contradict each other, readers need to become aware of the conflict, understand where the conflict comes from (e.g., texts reporting discrepant facts or proposing discrepant interpretations) and to find ways to deal with the conflict (Britt and Rouet, 2012[50]; Stadtler and Bromme, 2014[52]).

Evaluating task relevance. As discussed in the section on “Accessing text” above, evaluating task relevance takes place throughout the reading process, from the reader's attempt to locate a text or passage of interest, to their post-reading assessment of whether the text or passage they have read was helpful (i.e., post-reading task relevance assessment) (Rieh, 2002[65]). When evaluating task relevance after reading a passage, readers must reconsider the task or question using an activated schema to understand what is being asked for and how to achieve that goal state (Britt, Rouet and Durik, 2018[9]; Rouet, Britt and Durik, 2017[66]). They must then assess whether a text they have just read contributes to reaching the goal state.

Research suggests that there are two main routes to assessing task relevance. One consists of evaluating the content of the text; the other consists of evaluating the source (i.e., the person or the organisation responsible for authoring and disseminating the text). Both content and source evaluation can focus on accuracy, soundness or task relevance (Table 2.2). For instance, a layperson may realise that the text comes from a specialised medium (e.g., an academic journal or institution) and that the level of language and detail is not suited to their prior knowledge and goals. Importantly, task relevance evaluation requires readers to interpret the task or question using an activated schema to understand what is being asked for and how to achieve that goal state (Britt, Rouet and Durik, 2018[9]).

The PIAAC literacy assessment will include tasks involving multiple, possibly discrepant texts and a series of items assessing each of the evaluation processes.

Reflecting on the author's intent, purpose, and effectiveness. When evaluating texts, readers need to be aware of the author's intent or purpose for writing. Authors' purposes include to entertain, to inform, to explain or describe, and to persuade. Authors' purposes generally have to be inferred from the structure and form of the text, although they are sometimes stated explicitly, for instance in a preface, an overview, or in a separate text such as a publisher's leaflet or an interview with a journalist. Readers can also infer authors' purposes by acquiring information about the author's opinions, beliefs, attitudes, assumptions, or biases.

In addition to identifying the author’s purpose and viewpoint, the reader can evaluate how the author conveyed their points and whether it was effective. The structure of the text as well as tone, word choice and writing style can provide cues to author purpose and perspective. In the context of the PIAAC literacy study, "Reflect" represents tasks in which the reader is explicitly asked about authors' intentions, purposes or effectiveness.

Because handling conflict across texts includes all aspects of evaluating and reflecting, it is important to include units involving multiple, discrepant texts to assess the extent to which adults can meet the challenges involved in contemporary reading situations.

Texts are vehicles that convey the ideas, beliefs and intentions of their authors. They are communication artefacts anchored in space and time (Wineburg, 1994[67]). Every text involves a source (where the text comes from: author, date and so forth) and some content (what is said in the text). Source and content information are both important for comprehending and making use of texts (Perfetti, Rouet and Britt, 1999[49]). Moreover, with the advent of digital technology, laypersons have access to a growing diversity of textual materials. In addition to traditional genres such as a novel, a newspaper article or a cooking recipe, new genres have appeared such as blogs, forums, or instant messaging systems (e.g. Twitter). Furthermore, text genres tend to be presented in combination, such as when readers react to an online article or offer their versions of a cooking recipe. The profusion of text genres represents new opportunities, but also new challenges for contemporary readers. In addition, readers are increasingly faced with multiple texts that they may have to read in parallel in order to achieve their purpose. For instance, a person who seeks advice about a health issue may look up a web forum and read several messages posted by different people. The person may then turn to the website of a hospital to seek further information, and so on and so forth. Therefore, modern text comprehension involves an ability to make sense of multiple and sometimes heterogeneous sets of texts.

In this context, ensuring the coverage of the literacy domain is a challenge, as there is no universal categorisation of text types, genres and formats. The PIAAC literacy framework rests on a distinction between single and multiple texts (as defined by a distinct source). In addition, the framework relies on distinctions made in previous assessments, such as text types (e.g., narration, description), text format (i.e., continuous vs. non-continuous texts) and the presence of organising devices enabling readers to navigate within and across texts.

Text types describe the diversity of texts as prototypical representations of the world and communication acts. The most frequently encountered text types are description, narration, exposition, argumentation, instruction and transaction. Naturalistic texts are usually difficult to categorise, as they tend to cut across these prototypical categories. For example, a newspaper article might start with a specific story (narration), then provide definitions and background context (exposition), and end with a critical analysis (argumentation). Nevertheless, it is useful to categorise texts according to text type, based on the predominant characteristics of the text, in order to ensure that the instrument samples across a range of texts that represent different types of reading. The classification of texts used in the PIAAC literacy assessment is borrowed from that used in the previous PIAAC and PISA assessments.

Description is the type of text where the information refers to properties of objects in space. Descriptive texts are mostly meant to answer "what" or "how" type of questions. Descriptions can take several forms. Impressionistic descriptions present information from a subjective point of view reflecting the viewer's impressions of elements, relations, qualities and directions in space. Technical descriptions present information from a more objective and perspective-independent viewpoint. Frequently, technical descriptions use non-continuous text formats such as diagrams and illustrations. Typical examples of descriptions are a depiction of a particular place in a travelogue or diary, a catalogue, a geographical map, an online flight schedule or a description of a feature, function or process in a technical manual.

Narration is the type of text where the information refers to properties of characters and objects in time. Narration typically answers questions relating to "what", "when", "how" or "in what sequence". Why characters in stories behave as they do is another important question that narration typically answers. Narration can take different forms. Narratives present change from the point of view of subjective selection and emphasis, recording actions and events from the point of view of subjective impressions in time. Reports present change from the point of view of an objective situational frame, recording actions and events which can be verified by others. News stories are intended to enable readers to form their own independent opinion of facts and events based on the reporter's account. Typical examples of narrations are a novel, a biography, a play, a comic strip and a newspaper report of an event.

Exposition is the type of text meant to communicate concepts, phenomena and other mental constructs involving a set of interacting elements. The text provides an explanation of how the different elements interrelate in a meaningful whole and often answers questions about "how" and "why" (referring to enabling conditions and causal relationships). Expositions can take various forms. Expository essays provide a simple explanation of concepts, mental constructs or conceptions from a subjective point of view. Definitions explain how terms or names are interrelated with mental concepts. In showing these interrelations, the definition explains the meaning of words. Explications are a form of analytic exposition used to explain how a concept can be linked with words or terms. Minutes are a record of the results of meetings or presentations. Typical examples of expositions are a scholarly essay about the metabolism of sugar, a diagram showing a model of memory, and a graph of population trends.

Argumentation is the type of text that presents factual or interpretive claims about a situation, together with supporting reasons and warrants. Argumentative texts often answer "why" (as in, for instance, "why did this happen?" or "why should we do this?"), but also "what if" questions. An important subcategory of argumentative texts is persuasive and opinionative texts, referring to opinions and points of view. A "comment" relates the concepts of events, objects and ideas to a private system of thoughts, values and beliefs. "Scientific argumentation" relates concepts of events, objects and ideas to systems of thought and knowledge so that the resulting propositions can be verified as valid or non-valid. Examples of text objects in the text type category argumentation are a poster advertisement, the posts in an online forum and a web-based review of a book or film.

Instruction (sometimes called injunction) is the type of text that provides directions on what to do. Instructions present directions for certain behaviours in order to complete a task. Rules, regulations and statutes specify requirements for certain behaviours based on impersonal authority, such as practical validity or public authority. Examples of textual instruction are a cooking recipe, a series of diagrams showing a procedure for giving first aid and guidelines for operating digital software.

Transaction represents a written text that supports interpersonal communication, such as requesting that something is done, organising a meeting or making a social engagement with a friend. Before the spread of electronic communication, this kind of text was a significant component of some kinds of letters and, as an oral exchange, the principal purpose of many phone calls. Transactional texts are often personal in nature, rather than public, and this may help to explain why they do not appear to be represented in some of the corpora used to develop many text typologies. With the extreme ease of personal communication using e-mail, text messages, blogs and social networking websites, this kind of text has become much more significant as a reading text type in recent years. Transactional texts often build on common and possibly private understandings between communicators – though clearly, this feature is difficult to explore in a large-scale assessment. Examples of text objects in the text type transaction are everyday e-mail and text message exchanges between colleagues or friends that request and confirm arrangements.

The building blocks of texts are written words, which can be organised according to the rules of syntax, coherence and cohesion, but also according to spatial dimensions such as in lists, tables and charts. In the PIAAC literacy framework, continuous texts are defined as sequences of sentences and paragraphs. These may fit into even larger structures such as sections, chapters and books. Non-continuous texts are defined as words, sentences or passages organised in a list or matrix format (Kirsch and Mosenthal, 1990[68]).

In both print and digital environments, written texts are often associated with non-verbal representations, such as graphics and pictures. The PIAAC assessment does not focus on these representations per se, but some tasks may involve the use of text in combination with graphics or pictures.

The PIAAC literacy framework also considers mixed texts, which involve both continuous and non-continuous components. In well-constructed mixed texts, the components (for example, a prose explanation including a graph or table) are mutually supportive through coherence and cohesion links at the local and global level. Mixed text is a common format in magazines, reference books and reports, where authors employ a variety of presentations to communicate information. In digital texts, authored web pages are typically mixed texts, with combinations of lists, paragraphs of prose and often graphics. Message-based texts, such as online forms, e-mail messages and forums, also combine texts that are continuous and non-continuous in format.

Naturalistic texts vary from a few lines to several hundred pages. Depending on their length and purpose, texts may include a range of devices aimed at representing content and facilitating access to passages of interest.

Organisation is primarily signalled by the sequence of sentences and texts, along with the use of different font sizes, font types such as italic and boldface, or borders and patterns. Various types of discourse markers also provide information about how ideas are organised in the text. For example, sequence markers (first, second, third, etc.) signal the relation of the units introduced to each other and indicate how the units relate to the larger surrounding text. Causal connectors (therefore, for this reason, since, etc.) signify cause-effect relationships between parts of a text.

Larger texts often come with titles and headers, paragraphs and sections. These markers also provide clues to text boundaries (with space and a new header showing section completion, for example). Still longer texts are organised into chapters and include a table of contents and one or several indexes. Readers' awareness and use of these devices are critical to reading texts effectively for specific purposes (Goldman and Rakestraw Jr., 2000[69]).

Digital texts also come with a number of tools that let the user access and display specific passages. Some of these tools are identical to those found in printed texts (e.g., headers), whereas others are more specific to the electronic medium. Examples include windows, scroll bars, tabs, but also embedded hyperlinks. There is growing evidence that the processes involved in reading printed and digital texts differ, partly because of differences in presentation formats and navigation tools (Delgado et al., 2018[19]; Naumann, 2015[70]; OECD, 2011[71]). Therefore, it is important to assess readers' ability to deal with texts featuring a diversity of content representation and navigation tools.

The PIAAC literacy assessment will include texts that vary on a continuum of length (i.e., single vs. multiple pages), as well as in the diversity and density of content representation and access devices.

As mentioned in the introduction to this section, a text is defined by its source and its content. The PIAAC literacy framework defines single texts as texts that originate in a single source, i.e., an author, a publication medium, and a date of publication [other dimensions of the complex construct of a "source" will not be discussed here; see Britt et al. (1999[72]) for a more detailed analysis of the construct of a source]. Multiple texts are defined by having different authors, or being published through different channels or at different times.

It is important to note that in this framework the distinction between single and multiple texts is in principle independent from the amount of information contained in the text(s). A single text can be as short as a single sentence and as long as a whole book or website, as long as it has a single author (or group of authors), publication medium and date. Conversely, multiple texts can take the form of a series of brief passages, for instance in a web forum where different people post messages at different times. A single text can also contain embedded sources, that is, references to various authors or texts (Rouet and Britt, 2014[55]; Strømsø et al., 2013[73]).

Items in a set of multiple texts may have different relationships to each other: some texts may corroborate, complete, support or provide evidence for other texts, whereas others may disagree, contradict or conflict with others. Readers' cognitive representation of a set of texts together with their respective sources and the network of intertext relationships has been termed a "documents model" (Perfetti, Rouet and Britt, 1999[49]).

Table 2.3 summarises the dimensions of texts that are considered in the PIAAC literacy framework.

Reading pervades all domains of an individual's life. Reading activities are normally situated in a social situation and may serve a range of purposes from personal to professional and civic. Both the motivation to read and the interpretation of the content may be influenced by the context. As a result, the PIAAC literacy framework defines three main types of contexts that will be represented in the assessment:

  a) Work and occupation. Written texts play an important role in a wide range of occupations. Uses of text in an occupational context include finding employment, workplace finance, and being on the job (e.g., regulations, organisation, safety instructions). However, the materials used in the PIAAC literacy assessment do not include specialised job-specific texts, which would pose the problem of prerequisite background knowledge.

  b) Personal use. Reading is also important for personal purposes. Many adults engage in reading when dealing with interpersonal relationships, personal finance, housing, and insurance. They also increasingly make use of written materials in addressing health and safety issues (e.g., disease prevention and treatment, safety and accident prevention, first aid, and staying healthy). Adults also use texts in relation to their consumer activities: credit and banking, savings, advertising, making purchases, and maintaining personal possessions. Finally, texts are important in organising leisure and recreation time, including travel, restaurants, and material read for leisure and recreation itself (games etc.).

  c) Social and civic contexts. Finally, literacy is essential to adults' participation in social and civic life. Community and citizenship includes materials dealing with community resources, public services and staying informed. Education and training includes materials that deal with opportunities for further learning.

The construct of literacy encompasses what readers can do with texts and also what they comprehend and remember from the texts. This warrants the design of testing situations in which test-takers may be asked to complete tasks either with the text available or after they have read the text, based on their memory for text information. Research suggests that answering comprehension questions with and without text availability draws in part on distinct mental processes, and that assessment tasks without the text available might be more sensitive to the quality of the reading processes and less dependent on reader motivation and test-taking strategies (Ozuru et al., 2007[74]; Schroeder, 2011[75]). However, the PIAAC literacy assessment focuses on what adults can do with texts, and therefore it is based on scenarios involving questions and one or several texts that remain available throughout the task. This is arguably the most common scenario in adults' daily uses of text (White, Chen and Forsyth, 2010[7]).

The PIAAC assessment of literacy is based on test units in which participants are asked to make use of one or several texts in order to answer a set of questions. A short introduction usually provides some context and motivation for the unit. Each question elicits one of the core processes defined in the framework (see section on cognitive task demands). Questions are presented one by one in a blocked format in order to decrease the influence of test-taking strategies and to reduce variance in test completion time.

The texts used as stimuli reflect texts that test-takers may encounter in real life. Many of them are directly drawn from authentic materials with little, if any, adaptation. This means that no effort is made to make these texts easier to read or to improve their organisation or presentation. Using naturalistic texts, sometimes even clearly suboptimal ones (for instance, poorly organised or using complex language), ensures a high level of face validity. However, no artificial difficulty or flaw is introduced at the time of test design.

Questions can be designed using a wide range of response formats, such as constructed (open) responses, true-false judgements, multiple choice, or responses based on filling a blank or highlighting a text passage, to cite just some of the most common types. Computerised test delivery also affords additional response modes, such as "drag and drop". The form in which responses are collected – the response format – varies according to what is considered appropriate given the kind of evidence that is being collected, and also according to the pragmatic constraints of a large-scale assessment.

Response formats can place demands on specific cognitive processes. For example, multiple-choice comprehension questions typically depend more on decoding skills than open constructed-response items do, because readers have to decode the response options and distractors (Cain and Oakhill, 2006[76]; Ozuru et al., 2007[74]). Conversely, constructed responses tap into written production as much as comprehension skills. Several studies suggest that the response format has a significant effect on the performance of different groups (Grisay and Monseur, 2007[77]; Schwabe, McElvany and Trendtel, 2015[78]). Finally, participants in different countries may be more or less familiar with different response formats. Consequently, the use of a diversity of response formats is recommended to ensure precision and to reduce potential biases. However, consistent with the general guidelines for PIAAC Cycle 2, the assessment of literacy will not include any constructed-response items. Besides removing the need for human scoring, this reduces the confounding of comprehension and written production skills.

The deployment of computer-based assessment in PIAAC creates the opportunity to implement adaptive testing. Adaptive testing enables higher levels of measurement precision using fewer items per individual participant. This is accomplished by administering items that are targeted to the ability range of participants at different points in the ability distribution.

Adaptive testing has the potential to increase the resolution and sensitivity of the assessment, most particularly at the lower end of the performance distribution. For example, participants who perform poorly on items that assess their ease and efficiency of reading (e.g. reading fluency) will likely struggle with highly complex multiple-text items. Thus, there would be benefit in providing additional lower-level texts for those participants to better assess specific aspects of their comprehension.

The Literacy Expert Group recommends the following distribution of items based on a typology of cognitive task demands, text size and contexts.

The rationale for the recommended distribution per cognitive task demands is as follows: a substantial number of items (45%) should involve text understanding, both literal and inferential, as this is considered a core process present in most if not all reading activities. Due to its increased importance in digital environments, the category "access" (which involves identifying texts in a set and locating information within texts) should also be broadly represented (35%). Finally, about 20% of the tasks should involve one type of evaluation or reflection about the text.

As regards text size, most tasks (60%) will involve texts presented on a single page, with the view that some of these need to be simple enough to assess basic levels of literacy. Some of these short texts may involve multiple sources (e.g., a series of short messages on a web forum page). However, acknowledging that readers most often face texts distributed across multiple pages (either from one or from several sources), the test will also include multi-page units. It is expected that tasks focusing on the process of "understanding" will be proportionally more represented in single-page units, whereas "access" and "evaluate" tasks should be more frequent in multi-page units.

Table 2.4 presents the recommended distribution of items as a function of text size (i.e., single vs. multiple pages) and cognitive task demands.

It is further recommended that a majority of the test units (goal: 60%) include single source texts.

A broad range of tasks drawn from realistic contexts is meant to help ensure that no group of respondents will be either advantaged or disadvantaged based on their familiarity with, or interest in, a particular context. The recommended percentages of tasks for work, personal, community and education types of contexts are 15%, 40%, 30% and 15%, respectively.

No specific recommendation is made regarding a distribution of tasks across dimensions of text types or response formats, beyond the general recommendation to ensure a broad diversity and a representation of as many types as possible.

Reading fluency can be defined as an individual’s ability to read words, sentences and connected text efficiently (Kuhn and Stahl, 2003[79]), i.e. both quickly and accurately. Fluent readers master the basic reading processes of recognising written words, assigning meaning to these words, and establishing a coherent sentence meaning by way of syntactic parsing and semantic integration. They do so without using a large amount of working memory and attentional resources (LaBerge and Samuels, 1974[80]; Perfetti, 1985[1]). Therefore, fluent readers have more cognitive resources available to invest in higher-level comprehension processes such as inferences and reading strategies (Walczyk et al., 2004[81]). The differential allocation of mental resources to low- vs. higher-level processes in struggling vs. fluent readers accounts for the strong link between fluent reading and text-level comprehension outcomes found in many studies and in all age groups ranging from primary school to adult readers (García and Cain, 2014[82]; Klauda and Guthrie, 2008[83]; Richter et al., 2013[84]).

To better assess reading fluency, the PIAAC Cycle 2 assessment will again include a measure of reading component skills. The components assessment tasks are designed to inform our understanding of the basic reading skills that underlie proficient literacy performance. These tasks help describe what low-literate adults can do and therefore form a basis for learning, instruction, and policy with respect to helping low-literate adults achieve higher literacy levels (Sabatini and Bruce, 2009[85]). In response to the OECD's requirement that the results of the components assessment be generalisable to the overall population, the components tasks will be administered to a representative subsample of all individuals who take the full literacy assessment.

The reading components assessment will include two sets of tasks, both of which were administered in the first cycle of PIAAC. The first set focuses on the ability to process meaning at the sentence level. Respondents will be shown a series of sentences, which increase in complexity, and be asked to identify if the sentence does or does not make sense in terms of properties of the real world or the internal logic of the sentence. The second set of tasks focuses on passage comprehension. For these tasks, respondents are asked to read passages where, at certain points, they must select a word from two provided alternatives so that the text makes sense [see sample tasks in (OECD, 2019[86])].

Because PIAAC Cycle 2 will be administered on tablets, it will be possible to precisely record both accuracy and response times for the component tasks. The accuracy data in the sentence verification and passage comprehension tasks will serve as indicators of the mastery of basic reading comprehension processes. They will be included in the scaling of the items in the PIAAC literacy assessment, increasing measurement precision in the lower range of the scale. The response times will serve as an indicator of fluency in basic reading processes, allowing researchers to explore its potential contribution to the mastery of the more complex literacy tasks in the PIAAC literacy assessment.

The concept of reading engagement refers to the importance of reading to an individual and to the extent to which reading plays a role in their daily life. Empirical studies with children and adults have shown that differences in engagement are systematically related to differences in performance on assessments. In particular, studies with different age groups provide evidence for an upward causal spiral: more proficient readers read more, and this exposure to printed texts promotes their reading development and leads to higher proficiency (Guthrie and Wigfield, 2000[87]; Mol and Bus, 2011[88]). The construct of engagement encompasses objective aspects, such as the amount and diversity of reading one experiences in daily life, and also subjective aspects, such as one's interest in reading, perception of control over reading, and reading efficacy. The PIAAC literacy assessment will capture core objective aspects of reading engagement as part of the background questionnaire.

Metacognition, or one's awareness, monitoring and control of one's own cognitive processes, is also considered an important aspect of reading literacy (Baker, 1989[89]). However, due to methodological and practical constraints, the PIAAC literacy study will not include any specific assessment of metacognition in reading. Metacognition will be indirectly assessed through its contribution to the more complex reading tasks, which require strategic decisions and self-regulation to different degrees.

The difficulty of literacy tasks is expected to depend on three series of factors, namely a) characteristics of the text(s); b) characteristics of the question; and c) the specific interaction between a question and a text (or set of texts).

In addition, some of these factors affect the difficulty of the task regardless of the specific cognitive demands involved, whereas other factors are specific to a certain type of task demand. Table 2.5 lists the main text, task, and text-by-task factors driving difficulty in general, and then more specifically for each type of cognitive task demand.

References

[8] Alexander and The Disciplined Reading and Learning Research Laboratory (2012), “Reading into the future: Competence for the 21st century”, Educational Psychologist, Vol. 47/4, pp. 259-280, https://doi.org/10.1080/00461520.2012.722511.

[57] Allen, E. et al. (1999), “How reliable is science information on the web?”, Nature, Vol. 402/6763, p. 722, https://doi.org/10.1038/45370.

[89] Baker, L. (1989), “Metacognition, comprehension monitoring, and the adult reader”, Educational Psychology Review, Vol. 1/1, pp. 3-38, https://doi.org/10.1007/bf01326548.

[62] Blair, J. and R. Johnson (1987), “Argumentation as dialectical”, Argumentation, Vol. 1/1, pp. 41-56, https://doi.org/10.1007/bf00127118.

[53] Bråten, I. et al. (2011), “The role of epistemic beliefs in the comprehension of multiple expository texts: Toward an integrated model”, Educational Psychologist, Vol. 46/1, pp. 48-70, https://doi.org/10.1080/00461520.2011.538647.

[33] Bråten, I., H. Strømsø and M. Britt (2009), “Trust matters: Examining the role of source evaluation in students’ construction of meaning within and across multiple texts”, Reading Research Quarterly, Vol. 44/1, pp. 6-28, https://doi.org/10.1598/rrq.44.1.1.

[59] Britt, M. and C. Aglinskas (2002), “Improving students’ ability to identify and use source information”, Cognition and Instruction, Vol. 20/4, pp. 485-522, https://doi.org/10.1207/s1532690xci2004_2.

[21] Britt, M. and G. Gabrys (2001), “Teaching advanced literacy skills for the World Wide Web”, in Wolfe, C. (ed.), Webs We Weave: Learning and Teaching on the World Wide Web, Academic Press, New York, https://doi.org/10.1016/B978-012761891-3/50007-2.

[72] Britt, M. et al. (1999), “Content integration and source separation in learning from multiple texts”, in Goldman, S., A. Graesser and P. van den Broek (eds.), Narrative Comprehension, Causality, and Coherence: Essays in Honor of Tom Trabasso, Lawrence Erlbaum Associates, Mahwah, NJ.

[24] Britt, M., T. Richter and J. Rouet (2014), “Scientific literacy: The role of goal-directed reading and evaluation in understanding scientific information”, Educational Psychologist, Vol. 49/2, pp. 104-122, https://doi.org/10.1080/00461520.2014.916217.

[50] Britt, M. and J. Rouet (2012), “Learning with multiple documents: Component skills and their acquisition”, in Lawson, M. and J. Kirby (eds.), Enhancing the Quality of Learning: Dispositions, Instruction, and Learning Processes, Cambridge University Press.

[9] Britt, M., J. Rouet and A. Durik (2018), Literacy beyond Text Comprehension, Taylor and Francis, New York, https://doi.org/10.4324/9781315682860.

[76] Cain, K. and J. Oakhill (2006), “Assessment matters: Issues in the measurement of reading comprehension”, British Journal of Educational Psychology, Vol. 76/4, pp. 697-708, https://doi.org/10.1348/000709905x69807.

[28] Dehaene, S. (2009), Reading in the Brain, Penguin Viking, New York.

[19] Delgado, P. et al. (2018), “Don’t throw away your printed books: A meta-analysis on the effects of reading media on reading comprehension”, Educational Research Review, Vol. 25, pp. 23-38, https://doi.org/10.1016/j.edurev.2018.09.003.

[37] Dreher, M. and J. Guthrie (1990), “Cognitive processes in textbook chapter search tasks”, Reading Research Quarterly, Vol. 25/4, pp. 323-339, https://doi.org/10.2307/747694.

[38] Fu, W. and P. Pirolli (2007), “SNIF-ACT: A cognitive model of user navigation on the World Wide Web”, Human–Computer Interaction, Vol. 22/4, pp. 355-412, https://doi.org/10.1080/07370020701638806.

[63] Galotti, K. (1989), “Approaches to studying formal and everyday reasoning”, Psychological Bulletin, Vol. 105/3, pp. 331-351, https://doi.org/10.1037/0033-2909.105.3.331.

[82] García, J. and K. Cain (2014), “Decoding and reading comprehension: A meta-analysis to identify which reader and assessment characteristics influence the strength of the relationship in English”, Review of Educational Research, Vol. 84/1, pp. 74-111, https://doi.org/10.3102/0034654313499616.

[45] Garner, R. et al. (1986), “Children’s knowledge of structural properties of expository text”, Journal of Experimental Psychology, Vol. 78, pp. 411-416.

[10] Goldman, S. (2004), “Cognitive aspects of constructing meaning through and across multiple texts”, in Shuart-Ferris, N. and D. Bloome (eds.), Uses of Intertextuality in Classroom and Educational Research, Information Age Publishing, Greenwich, CT.

[69] Goldman, S. and J. Rakestraw Jr. (2000), “Structural aspects of constructing meaning from text”, in Kamil, M. et al. (eds.), Handbook of Reading Research, Volume III, Lawrence Erlbaum Associates, Mahwah, NJ.

[77] Grisay, A. and C. Monseur (2007), “Measuring the equivalence of item difficulty in the various versions of an international test”, Studies in Educational Evaluation, Vol. 33/1, pp. 69-86, https://doi.org/10.1016/j.stueduc.2007.01.006.

[31] Guthrie, J. and I. Kirsch (1987), “Distinctions between reading comprehension and locating information in text”, Journal of Educational Psychology, Vol. 79/3, pp. 220-227, https://doi.org/10.1037/0022-0663.79.3.220.

[87] Guthrie, J. and A. Wigfield (2000), “Engagement and motivation in reading”, in Kamil, M. et al. (eds.), Handbook of Reading Research, Volume III, Lawrence Erlbaum Associates, Mahwah, NJ.

[17] ITU (2017), Measuring the Information Society Report 2017, http://www.itu.int/en/ITU-D/Statistics/Pages/publications/mis2017.aspx (accessed on 9.10.2018).

[35] Kaakinen, J. and J. Hyönä (2014), “Task relevance induces momentary changes in the functional visual field during reading”, Psychological Science, Vol. 25/2, pp. 626-632, https://doi.org/10.1177/0956797613512332.

[42] Kaakinen, J. and J. Hyönä (2008), “Perspective-driven text comprehension”, Applied Cognitive Psychology, Vol. 22/3, pp. 319-334, https://doi.org/10.1002/acp.1412.

[36] Kaakinen, J., J. Hyönä and J. Keenan (2003), “How prior knowledge, WMC, and relevance of information affect eye fixations in expository text”, Journal of Experimental Psychology: Learning, Memory, and Cognition, Vol. 29/3, pp. 447-457, https://doi.org/10.1037/0278-7393.29.3.447.

[32] Kintsch, W. (1998), Comprehension: A Paradigm for Cognition, Cambridge University Press, Cambridge.

[51] Kintsch, W. and T. van Dijk (1978), “Toward a model of text comprehension and production”, Psychological Review, Vol. 85/5, pp. 363-394, https://doi.org/10.1037/0033-295x.85.5.363.

[16] Kirsch, I. and M. Lennon (2017), “PIAAC: A new design for a new era”, Large-scale Assessments in Education, Vol. 5/11, https://doi.org/10.1186/s40536-017-0046-6.

[68] Kirsch, I. and P. Mosenthal (1990), “Exploring document literacy: Variables underlying the performance of young adults”, Reading Research Quarterly, Vol. 25/1, pp. 5-30, https://doi.org/10.2307/747985.

[83] Klauda, S. and J. Guthrie (2008), “Relationships of three components of reading fluency to reading comprehension”, Journal of Educational Psychology, Vol. 100/2, pp. 310-321, https://doi.org/10.1037/0022-0663.100.2.310.

[79] Kuhn, M. and S. Stahl (2003), “Fluency: A review of developmental and remedial practices”, Journal of Educational Psychology, Vol. 95/1, pp. 3-21, https://doi.org/10.1037/0022-0663.95.1.3.

[80] LaBerge, D. and S. Samuels (1974), “Toward a theory of automatic information processing in reading”, Cognitive Psychology, Vol. 6/2, pp. 293-323, https://doi.org/10.1016/0010-0285(74)90015-2.

[44] Lemarié, J. et al. (2008), “SARA: A text-based and reader-based theory of signaling”, Educational Psychologist, Vol. 43/1, pp. 27-48, https://doi.org/10.1080/00461520701756321.

[54] Leu, D. et al. (2015), “The new literacies of online research and comprehension: Rethinking the reading achievement gap”, Reading Research Quarterly, Vol. 50/1, pp. 37-59, https://doi.org/10.1002/rrq.85.

[11] Leu, D. et al. (2017), “New literacies: A dual-level theory of the changing nature of literacy, instruction, and assessment”, Journal of Education, Vol. 197/2, pp. 1-18, https://doi.org/10.1177/002205741719700202.

[61] Magliano, J. et al. (2017), “The modern reader: Should changes to how we read affect research and theory?”, in Schober, M., D. Rapp and M. Britt (eds.), The Routledge Handbook of Discourse Processes (Routledge Handbooks in Linguistics), Routledge/Taylor & Francis Group, https://doi.org/10.4324/9781315687384.

[18] Mangen, A. and A. van der Weel (2016), “The evolution of reading in the age of digitisation: An integrative framework for reading research”, Literacy, Vol. 50/3, pp. 116-124, https://doi.org/10.1111/lit.12086.

[29] McCrudden, M. and G. Schraw (2007), “Relevance and goal-focusing in text processing”, Educational Psychology Review, Vol. 19/2, pp. 113-139, https://doi.org/10.1007/s10648-006-9010-7.

[47] McNamara, D. and J. Magliano (2009), “Toward a comprehensive model of comprehension”, in Ross, B. (ed.), The Psychology of Learning and Motivation, Elsevier, https://doi.org/10.1016/s0079-7421(09)51009-2.

[88] Mol, S. and A. Bus (2011), “To read or not to read: A meta-analysis of print exposure from infancy to early adulthood”, Psychological Bulletin, Vol. 137/2, pp. 267-296, https://doi.org/10.1037/a0021890.

[5] Morrisroe, J. (2014), Literacy Changes Lives: A New Perspective on Health, Employment and Crime, National Literacy Trust, London, https://literacytrust.org.uk/documents/652/2014_09_01_free_research_-_literacy_changes_lives_2014.pdf.pdf.

[12] Murray, T., I. Kirsch and L. Jenkins (1998), Adult Literacy in OECD Countries: Technical Report on the First International Adult Literacy Survey, National Center for Education Statistics, Washington, DC, https://nces.ed.gov/pubs98/98053.pdf.

[15] National Center for Education Statistics (NCES) (n.d.), PIAAC Participating Countries, https://nces.ed.gov/surveys/piaac/countries.asp (accessed on 31.12.2018).

[70] Naumann, J. (2015), “A model of online reading engagement: Linking engagement, navigation, and performance in digital reading”, Computers in Human Behavior, Vol. 53, pp. 263-277, https://doi.org/10.1016/j.chb.2015.06.051.

[86] OECD (2019), The Survey of Adult Skills: Reader’s Companion, Third Edition, OECD Skills Studies, OECD Publishing, Paris, https://dx.doi.org/10.1787/f70238c7-en.

[6] OECD (2013), OECD Skills Outlook 2013: First Results from the Survey of Adult Skills, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264204256-en.

[71] OECD (2011), PISA 2009 Results: Students On Line: Digital Technologies and Performance (Volume VI), PISA, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264112995-en.

[14] OECD/Statistics Canada (2011), Literacy for Life: Further Results from the Adult Literacy and Life Skills Survey, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264091269-en.

[13] OECD/Statistics Canada (2005), Learning a Living: First Results of the Adult Literacy and Life Skills Survey, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264010390-en.

[74] Ozuru, Y. et al. (2007), “Influence of question format and text availability on the assessment of expository text comprehension”, Cognition and Instruction, Vol. 25/4, pp. 399-438, https://doi.org/10.1080/07370000701632371.

[39] Pan, B. et al. (2007), “In Google we trust: Users’ decisions on rank, position, and relevance”, Journal of Computer-Mediated Communication, Vol. 12/3, pp. 801-823, https://doi.org/10.1111/j.1083-6101.2007.00351.x.

[1] Perfetti, C. (1985), Reading Ability, Oxford University Press, New York.

[49] Perfetti, C., J. Rouet and M. Britt (1999), “Toward a theory of documents representation”, in van Oostendorp, H. and S. Goldman (eds.), The Construction of Mental Representations During Reading, Lawrence Erlbaum Associates Publishers, Mahwah, NJ.

[46] Potocki, A. et al. (2017), “Children’s visual scanning of textual documents: Effects of document organization, search goals, and metatextual knowledge”, Scientific Studies of Reading, Vol. 21/6, pp. 480-497, https://doi.org/10.1080/10888438.2017.1334060.

[34] Richter, T. (2015), “Validation and comprehension of text information: Two sides of the same coin”, Discourse Processes, Vol. 52/5-6, pp. 337-355, https://doi.org/10.1080/0163853x.2015.1025665.

[84] Richter, T. et al. (2013), “Lexical quality and reading comprehension in primary school children”, Scientific Studies of Reading, Vol. 17/6, pp. 415-434, https://doi.org/10.1080/10888438.2013.764879.

[58] Richter, T., S. Schroeder and B. Wöhrmann (2009), “You don’t have to believe everything you read: Background knowledge permits fast and efficient validation of information”, Journal of Personality and Social Psychology, Vol. 96/3, pp. 538-558, https://doi.org/10.1037/a0014038.

[65] Rieh, S. (2002), “Judgment of information quality and cognitive authority in the Web”, Journal of the American Society for Information Science and Technology, Vol. 53/2, pp. 145-161, https://doi.org/10.1002/asi.10017.

[3] Rouet, J. and M. Britt (2017), Literacy in 2030. Report commissioned by the OECD’s Education 2030 project, OECD, Paris.

[55] Rouet, J. and M. Britt (2014), “Multimedia learning from multiple documents”, in Mayer, R. (ed.), The Cambridge Handbook of Multimedia Learning, 2nd Edition (Cambridge Handbooks in Psychology, pp. 813-841), Cambridge University Press, Cambridge, https://doi.org/10.1017/cbo9781139547369.039.

[30] Rouet, J. and M. Britt (2011), “Relevance processes in multiple document comprehension”, in McCrudden, M., J. Magliano and G. Schraw (eds.), Text Relevance and Learning from Text, Information Age Publishing, Greenwich, CT.

[66] Rouet, J., M. Britt and A. Durik (2017), “RESOLV: Readers’ representation of reading contexts and tasks”, Educational Psychologist, Vol. 52/3, pp. 200-215, https://doi.org/10.1080/00461520.2017.1329015.

[23] Rouet, J., M. Britt and A. Potocki (2019), “Multiple-text comprehension”, in Dunlosky, J. and K. Rawson (eds.), The Cambridge Handbook of Cognition and Education (Cambridge Handbooks in Psychology, pp. 356-380), Cambridge University Press, Cambridge, https://doi.org/10.1017/9781108235631.015.

[43] Rouet, J. and B. Coutelet (2008), “The acquisition of document search strategies in grade school students”, Applied Cognitive Psychology, Vol. 22/3, pp. 389-406, https://doi.org/10.1002/acp.1415.

[22] Rouet, J. and A. Potocki (2018), “From reading comprehension to document literacy: Learning to search for, evaluate and integrate information across texts”, Infancia y Aprendizaje, Vol. 41/3, pp. 415-446, https://doi.org/10.1080/02103702.2018.1480313.

[41] Rouet, J. et al. (2011), “The influence of surface and deep cues on primary and secondary school students’ assessment of relevance in Web menus”, Learning and Instruction, Vol. 21/2, pp. 205-219, https://doi.org/10.1016/j.learninstruc.2010.02.007.

[85] Sabatini, J. and K. Bruce (2009), “PIAAC Reading Component: A Conceptual Framework”, OECD Education Working Papers, No. 33, OECD Publishing, Paris, https://dx.doi.org/10.1787/220367414132.

[20] Salmerón, L. et al. (2018), “Chapter 4. Comprehension processes in digital reading”, in Barzillai, M. et al. (eds.), Learning to Read in a Digital World (Studies in Written Language and Literacy, 17) (pp. 91-120), John Benjamins Publishing Company, Amsterdam, https://doi.org/10.1075/swll.17.04sal.

[75] Schroeder, S. (2011), “What readers have and do: Effects of students’ verbal ability and reading time components on comprehension with and without text availability”, Journal of Educational Psychology, Vol. 103/4, pp. 877-896, https://doi.org/10.1037/a0023731.

[78] Schwabe, F., N. McElvany and M. Trendtel (2015), “The school age gender gap in reading achievement: Examining the influences of item format and intrinsic reading motivation”, Reading Research Quarterly, Vol. 50/2, pp. 219-232, https://doi.org/10.1002/rrq.92.

[25] Singer, M. (2013), “Validation in reading comprehension”, Current Directions in Psychological Science, Vol. 22/5, pp. 361-366, https://doi.org/10.1177/0963721413495236.

[27] Snow, C. and the RAND Reading Study Group (2002), Reading for Understanding: Toward an R&D Program for Reading Comprehension, RAND, Santa Monica, CA, https://www.rand.org/pubs/monograph_reports/MR1465.html.

[52] Stadtler, M. and R. Bromme (2014), “The content-source integration model: A taxonomic description of how readers comprehend conflicting scientific information”, in Rapp, D. and J. Braasch (eds.), Processing Inaccurate Information: Theoretical and Applied Perspectives from Cognitive Science and the Educational Sciences, MIT Press, Cambridge, MA.

[56] Stadtler, M. et al. (2013), “Dealing with uncertainty: Readers’ memory for and use of conflicting information from science texts as function of presentation format and source expertise”, Cognition and Instruction, Vol. 31/2, pp. 130-150, https://doi.org/10.1080/07370008.2013.769996.

[26] Stanovich, K. and R. West (1989), “Exposure to print and orthographic processing”, Reading Research Quarterly, Vol. 24/4, pp. 402-433, https://doi.org/10.2307/747605.

[2] Street, B. and B. Street (1984), Literacy in Theory and Practice, Cambridge University Press, New York.

[73] Strømsø, H. et al. (2013), “Spontaneous sourcing among students reading multiple documents”, Cognition and Instruction, Vol. 31/2, pp. 176-203, https://doi.org/10.1080/07370008.2013.769994.

[64] Toulmin, S. (1958), The Uses of Argument, Cambridge University Press, Cambridge.

[4] UNESCO (2017), “Literacy rates continue to rise from one generation to the next”, UNESCO Fact Sheet No. 45, UNESCO Institute for Statistics, Paris, http://uis.unesco.org/sites/default/files/documents/fs45-literacy-rates-continue-rise-generation-to-next-en-2017.pdf.

[81] Walczyk, J. et al. (2004), “Children’s compensations for poorly automated reading skills”, Discourse Processes, Vol. 37/1, pp. 47-66, https://doi.org/10.1207/s15326950dp3701_3.

[7] White, S., J. Chen and B. Forsyth (2010), “Reading-related literacy activities of American adults: Time spent, task types, and cognitive skills used”, Journal of Literacy Research, Vol. 42/3, pp. 276-307, https://doi.org/10.1080/1086296x.2010.503552.

[67] Wineburg, S. (1994), “The cognitive representation of historical texts”, in Leinhardt, G., I. Beck and C. Stainton (eds.), Teaching and Learning in History, Erlbaum, Hillsdale, NJ.

[60] Wineburg, S. (1991), “Historical problem solving: A study of the cognitive processes used in the evaluation of documentary and pictorial evidence”, Journal of Educational Psychology, Vol. 83/1, pp. 73-87, https://doi.org/10.1037/0022-0663.83.1.73.

[40] Wirth, W. et al. (2007), “Heuristic and systematic use of search engines”, Journal of Computer-Mediated Communication, Vol. 12/3, pp. 778-800, https://doi.org/10.1111/j.1083-6101.2007.00350.x.

[48] Zwaan, R. and M. Singer (2003), “Text comprehension”, in Graesser, A., M. Gernsbacher and S. Goldman (eds.), Handbook of Discourse Processes, Erlbaum, Mahwah, NJ.

Note

← 1. Navigation in a static piece of continuous text is always possible by simply shifting one's focus of attention from one passage of the text to another, by skimming through passages, and by browsing through pages and sections in the case of long texts.

Metadata, Legal and Rights

This document, as well as any data and map included herein, are without prejudice to the status of or sovereignty over any territory, to the delimitation of international frontiers and boundaries and to the name of any territory, city or area. Extracts from publications may be subject to additional disclaimers, which are set out in the complete version of the publication, available at the link provided.

© OECD 2021

The use of this work, whether digital or print, is governed by the Terms and Conditions to be found at http://www.oecd.org/termsandconditions.