Annex C. Technical notes on analyses in this report

Use of staff and centre weights

The statistics presented in this report were derived from data obtained through samples of centres, centre leaders and staff (Annex B). For these statistics to be meaningful for a country, they need to reflect the whole population from which the sample was drawn and not merely the sample itself. Thus, survey weights must be used in order to obtain design-unbiased estimates of population or model parameters.

Final weights allow the production of country-level estimates from the observed sample data. The estimation weight indicates how many population units are represented by a sampled unit. The final weight is the combination of many factors reflecting the probabilities of selection at the various stages of sampling and the response obtained at each stage. Other factors may also come into play as dictated by special conditions to maintain the unbiasedness of the estimates (e.g. adjustment for staff working in more than one centre). A detailed description of the sampling and weighting procedures can be found in the TALIS Starting Strong 2018 Technical Report (OECD, 2019[10]).

Statistics presented in this report that are based on the responses of centre leaders and that contribute to estimates related to centre leaders were estimated using centre weights (CNTRWGT). Results based only on responses of staff, or on combined responses of staff and leaders (i.e. responses from centre leaders merged with staff responses), were weighted by staff weights (STAFFWGT).
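To illustrate how the final weights enter the computations, the minimal sketch below (in Python, with hypothetical column names apart from the weight variables named above) produces a country-level weighted estimate from staff-level data; it illustrates the principle and is not the production code used for the report.

```python
import pandas as pd

def weighted_mean(df: pd.DataFrame, value_col: str, weight_col: str = "STAFFWGT") -> float:
    """Country-level estimate of a mean from sampled staff, using final weights."""
    d = df[[value_col, weight_col]].dropna()
    return (d[value_col] * d[weight_col]).sum() / d[weight_col].sum()

# Example (hypothetical outcome column):
# estimate = weighted_mean(staff_df, "hours_worked")
# Centre-level statistics would use CNTRWGT on a centre-level data frame instead.
```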

Standard errors and significance tests

Standard errors

The statistics in this report represent estimates based on samples of staff and centres, rather than values that could be calculated if every staff member and leader in every country had answered every question. Consequently, it is important to measure the degree of uncertainty of the estimates. In TALIS Starting Strong, each estimate has an associated degree of uncertainty that is expressed through a standard error. The use of confidence intervals provides a way to make inferences about the population statistics in a manner that reflects the uncertainty associated with the sample estimates. From an observed sample statistic and assuming a normal distribution, it can be inferred that the corresponding population result would lie within the confidence interval in 95 out of 100 replications of the measurement on different samples drawn from the same population. The reported standard errors were computed with a balanced repeated replication (BRR) methodology.
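The sketch below illustrates the BRR principle with a set of replicate weights: the statistic is re-estimated with each replicate weight, and the spread of the replicate estimates around the full-sample estimate yields the standard error. The number of replicates, the replicate weight names and any Fay adjustment used in TALIS Starting Strong are specified in the Technical Report; the column names and the plain BRR variance formula used here are assumptions for illustration.

```python
import numpy as np
import pandas as pd

def brr_standard_error(df, value_col, full_weight_col, replicate_weight_cols):
    """Standard error of a weighted mean via balanced repeated replication.

    Uses the plain BRR form Var = (1/R) * sum_r (theta_r - theta)^2; the exact
    formula applied in the study is documented in the Technical Report.
    """
    def wmean(weight_col):
        d = df[[value_col, weight_col]].dropna()
        return (d[value_col] * d[weight_col]).sum() / d[weight_col].sum()

    theta = wmean(full_weight_col)
    replicates = np.array([wmean(c) for c in replicate_weight_cols])
    return theta, np.sqrt(np.mean((replicates - theta) ** 2))

# Example (hypothetical replicate weight names):
# est, se = brr_standard_error(staff_df, "hours_worked", "STAFFWGT",
#                              [f"SRWGT{i}" for i in range(1, 101)])
```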

Differences between sub-groups

Differences between sub-groups defined by staff characteristics (e.g. teachers and assistants) and by centre characteristics (e.g. centres with a high concentration of children from socio-economically disadvantaged homes and centres with a low concentration of such children) were tested for statistical significance. All differences marked in bold in the data tables of this report are statistically significantly different from 0 at the 95% level. For differences between sub-groups, the standard error is calculated taking into account that the two sub-samples are not independent.
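A minimal sketch of such a test, assuming the same hypothetical staff-level data frame and replicate weights as above: re-estimating the difference within every replicate automatically reflects the dependence between the two sub-samples, and the difference is flagged as significant at the 95% level when the estimate exceeds roughly 1.96 times its standard error.

```python
import numpy as np

def brr_difference(df, value_col, group_col, group_a, group_b,
                   full_weight_col, replicate_weight_cols):
    """Difference between two sub-groups with a BRR standard error."""
    def weighted_gap(weight_col):
        means = {}
        for g in (group_a, group_b):
            d = df.loc[df[group_col] == g, [value_col, weight_col]].dropna()
            means[g] = (d[value_col] * d[weight_col]).sum() / d[weight_col].sum()
        return means[group_a] - means[group_b]

    gap = weighted_gap(full_weight_col)
    reps = np.array([weighted_gap(c) for c in replicate_weight_cols])
    se = np.sqrt(np.mean((reps - gap) ** 2))
    return gap, se, abs(gap / se) > 1.96  # estimate, standard error, significance flag

# Example: difference between teachers and assistants on a hypothetical outcome
# gap, se, significant = brr_difference(staff_df, "hours_worked", "role",
#                                       "teacher", "assistant", "STAFFWGT",
#                                       [f"SRWGT{i}" for i in range(1, 101)])
```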

Use of complex variables

Number of staff and children in the centre

TALIS Starting Strong asks leaders to indicate the number of staff in different categories working in their ECEC centres (leaders, teachers, assistants, staff for individual children, staff for special tasks, interns and other staff) and the number of girls and boys enrolled in the centre.

This information is used to derive several indicators describing the staff and children in the centre: 1) the share of different types of staff working at the centre (i.e. leaders, teachers, assistants and other staff); 2) the number of teachers and leaders compared to the total number of staff at the centre; 3) the number of children at the centre; 4) the number of staff per child at the centre. If the centre covers pre-primary education (ISCED level 02) and provision for children under age 3, children and staff at both levels are considered in those numbers.

The number of staff per child at the centre refers to the total number of staff working in a centre, regardless of their role, divided by the total number of children enrolled. Because the number of staff per individual child is very low, when specific examples are cited for comparative purposes, they are presented as “number of staff per ten children”, which is obtained by multiplying the number of staff per child by ten.
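The arithmetic behind these two presentations is straightforward; the short sketch below, with hypothetical count variables, makes the conversion explicit.

```python
def staff_per_child(n_staff: float, n_children: float) -> float:
    """Total staff (any role) divided by total children enrolled in the centre."""
    return n_staff / n_children

def staff_per_ten_children(n_staff: float, n_children: float) -> float:
    """Per-child ratio multiplied by ten, as used for cited examples."""
    return 10 * staff_per_child(n_staff, n_children)

# Example: a centre with 12 staff and 80 children
# staff_per_child(12, 80)        -> 0.15 staff per child
# staff_per_ten_children(12, 80) -> 1.5 staff per ten children
```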

These indicators differ from administrative data capturing similar constructs, for instance because TALIS Starting Strong data does not allow differentiation between part-time and full-time employment at the centre level. Furthermore, regulations often refer to staffing requirements at the group or classroom/playroom level, rather than for the centre as a whole.

Number of staff and children in the target group

A similar set of variables is also built at the level of the target group. TALIS Starting Strong asks staff to take the example of the target group (the first group of children they were working with on the last working day before the day of the Survey). Respondents indicate the category that best represents their role when working with this group of children (leader, teacher, assistant, staff for individual children, staff for special tasks, interns and other staff), as well as the number of girls and boys who made up the group.

This information is used to derive three indicators: 1) the number of children per target group; 2) the number of staff working with the same target group on the same day; and 3) the number of staff per child working with the same target group on the same day.

The number of staff per child with the same target group on the same day refers to the number of staff working with the same target group, regardless of their role, divided by the number of children in the target group. Because the number of staff per individual child is very low, when specific examples are cited for comparative purposes, they are presented as “number of staff per ten children”, which is obtained by multiplying the number of staff per child by ten.

The number of staff per child working with the same target group on the same day reflects a specific situation and is, therefore, different from the number of staff per child at the centre level. Staff may work with the same target group at different moments of the day rather than together, or may work part-time. Children in the same group may also be regrouped into different compositions over the day, and children’s attendance hours can differ. This concept also differs from regulated maximum numbers of children per staff member, as regulations may restrict which staff are counted (depending on their qualifications or role) and may be specific to the age group of the children.

As there is no variable identifying which target group each staff member referred to, several staff members may have referred to the same target group. This can result in bias, as some target groups may be over-represented in the data.

National quarters

Some analyses using the number of children or the number of staff per child (at the centre or target group level) require these continuous variables to be transformed into interval categories. To accommodate this need, the report makes use of national quarters. In each country, the weighted distribution of the continuous variable is split into four equally sized categories, following the rank order. For instance, the cut-off point between the first and the second quarter of the number of children per centre is the 25th percentile of the distribution of the number of children per centre in a specific country. As a result, the range of these intervals differs across countries and varies with the properties of the distribution in each country.
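The sketch below shows one way to assign observations to national quarters from the weighted distribution; the column names are hypothetical, and the handling of ties follows a simple convention that may differ in detail from the procedure used for the report.

```python
import numpy as np
import pandas as pd

def national_quarters(values: pd.Series, weights: pd.Series) -> pd.Series:
    """Assign each observation to a within-country quarter (1-4) of the weighted
    distribution of a continuous variable."""
    d = pd.DataFrame({"x": values, "w": weights}).dropna().sort_values("x")
    cumulative_share = d["w"].cumsum() / d["w"].sum()
    d["quarter"] = np.clip(np.ceil(cumulative_share * 4), 1, 4).astype(int)
    return d["quarter"].reindex(values.index)

# Example: quarters of the number of children per centre within one country
# centre_df["children_quarter"] = national_quarters(
#     centre_df["n_children_total"], centre_df["CNTRWGT"])
```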

Share of staff who left their ECEC centre in the previous year

Leaders participating in TALIS Starting Strong reported on the number of staff who left the ECEC centre in the previous year. The share of staff who left their ECEC centre in the previous year is obtained by dividing this variable by the total number of staff at the centre at the time leaders responded to the Survey.

Assessing process quality in TALIS Starting Strong

The quality of the various interactions between the ECEC workforce, children and parents involves several dimensions, corresponding to major domains of children’s learning, development and well-being. Given its multidimensional nature, process quality can be conceptualised as a set of indicators. In TALIS Starting Strong, these indicators are built from questions on practices that staff report being used at the ECEC centre or that they themselves use with the target group (the first group of children they worked with on their last working day before the Survey).

The indicators of process quality used in this report are the result of extensive scale evaluation, drawing on guidelines and experience from TALIS 2018 and prior cycles. Through this scaling evaluation, items in the survey on interactions between children and staff, and between parents/guardians and staff or children, are grouped into indicators that summarise responses to multiple questions about related practices. These include five indicators at the centre level (facilitating literacy development, facilitating numeracy development, facilitating emotional development, facilitating prosocial behaviour and facilitating engagement of parents/guardians) and two indicators at the target group level (behavioural support and adaptive practices). However, because TALIS Starting Strong measures the self-reported practices of staff from countries with different cultural backgrounds and in different settings (i.e. pre-primary education and centres for children under age 3), building these indicators entails a number of methodological issues. In particular, individual and cultural factors affect the interpretation of questions. This may produce differences in levels of endorsement or reported frequency in survey responses, and it may also affect the item correlation structure used to summarise the information, thus limiting the comparability of the resulting indicators. To use these indicators effectively in further analysis, it is important to consider their scale properties, such as reliability and validity in a cross-cultural context.

To understand whether the process quality indicators in TALIS Starting Strong could be considered comparable across countries and levels of ECEC, measurement invariance was tested. The most restrictive level of measurement invariance, scalar invariance, is reached once the indicator satisfies three properties:

  1. The structure of the indicator is the same across groups, meaning that the indicator is built using the same set of items in every group.

  2. The strengths of the associations between the indicator and the items (factor loadings) are equivalent across groups. This property makes it possible to claim that one unit of change in the indicator corresponds to the same amount of average change in the items that constitute the construct in every group.

  3. The intercepts/thresholds of all items are equivalent across groups. If the intercepts of the items are equivalent for all groups, then the expected values of the items are the same across groups when the value of the indicator is zero, and means can be compared across groups.

If only properties (1) and (2) are satisfied, then the indicator reaches metric invariance. If only property (1) is satisfied, the indicator reaches configural invariance.
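These three levels can be summarised in the standard notation of multi-group confirmatory factor analysis; the formulation below is a common textbook presentation added here for illustration, not a reproduction of the models estimated in the Technical Report.

```latex
% Measurement model for item $i$, respondent $j$ and group $g$ (country or ECEC level):
x_{ij}^{(g)} = \tau_i^{(g)} + \lambda_i^{(g)} \eta_j^{(g)} + \varepsilon_{ij}^{(g)}
% Configural invariance: the same items load on the indicator in every group $g$.
% Metric invariance:     \lambda_i^{(g)} = \lambda_i \quad \text{for all } g.
% Scalar invariance:     \lambda_i^{(g)} = \lambda_i \ \text{and} \ \tau_i^{(g)} = \tau_i \quad \text{for all } g.
```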

Indicators of process quality built for this publication did not reach scalar invariance. As a result, the means of process-quality indicators cannot be compared across countries. However, all process-quality indicators for pre-primary education (ISCED level 02) used in this publication reached metric invariance (Table A C.1). This means these indicators can be used for comparisons within countries and for cross-country comparisons of the strength of the association between process-quality indicators and other factors. With metric-invariant scales, the same items from the Survey are relevant to each dimension of process quality across countries. Therefore, these indicators of process quality are used to describe practices within each country and to examine how characteristics of the specific group of children, the centre and the responding staff member explain variation in practices across countries.

Some indicators of process quality used in this report only reached configural invariance for centres for children under age 3 (facilitating literacy development, facilitating emotional development and behavioural support; Table A C.1). Results using these indicators are meaningful within countries, but cannot be compared across countries.

By design, all indicators and dimensions have a midpoint of 10 and a standard deviation of 2. This means that indicators and dimensions with values above 12 can be considered high. The fact that all indicators and dimensions have the same midpoint helps interpret the level of implementation of a specific practice, regardless of whether the practice is expected to occur quite often in the target group (or centre) or not. Additional information on the construction and validation of the scales included in this report can be found in Chapter 11 of the TALIS Starting Strong 2018 Technical Report (OECD, 2019).
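As an illustration of this reporting metric only, a linear transformation of underlying scale scores to a midpoint of 10 and a standard deviation of 2 could look as follows; the actual construction and scoring of the scales are described in Chapter 11 of the Technical Report.

```python
import numpy as np

def to_report_metric(scores: np.ndarray, raw_midpoint: float, raw_sd: float) -> np.ndarray:
    """Map raw scale scores so that the chosen midpoint becomes 10 and the
    standard deviation becomes 2 (illustrative transformation only)."""
    return 10 + 2 * (scores - raw_midpoint) / raw_sd
```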

Table A C.1. Indicators of process quality in TALIS Starting Strong: levels of measurement invariance

| Dimension | Indicator | Practices (items from the Survey) | Level of measurement invariance: centres for children under age 3 | Level of measurement invariance: pre-primary education (ISCED level 02) |
| --- | --- | --- | --- | --- |
| Facilitating literacy and numeracy development (Practices used at the centre level, according to staff) | Facilitating literacy development | Play word games with the children; Play with letters with the children; Sing songs or rhymes with the children | Configural | Metric |
| | Facilitating numeracy development | Use sorting activities by shape or colour; Play number games; Sing songs about numbers; Help children to use numbers or to count; Refer to groups of objects by the size of the group | Metric | Metric |
| Facilitating socio-emotional development (Practices used at the centre level, according to staff) | Facilitating emotional development | Hug the children; Talk with children about feelings; Help children to talk about what makes them happy; Help children to talk about what makes them sad | Configural | Metric |
| | Facilitating prosocial behaviour | Encourage sharing among children; Encourage children to help each other; Encourage children playing in small groups to include other children; Encourage children if they comfort each other | Metric | Metric |
| Group organisation and individual support (Practices used by staff with the target group) | Behavioural support | I help children to follow the rules; I calm children who are upset; When the activities begin, I ask children to quieten down; I address children’s disruptive behaviour that slows down other children’s learning; I help children understand the consequences if they do not follow the rules | Configural | Metric |
| | Adaptive practices | I set daily goals for the children; I explain how a new activity relates to children’s lives; I give different activities to suit different children’s level of development; I give different activities to suit different children’s interests; I adapt my activities to differences in children’s cultural background | Metric | Metric |
| Facilitating engagement of parents/guardians (Practices used at the centre level, according to staff) | Staff engagement with parents and guardians | Parents or guardians can get in touch with ECEC staff easily; Parents or guardians are informed about the development, well-being and learning of their children on a regular basis; Parents or guardians are informed about daily activities on a regular basis; Parents or guardians are encouraged by ECEC staff to play and do learning activities with their children at home | Metric | Metric |

Note: This table shows the practices that are included in the indicators of process quality used in this publication.

Statistics based on regressions

Country-specific regression analyses were performed to examine the associations between different variables. Multiple linear regression was used in those cases where the dependent (or outcome) variable was considered continuous, for example with the process quality indicators. Binary logistic regression was employed when the dependent (or outcome) variable was a binary categorical variable, for example a high versus low share of children from socio-economically disadvantaged homes. Outcome variables used in the report refer either to the centre or to the target group; the predictor and control variables are adjusted accordingly.

The centre (or target group) characteristics of interest can be related to one another and to other characteristics of the staff member who is reporting. Thus, the regression analyses estimate the associations of interest while holding all other characteristics constant. In the models, the association between a specific centre (or target group) feature and the outcome variable is examined after accounting for a set of centre and staff characteristics, described below. Control variables included in the regression models were selected based on theoretical reasoning and to ensure comparability of the models across countries. For some countries, the number of staff or centres in a particular category was too low to draw conclusions. Results are presented only when they are based on a minimum of 30 staff or ten centres.
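The sketch below shows what a single country-specific model of this kind can look like, using a weighted linear regression with staff weights; the outcome and predictor names are hypothetical, and the standard errors reported in the publication are obtained from the BRR replicate weights rather than from the model-based errors shown by the fitted object.

```python
import pandas as pd
import statsmodels.api as sm

def run_country_model(df: pd.DataFrame, outcome: str, predictors: list,
                      weight_col: str = "STAFFWGT"):
    """Weighted linear regression of a continuous outcome on centre/staff controls."""
    d = df[[outcome, weight_col] + predictors].dropna()
    X = sm.add_constant(d[predictors])
    return sm.WLS(d[outcome], X, weights=d[weight_col]).fit()

# Example (hypothetical variable names):
# model = run_country_model(country_df, "adaptive_practices",
#                           ["educ_isced3_or_below", "educ_isced4_5",
#                            "permanent_contract", "centre_in_city"])
# For binary outcomes, a logistic model can be fitted analogously, e.g. with
# statsmodels' GLM and a Binomial family.
```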

Staff and centre characteristics used in the models

The typical regression model used in this report includes the following set of independent variables. In some cases, additional variables of interest are added, depending on the purpose of the analysis, while in other cases only a single predictor is used. Tables providing complete regression results for all models presented in the report specify the variables included in each model (see Annex D).

  • Staff education level is aggregated into three categories: secondary education or below (ISCED level 3 or below); post-secondary non-tertiary education or short-cycle tertiary education (ISCED level 4 or 5); and bachelor’s degree or equivalent or more (ISCED level 6 or more), which is set as the reference.

  • Staff specifically trained to work with children versus staff without specific training (without specific training as the reference).

  • Staff experience refers to the number of years of experience in any ECEC centres, in three categories: less than 5 years; between 5 and 9 years; and more than 9 years, which is set as the reference.

  • Permanent employment versus fixed-term contracts/self-employment (two categories with fixed-term contracts as the reference).

  • Working full-time versus part-time (part-time as the reference).

  • Leader/Teacher: the respondent is either a leader or a teacher in the target group. All other categories, including assistants, are grouped and taken as the reference.

  • Centre in city: the centre is in a municipality with more than 15 000 people, with a location with fewer people taken as the reference.

  • Public management versus private management (private management as the reference).

  • Number of children in the centre (or target group), in quarters. In each country, the distribution of answers from leaders on the number of children can be divided into four equal quarters with increasing numbers of children per centre.

    The first quarter is set as the reference: the respondent works in a centre (or target group) with a number of children among the 25% lowest of the country distribution.

  • Number of staff per child, in quarters: the total number of staff working in the centre (or target group), regardless of their role, divided by the number of children in the centre (or target group).

    The first quarter is set as the reference: the respondent works in a centre (or target group) with a number of staff per child among the 25% lowest of the country distribution.

  • Concentration of children from socio-economically disadvantaged homes: the proportion of children from socio-economically disadvantaged homes in the centre (target group) is greater than or equal to 11%, with a proportion of 10% or less as the reference.
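A brief sketch of how a few of these controls and their reference categories could be coded for the regressions, with hypothetical raw variable names:

```python
import pandas as pd

def code_controls(df: pd.DataFrame) -> pd.DataFrame:
    """Dummy-code selected controls; the omitted category of each variable is the
    reference category listed above."""
    out = pd.DataFrame(index=df.index)
    # Education (reference: bachelor's degree or equivalent or more, ISCED 6 or more)
    out["educ_isced3_or_below"] = (df["educ_level"] == "isced3_or_below").astype(int)
    out["educ_isced4_5"] = (df["educ_level"] == "isced4_5").astype(int)
    # Contract type (reference: fixed-term contract or self-employment)
    out["permanent_contract"] = (df["contract"] == "permanent").astype(int)
    # Centre location (reference: municipality of 15 000 people or fewer)
    out["centre_in_city"] = (df["municipality_size"] > 15000).astype(int)
    return out
```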

Strength of association

The strength of association between two variables (indicator, staff or centre characteristic) relates to the magnitude of the corresponding unstandardised coefficient of a regression in which one of the variables is the dependent variable and the other is among the independent variables.

Pearson correlation coefficient

Correlation coefficients measure the strength and direction of the statistical association between two variables. Correlation coefficients vary between -1 and 1; values around 0 indicate a weak association, while the extreme values indicate the strongest possible negative or positive association. The Pearson correlation coefficient measures the strength and direction of the linear relationship between two variables.
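For reference, the sketch below computes a Pearson correlation coefficient; the weighted form (an assumption here, with the unweighted coefficient as the special case of equal weights) can be used with the survey weights described above.

```python
import numpy as np

def weighted_pearson(x: np.ndarray, y: np.ndarray, w: np.ndarray) -> float:
    """Pearson correlation of x and y with observation weights w."""
    mx, my = np.average(x, weights=w), np.average(y, weights=w)
    cov = np.average((x - mx) * (y - my), weights=w)
    sx = np.sqrt(np.average((x - mx) ** 2, weights=w))
    sy = np.sqrt(np.average((y - my) ** 2, weights=w))
    return cov / (sx * sy)
```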

International averages

Cross-country averages are provided for pre-primary (ISCED level 02) settings throughout the report. These averages correspond to the arithmetic mean of the nine country estimates. Comparisons between a single country and the international average are not used because the averages reflect only nine countries. Each country makes a substantial contribution to the overall average and therefore a comparison between the averages and a single country may overestimate the similarity of that country’s results with those from the other countries.

Reference

[10] OECD (2019), TALIS Starting Strong 2018 Technical Report, OECD Publishing, Paris.
