13. Using CLA+ in a pilot in England for measuring learning gain

J.G. Morris
University of Birmingham
United Kingdom
S.C. Brand
Birmingham City University
United Kingdom

Universities in England are described as autonomous, independent organisations which have some government funding but are not owned or managed directly by the government (Eurydice, 2019[1]). There is a diverse range of universities, arising from various changes in the organisation of higher education. Expansion of the sector occurred in the 1960s when the Robbins report established the principle that, based on merit, courses should be much more widely available.

A further very significant change to the English higher education sector happened in 1992. Before this, higher education was provided in universities and polytechnics. Universities were autonomous institutions funded by central government for research and teaching; polytechnics, on the other hand, were under local authority control and provided mainly vocational education. In 1992 the so-called “binary divide” was abolished and polytechnics became autonomous universities. A feature of this autonomy is that the power to award degrees rests with universities rather than the state. This change has produced a diverse range of universities, but some distinctions remain: for example, pre-1992 universities tend to be more research-intensive, whereas post-1992 universities have a greater focus on vocational courses.

Since January 2018, the higher education sector in England has been regulated by the newly formed Office for Students, which registers new institutions that apply for degree-awarding powers.

In 2019-20 the United Kingdom higher education sector comprised about 160 institutions and just over 2.5 million students (HESA, 2021[2]), of whom 1.89 million were undergraduates and 0.64 million were postgraduates. In 2016-17 just under 20% of students were from outside the UK; of these overseas students, about 30% were from within the European Union (EU) and 70% from non-EU countries (Universities UK, 2018[3]). The majority of full-time first-degree courses last for three years.

The Higher Education Initial Participation Rate (HEIPR), an estimate of the likelihood of a young person participating in higher education by the age of 30, grew steadily from 42% in 2006-07 to 49% in 2011-12, fell sharply in 2012-13, and then recovered over the following years, reaching 48% again in 2014-15. The figures for 2011-12 and 2012-13 can now be interpreted as a temporary disturbance in a steady rise spanning nearly ten years: after 2013 the increase resumed, and the HEIPR reached just over 50% in 2017-18 (Gov.uk, 2020[4]).

It is interesting that this steady rise in the participation of young people in higher education has occurred in the context of rising tuition fees. Annual fees were introduced in England through the Teaching and Higher Education Act 1998 at a level of GBP 1 000 per year for full-time undergraduate students. The fee level was trebled by the Higher Education Act 2004, which allowed institutions to set fees up to a maximum of GBP 3 000 per year. At this point tuition fees, instead of being paid upfront, were mainly covered by tuition fee loans, with repayment deferred until graduates reached an income threshold. A further increase followed the Browne Report, Securing a Sustainable Future for Higher Education (Browne, 2010[5]): from 2012 the government raised the maximum fee to GBP 9 000 per year and also raised the repayment income threshold for graduates to GBP 21 000 (Hubble and Bolton, 2018[6]). Finally, from 2017-18, the maximum fee rose to GBP 9 250. For the majority of students, tuition fees are now experienced as a build-up of debt from the student loans taken out during their degree course. This is an interesting juxtaposition: before the imposition and subsequent increases of tuition fees, the cost of tuition was borne by the state, whereas the student is now the main bearer of this cost, which has perhaps encouraged a greater tendency for students to see themselves as customers. This in turn has led, particularly over the last decade, to much work in the field of student engagement around the concept of students as active partners rather than mere customers (for example, see Brand and Millard (2019[7])).

The first steps in England that led to our work with CLA+ came from the government Department for Business, Innovation and Skills (BIS), which in its annual grant letter to the Higher Education Funding Council for England (HEFCE) for 2014-15 asked HEFCE to consider whether there were better indicators, such as measures of student engagement, that could provide information on what a high-quality student experience looked like. The following year’s grant letter, for 2015-16, set out an expectation that there should be progress towards developing and testing new measures of learning gain.

As a preface to this initiative, a report on learning gain was commissioned in 2014 from RAND Europe, an independent not-for-profit research institute based in Cambridge, UK, working in partnership with BIS, HEFCE and the Higher Education Academy (HEA). In particular, RAND was asked to investigate:

  1. In what ways and for what purposes are methods and tools for measuring learning gain already in use in English higher education?

  2. Analysis of the relative benefits of approaches to measuring generic skills independently of discipline-specific knowledge and of measuring generic skills in disciplinary contexts.

  3. Analysis of the applicability of methods and tools for measuring learning gain for different identified purposes, such as to inform improvements to learning and teaching; to provide information to prospective students and their advisers; to investigate the impact of particular learning and teaching interventions and contextual factors; to assist in international comparison; or to form part of the quality assurance of learning and teaching.

  4. What are the relevant considerations for the use or adaptation of methods or tools for measuring learning gain in an English higher education context?

  5. What are the relevant considerations for the design and development of robust pilot activities for measuring learning gain that might be drawn from existing practice and literature?

  6. What relevant lessons can be drawn from research on value-added undertaken in the UK school sector?

The comprehensive report produced (McGrath et al., 2015[8]) highlighted a number of issues that were key to this work. One such issue was that although 130 out of 147 respondents to a call for information via the HEA recognised that the “concept of learning gain could be useful”, there was also a level of caution as to how such measures might be used for accountability and transparency (McGrath et al., 2015, p. xiii[8]).

The RAND report identified 14 possible methods classified in five groups: grades, surveys, standardised tests, mixed methods, and other qualitative methods. Importantly, the report also noted the clear distinction between direct measures of learning gain and proxy measures such as engagement surveys, satisfaction surveys, and graduate employment statistics or graduate salaries. A suggestion emerging in the report was that those who had commissioned it might want to conduct a series of pilots exploring a range of practical, technical, methodological and financial issues when seeking to use some of the methods discussed. Notably the report pointed out that “the concept of learning gain, and its definition, was in its infancy in higher education in England”.

Thus, early in 2015, HEFCE invited expressions of interest for “funding for projects that will pilot and evaluate measures of learning gain”. There was considerable interest in this call, which eventually led to the award of approximately GBP 4 million across 13 pilot projects involving over 70 higher education institutions. At this stage learning gain was defined by HEFCE as “distance travelled: the improvement in knowledge, skills, work-readiness and personal development demonstrated by students at two points in time”. Part of HEFCE’s thinking at this time was to seek to demonstrate to the government and to students the value of their investments in higher education. This call made specific reference not only to the five groups listed above but also to the possibility of exploring both longitudinal and cross-sectional approaches.
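Read at face value, and assuming only that comparable scores are available at the two points in time (an assumption on our part rather than anything specified in the HEFCE call), this “distance travelled” definition amounts to a simple difference score:

  learning gain = score at second testing point − score at first testing point,

with the two testing points spanning part or all of a student’s programme.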

An interesting feature of the RAND report was the distinction drawn between “Content Knowledge” and “Skills and Competencies”. Content knowledge was defined as “a body of information that teachers teach and that students are expected to learn in a given subject or content area” (McGrath et al., 2015, pp. 7-8[8]). The term was taken as referring to the facts, concepts, theories and principles that are taught and learned, rather than to related skills such as reading, writing or researching that students also learn in academic courses. The concept of a generic skills assessment had previously been described in the Assessment of Higher Education Learning Outcomes (AHELO) feasibility study of 2012 (AHELO, 2012[9]), which considered a generic skills strand and two subject-specific strands in economics and engineering.

It was in this context that the work which we led set out to use the Collegiate Learning Assessment (CLA+); it was one of two funded projects that used a standardised test. We set up an ambitious schedule of testing: our original intention was a longitudinal study following up to 1 000 students across the four participating universities, all of which were post-1992 institutions. The timing of this work was a critical factor: decisions as to which projects would be funded did not arrive until July 2015, leaving only a few weeks to set up the testing activity. Perhaps an even greater challenge was the fact that for all participating students the testing activity lay outside their curriculum and thus might be perceived by many as a voluntary extra activity.

We refer above to the early caution, revealed in a survey carried out by the HEA, as to how and for what purpose learning gain information might be used. In the early stages of our project similar concerns were expressed by participating academic staff. The question was posed as to what extent this work might benefit students, both through its potential to lead to enhancements in learning and teaching and by assisting individual students in developing their own skills.

A further significant development during the period of our project (2015-18) was the introduction of the Teaching Excellence Framework (TEF), which had its first full iteration in 2016-17. The TEF was introduced as a provider-level assessment delivered to a specification provided by the Department for Education, by this time the government ministry responsible for higher education. The TEF concentrated on three “Aspects of Quality”: Teaching Quality; Learning Environment; and Student Outcomes and Learning Gain. It seems likely that there was a desire for the funded projects to contribute eventually to the choice of some standard method for assessing learning gain that could feed into the TEF as it developed further. The TEF has since been renamed the Teaching Excellence and Student Outcomes Framework. Following an independent review and a government response in early 2021, it was proposed that it continue as a periodic review operating every four to five years at institutional rather than subject level (Office for Students, 2021[10]).

Our initial target at our institution when implementing the CLA+ was to test students from a variety of disciplines longitudinally at four points in their studies: two testing points during the first year of study and the remaining two in the autumn of their second and third years. A delay in the confirmation of funding for our project meant that testing was delayed: our first testing point was moved to the spring of the students’ first year, and the total number of longitudinal testing points was reduced from four to three. As part of our learning gain project, we aimed to recruit approximately 350 students at our institution to test longitudinally, allowing for attrition. Much of the recruitment and implementation of the CLA+ was carried out by an institutional lead based in our institution’s Centre for Excellence in Learning and Teaching and by the project lead working as a consultant. It was also important that we worked closely with academic staff within faculties; as we will discuss later in this chapter, this proved to be one of the integral factors in student recruitment. In addition to our staff leads, we sought to involve students in all of the core aspects of implementing the test. Throughout the three-year testing period, we employed several student interns who worked closely with us on testing logistics and data analysis, and on including the student voice in all that we did when interacting with students and disseminating our findings.

In the process of using an American test in a UK context, we worked closely with the Council for Aid to Education (CAE) to adapt the CLA+ to the needs and cultural context of the students taking the test. As Ashford-Rowe et al. (2013[11]) highlight in their work on authentic assessment, the CLA+ aligns with two critical characteristics of effective assessment design: the ensuring of knowledge transfer to a single, real-world domain and the role of metacognition. Ensuring knowledge transfer requires “consistency between the assessment and real-world application” (Ashford-Rowe, Herrington and Brown, 2013, p. 208[11]). For example, the Performance Task (PT) based on “building” a baseball stadium was reworked around football (soccer) stadiums based in England. A core feature of the PT asks students to analyse and interpret multiple data sources and perspectives to construct their own argument in the format of a report, a transferable skill that can be used in any context. As Zahner and Lehrfeld (2018[12]) highlight, the measurement of such skills has become an important indicator in career placement and workplace success.

The second characteristic, metacognition, acknowledges the “importance of critical reflection and self-assessment”, which the test encourages through a self-evaluation survey and the resulting student reports indicating mastery levels (Ashford-Rowe, Herrington and Brown, 2013, pp. 208-209[11]). We also reworked the language of the CLA+ to suit UK spelling and grammar, including changing the mastery level terminology to reflect the developmental language used in assessment criteria and grading. One notable point from our qualitative and anecdotal feedback was that the structure and topics of the PT were positively received, whereas the Selected-Response Questions (SRQs) were received less favourably.

Implementing the CLA+ also required staff resources, not only for recruiting students and testing logistics but also for scoring PT submissions. The written responses were scored by human scorers, using a bespoke rubric, on Analysis and Problem Solving, Writing Effectiveness and Writing Mechanics; all scorers needed to be trained and calibrated prior to testing. Together with CAE, we found this to be a useful opportunity to run scorer training sessions with staff from our institution and our three partner institutions in the learning gain project. These proved to be fruitful environments that encouraged critical dialogue on the process of scoring, such as how the different perspectives of individual scorers may influence overall scores, a feature noted in research by Wyatt-Smith et al. (2010[13]) on marking with rubrics. An interesting aspect of our scoring was that, depending on which institutions were testing in a particular window, colleagues would not necessarily be scoring students from their own institution, an important consideration for the project team in eliminating scoring bias.

As noted in the previous section, the contextual factors surrounding the project on measuring learning gain meant that there was a degree of hesitancy among the academic colleagues who were in some cases nominated to aid us with the recruitment and testing of students. The prospect of having their students measured longitudinally, and then being compared both institutionally and externally on whether those students had demonstrated development in their critical thinking across their programme, evoked some anxiety in many of our colleagues. It is also worth noting that this has been exacerbated in part by a sector-wide culture of measurement and the increasing view of students as customers, an ideology often perpetuated by the institution rather than by the students themselves (Woodall, Hiller and Resnick, 2012[14]). It was no surprise, then, that such resistance occurred. An important feature of our implementation involved persuading colleagues of the value of the CLA+ not just as a platform to measure and track student development but also as a diagnostic tool. In addition, our project team offered assurance that the collected data would not be used as a metaphorical stick to judge the merits or failings of their programmes.

Using such diagnostic data would also align with institutional strategy and wider sectoral research on enhancing student progression and retention (see Webb et al. (2017[15])). Such work puts student development at the forefront while balancing institutional pressures such as the loss of fees from student withdrawals. Many staff, particularly those working in certain disciplines, instantly saw the value of the test and, where possible, tried to convince their students of the value of taking the CLA+, including offering incentives in certain situations. Despite this, we still experienced a mixture of reactions from colleagues within our institution, ranging from total investment to apathy and even active efforts to dissuade students from completing the test.

We planned to test students from a variety of disciplines across the university’s four faculties. With the TEF subject-level pilots being carried out as the backdrop of our work, we saw a unique opportunity to analyse some of the key differences in student performance in the CLA+, and also how student perceptions of the test would differ depending on their disposition towards the skills it claims to measure. Initially we tested students from English Literature, Social Work, Marketing and Computer Science, but later widened our scope to include a broader range of disciplines, which was mainly influenced by our struggles to recruit students to complete the test. By the end of the project, we had tested students from the following disciplines at our institution:

  • Computer Networks and Security

  • English Literature

  • Engineering

  • Graphic and Visual Communication

  • Jewellery

  • Marketing

  • Media and Communication

  • Psychology

  • Social Work

Testing sessions were all conducted online in computer lab spaces; we worked collaboratively with the university’s Information Technology (IT) departments to ensure the CLA+ testing platform could be downloaded without issue prior to the beginning of testing. We would often schedule testing sessions of two hours, even though the test itself takes 90 minutes, to allow for additional set-up time and the completion of the demographic questionnaire at the end. The tests were initially advertised as extracurricular activities with incentives such as food, later shifting to gift cards and cash sums to encourage attendance. However, despite these incentives and the benefits reported from participating in such activities (Kaufman and Gabler, 2004[16]), we continued to struggle with student recruitment.

As Stuart et al. (2011[17]) highlight, the problem of student recruitment is no surprise given the literature examining student engagement with extracurricular activities. Their study provides evidence that students from lower socio-economic backgrounds spend considerably more time working on course material than on extracurricular activities, suggesting that the primary motivation for these students is their assessments (Stuart et al., 2011, p. 10[17]). As a widening participation university, we found that the approaches to learning and the circumstances of our students proved to be an important factor in some of the successful aspects of our project. As we discuss later, we had particular success with engagement only when embedding the test into the curriculum as a piece of formative, diagnostic assessment.

The student-facing and institutional reports also proved a valuable aspect of this approach, providing programme and module leads with important data on the demographics of their cohorts by performance, language and gender. This enabled us and academic staff to quickly identify struggling students and offered a means to be more inclusive in approaches to student support. The emergence of the CLA+ as a valuable diagnostic tool became the core focus of our project and the basis for implementing the test at the institution; measuring learning gain arguably became a secondary objective.

In this section we aim to provide an account of our main findings from the funded project on measuring learning gain. This includes a quantitative account of how many students were recruited and tested, how tested students performed on the CLA+ according to the institution’s datasets, and findings from our longitudinal and cross-sectional studies. We also include qualitative findings from student focus groups.

In total, our institution tested 774 students over the three years, with 2 090 tested across all four partner institutions. The large proportion of first-years recruited to take the test (87%) highlights our struggle to test students longitudinally across multiple years to measure the development of their critical thinking skills over time. Second-years accounted for 5.5% of our sample, with a slightly larger proportion of third-years at 7.5% (see Table 13.1 below).

In total, 132 students were tested longitudinally on our project. Although we had initial difficulty with recruitment, we did have some extremely positive case studies involving the use of the CLA+ as a diagnostic tool and a piece of formative assessment, including the successful testing of 201 first-year Marketing students with this approach. As we discuss later, our work with a Marketing degree programme showed the ability of the CLA+ to act as a reflective tool for students’ development rather than only a performance output. Finally, some of our most impactful findings on students’ perceptions of the test, including factors influencing their motivation to take the CLA+, came from our cross-sectional analysis in 2016-17. This involved students completing the CLA+, followed by a short questionnaire of selected questions from the United Kingdom Engagement Survey (UKES) and a short focus group to capture reflections. We successfully tested and conducted focus groups with 136 students in total during this study.

In terms of overall student performance in the CLA+ as part of our project, analysis was carried out by CAE on students who undertook the CLA+ on the new international platform. It noted a significant difference among years of study in total CLA+ scores: at our institution, students in their third year outperformed both first- and second-year students, with the largest gap relative to first-year students. While this was an output we might have anticipated in our pursuit of measuring learning gain, it is worth noting that the difference in these overall scores was largely driven by scores in the PT, with no significant difference recorded in SRQ scores. In our qualitative data, we consistently noticed that students would highlight the PT as the more enjoyable aspect of the CLA+, citing the freedom offered by its structure as the main reason. The relationship between students’ enjoyment or motivation and their performance was not established, but it is an area our project team would have liked to explore further.

It is no secret, particularly in UK higher education, that an attainment gap exists between Black and Minority Ethnic (BME) students and their white counterparts in attaining “top degrees”, typically defined as achieving either First-Class Honours or Upper Second-Class Honours (Advance HE, 2017[18]). In its Equality in Higher Education: Statistical Report (2017), Advance HE noted a 15.6 percentage point gap in the attainment of a top degree between BME students (63.2%) and their white counterparts (78.8%) (Advance HE, 2017[18]). The results of students taking the CLA+ on our project mirrored this issue, adding further evidence of the work needed to close the gap. In particular, among first-year students, white students on average received higher CLA+ scores than their BME counterparts.

There was also a reported difference in student performance by primary language. Students whose first language was the same as the language of instruction, in our case British English, outperformed their peers with different primary languages. This pattern was consistent across all years of study and raises important concerns that assessments centred on measuring constructs such as communication skills, particularly written communication, may disadvantage these students. Among our tested first-year students, those whose primary language was English achieved higher scores than those for whom this was not the case. Notably, this difference was more pronounced in the Performance Task, where written communication skills matter more than in the selected-response questions.

Finally, it was also interesting to note a significant difference in the performance of our first-year students by the level of education their parents had received; this was not found in our second- or third-year samples. Given our much larger sample of first-years compared with second- and third-years, it may be that a parental education effect in the later years could not be detected due to a lack of statistical power. The difference found in our first-years is elaborated in a study by Vanthournout et al. (2016, p. 53[19]), which notes that the democratisation of higher education has introduced student cohorts with varied backgrounds, each with different ways of learning and different motivations. Most notably, our first-years whose parents had no more than primary school education were outperformed by all of their peers whose parents had reached higher levels of education. This bears out arguments about social capital and its impact on assessment performance. More importantly, it highlights the role higher education needs to play in bridging these gaps and the importance of fostering the acquisition of learning strategies for these students.

As discussed in previous sections, soon after the beginning of our project it became clear to both our institution and our project partners that a dual approach had emerged. Although our principal aim was to measure learning gain at both institutional and individual levels, the potential value of the CLA+ as a tool for students to gauge their position in relation to the skills being tested, and so support their ongoing development, also became part of our focus. Reflecting on this, it also became clear that the test was more attractive to students in some disciplines, or indeed to key academics running programmes in those disciplines.

For example, the CLA+ was successfully embedded within our institution’s business school as both optional and compulsory elements of curricula. Largely thanks to the programme lead’s investment in the CLA+, the test was made compulsory and used as part of a professional development module for first-years. The resulting reports for students acted as a form of evidence that would eventually be submitted in the summative assessment as part of an e-portfolio, with a reflective narrative on how aspects of the portfolio could be fed into areas of ongoing development. This use of the CLA+ and its integration within these portfolios align with the proposal that “authentic assessment design should ensure transfer of knowledge” (Ashford-Rowe, Herrington and Brown, 2013, p. 208[11]) as the knowledge and skills developed through the students’ engagement with the CLA+ were transferred beyond this context.

To make the link to employability explicit, we encouraged students to collect their reports shortly after the mid-point of their module, during the university’s Graduate+ event, a week focussed on the development of employability attributes. Naturally, we also considered whether there was any difference in CLA+ performance between students who took the test as an optional activity and those for whom it was compulsory. To examine this, CAE compared the performance of students tested in Autumn 2017 (compulsory) with those tested in Spring and Autumn 2016 (non-compulsory) (see Table 13.2 below). As with our main results, there was a “marginal difference” in total CLA+ scores, with students taking the CLA+ as a compulsory element of their module slightly outperforming their non-compulsory counterparts. Again, the difference appeared to be driven by PT results, as there was no reported difference in SRQ scores. Despite these differences, it was also interesting to note that there was no difference in self-reported effort, an area covered in the CLA+ demographic questionnaire, suggesting that making the test a compulsory assessment did not increase motivation for that cohort.

During our time working on the project we naturally sought to discuss with students the CLA+ and their key motivations for completing it. During our cross-sectional study, 136 students completed the CLA+ and also participated in a bespoke United Kingdom Engagement Survey (UKES) and a focus group. The focus groups in particular offered a space to encourage students to think critically about core aspects of the CLA+, such as the topic, its ability to test the skills it is designed to measure, their interest in how results could be used, and any key motivators influencing them to take the test.

When asked about the topic of the Performance Task (PT), the majority of students reported struggling with engagement because of its lack of relevance to their own subject. This is a criticism often levelled at the CLA+ in publications such as Academically Adrift (Arum and Roksa, 2011[20]), particularly the concern that it measures generic skills rather than the specialised subject skills which students may attend university to learn. Despite this, many also reported that the process of analysing evidence, formulating a critical argument and adapting this to real-world situations made the PT more engaging than the SRQs. As the latter part of the CLA+, the SRQ section was where students consistently cited mental fatigue, having already invested what they perceived as a large amount of time (60 minutes) in the PT. There were also differences between PT topics, which students highlighted as affecting their motivation and engagement. The small number of our students who sat the CLA+ in both 2016 and 2017 told us that they preferred the second topic as it was more closely related to their discipline, suggesting that content is a key influence on engagement.

When discussing the skills the CLA+ claims to measure and develop, we found several consistent trends among our students, including thoughts on the overall structure, timing and timetabling of the CLA+. Throughout the focus groups, students said that regardless of their discipline they should be able to analyse, think critically and solve problems in any context. Despite this, as mentioned previously, there were also concerns that particular disciplines may have an advantage over others when taking the test. For example, students from creative, practical courses felt that students on programmes that practise writing and analytical skills, such as Law, would perform better than their Art and Design counterparts. When performance was analysed by discipline, however, no significant differences were found, suggesting that these core skills are not strongly affected by subject area.

Students were also hesitant about whether they would want their CLA+ results to be shared with peers. However, this hesitancy was paired with an interest in how their scores compared, in an anonymised and aggregated way, with others who had undertaken the test, a feature offered in the individual student result reports. There was also evident interest in the potential for the results report to be valued by employers and top postgraduate programmes.

The absence of a spell-checker or auto-correct was consistently cited as an issue. While we had predominantly thought of this as a luxury, it is important to note that a spell-checker can be an important feature for inclusivity, being a useful and necessary tool for students with dyslexia in particular. We found CAE very helpful with other inclusivity concerns, such as allowing extra time, with a feature to add this when pre-registering our cohorts before testing. Not only was the time taken an important factor in engagement, but the question of when during the academic year the CLA+ is taken was also discussed. Many highlighted that it would be better to take the test at the start or middle of an academic year, because the latter part of the year carries a high assessment load (e.g. exams and coursework deadlines). We had encountered some issues with timing, so some testing sessions had to be conducted during these busy periods; unsurprisingly, these were not seen as a high priority by students. It was also suggested that these arrangements would have been more readily achieved if the work had been embedded in the curriculum rather than offered as an extracurricular activity.

In addition to testing students as part of our learning gain project, we sought to continue our Centre’s tradition of working in partnership with students, which informed everything we did during our research. We employed and worked alongside three students, working part-time during their studies, to help with project delivery and with compiling and analysing data. We ensured that these students were regular co-presenters with the project team at conferences, disseminating findings as well as communicating their own experiences as project participants. An interesting feature of our work with them was that they had also taken the CLA+ as part of our non-compulsory testing cohort and were therefore able to offer their own reflections on the experience of taking the test. These have proved invaluable, and we include the following quote from one of the interns:

Naturally I was very curious about the test due to the little information given and was very happy to participate. Now that I have taken the test, I found it very challenging but rewarding as it successfully tested my ability to think critically and analytically. – Second-year student, Faculty of Computing, Engineering and the Built Environment

The employment of our interns extended beyond the initial scope of our project. They were integral to embedding the test in the Business School’s curriculum, making the test compulsory. Our interns worked directly with module and programme leads, co-ordinating testing sessions and working with invigilators.

Following delivery of the HEFCE-funded projects, an independent evaluation was carried out and presented to the Office for Students (OfS), a non-departmental public body of the Department for Education. This report highlighted a number of important issues. Undoubtedly the most prominent of these was recruiting students to participate in testing. This problem was not peculiar to the two projects that employed the CLA+ but was encountered in many other projects that required students to undertake activities they did not perceive as a compulsory part of their curriculum. During the course of our project we also discovered that very different recruitment rates for testing were achieved in different subject areas. This appeared to be related to the enthusiasm of the key academic staff with whom the students concerned had strong connections, such as a course leader or year tutor. We conclude that to obtain high levels of recruitment the test would need to be embedded as part of the curriculum or, at the very least, be championed enthusiastically by academic staff. Without such levels of recruitment it would be difficult to investigate further the important question of scalability, or to explore potential links with existing university data.

It is also clear from the external evaluation report that there was only limited interest in using learning gain data among senior managers or academic staff in subject areas not involved in this work at this stage. It seems likely that if the use of a standardised test for institutional metrics were required, this position would have to change.

A further key lesson learnt relates to timeliness. As the HEFCE-funded projects had a very short lead-in time, it proved impossible to prepare adequately and in good time for such necessities as marking schedules, scorer training and results analysis. This in turn made it almost impossible to sustain students’ interest in taking the work forward.

A question also arises as to whom the CLA+ testing is for. Our view is that if such activity is perceived as relating solely to measuring institutional performance, then an opportunity is missed. There is a potential benefit for individual students in seeing an initial CLA+ attempt as an important diagnostic of their generic skills.

The shift in focus from using the CLA+ as a tool for institutional measurement to using it as a diagnostic, although a deviation from our initial project aims, opened up several exciting avenues for future work. Owing to the success of student recruitment when the test was used as part of formative modular assessment, we have continued our work with our institution’s Business School. We will continue to embed the test in its Professional Development module and to embed reflection on the results for future development, a core focus of the module. In addition, we will structure a session around collecting the results and helping individual students make sense of them.

One of the key findings from our focus groups was the value of the CLA+ for employment. Upon completing the test, students receive a report on their performance along with a digital badge confirming their overall mastery level. When asked, students highlighted that they were particularly interested in the idea of digital badging and its potential use as part of an evidence portfolio for employers. The suggestion that students would be more likely to take the test if employers and/or other universities valued the results is a useful consideration for programme and module teams when designing and delivering curricula. This is an aspect we felt we could have communicated more clearly when briefing students on the benefits of taking the test, as being confronted with an examination-type assessment with no opportunity to prepare may have been an anxiety-inducing prospect for students.

We are also acutely aware of some of the limitations arising from student recruitment, namely our struggle to test a larger number of students longitudinally (n=132). The low sample size limited the usefulness of our statistics: we were less able to identify outliers that might have skewed the data or to minimise the margin of error. Even in our cross-sectional study, the number of first-years heavily outweighed the number of second- and third-years tested, making it difficult to analyse performance by year of study. We would like to continue working on testing more second- and third-year students for both our longitudinal and cross-sectional datasets.

One interesting development discussed at our institution since implementing the CLA+ has been the introduction of an assessment centre offering larger-scale, personalised assessment for students. This stemmed from our refocus on using the test as a diagnostic tool, alongside the example of other institutions across the UK higher education sector implementing similar centres. Such a centre would also be integral to addressing some of the challenges around the progression, retention and employment of our students. The idea was adapted from similar practices in the US higher education system, where a range of support mechanisms is linked to similar assessments offered in a single location within one window. Since the close of our project, our institution has opened a functional assessment centre on its campus. The centre works with academic colleagues to design and deliver digital developments on their programmes by collaborating with a team of digital assessment designers and technicians. The types of assessment offered are categorised as diagnostic, development and destination, and they aid students’ understanding of their own abilities in relation to more nuanced skill sets such as academic skills, numeracy, sentence construction and performance in simulated professional examinations.

References

[18] Advance HE (2017), Equality Challenge Unit. Equality in higher education: statistical report 2017, Advance HE, UK, https://www.advance-he.ac.uk/knowledge-hub/equality-higher-education-statistical-report-2017 (accessed on 15 July 2021).

[9] AHELO (2012), AHELO feasibility study interim report, OECD, Paris.

[20] Arum, R. and J. Roksa (2011), Academically Adrift: Limited Learning on College Campuses, University of Chicago Press, Chicago.

[11] Ashford-Rowe, K., J. Herrington and C. Brown (2013), “Establishing the critical elements that determine authentic assessment”, Assessment & Evaluation in Higher Education, Vol. 39/2, pp. 205-222, https://doi.org/10.1080/02602938.2013.819566.

[7] Brand, S. and L. Millard (2019), Chapter 4: Student engagement in quality in UK higher education: more than assurance?, Routledge, London and New York.

[5] Browne, L. (2010), Securing a Sustainable Future for Higher Education: An Independent Review of Higher Education Funding & Student Finance.

[1] Eurydice (2019), Autonomous and diverse institutions, https://eacea.ec.europa.eu/national-policies/eurydice/content/types-higher-education-institutions-94_en (accessed on 3 May 2021).

[4] Gov.uk (2020), Participation Rates in Higher Education: Academic Years 2006/2007 - 2017/2018, https://www.gov.uk/government/collections/statistics-on-higher-education-initial-participation-rates (accessed on 29 July 2021).

[2] HESA (2021), How many students are in HE?, https://www.hesa.ac.uk/news/27-01-21/sb258-higher-education-student-statistics/numbers (accessed on 3 May 2021).

[6] Hubble, S. and P. Bolton (2018), Higher education tuition fees in England, The House of Commons Library, https://researchbriefings.files.parliament.uk/documents/CBP-8151/CBP-8151.pdf (accessed on 29 July 2021).

[16] Kaufman, J. and J. Gabler (2004), “Cultural capital and the extracurricular activities of girls and boys in the college attainment process”, Poetics, Vol. 32/2, pp. 145-68, https://doi.org/10.1016/j.poetic.2004.02.001.

[8] McGrath, C. et al. (2015), Learning gain in higher education, RAND Corporation, Santa Monica, California and Cambridge, UK, https://www.rand.org/pubs/research_reports/RR996.html.

[10] Office for Students (2021), Future of the TEF, https://www.officeforstudents.org.uk/advice-and-guidance/teaching/future-of-the-tef.

[17] Stuart, M. et al. (2011), “The impact of engagement with extracurricular activities on the student experience and graduate outcomes for widening participation populations”, Active Learning in Higher Education, Vol. 12/3, pp. 203-215, https://doi.org/10.1177/1469787411415081.

[3] Universities UK (2018), Patterns and trends, https://www.universitiesuk.ac.uk/facts-and-stats/data-and-analysis/Pages/Patterns-and-trends-in-UK-higher-education-2018.aspx.

[19] Vanthournout, G. et al. (2016), Chapter 4: Discovering and Strengthening Learning Strategies and Motivation Using the Lemo-instrument, Leuven, Lannoo.

[15] Webb, O. et al. (2017), Enhancing Access, Retention, Attainment and Progression in Higher Education: A Review of the Literature Showing Demonstrable Impact, https://s3.eu-west-2.amazonaws.com/assets.creode.advancehe-document-manager/documents/hea/private/resources/enhancing_access_retention_attainment_and_progression_in_higher_education_1_1568037358.pdf.

[14] Woodall, T., A. Hiller and S. Resnick (2012), “Making sense of higher education: students as consumers and the value of the university experience”, Studies in Higher Education, Vol. 39/1, pp. 48-67, https://doi.org/10.1080/03075079.2011.648373.

[13] Wyatt‐Smith, C., V. Klenowski and S. Gunn (2010), “The centrality of teachers’ judgement practice in assessment: a study of standards in moderation”, Assessment in Education: Principles, Policy & Practice, Vol. 17/1, pp. 59-75, https://doi.org/10.1080/09695940903565610.

[12] Zahner, D. and J. Lehrfeld (2018), Employers’ and advisors’ assessments of the importance of critical thinking and written communication skills post-college, Paper presented at the 2018 Conference of the American Educational Research Association, New York, NY.
