

As you will all know, at HEAT we don’t only track students into HESA to learn their HE progression outcomes, but also into the National Pupil Database (NPD) to obtain Key Stage 4 (GCSE) and Key Stage 5 (A-level/BTEC) exam outcomes.

In December 2019, nearly all Uni Connect Partnerships using HEAT received a partnership-level report showing the Key Stage 4 (KS4) exam performance of all participants engaged in outreach before sitting their GCSE exams. For most partnerships, this would have been activities delivered in 2017/18 to year 11 pupils, with the exam results of this cohort being the most recently available through HEAT’s tracking. Just in case anyone has missed this, here is a link to your partnership’s Key Stage 4 report in the File Store and the accompanying video explaining how to interpret your report.

Key Stage 4 attainment has been shown to be critical to future HE progression. National research shows that attainment at this point explains nearly all of the social stratification in later HE progression (Crawford, 2014). Owing to this finding, the Office for Students now requires universities to provide evidence in their Access and Participation Plans of how they are raising pre-entry attainment in schools. The original aim of the Uni Connect programme in Phase 1 was to work in target wards where GCSE attainment was high but HE progression did not follow at a similar rate. The remit was widened in Phase 2, and the importance of accounting for KS4 attainment has increased. Furthermore, attainment at KS4 may be considered an important indicator for Uni Connect Partnerships: an early outcome, available within the lifespan of the project, that predicts later HE progression. On this basis I would like to see more partnerships engage with this stage of reporting from HEAT.

HEAT’s aggregate report showed a correlation between participation in a Uni Connect activity and improved attainment at KS4 when compared with school average results. This remained true after controlling for prior attainment at Key Stage 2 (exams taken at the end of primary school). Although this is a promising overall finding, to enhance learning about what boosts attainment, we also need to support partnerships wishing to evaluate specific activities using these data.

To achieve this HEAT have offered to re-run each member’s most recent reports based on a cohort of participants of their choice. The partnership-level reports are based on all participants a partnership has recorded on HEAT, and so may include participants who have taken part in only minimal outreach, perhaps students who have only completed a baseline survey. Reporting on all participants can have the effect of including a lot of ‘noise’ in the results, which may render them less useful for partnerships’ evaluation.

A practical example

To demonstrate the potential evidence that it is possible to generate from our offer to re-run reports based on a sub-set of participants, we have teamed up with Make Happen to work through a real example. The process is possible for any Uni Connect Partnership with participant data in the latest KS4 tracking report. Members can also access the data should they wish to conduct their own analysis.

Make Happen were keen to understand whether two of their pre-16 activities had an impact on participants’ KS4 exam attainment. These activities were run in partnership with two external providers: first, Fix Up, who offer a range of sessions providing support with motivation and exam preparation; second, Positively Mad, who run whole-day workshops in schools focused on exam and revision skills.

Drawing on HEAT’s offer to re-run KS4 reports, Make Happen identified the participants they wanted to include in two separate KS4 attainment reports, one for each activity. Make Happen were careful to include only those participants who had attended and received contact time above a threshold of two and three hours for the two activities respectively.

Although both activities have an attainment-raising component, they were not tutoring activities. We are aware that there is still work to be done around setting out clear Theories of Change in relation to raising attainment, not least because the link between raising motivation and attainment is debated (Cummings et al., 2012; Gorard and See, 2013). We plan to pick this up when we promote the use of HEAT’s Evaluation Planning Tool later in the year. It is my intention to collaborate with members on developing and testing Theories of Change as recommended in the OfS report: Evaluation of outreach interventions for under 16 year olds. But for now we feel confident that the activities in question provided a theoretically sound mechanism to improve participants’ attainment.

The results showed that, for Fix Up participants (n=165), Attainment 8 Scores were on average +6.1 grades higher than the average scores for the schools they came from. This remained true after breaking down by prior attainment at Key Stage 2: participants with low prior attainment (n=20) demonstrated the greatest positive difference of the prior attainment bands, achieving on average +5.7 grades higher across eight GCSEs than their similarly low attaining classmates.

Positively Mad participants (n=130) also demonstrated higher Attainment 8 Scores than their schools’ average scores, achieving on average +6.6 grades higher. Participants with medium prior attainment (n=70) demonstrated the greatest positive difference, achieving on average +5.5 grades higher across eight GCSEs than their classmates from the same attainment band.


Moving to Type 3 evidence

At HEAT we are candid about the methodological limitations of all our reporting. The analysis above uses the school average as a comparator group against which to compare the outcomes of activity participants. For this reason it can be considered strong Type 2: Empirical Evidence according to the OfS Standards of Evidence, but it fails to meet the Type 3: Causal Evidence standard because of the chosen comparator group. It is very possible that participants are not representative of their classmates; the targeting inherent in WP often drives this.

We have recently published a video giving advice on sourcing data for a more suitable comparator group, based on experimental or quasi-experimental techniques. Following this, HEAT can provide you with reporting, using the same KS4 Track report template, for this comparator group alongside your participants. Depending on the similarity of your comparator group to your participant group, this is a way to raise the evidence generated from HEAT’s Track reporting from a Type 2 to a Type 3.

Uni Connect Partnerships may be in a better position than HEAT’s core members when it comes to sourcing data for a comparator group. Many partnerships have been baselining, and tracking, all pupils within year groups as part of CfE’s national evaluation. Make Happen were well organised in this regard and have been tracking all baseline respondents through HEAT. These students were linked to a ‘baseline survey’ activity and so were actually included in Make Happen’s original KS4 Track report from HEAT, an example of some of the ‘noise’ resulting from reporting on all activity participants found on the database.

Of the tracked baseline respondents, we were able to isolate those who had not taken part in any outreach activities, other than completion of the baseline. We identified 735 students who would make up a ‘non-treatment’ group with which to compare the outcomes of our treatment group of activity participants.  Note: It is also important to consider how the participants have been selected and conversely why the non-participants were not selected. The reasons for this will determine how similar the two groups are in terms of their motivation to participate in outreach. This motivation is something that is more difficult to account for retrospectively, and thus ideally, should be considered during the evaluation design phase. Make Happen were confident that selection practices would not have resulted in large differences in motivation levels between the two groups.
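For members wanting to replicate this step themselves, the sketch below shows one way to isolate a baseline-only group from an activity export. It is written in Python with hypothetical file and column names (activity_export.csv, heat_id, activity_name); it is an illustration of the logic, not HEAT’s own code.

```python
# Hypothetical sketch: isolate baseline-only students from a HEAT activity export.
# File and column names are assumptions, not the real export format.
import pandas as pd

activities = pd.read_csv("activity_export.csv")  # one row per student per activity

# Set of activity names recorded against each student
per_student = activities.groupby("heat_id")["activity_name"].agg(set)

# Keep students whose only recorded activity is the baseline survey
baseline_only = per_student[per_student.apply(lambda s: s == {"Baseline survey"})].index
print(f"{len(baseline_only)} students form the non-treatment group")
```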

Next we employed matching methods, drawing on a quasi-experimental research design, to retrospectively match students from the treatment and non-treatment groups on variables known to influence our outcome of interest: KS4 attainment. These variables were taken from a literature review of factors known to influence attainment (Sylva et al., 2014): gender, ethnicity, IMD and IDACI quintile, and KS4 performance of school (quintiles calculated from HEAT’s Planning Datasets). Uni Connect target ward (Y/N) was also included as a match variable to ensure the groups were matched on this variable, which is important to the programme.

Now for the technical bit. Participants were matched to a partner from the non-treatment group using Case Control Matching in SPSS v26, without replacement of cases. A match tolerance of one quintile was allowed for IMD, IDACI and KS4 performance of school to maximise the number of matches; all other variables were matched exactly. When conducting this type of matching, the match tolerance can be tightened or loosened; there will always be a trade-off between maximising the comparability of the groups and ensuring that a sufficient number of matches are made. Of the 165 Fix Up participants, a pair was found for 140 (85%). Of the 130 Positively Mad participants, a pair was found for 115 (88%). Unmatched records were discarded. The sample sizes are now slightly smaller, but the groups were checked for balance post-matching, which confirmed that the participant and non-participant groups were similar in relation to the observed variables to which we have access.
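For readers who prefer to see the logic in code, here is a rough Python sketch of the same matching idea outside SPSS. The column names (treated, gender, ethnicity, imd_q, idaci_q, school_ks4_q, target_ward) are hypothetical, and this mimics rather than reproduces the SPSS Case Control Matching procedure.

```python
# Illustrative sketch only: 1:1 matching without replacement, exact on some
# variables and within a tolerance on the quintile variables.
import pandas as pd

EXACT = ["gender", "ethnicity", "target_ward"]   # must match exactly
FUZZY = ["imd_q", "idaci_q", "school_ks4_q"]     # quintiles, +/- 1 tolerance allowed

def match_without_replacement(df: pd.DataFrame, tolerance: int = 1) -> pd.DataFrame:
    treated = df[df["treated"] == 1]
    controls = df[df["treated"] == 0].copy()
    pairs = []
    for idx, row in treated.iterrows():
        candidates = controls
        for col in EXACT:                         # exact matches first
            candidates = candidates[candidates[col] == row[col]]
        for col in FUZZY:                         # then matches within the tolerance
            candidates = candidates[(candidates[col] - row[col]).abs() <= tolerance]
        if candidates.empty:
            continue                              # unmatched participants are discarded
        match_idx = candidates.index[0]           # take the first eligible non-participant
        pairs.append((idx, match_idx))
        controls = controls.drop(match_idx)       # without replacement: each control used once
    return pd.DataFrame(pairs, columns=["treated_id", "control_id"])
```

Tightening the tolerance makes the matched groups more comparable but finds fewer pairs, which is exactly the trade-off described above.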

We found that, broadly, our original findings remained true and participants’ Attainment 8 scores were higher than those from the matched comparator groups for both activities, albeit to a lesser extent than when the school average was used as the comparator. The smaller grade differences that we now observe between participants and the comparator group suggest that, in the case of these activities, using the school average as a comparator group may have led to an overestimation of the effect of the programmes. This is a good example of the need to collect data for a suitable comparator group that can be considered as similar as possible to the participant group.

Fix Up participants achieved an average of +1 grade higher than the matched non-participant group and Positively Mad participants achieved an average of +4 grades higher than the matched non-participant group. The Positively Mad result was statistically significant (p=.019). The Fix Up result was not significant at the 5% significance level, likely due to the small observed effect size of a one-grade increase combined with the sample size available for analysis. This doesn’t necessarily mean we should write off the result, and arguably we might expect to see only small gains in attainment from participating in outreach, but from these data we cannot be confident that the improvement in participants’ grades was due to the intervention; it may have been down to other factors or to chance.
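The blog does not state which statistical test produced these p-values; one plausible choice for matched pairs is a paired t-test, sketched below using the hypothetical columns and matching output from the earlier example.

```python
# Hypothetical paired comparison of Attainment 8 scores for matched pairs.
# 'df' holds student records with an 'att8' column; 'pairs' comes from the matching sketch.
from scipy import stats

def paired_comparison(df, pairs):
    treated_scores = df.loc[pairs["treated_id"], "att8"].to_numpy()
    control_scores = df.loc[pairs["control_id"], "att8"].to_numpy()
    mean_difference = (treated_scores - control_scores).mean()
    t_stat, p_value = stats.ttest_rel(treated_scores, control_scores)  # paired t-test
    return mean_difference, p_value
```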

There were differences within prior attainment bands once we changed the comparator group. Fix Up participants with medium prior attainment no longer performed better than the comparator group, but those from the low and high prior attainment bands did, albeit to a lesser extent than when compared with the school average scores. Participants with high prior attainment (n=45) achieved scores that were +3.5 grades higher than the matched non-treatment group with similar prior attainment.


Positively Mad participants with low and medium prior attainment still performed better than non-participants from the same attainment bands, although again to a lesser extent than when the school average was used as a comparison group. Participants from the high prior attainment band (n=40) demonstrated the greatest improvement when compared with non-participants from the same attainment band, and the difference of +5 grades higher was greater than the difference calculated based on the school average comparison.


One limitation of the design must be noted. We were not able to include prior attainment at KS2 as a matching variable as this is not available to HEAT members at student-level. It is available to me as HEAT Analyst, but I wanted to follow the process that we are offering members, using the data available to you. We are trying to find a solution to this that would allow prior attainment to be included in the matching variables whilst complying with the DfE’s data sharing requirements, but for now this is not possible. However, this may not weaken the evaluation too dramatically as we are able to provide breakdowns by prior attainment band in the report. We are also able to present a profile of KS2 prior attainment bands in the generated reports after matching. Fortunately, the KS2 band profiles were very similar for participant and non-participant groups in both activities.

It may be possible to strengthen the design described above by improving the variables used in the matching process. HEAT’s improved Survey Tool (Technical Update 14) means the database can accommodate the responses to the CfE baseline survey. It may then be possible to include an indicator for educational engagement in the matching, taken from the attitudinal questions.

Until then, we are pleased with the design of the evaluation and the results we were able to produce. All of this has been possible using HEAT’s exports and standard KS4 reporting. The KS4 attainment data on which these reports are based are sourced from the NPD following a lengthy and resource-intensive negotiation process with the DfE. It is therefore important that the use of these data is optimised by HEAT members. I would encourage all partnerships to engage with these data.

How can I get results for my activities?

Start by looking up your KS4 report and reading the accompanying Notes document to find out how to request the Student HEAT IDs making up the report. Using these IDs alongside exports from HEAT, you can append the activities in which these students have participated and explore whether you can follow the process described here.
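As a minimal sketch of that joining step, assuming two hypothetical exports (the Student HEAT IDs from your KS4 report and an activity export), the pandas code below appends activities and applies a contact-hours threshold. The file names, column names and threshold are illustrative only; the real HEAT export formats may differ.

```python
# Hypothetical sketch: join KS4 report IDs to an activity export and select a cohort.
import pandas as pd

report_ids = pd.read_csv("ks4_report_ids.csv")    # column: heat_id
activities = pd.read_csv("activity_export.csv")   # columns: heat_id, activity_name, contact_hours

merged = report_ids.merge(activities, on="heat_id", how="left")

# Example: participants with at least two contact hours of a named activity
hours = (merged[merged["activity_name"] == "Fix Up"]
         .groupby("heat_id")["contact_hours"].sum())
cohort = hours[hours >= 2].index.to_series()
cohort.to_csv("requested_cohort.csv", index=False, header=["heat_id"])
```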

A final word, by way of a caveat. HEAT’s track reports provide one piece of evaluation, the findings from which should be triangulated with results from other sources. However, I do hope this process shows how NPD data can be accessed through HEAT in a meaningful and useful way, and that this will now remove the need for HEAT members to submit resource-intensive NPD applications of their own.

I look forward to receiving lots of requests for more KS4 Track reports from you all! Please contact us at support@heat.ac.uk or me directly at anna.anthony@heat.ac.uk if you would like to discuss how you might go about optimizing the use of HEAT’s tracking data for your own evaluation.


Dr Anna Anthony, Senior Analyst, HEAT

 



References:

Crawford, C. (2014) The link between secondary school characteristics and university participation and outcomes, London: Department for Education.

Cummings, C., K. Laing, J. Law, J. McLaughlin, I. Papps, L. Todd and P. Woolner (2012) Can Changing Aspirations and Attitudes Impact on Educational Attainment? A Review of Interventions, York: Joseph Rowntree Foundation.

Gorard, S. and B.H. See (2013) Overcoming Disadvantage in Education, Abingdon: Routledge.

Sylva, K., Melhuish, E., Sammons, P., Siraj, I., Taggart, B., Smees, R., Toth, K., Welcomme, W. and Hollingworth, H. (2014) Students’ educational and developmental outcomes at age 16. Effective Pre-school, Primary and Secondary Education (EPPSE 3-16) Project. Research Report. London: Department for Education. Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/351496/RR354_-_Students__educational_and_developmental_outcomes_at_age_16.pdf



On 23rd March, as schools across the country closed their doors and a stream of rolling news outlined the unquantifiable but certain impacts on learning and attainment for young people, colleagues across the Widening Participation (WP) and outreach community swiftly turned their attention to alternative, socially-distanced delivery models and ways to quickly address the inevitable widening of  the ‘disadvantage gap’.[i] Although the effects of school closures have doubtless been far-reaching, the cohort of young people uncomfortably labelled ‘disadvantaged children’ have reportedly felt the immediate impacts most keenly and look set to experience the repercussions for years to come. Becky Francis, Chief Executive of the Education Endowment Foundation, predicts ‘at least a reversal’ of the progress made over the last ten years in closing the disadvantage gap for GCSE students, while Robert Halfon, Chair of the Commons Education Select Committee, warns of a ‘potential cascade of mounting social injustice’ that could itself last a decade.[ii]

The uncertain context in which schools and learners found themselves inevitably manifested unprecedented support needs for teachers, parents, and learners, while simultaneously presenting unprecedented challenges and limitations to some aspects of WP outreach as we knew it. Many experienced practitioners well-versed in face-to-face outreach and evaluation were suddenly traversing uncharted terrain in the new WP world order, where online approaches necessarily feature as a primary mode of engagement (in spite of pertinent, lingering concerns about the ‘digital divide’[iii] and the accessibility of online content). What quickly became evident as delivery staff attuned themselves to this brave new world was an aporia of evidence on the subject of ‘what works’ in online delivery, and an urgent scramble for practical, accessible guidance which might enable practitioners to continue to engage young people in effective ways from a distance.

Although similarly beset by challenges, evaluators and researchers quickly reached consensus that we had also been presented with an organic opportunity to implement neoteric research approaches through which we might better understand the impacts of the remarkable and unique circumstances school-age young people now find themselves in. Implementing robust approaches to evaluation of online and distanced outreach provision affords us the prospect of starting to lay some data-driven foundations on which delivery staff might begin to build a new frontier for WP: one which is quickly reactive to the specific needs and requirements of our learners; is innovative and evidence-based; is applicable in a technology-driven context; and which can potentially bridge barriers and divides where face-to-face modes of WP might not be able to. 

In reaction to the immediate needs and questions being posed by Uni Connect delivery staff, the Study Higher Evaluation Team compiled a toolkit to practically support and inform the design, monitoring processes, and evaluation of online activity. The toolkit is intended to be an empowering resource for anyone involved in the creation, monitoring and evaluation of online activity and resources, having been created to be accessible and easy to implement regardless of the level of evaluation-specific knowledge or experience on the part of those designing and delivering new online interventions.

Although elements of the toolkit are bespoke to Study Higher’s specific context and operation, some of the guidance and suggestions may be of use for other consortia and institutions seeking practical cornerstones to creating, monitoring and evaluating their own activity. Five helpful highlights from the toolkit are included below. 

1. Continue to set intended aims and outcomes at the activity/resource design stage

The context we are operating in may be radically different at the moment, but it’s business as usual in terms of how to go about activity and evaluation design. Although this may pose some practical or logistical challenges in an online setting, the principles of robust activity and evaluation design remain the same: all activities and resources should have intended aims and outcomes outlined from the start, as well as clear indicators of what engagement and impact will ‘look like’. For Study Higher it has been important for us to be able to support our Higher Education Liaison Officers (HELOs) to retain an outcomes-focussed approach as they develop new activities and resources or work to adapt existing activity and content for alternative modes of delivery; in our case, these outcomes continue to sit within Study Higher’s broader progression framework which is founded upon the praxis-based NERUPI model.[iv]

2. Capture small steps towards change

Guidance from the Office for Students outlines that a sustained and progressive programme of outreach with multiple activities is likely to have a more positive impact on learners’ knowledge and attitudes toward higher education than single or ad-hoc outreach activity.[v] However, engaging learners at all is currently an outright challenge, and many Uni Connect consortia are re-evaluating and adapting progression frameworks to reflect the unusual circumstances learners find themselves in, and the new challenges and barriers they face.

To effectively outline how activity contributes to the ‘bigger picture’, it is worth creating a very simple theory of change[vi] model up-front to demonstrate exactly how your activity/resource will impact your target learner(s). Theory of change models range from the very simple through to incredibly complex, programme-level models, but at their heart they are simply a visual means of mapping the assumptions which inform an intervention at each stage, and of capturing the network of factors which can influence project or programme outcomes.

Where previously we may not have logged conversations or ad-hoc interactions with learners, it might be helpful to establish a method for recording them now, given the current context and the added complexities of engaging with young people directly. Given the lack of certainty and direction many young people are experiencing in knowing the best options to take in this climate, even minor, less ‘formal’ interactions may help shape learners’ decisions and contribute to their progression in the medium and longer term.

3. Reflective practice is now even more essential to evaluation and outcome capture

For activities and interventions delivered ‘at a distance’, where capturing quantifiable outcome data may be challenging (for example, in a virtual mentoring setting, an informal Q&A, or a discursive workshop), the practitioner’s experience and perceptions of the intervention and outcomes for participant(s) become all the more pertinent.

As soon as the ‘live’ activity ends, note down anything, positive or negative, you perceive in relation to the new, distanced approach to delivery, and anything which evidences a perceived impact of this (or not, as the case may be). Include any comparative insights you may have in relation to face-to-face modes of delivery for similar activity, being sure to document notable changes to dynamics or elements that worked well / didn’t work as well as anticipated. This could be especially insightful if you have delivered the activity in a face-to-face setting before, as you will be well-placed to take note of how the activity or interaction is enhanced or impacted by being facilitated online. While it is not possible to entirely mitigate the issue of subjectivity in this practice, it is helpful to contextualise and corroborate practitioners’ insights with supporting qualitative and/or quantitative data from the participant(s) themselves.

Although this sounds like a straightforward undertaking, it is important for the person undertaking reflective practice to feel comfortable to be as honest and open as possible. This potentially means including any impacts or outcomes which they perceive to be negative or unintended, and highlighting if an activity fell ‘flat’ or failed to engage participant(s) for some reason. Acknowledging shortcomings in this way may feel uncomfortable, but is an important feature of self-reflection and evolution, as well as for facilitating the following suggestion.

4. “Move fast and break things”…

Adopted as a popular mantra by entrepreneurs after being coined by Facebook’s founder and CEO Mark Zuckerberg, the heart of this statement reminds us that sometimes we need to break things to improve them and that making mistakes is, to some extent, an intrinsic consequence of innovation. With ten years of lost ground in terms of closing the disadvantage gap, and the potential for detrimental impacts on some young people to last a further decade, now is the time for affirmative and (reason-informed) radical action.

Zuckerberg’s mantra doesn’t hold up entirely under scrutiny – not least for its brash application and lack of forethought about possible consequences or room for reflection[vii] – but it remains true that, in these unprecedented times, we have an opportunity as a sector to break away from some of the ways WP operated prior to lockdown, to try new things, and to learn from the outcomes of our shared experiences. Now is an apt time to take attenuated risks, to try new things, to ask for honest feedback, and to be prepared to adapt and improve quickly as a result of new insights. To do this responsibly and in such a way as to create a strong infrastructure in the WP digital frontier, it’s vitally important that practitioners keep a record of the new approaches we are trying, along with our rationale and risk-mitigation measures. As more evidence becomes available, it’s possible we will find that these innovations have forged a path to new and exciting modes of engaging and supporting young people.

5.  …but use your data to fix them quickly and regularly

Policy and priorities continue to change quickly at both a local and national level at the moment, and insights into impact and what does/doesn’t work in distanced outreach and evaluation practice are bound to emerge at a fast pace. Swift recording, sharing, and analysis of monitoring data and outcomes from process and impact evaluation enables us to continuously improve and develop in-house provision and maximise the impact of interventions. Given the fast pace of change and new research being conducted, a swift turnaround of evaluation and impact data is essential for us to be able to quickly learn more about ‘what works’ in socially-distanced outreach activity and to share this with our colleagues across the sector.

Vitally, we also retain an ethical responsibility to ensure that we evaluate our activities responsibly. The Academy of Social Sciences lists among its key principles for ethical research that ‘All social science researchers should acknowledge their social responsibilities’, and also that ‘All social science should aim to maximise benefit and minimise harm’.[viii] Swift review and adaptation is essential to pre-empt the potential perpetuation of any negative impacts on participants and to maximise the benefits for the young people we work with.

Although the current situation poses unprecedented challenges and limitations to some aspects of our work as it was, the shift to online delivery and evaluation is an exciting change laden with possibilities and potential for innovation, great insights, and to contribute meaningfully to an emerging area of research. Over the forthcoming weeks and months we are bound to learn much more – as individual practitioners, as consortia, and as a sector - about the many benefits, challenges, impacts, and possibilities intrinsic to distanced and online outreach approaches. 


Dr Emily Scott, Monitoring and Evaluation Manager, Study Higher





[i] Children’s Commissioner for England, 20th April 2020. Tackling the disadvantage gap during the Covid-19 crisis. https://www.childrenscommissioner.gov.uk/publication/tackling-the-disadvantage-gap-during-the-covid-19-crisis/

[ii] P. Foster, 19th May 2020. How Coronavirus is widening the gap in schools. Financial Times, https://www.ft.com/content/50fcc605-674d-4630-9718-d3890eccffbf

[iii] H. Holmes & G. Burgess. (n.d.) “Pay the wi-fi or feed the children”: Coronavirus has intensified the UK’s digital divide. University of Cambridge, https://www.cam.ac.uk/stories/digitaldivide

[iv] Further information about the NERUPI framework is available on their website: http://www.nerupi.co.uk/about/nerupi-framework-overview

[v] C. Millward, 30th October 2019. Sustained outreach makes the difference. Office for Students, https://www.officeforstudents.org.uk/news-blog-and-events/blog/sustained-outreach-makes-the-difference/

[vi] Information about theory of change models and their rationale can be found at: https://www.theoryofchange.org/what-is-theory-of-change/

[vii] An explanation of the adoption of Zuckerberg’s controversial mantra, and an interesting discussion of its downsides, can be found at https://www.businessinsider.com/facebook-hack-shows-again-the-downside-of-move-fast-and-break-things-2018-10?r=US&IR=T

[viii] R. Dingwall et al (2014). Towards Common Principles for Social Science Research Ethics: A Discussion Document for the Academy of Social Sciences (pp.7-8), https://www.acss.org.uk/wp-content/uploads/2014/06/Ethics-Final-Principles_16_06_2014.pdf



    Within Lincolnshire, the Uni Connect programme is delivered by LiNCHigher, based at Bishop Grosseteste University in Lincoln. The project team ordinarily delivers outreach activities as part of the Uni Connect programme to over 40 schools and colleges, reaching approximately 4,200 target learners. The local evaluation for Phase 2 of the programme is managed within the Lincoln Higher Education Research Institute (LHERI) at the University of Lincoln.

    During the current Covid-19 crisis, Uni Connect partnerships are exploring alternative ways of engaging with learners and evaluation methodologies are evolving to reflect the different modes of delivery. Like other evaluators, LHERI is currently embedding evaluation in the online delivery of the LiNCHigher programme. Within the LHERI evaluation team, we also manage a LiNCHigher funded project called ‘Explaining the Gaps’. This project has been running since 2018 and informs the local evaluation using analysis of the Lincolnshire-specific data collected through the national survey managed by CFE. The survey addresses various aspects of higher education including learner awareness of HE, their level of confidence and the practical aspects of HE. ‘Gaps’ in these aspects are identified based upon responses to the survey. One of the aims of the project is to enable the delivery of targeted outreach activities and to identify hidden sub-groups of learners within larger cohorts using quantitative methods. In light of the current situation, the ‘gaps’ identified will be used as a forward planning tool for when schools reopen.

    Explaining the Gaps: Methodology and analysis

    Gaps are identified at a school and year group level, and where possible by further sub-groups. The sub-groups include male and female students, Uni Connect target learners and students with a self-reported disability. This targeted approach is intended to enable LiNCHigher to deliver effective outreach activities within schools, making the biggest impact whilst maximising resources.  

    Although the survey is focussed on Uni Connect target learners, responses are not restricted to these students. Similar to the experience of Uni Connect partnerships in other parts of the country, many schools in Lincolnshire prefer not to single out Uni Connect learners, and collecting responses from a wider population makes it possible to have a comparison group. In 2019, LiNCHigher collected 10,875 responses to the Wave 2 follow-up survey, 9,800 for Wave 1 (2018) and 2,400 at baseline (2017). Waves 1 and 2 were completed online, whilst the baseline data were collected via a paper survey, which explains the lower number of responses at baseline.

    The LiNCHigher version of the survey contained 29 questions about various aspects of higher education. Whilst it is valuable to look at the responses to each of the individual questions on a school by school basis, we have found that it is more beneficial for planning targeted interventions to group questions that address similar themes together.

    The national learner survey is arranged so that distinct blocks of questions address different topics. In order to confirm that questions may be aggregated, we used a technique called principal components analysis (PCA), which reduces the number of variables to a smaller set of so-called hidden or latent variables, or principal components, to aid description and analysis. We used standard criteria (e.g. the scree plot, eigenvalues greater than one) to determine that five was the optimal number. Once established, a simple score, calculated as the average of the survey items within each component, can then be applied to each. Finally, the components are summarised by the underlying theme of the grouped variables.
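    As a rough illustration of this step, an eigenvalue screen in scikit-learn might look like the sketch below. The item columns are hypothetical and the exact software and settings used by LHERI may have differed.

```python
# Rough PCA screen for survey items: standardise, fit PCA, keep components with
# eigenvalues above one (Kaiser criterion). Item columns are hypothetical.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_screen(items: pd.DataFrame):
    scaled = StandardScaler().fit_transform(items)        # put items on a common scale
    pca = PCA().fit(scaled)
    eigenvalues = pca.explained_variance_
    n_keep = int((eigenvalues > 1).sum())                 # e.g. five components here
    loadings = pd.DataFrame(pca.components_[:n_keep].T,
                            index=items.columns,
                            columns=[f"PC{i+1}" for i in range(n_keep)])
    return n_keep, eigenvalues, loadings                  # inspect loadings to name the themes
```

    Theme scores can then be taken as the mean of the items loading on each retained component, as described above.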

    The five LiNCHigher survey themes were:

    • Application knowledge
    • Participation knowledge
    • Confidence and resilience
    • Study skills
    • Personal benefits of Higher Education

    LiNCHigher outreach activities have been mapped to both the Gatsby Benchmarks and the Network for Evaluating and Researching Participation Interventions (NERUPI) Framework in order to categorise each activity by the intended learning objective. The five learner survey themes generated using PCA broadly align with the NERUPI Framework.

    In order to use the data for targeted outreach, firstly the scores within a year group for each of the five survey themes were ranked by centile and reported by quartile. Secondly, an average score for each of the themes was calculated for each year group within individual schools. These scores were then compared to the overall Lincolnshire year group quartiles. Where there was enough data, average scores were also compared for sub-groups. Scores that fell within the lowest or the highest quartiles were highlighted, and all scores were summarised using a red, amber and green traffic light scale presented in table format.  Scores for each of the themes were calculated using all the survey responses within each year group and not just the Uni Connect learner responses. In this way gaps identified through the survey data (i.e. scores that are in the lowest quartile) can be tackled by appropriately mapped activities. For example, Uni Connect target learners in a specific year group within a given school might have a group average score for confidence and resilience within the lowest quartile of scores (shown as red in the summary table). In this case these students might benefit from activities that specifically address this issue.
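    A simplified version of that quartile and traffic-light summary is sketched below. The column names, cut-points and red/amber/green rules are assumptions for illustration, not LiNCHigher’s actual code.

```python
# Sketch of a quartile/RAG summary: compare school-level theme means for a year
# group against the Lincolnshire-wide lower and upper quartiles.
import pandas as pd

THEMES = ["application_knowledge", "participation_knowledge",
          "confidence_resilience", "study_skills", "personal_benefits"]

def rag_summary(df: pd.DataFrame, year_group: int) -> pd.DataFrame:
    yr = df[df["year_group"] == year_group]
    county_q1 = yr[THEMES].quantile(0.25)      # Lincolnshire-wide lower quartile
    county_q3 = yr[THEMES].quantile(0.75)      # and upper quartile
    school_means = yr.groupby("school")[THEMES].mean()

    def flag(col):
        # red = at or below county Q1, green = above county Q3, amber = in between
        return pd.cut(school_means[col],
                      bins=[-float("inf"), county_q1[col], county_q3[col], float("inf")],
                      labels=["red", "amber", "green"])

    return pd.DataFrame({c: flag(c) for c in THEMES})
```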

    As part of our evaluation, we hope to track learners that participated in both Wave 1 and Wave 2 follow-up surveys longitudinally. At the beginning of Phase 2 we identified six schools from different parts of the county as case study schools. When we can visit schools again, we will run a series of focus groups with a number of students from each location but specifically a sample of those that we have longitudinal data for. We originally planned to hold these focus groups at the beginning of the summer term to capture students’ views of the outreach activities they had participated in throughout the school year and explore the impact. As the focus groups have been postponed, we plan to additionally use them as an opportunity to help understand the impact the Covid-19 situation has had on these learners. If the Wave 3 follow-up survey can be carried out in the next academic year we will follow up with the same students again in 2021.

    There are some limitations to the overall analysis. Data collection took place in the Autumn term and could be seen to be out of date as the school year progresses. However, there are observable patterns in the data between Wave 1 and Wave 2, which suggests that the findings can be used as a template, or a starting point, for a school, helping LiNCHigher to plan outreach activities to take place in the next academic year. In addition, the depth of data relies on the number of responses from a school to enable reporting for sub-groups. However, providing feedback on students’ responses is popular with schools and supports collaboration. In practice, it has meant that schools have been more inclined to participate in the survey and this has translated into an increased response rate.

    Lucy Mallinson, Lincoln Higher Education Research Institute, University of Lincoln




      Our recent survey to assess the implications of school closure for Uni Connect evaluations confirmed that most people feel that the current restrictions and school closures resulting from the Covid-19 pandemic will have consequences for the short- and long-term evaluation of the programme. In the short term, evaluators were most concerned about the interruption to their data collection methodology, particularly in terms of the collection of post-activity data (for pre/post activity-level and/or programme-level analysis). Some mentioned problems for their evaluation activities in relation to being able to take forward planned qualitative data collection methods (notably the inability to undertake focus groups and interviews due to social distancing regulations). A suspension of, or change in, the activities delivered has implications for the quality of evaluation data in terms of the ability to draw conclusions about any particular new or existing activities that had been scheduled but not taken place. It also undermines the assessment of sustained/progressive programmes, because learners will not be receiving the programme as envisaged or in a tailored or optimal way. Some respondents anticipate problems due to data sharing arrangements being disrupted, and the current situation is also likely to skew outcome measurements.

      Respondents tended to feel that online delivery will be more difficult to evaluate due to issues of not being able to know the students taking part, get quality data on them and their circumstances, or follow up and track the people who are participating. There is also the issue of the sample who take part in online delivery being skewed towards the more engaged learners. Generally, the opportunities for data collection appear to have been limited by the current situation, although some possibilities have also emerged: for example, putting effort into collecting reflective logs from teachers, who may have more time to complete them. In some areas time has been freed up to support professional development on evaluation for practitioners.

      Long-term problems anticipated included difficulties associated with being able to unpick the effects of the Covid-19 situation on the participant outcomes being measured as part of Uni Connect evaluations. Most people felt it will be harder to prove the impact of Uni Connect. Respondents mentioned a range of potential issues such as ‘false negatives’, confounding factors caused by anxiety or mental health problems due to the pandemic, or students needing to work to support their families, with increases in inequalities between student groups (as well as a gap in the amount of guidance/support being delivered and received during this time). Changed priorities affecting school engagement in evaluation and data collection/sharing activities were also mentioned as a potential long-term issue for Uni Connect evaluators. There were calls for improved access to pupil-level administrative data and for guidance on how to approach GDPR under the new working conditions (including, specifically, messaging around the use of public task and reassurances to schools/colleges that this was a requirement). We should all become advocates for better data at this time, because systematic data analysis will be needed to establish how the pandemic is affecting different target populations in the short and long term.

      It is clear that work will need to be done to adapt evaluation plans and designs now. In this new phase, evaluation becomes more developmental and will need to adapt to unknowns. Ideally, Uni Connect evaluators will be proactive. This will probably involve talking to the stakeholders you're working with to implement your evaluations (even though evaluation might be the last thing on the minds of practitioners who aren't evaluators). However, adjustments need to be made now to update your evaluation and data collection. For example, the outcomes being measured may have changed, which means the evaluation design and criteria for judging effectiveness will need to change. Furthermore, as an evaluator, you are in a position to help practitioners by showing that thinking about evaluation and real-time data now will be highly relevant, because the findings will be really useful in helping to understand the current situation.

      Unfortunately, data collected in a crisis may not be as rigorous as under ‘standard’ conditions, and it’s important to acknowledge where there are gaps and uncertainty about data quality. However, it is usually possible to formulate at least tentative conclusions, even from a small amount of data. For example, interviews with a small sample may be done quite quickly, and will provide results where a full-scale survey is not possible. Systematic and valid evaluation data and information will be at a premium right now, and there is much to learn.

      The Evaluation Capability Building team is seeking to find ways in which we can support each other as an evaluation community. Please let us know what you are doing to update your plans and how we can support this. If you have advice and tips for others, these can be shared on the live Q&A Forum section of the website.

      Read the write-up of the implications of school closures for Uni Connect evaluation here

      Join our webinar at 10.00 on 22nd May, which will discuss the findings of our surveys. Pre-registration is not required; just join at the time using this link: https://us02web.zoom.us/j/81108606474

      You might also be interested in the results of the responses from the survey of parents and teachers which are going out via the University of Exeter Twitter feed at https://twitter.com/UniofExeterNews/status/1257602401210695690

      Online links to materials to support evaluation during a pandemic:

      Inspiring impact - https://www.inspiringimpact.org/impact-support-during-covid-19/

       Better Evaluation - https://www.betterevaluation.org/en/blog/adapting-evaluation-time-covid-19-part-1-manage

       Michael Quinn Patton - https://bluemarbleeval.org/latest/evaluation-implications-coronavirus-global-health-pandemic-emergency  



        When I tell people about what we do at The Centre for Education and Youth I often say that evaluation is our ‘bread and butter’. While we also love working on big research pieces, policy events and resources for practitioners, we really value our work supporting organisations on the ground to learn and grow. We avoid doing simple ‘Did it work?’ evaluations as, for us, the key question is not just ‘Did this have an impact?’ but also:

        • How did it make an impact? What are the mechanisms behind that impact?
        • On which outcomes was the impact stronger or weaker?
        • How did different activities impact on different groups? How does impact vary across regions, age groups or other demographics?
        • What factors facilitated impact and what inhibited it?
        • How likely is it that impact on short term outcomes will lead to achieving the long-term goal?

        The aim of evaluation is improvement, rather than a pat on the back for a job well done. So, the main question we ask is: how can a programme change to improve the impact on children and young people?


        This is why we are always especially excited to work with organisations on multi-year evaluations, as this is the only way to scrutinise the impact of implementing recommendations year on year and truly see the evaluation cycle all the way through. We have seen the benefit of this with a range of different partners and projects, such as our three-year evaluation with Arts Council England.

        We have been working with Aspire to HE, the Uni Connect programme in the West Midlands, for two years as their evaluation and research partner. Aspire to HE works with local school and college partners to run a Widening Participation (WP) programme for pupils in Years 9 to 13. Their programme supports in-school outreach-ambassadors who provide Information, Advice and Guidance (IAG) and mentoring, as well as campus visits and other WP events and activities. Their core aim is to increase the number of young people from disadvantaged backgrounds progressing to HE.

        Aspire to HE share our philosophy of learning from evaluation and seeking to improve impact based on findings. Our long-term partnership with Aspire to HE has meant we can carry out a range of research and evaluation projects that demonstrate some key tenets of good evaluation practice.

        1.     Collaboration: At each stage of the project we have drawn on the knowledge of Aspire to HE’s stakeholders. This was especially important when we initially designed the evaluation framework and methodology. We conducted research into the local context and the barriers to HE progression experienced by young people in the Black Country. This allowed us to identify the place-related barriers to progression and to pin down the intended outcomes of the programme. We also conducted in-depth interviews with stakeholders such as teachers, college leads and the Aspire to HE core team. This is what allowed us to develop an evaluation framework that was tailored to the programme and to stakeholders’ needs.

        2.     An active role in evaluation: We don’t think of ourselves as ‘evaluation consultants’. As a social enterprise we want to see sustained change in the teams we work with as well as improvements to programmes. Therefore, it’s important for us to upskill our partner organisations so they can take an increasingly active role in their evaluation, rather than building up a dependency. One way to do this, as we have with other evaluation partners such as the national gambling outreach charity GamCare, is to run evaluation training workshops for staff. Another is to involve staff in the evaluation on an ongoing basis. This is what we have done with Aspire to HE: their team have been highly involved in the data collection and analysis. This year we have used methods such as Most Significant Change, in which delivery staff and managers sort and assess qualitative responses from pupils who have taken part in campus visits in order to identify the events’ main impacts. This upskills staff in evaluation methodology, helping them to understand the relevance of evaluation to their work and to build the skills to carry out evaluation more objectively and effectively. Staff can also make changes to their practice as findings emerge throughout the year, rather than waiting for the final reporting stage.

        3.     Research-informed activities: Not all of our research activity with Aspire to HE has been evaluation focused. We have also investigated widening participation literature and conducted primary research to inform Aspire to HE’s assumptions and practice. This research complements and strengthens the evaluation, for example by interrogating the assumption that the short-term outcomes we measure are likely to lead to the long term goal, in this case progression to HE.

        Other research projects have shaped the programme. We have created a knowledge-based curriculum based on a series of consultation focus groups with WP practitioners, teachers, school pupils, undergraduate students and other experts, examining the knowledge young people need to support their progression to HE. This curriculum now underpins the programme’s information, advice and guidance activities and allowed us to develop an evaluation tool to test participants’ knowledge. This alignment between research findings, programme activities and evaluation strengthens both the programme itself and the validity of the evaluation. It ensures that we are looking at the right outcomes and using the right tools to test whether there has been impact on those outcomes.

        Effective evaluation practice is not always easy; it takes investment from an organisation and a commitment to making sure that learning is taken forward into programme changes. Over the years, we have found that incorporating the principles outlined above (drawing on stakeholders’ expertise, upskilling staff and supporting evaluation with other types of research) means we can answer a range of detailed questions about impact and improve programmes, whilst also leaving teams stronger and more able to embed evaluation into their everyday practice.

         Ellie Mulcahy, Head of Research, The Centre for Education and Youth



          As a consortium, Aspire to HE strive to evidence what works, but we don’t shy away from shining a light on what didn’t, so that we can learn and grow. We believe this can be achieved most effectively through an external lens. We wanted to work with a partner that would be a critical friend and would support us to measure impact and improve our practice for our young people.

          Having an external evaluator has allowed Aspire to HE to bring a wealth of knowledge and experience on board in a cost effective manner. Working with an evaluation company gives us access to expertise: a team of qualified professionals with many different ideas and techniques.

          Choosing a partner with the right ethos was very important to us. Aspire to HE wanted to work with someone who had the same passion for our goals and understanding of barriers, as well as experience of conducting research with young people and working within education. It was really exciting when we managed to find a partner that encompassed all these things!

          The Centre for Education and Youth (CfEY) is a ‘think and action-tank’ who believe society should ensure all children and young people receive the support they need to make a fulfilling transition to adulthood. This ethos aligns with that of the Uni Connect Programme and we felt they would provide an excellent partner for our consortium.

          CfEY have lots of experience of working with young people and conducting research in and around educational establishments; the majority of their team are former teachers and youth workers. This has been of great benefit to us, as they have been able to understand the difficulties of getting into schools and colleges to talk to young people and the complex nature of our delivery.

          They have also authored a number of reports and research papers on widening participation and young people’s life outcomes, such as ‘Careers Education: What should young people learn and when?’ (in partnership with Founders4schools), ‘Boys on track: improving support for white FSM-eligible and black Caribbean boys in London’ (in partnership with the Greater London Authority) and ‘Partners in Progression: Engaging parents in university access’ (in partnership with King’s College London), to name a few. If you are interested in reading any of these, or seeing the many others, please follow the link: https://cfey.org/reports/

          We first worked with CfEY in 2018, bringing them on board to look at our evaluation framework and methodologies. They undertook an in-depth review of what we had done so far, what had worked well and what hadn’t worked so well, and spoke to our partners and stakeholders to get an understanding of what they were looking for from the evaluation process. CfEY also undertook lots of consultation with our staff to consider how evaluation would work for them on the ground. This was a really important part of bringing staff on board with evaluation from the start. Building a strong relationship with our evaluation partners was a crucial step to getting it right. Staff felt consulted and considered in the creation of the evaluation framework and tools that were going to be used.


          This first iteration of our new evaluation framework provided us with a clear structure and a consistent methodology for our evaluation. CfEY provided tools that the team could use to evaluate the work they were doing, using pre and post evaluation surveys as well as activity-level surveys for each event we delivered.

          It was important for Aspire to HE that we not only had quality data to look at our impact and improve best practice but that we could also tell the story of our young people. So we incorporated qualitative data in the form of interviews and deep dives with our students to look at their journey through a year of Aspire to HE delivery. Working with CfEY gave us a team of people with lots of experience in conducting this type of research and who could provide us with a report of this work.

          The report CfEY pulled together detailed the results of our evaluation and recommendations on how to hone our practice to best support young people and achieve our desired outcomes. This has been presented to staff and at our governance meeting to showcase the work we have been doing, and it has informed part of our review process for our delivery model and approach to outreach work.

          Having achieved such a successful partnership for 2018/19 we went out to tender to use an external evaluator again in 2019/20. CfEY were successful in gaining the tender for this academic year and we are pleased to be building on this relationship again this year.

          The 2019/20 evaluation is going really well! It was important for us that this year’s evaluation included upskilling our staff and building a team that could continue to use our evaluation tools and techniques to good effect. A review of the processes and tools we have used has been undertaken and improvements made.

          Additionally, CfEY has supported Aspire to HE to undertake the development of a Knowledge Curriculum to use alongside our progression framework. Aspire to HE consulted groups of their key stakeholders to identify the key elements of knowledge which young people need in order to progress to HE.

          These stakeholders included:

          • The central Aspire to HE programme team 
          • Programme Leads and Delivery Practitioners from the 7 Aspire college partners
          • Key Stage 3 and 4 Uni Connect pupils from a local target school
          • Current undergraduates at the University of Wolverhampton 

          The consultation workshops with these groups were designed to identify the body of knowledge a young person needs to make informed decisions about HE, and group this into themes. Themes were then refined by combining or revising as necessary. In a follow up consultation, key stakeholders considered each theme and its sub themes in detail and identified how knowledge within each theme builds from Key Stage 3, to Key Stage 4 and into Year 12 and Year 13.

          This has underpinned a key part of our pre and post evaluation, as students now complete a knowledge ‘test’ to more objectively measure change in what they know about HE. This adds another element to the evaluation survey, alongside the self-reported answers, which gives us another set of data to examine and report on.

          Staff have embraced the new approach this year and have gathered lots of excellent evaluation data so far. I am really excited to see what our evaluation partnership produces this year!

          The Centre for Education and Youth will be blogging to tell us about their experience of this partnership and how they approach evaluation to ensure the greatest impact on young people.

Hannah Guy, Data, Monitoring and Evaluation Officer, Aspire to HE

            Anyone in the world

Over recent years the Office for Students has placed increasing emphasis on higher education providers evidencing 'what works' in widening participation outreach. This drive to improve the evidence base has been supported by the OfS SEF, standards of evidence and capability building across the sector. OfS evaluation guidance sets out three types of evaluation evidence: type 1 (narrative), where interventions are grounded in a theory of change and research literature/evidence from across the sector; type 2 (empirical), where interventions are supported by evidence of pre/post change; and type 3 (causal), where experimental or RCT approaches compare the outcomes of an intervention group against a comparison or control group.

Achieving an experimental or quasi-experimental (type 3) impact evaluation design is challenging for a range of reasons. Outreach practitioners often say that randomised controlled trials (RCTs) should not be employed because of ethical concerns (some young people will miss out), because it is difficult to collect data and obtain consent from non-participants, and, importantly, because it is difficult to control for 'contamination' effects (i.e. it is hard to be sure that a control or comparison group did not benefit from other types of support). This last point matters because RCTs are typically used in medical research, where the dosage of a treatment (e.g. drug A) can be controlled within the trial and participants are unlikely to have access to the treatment outside it. Nevertheless, it remains important for NCOP evaluators to aspire to type 3 evaluation methods, since we cannot assume that outreach makes a positive impact without robust evidence of the outcomes it achieves. One approach that can support this is a quasi-experimental design.

            The Aimhigher Plus evaluation approach discussed here involves a quasi-experimental design across the programme activities based on the Propensity Score Matching (PSM) methodology.[i] This type of approach will be of interest to you if you are seeking to make the evaluation of the impact of NCOP interventions more robust.

The Aimhigher Plus programme is fortunate in having well-established evaluation tools and methodologies that have been highlighted as good practice by the OfS and the Sutton Trust. The methodology is also supported by my PhD at the University of Birmingham School of Education and Economics, which means the approach has been peer reviewed by leading academics including Peter Davies, Tracy Whatmore and Claire Crawford.

            Theoretical approach underpinning the evaluation plan

The Aimhigher Plus evaluation plan is underpinned by a well-developed theory of change, an associated logic model and a learner progression framework. The Theory of Change synthesises the sociological and psychological factors underpinning educational inequalities between socio-economic groups throughout the learner life-cycle, operationalised into five key barriers known as the 5As (Awareness, Aspirations, Attainment, Application and Access). These barriers are addressed through six targeted intervention types (campus visits, information, advice and guidance (IAG), masterclasses, mentoring, tutoring and summer schools), which aim to increase the likelihood of disadvantaged learners progressing to HE.

The evaluation design aims to triangulate quantitative and qualitative evaluation, including outcome/impact measures, formative and process evaluation approaches and secondary data sets, taking a realist approach in order to explore 'what works', in what contexts and for which learners. A quasi-experimental design is being used across the programme and intervention types to assess impact. The approach employs a matched-groups design in which outcomes are compared between learners who have engaged in NCOP activities (intervention group) and those who have not (comparison group). The evaluation is supported by national (CfE) and local survey data to measure shifts in attitudes, and by access to school/college and national administrative data sets to support tracking and the analysis of learner outcomes (attainment, progression to level courses and entry to HE).

The evidence of impact will complement the formative and process evaluation, including feedback from NCOP staff, school/college partners and learners, and externally commissioned case studies, to ensure activities and associated content effectively meet learners' needs and programme objectives. The diagram below provides an overview of the cycle of phase two evaluation activities and the standards of evidence that these evaluations support.

            Evaluation Methodology and Standards of Evidence

            Building on the existing evidence base

Within phase one, Aimhigher West Midlands undertook analysis that began to demonstrate the impact of the programme. Over 4,000 students were tracked across all schools and FE colleges in terms of UCAS acceptance rates. This empirical evidence compared outcomes between NCOP students who had and had not engaged in the programme, and compared outcomes by students' level of engagement with interventions.

The analysis of UCAS acceptance data (2018),[iii] reported in the Aimhigher NCOP HE Progression Summary 2018, shows that 57% of AHWM NCOP students who engaged were accepted to HE (via UCAS), compared with 37% for a comparison group of AHWM NCOP students who did not engage; students who engaged were therefore over 1.5 times as likely to be accepted to HE. There also appears to be a broadly linear relationship between level of engagement and HE participation, as shown in the graph below. This empirical evidence base (OfS evaluation standard type 2) suggests that the Aimhigher Plus programme is making a difference and is worth continuing, although it does not establish definitive causal effects. The report was recently highlighted as good practice in the OfS 'what works' literature review (to be published in early 2020).

            HE acceptance rates (%) by engagement
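For readers who want to sanity-check the headline figures, the relative acceptance rate quoted above can be reproduced in a couple of lines of Python. Only the two percentages come from the report; everything else here is illustrative.

# Reproducing the arithmetic behind the "over 1.5 times" claim.
engaged_rate = 0.57      # HE acceptance rate for engaged AHWM NCOP students
comparison_rate = 0.37   # HE acceptance rate for the non-engaged comparison group

relative_rate = engaged_rate / comparison_rate
print(f"Engaged students were {relative_rate:.2f} times as likely to be accepted")
# Prints roughly 1.54, consistent with "over 1.5 times" in the summary above.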

            Next steps: testing for causality

The PSM methodology in the phase 2 evaluation plan will strengthen this evidence base by testing for causality using a matched-groups design. PSM is useful for supporting causal inferences in non-experimental settings where selecting a comparison group is difficult because of the complexity of the pre-treatment characteristics. In simpler matching approaches, groups are matched on a small number of characteristics (i.e. to make sure the groups are alike).[ii] PSM instead uses propensity scores (the predicted probability of membership of the treatment group versus the comparison group, estimated using logistic regression on pre-treatment characteristics) both to match learners and to control for the covariates that predict the observed outcomes.
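As an illustration only (not the Aimhigher team's actual code or variable names), the propensity-score step might look something like the following Python sketch. It assumes a pandas DataFrame called learners with one row per young person and invented column names; a real analysis would use the partnership's own data structures.

import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical pre-treatment characteristics used to predict treatment membership.
covariates = ["ks2_score", "fsm", "gender", "ethnicity", "disability", "eal"]

# One-hot encode the categorical covariates so they can enter the regression.
X = pd.get_dummies(learners[covariates], drop_first=True)
y = learners["engaged"]  # 1 = engaged in NCOP interventions, 0 = did not engage

# Logistic regression predicting the probability of being in the treatment group.
model = LogisticRegression(max_iter=1000).fit(X, y)

# The propensity score is the predicted probability of treatment-group membership.
learners["propensity"] = model.predict_proba(X)[:, 1]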

NCOP learners within the treatment (NCOP intervention) and non-treatment groups will be matched on key variables that have been found to influence attainment and HE progression rates. Evidence suggests that the most significant factor associated with progression to HE is a learner's prior level of attainment.[iv] Prior attainment and HE progression rates also vary across socio-economic groups,[v] gender,[vi] ethnicity,[vii] disability,[viii] and English as an additional language (EAL).[ix] In addition to the factors outlined above, we will only compare learners' outcomes where they attend the same school or college; evidence suggests that it is important to control for the school environment and experience in this way.[ix] Learners from NCOP wards will be matched on these characteristics within the treatment and non-treatment comparison groups. The diagram below provides a summary of the variables that will be matched and controlled for. PSM will be employed to match the treatment and comparison groups and to estimate the treatment effect size.

            Match group design
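Continuing the illustrative sketch above (still with invented column names), one simple way to operationalise the matched-group comparison is 1-to-1 nearest-neighbour matching on the propensity score within each school, followed by a comparison of HE acceptance across the matched pairs. A full PSM analysis would also apply calipers and check covariate balance; those steps are omitted here for brevity.

def match_within_schools(learners):
    """Pair each engaged learner with the nearest-propensity non-engaged learner in the same school."""
    pairs = []
    for _, school in learners.groupby("school_id"):
        treated = school[school["engaged"] == 1]
        pool = school[school["engaged"] == 0].copy()
        for _, t in treated.iterrows():
            if pool.empty:
                break  # no unmatched comparison learners left in this school
            # Pick the non-engaged learner with the closest propensity score.
            distances = (pool["propensity"] - t["propensity"]).abs()
            best = distances.idxmin()
            pairs.append((t, pool.loc[best]))
            pool = pool.drop(index=best)  # match without replacement
    return pairs

pairs = match_within_schools(learners)

# Simple estimate of the effect on HE acceptance across matched pairs
# (accepted_he is a hypothetical 0/1 outcome column).
effect = sum(t["accepted_he"] - c["accepted_he"] for t, c in pairs) / len(pairs)
print(f"Estimated difference in HE acceptance (matched sample): {effect:.1%}")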

A key component of our matched-group design is that the non-treatment group will only include learners who have not engaged (i.e. non-participants) in NCOP or other widening participation activities. These data on non-participants are collected for a sample of students via annual surveys, which include questions about their wider participation in outreach activities beyond NCOP. Without this approach, comparisons would be made between a treatment group (for whom the dosage of intervention is known) and a so-called control/comparison group (for whom the dosage of interventions is only partially known or not known at all). This risks suppressing any significant impact, as the control/comparison group may themselves have engaged in WP interventions.

Partnership working underpins the success of the evaluation. Implementation is supported by colleagues across the Aimhigher partnership, including the central team, partner HEIs, FE colleges and schools across the West Midlands region. Learner engagement in NCOP activities is tracked via the Aimhigher Database. Furthermore, data are collected to measure short- and medium-term outcomes as well as long-term HE progression outcomes. Individualised UCAS outcomes data are sourced from schools, which agree to provide data as part of their involvement in the programme. Local data are complemented with data sourced from administrative datasets, including the National Pupil Database (NPD), Individualised Learner Record (ILR), UCAS and Higher Education Statistics Agency (HESA) data.

As an OfS-approved tracker service, Aimhigher West Midlands has a track record and expertise in working with national administrative datasets, which can otherwise be very challenging to negotiate and require a high level of expertise to analyse. Becoming an approved researcher for the Office for National Statistics (ONS) Secure Research Service (SRS) has enabled us to secure access to de-identified, unpublished data in order to work on research projects for the public good. The other tracking organisations (HEAT and EMWPREP) are also helping to ensure that more outcomes data become available to NCOP evaluation teams in future. However, there is a particularly significant time delay in receiving NPD data (in the past, typically 12 months after the data are processed), and we are looking to the OfS to expedite the processes involved. Without these data it is not possible to employ quasi-experimental approaches and support the Office for Students' drive to identify 'what works', in what contexts and for whom.

            Matthew Horton
            Aimhigher Research and Monitoring Officer



            [i] Paul Rosenbaum and Donald Rubin introduced the technique in 1983. See for example:  https://scholarworks.umass.edu/pare/vol21/iss1/4/

            [ii] There is a danger that error may be introduced and regression to the mean may occur (for example if ‘worst’ cases from one group are compared to ‘best’ cases in another group).

[iii] People who made an application and were accepted through UCAS to start an undergraduate course in the 2018 cycle.

[iv] DfE, 2014; Gorard, 2012; BIS, 2013; Goodman et al., 2010; Chowdry, 2013

[v] DfE, 2009; DfE SFR, 2013; BIS, 2015; Sutton Trust, 2010

[vi] DfE SFR, 2016; HESA, 2014/15

[vii] DfE SFR, 2016; UCAS End of Cycle Report, 2015

[viii] DfE SFR, 2016

[ix] Perry, 2016

[x] Bandura, 1994; Bryk et al., 2001; Rosenbaum et al., 1988


              Anyone in the world

              In October the OfS published the NCOP End of Phase 1 report for the national formative and impact evaluations, combining work carried out by CFE Research, myself and colleagues at Sheffield Hallam University, and the Behavioural Insights Team.

Most of you, if you are NCOP 'old hands', will be familiar with the various activities of the two evaluation strands. The Impact strand was mostly concerned with gathering data on pupils' interaction with outreach activities. The Formative strand, while beginning to identify the capability needs we are now building on in Phase Two, was largely concerned with identifying optimal ways for partnerships to work, in terms of operating models and structures that capture the complexity of working across multiple-partner consortia. This was done largely by running NCOP staff surveys and by a series of 'field visits' to 12 of the 29 areas. I was personally involved in six of these visits, collecting data from an average of 10 interviews across two-day trips. We were able to capture the perspectives of consortium leads and board members, partner leads, school- and college-based NCOP-funded delivery staff, and data and evaluation officers. The formative evaluation also produced a number of fascinating case studies illustrating everything from organisational structures to innovative outreach activities.

The Impact evaluation consisted largely of a baseline and follow-up survey of over 4,000 learners who had (up to July 2019) taken part in the programme, three randomised controlled trials (RCTs) and a qualitative review of the partnerships' evaluation evidence. On the basis of learning from Phase One, we made recommendations on how the programme could be enhanced and evaluation practice strengthened in Phase Two. While the two strands operated (and reported) separately, this joint Phase One report specifically sets out to draw together findings across the entire scope of activities that you have been engaged in.

In summary, we found that:

              • NCOPs are definitely adding value - the collaborative approach is successfully addressing ‘cold spots’ in outreach provision, and as a result some schools and further education colleges (FECs) are engaging in outreach for the first time ever, or since the demise of Aimhigher back in 2011. 
              • This is reflected in a combination of new and well-established interventions that are increasingly structured to create a sustained and progressive programme of support for NCOP learners through Years 9 to 13.
              • Partnerships are moving away from offering fixed menus of activities and increasingly providing programmes that are tailored to the age and circumstances of learners, school/college type and the local context.
              • NCOP is facilitating access to high-quality, impartial information, advice and guidance (IAG) for target learners, in support of the achievement of the programme’s objective to help ensure post-16 and post-18 decisions are better informed.
              • Reflecting the priorities of many NCOPs at inception, there has been notable progress in addressing the challenge of engaging parents as key influencers on young people’s aspirations and decision-making.
              • Locating NCOP staff within schools and FECs to co-ordinate and/or deliver outreach activities boosts the capacity of the schools/FECs to engage with the programme. It also helps to support the professional development of teaching staff by raising their awareness of the routes to, and opportunities in, HE.

Despite these encouraging findings, we also identified several key areas in which partnerships could work better:

              • Some partnerships’ governing bodies do not reflect the core membership of the partnership they oversee and some lack strategic focus. We recommend that in future all core partners are represented at a strategic and operational level through membership of the governing body and/or operational group or sub-group.
              • Good communication between the strategic and operational groups is imperative, as is communication between the lead institution and partners and between partners themselves. Although communication had improved, some partnership staff still report that it is not as effective as it could be.
              • Schools, colleges and young people are best placed to articulate their needs and the challenges they face, but they are not always represented at a strategic or operational level within partnerships and, as such, have limited opportunity to shape delivery plans. 

There is still some confusion among schools and FECs about the aims and objectives of NCOP, and about the difference between NCOP and other outreach activities, which is acting as a barrier to engagement in the programme. As a result, one of our recommendations to the OfS is to strengthen the brand (Aimhigher, for example, was successful as a brand between 2004 and 2011) or at least, given that partnerships have been encouraged to develop distinct brand names, to introduce a degree of consistency across local branding (e.g. a common strapline) to create a national identity that differentiates NCOP from other outreach.

              We would be very interested to hear what you think about this - and any other issues highlighted in this blogpost.

              Author: Colin McCaig, ECAP evaluation team member, 20th Nov 2019

                by Outreach Evaluation Hub - Wednesday, 13 November 2019, 4:38 PM
                Anyone in the world

                HeppSY is approaching Phase 2 with a growing sophistication in its approach to data and evaluation. The recent introduction of the NCOP Progression Framework has prompted a review of HeppSY’s Evaluation Model and has made for exciting developments, enabling an even tighter integration between the programme and its evaluation.

Lucy Clague, Evaluation and Data Manager, explained how HeppSY's work has been informed by Bourdieu's concepts of economic, social and cultural capital. The theory, which has underpinned the partners' understanding of inequality in access to higher education, recognises that the physical, economic and social circumstances a young person comes from shape the barriers they face. All of HeppSY's outreach activity rests on four barriers a young person might experience around accessing higher education. Informed by a collaboration of practitioners, academics and partners specialising in widening participation, the development of a programme Logic Model led to these four barriers, or strands: confidence and resilience; attainment; higher education knowledge; and career knowledge. This has set HeppSY's template for activity based around 'what works', forming the principles of the Evaluation Plan.

When the OfS tasked NCOP partnerships with developing Progression Frameworks for Phase 2, it made sense for the coding of the activities in the Progression Framework to follow the coding of evidence in HeppSY's indicator bank, which had been informing its evaluation. The learning outcomes for the activities in the Progression Framework are skills-based and incremental, designed to show the journey young people are going through, with data and evidence collection organised in parallel to follow that journey. In addition, mapping the framework against the Gatsby benchmarks and Ofsted inspection expectations for careers guidance means the value of the activities extends beyond NCOP alone.

The creation of the Progression Framework then led to the separation of the Phase 1 Logic Model into two: the first reflecting 'programme' outcomes and the second focusing on 'learner' outcomes, linking it neatly with the new Progression Framework. This ensured a cohesive design across all elements of the programme and its evaluation. Although separate, the Logic Models are interlinked and interdependent; to be successful, both models require positive outcomes. The iterative nature of the HeppSY programme is also at play here, with evaluative evidence from the overarching programme feeding into the development of the learner experience, and learning from the learner experience feeding back into the ongoing re-development of the programme.

The ambition doesn't stop there. The HeppSY Learner Progression Logic Model mirrors the statements held in the Progression Framework, ensuring that evidence can be collected which maps onto both models. In the next few months the team are planning a data analytics tool, which will support activity planning and evaluation on the ground. An online diagnostic tool is being developed with schools and colleges, and will in future be available on the student-facing part of HeppSY's website. The tool will use a series of questions measured on Likert scales to capture attitudinal change across 24 skills-based outcomes (coded in line with the Progression Framework and Evaluation Indicator Bank).
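As a purely illustrative sketch (the question wording, field names and scoring rules of HeppSY's actual tool are not described in this post), pre/post Likert responses for the 24 outcomes could be turned into change scores along the following lines, assuming responses coded 1-5 and hypothetical column names pre_q1..pre_q24 and post_q1..post_q24.

import pandas as pd

N_OUTCOMES = 24  # skills-based outcomes coded against the Progression Framework

def change_scores(responses: pd.DataFrame) -> pd.DataFrame:
    """Post-minus-pre change per learner for each outcome (positive = attitudinal gain)."""
    changes = {
        f"change_q{i}": responses[f"post_q{i}"] - responses[f"pre_q{i}"]
        for i in range(1, N_OUTCOMES + 1)
    }
    return pd.DataFrame(changes, index=responses.index)

# Example usage: average shift per outcome across all respondents.
# change_scores(survey_responses).mean().sort_values(ascending=False)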

Being able to feed learner voice data and analysis back into delivery is another reason that practice has now reached such a critical point in its development.

Work is currently underway to ensure that the diagnostic tool's scales represent reliable measures that predict HE outcomes and intention to apply to HE. The annual survey (which has been mapped against the four HeppSY strands since 2017) has now also been mapped onto the Progression Framework outcomes, enabling HeppSY to draw on and link with data from the 10,000 responses received in each wave of the survey.
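The post does not say which reliability statistic HeppSY is using; Cronbach's alpha is one common internal-consistency check for Likert scales, and a rough sketch of it is shown below for an assumed DataFrame items whose columns are the items making up a single scale.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency (Cronbach's alpha) for a set of Likert items forming one scale."""
    k = items.shape[1]                                 # number of items in the scale
    item_variance_sum = items.var(axis=0, ddof=1).sum()
    total_score_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variance_sum / total_score_variance)

# Values above roughly 0.7 are conventionally treated as acceptable reliability.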

Sharon Woodward-Baker, HeppSY Programme Manager, says the programme feels more iterative and cohesive than ever, and that having a clear theoretical starting point has helped to line up funding, activity development, delivery and evaluation, with the aim of leaving target learners better equipped to face future challenges. The personalised approach has reconciled a high-level Logic Model with a Progression Framework which focuses on the individual, and has allowed data and evidence to feed into learner activity (and vice versa). A wealth of management information is being generated which will support both formative and summative evaluation of the higher education support needs of the target groups in South Yorkshire.

                Considering it's still early in Phase 2, the team is pleased to have the data and evidence infrastructure firmly in place. In HeppSY’s case the formula, “more thinking = better ideas” seems to be paying off.

                HeppSY's Recommendations:

• A Logic Model developed through local evidence.
• An Evaluation Plan which links in with the wider programme but is flexible enough to meet changing funding requirements.
• A team who recognise the value in having a comprehensive collaborative approach between data and practice.
• A Progression Framework with skills and outcomes which are quantifiable.
• A close partnership between data and the evolution of student experience.


                Sharon and Lucy at the NCOP practitioners conference

                Author(s): 

                • Sharon Woodward-Baker, HeppSY Programme Manager
                • Lucy Clague, HeppSY Evaluation and Data Manager

