The National Student Survey has exerted “downwards pressure on standards…driven by both the survey’s current structure and its usage in developing sector league tables and rankings.”

So said ministers Donelan, Solloway and Lord Bethell when they published their September policy paper on reducing bureaucratic burden in research, innovation and higher education, which also announced a "radical, root and branch review of the National Student Survey".

Since then we at GuildHE have been gathering members' views of the survey: first to identify whether there is any evidence to support these claims and whether the NSS is seen as a useful tool by institutions, and then to consider how we might continue to reform it.

Looking back

Looking back at the outcomes of the HEFCE-run consultation from 2004 (2004/33), it is interesting to see how well the National Student Survey has stood the test of time. The survey was proposed as "an essential element of the revised quality assurance framework for higher education, as part of a package of new public information on teaching quality", and was described as both "ambitious" and seeking to "report results that are both reliable and at a detailed level."

Rather than seeing the NSS as having had the negative impact described by the ministers, members echoed its original purpose, viewing it as a highly effective enhancement tool for improving the student learning experience: it provides a comprehensive overview of student views on the key issues relating to their academic experience, without being disproportionately burdensome relative to its value.

The data produced by the survey is used by higher education providers as a key part of their annual and ongoing approach to quality review and enhancement. It is particularly helpful as one of the many tools for engaging students and gathering their views, something that GuildHE institutions see as especially important.

The survey's strong response rates allow deep dives into the views of different groups of students and identify key areas of focus for departments, institutions and the sector as a whole, as demonstrated by the focus on assessment and feedback over recent years.

Comparisons vs expectations

One of the key criticisms of the NSS has been that students have only been at one institution, and so they are unable to compare their experience with that at other institutions. This is a legitimate concern, but it is perhaps more a challenge of how the information is used. There is a strong case that the information shouldn't feed into comparisons between institutions, such as newspaper league tables, since that is not what it is designed for.

However, whilst students cannot compare their experience with other institutions, they can compare it with their own expectations. Institutions are unlikely to want to lower student expectations, so the higher those expectations, the further an institution has to go to exceed them and achieve a high NSS result – which makes the survey a useful measure.

Value of the perception of independence

Another key benefit identified was that students see the NSS as independent. This gives the survey extra credibility, both in how the information is gathered and in the fact that the results are published and therefore acted upon.

This independence could be further enhanced if institutions were not expected to promote the survey, which could mean removing incentives, prize draws and so on.

Challenge of data in smaller providers

There are a number of specific challenges in smaller providers. Firstly, some institutions or courses do not have enough data to reach the threshold for external reporting, which can then be presented as "No data available". As more providers join the OfS register, many with small student numbers, this could result in a large number of providers without data available. However, HEFCE and the OfS have looked closely at this over many years and developed ways of aggregating across years or up the JACS/CAH coding hierarchy.

Secondly, in smaller providers the data can be volatile, changing significantly year on year because just a couple of students had a better or worse experience. However, benchmarks and the statistical "significance" of the data are now published, which helps ensure that the data is given due weight, and perhaps more support could be provided to boards of governors to interpret the data in a robust but appropriate way as they discharge their academic governance responsibilities.

Additionally, smaller providers can have a small number of courses in sometimes related but quite different areas. When a small course fails to meet its reporting threshold, its results are first aggregated with those of the previous year; if that still doesn't meet the threshold, they are aggregated with other courses' results, which can give misleading outcomes. This can mean the data for one course is presented as the satisfaction rate for the whole institution, or produce confusing aggregations, and it is something the review should consider.

Gaming the system?

There was no evidence that institutions are gaming the system. Indeed, the fact that the survey is completed from January onwards means that institutions would not be able to trade higher grades for higher NSS scores, as that would not fit with the timing of assessment. Additionally, the suggestion that institutions would simply make whole courses easier is guarded against by both robust internal quality processes and rigorous external assurance measures, such as review by external examiners, PSRBs and regulators. There is also the simple fact that if students are not getting something, they will tell you about it!

Reforming the NSS

There was a strong sense from discussions with members over the last couple of months that the National Student Survey plays an important and valuable role and should be retained as a key vehicle for gathering the views of students.

There is a legitimate challenge here: asking largely the same questions for almost 15 years provides a helpful longitudinal perspective, but may also be subject to the law of diminishing returns. It is therefore worth considering how we can continue to evolve the National Student Survey so that it remains fit for purpose.

One way to improve the NSS would be for the survey to continue to be used to enhance the student experience, to give regulators the information they need and to inform prospective students about the experiences of current students – but not to compare the experiences of students at different institutions.

There are clearly questions about extending the survey to other year cohorts, running it in alternate years, or running it at different times of the year – is there a gap now that we no longer run the DLHE six months after graduation, or would that just become a response to students' final grades? All of these will, I'm sure, be considered by the review as it progresses, and there will be pros and cons to the different scenarios.

However, here are a few suggestions for how I would seek to enhance the National Student Survey.

Firstly, given current experiences, and the likelihood of more blended teaching approaches in the future, some questions could be included about students' digital learning experiences.

Secondly, are there additional student characteristics that we could helpfully gather to examine the experiences of more groups in detail? This could include expanding the current three categories of students with disabilities to the nine that HESA collects, or taking a more nuanced approach to the different ages of "mature" students.

Thirdly, the main survey could include a themed set of questions that is asked for several years and then replaced on a rolling basis, enabling us to explore different areas of the student learning experience.

Next, what does the "neither agree nor disagree" category really tell us? It is essentially reported as a negative, so why not simply have an even number of options and let students make the choice themselves?

Finally, do we really need a summative overall satisfaction question – question 27 – or could this be captured by aggregating the views expressed in the different sections?

As a final thought, I would reintroduce the national steering group which existed for many years – initially as the NSS Steering Group and then as the Public Information Steering Group – bringing together students, institutions, funders and regulators, before being abolished by the OfS a couple of years ago. The role of the group was well described in the 2004 HEFCE consultation response document as a body to "oversee and review the NSS… advise on the longer-term strategy for developing and embedding the survey". One does wonder whether, if that group still existed, we would have got to where we are now.