Jon Renyard is the University Secretary and Director of Student Experience at Arts University Bournemouth, and he also chairs the GuildHE Quality Managers Network. In 2014 Jon and his team designed and piloted a new student engagement survey. At the latest meeting of the Quality Managers Network, Jon sat down with Rhys Wait, Project Officer at GuildHE, to discuss the trials and tribulations of the survey.

Rhys Wait: What was the reasoning behind developing a survey at Arts University Bournemouth that was specifically centred on student engagement?

Jon Renyard: We’ve always had an internal satisfaction survey which we roll out every other year, but in response to the inclusion of student engagement in the QAA Quality Code, and as part of re-writing that internal survey, we decided we wanted to find out how engaged our students are with their learning. To do this we developed a new set of engagement questions, which are asked in conjunction with the standard questions – which check whether everything is going OK – every other year.

RW: And every student is surveyed?

JR: All of our students from foundation through to level 6 are surveyed. They are first asked 29 general satisfaction questions, whilst questions 30 to 64 focus more specifically on student engagement.

RW: You mentioned a moment ago that you wanted to find out how engaged students are with their learning. What is it you specifically wanted to know that would help you understand this?

JR: Well, we wanted to find out specific things that would help us understand student behaviour and attitudes to learning. For example, we were wondering whether students do any collaborative work with students from other courses, and things like how many hours of paid work they did a week. Did they feel their writing had improved during their course? That sort of thing.

RW: And what sort of thing would you do with the results?

JR: With the pilot survey the intention was to establish a benchmark and gather some statistics in the first instance. We were trying to take on board the research undertaken by the HEA, which suggests that encouraging engagement leads to higher achievement, and decided we needed to monitor student behaviour and develop a data-set that would allow us to identify different issues going forward. Interestingly, results from the first cohort suggested that students didn’t feel their writing had improved during their time at university, so we’ve now hired a new Director of Creative Writing to address this issue. I’d love to say this survey directly led to that change! It didn’t, of course – there were other factors at play – but the survey was able to confirm a need for it.

RW: In demarcating the two sections of this survey – satisfaction and engagement – do you feel it takes steps to address the limitations of the NSS, or of a satisfaction survey more generally, by gathering information beyond what the results of the NSS could tell you?

JR: I think at this point the limitations of the NSS are well established. For example, the three-month window for responses means that students are often confronted with the survey just before a hand-in, which will affect their answers, and the varied ways in which students can reply – online, on the phone or by paper – all have an effect on the results. I have not encountered anyone who says measuring the student experience across the sector is a bad idea, but with that said, the NSS is a sector-wide process which is unable to capture the nuances of every institution. Ultimately the results are used to make superficial comparisons between institutions, when really the survey is about expectations: have student expectations about the university experience been met? And invariably this leads to students comparing their experience to the perceived experience they could have had elsewhere.

RW: So how does your survey take measures to avoid these pitfalls?

JR: We have taken a number of measures to ensure our survey more accurately reflects the student experience. The survey is presented to students by a staff member not directly related to their course, at a neutral time, over a three-week period. So we try to avoid giving the survey to students directly before a hand-in day, or at a time when there is a heavy workload which is likely to have an impact on how they respond to the questions.

RW: And has there been a marked difference between the results?

JR: The results have not been markedly different, but the response from staff and student reps has been very strong.

RW: Speaking of student reps – were any involved in the construction of this survey?

JR: Yes, there were. The survey was put together by a sub-group of the quality committee, and a student sat on the panel. He was very actively engaged, and I think around ten of the final questions included were actually his … I find that if a student says you should do something, you need a strong reason not to. That isn’t to say all of his suggestions were included, however. We would discuss them and say ‘this question might not be fully understood by students’ and we would then drop it – it was a collaborative process. And he was terribly keen to see the answers afterwards.

RW: What would you say to institutions that are interested in running a similar project?

JR: I would absolutely recommend it – by 2018 we’ll have three cohorts’ worth of data, and then we’ll hopefully start to see some trends emerging.

RW: And what advice would you give them?

JR: Decide what it is you want to know, given who your students are. Know your students, and put things in ways they’ll understand.