
Webcast Blog 1: Analyzing student trends and needs is often ineffective

PUBLISHED ON: April 28, 2022
LANGUAGE: English 🇬🇧
TYPE: Webcast Blog

Welcome to our brand-new student experience series! Over the next few months, we will be publishing weekly episodes about the latest trends, best practices, tips, tricks, and other insights regarding the general experience for university students.

Want to listen to this topic instead? Click here to check out the webcast.

How do you reduce student dropout rates? What are the best ways to reach vulnerable students? How can you encourage students to visit their campus more often? These are just a few examples of the questions and challenges that industry experts will cover in this series. You can expect a wide range of topics, from student wellbeing and engagement to community building.

Our structure will always be the same. We’ll start by breaking down the problem into smaller issues by using some real-world examples. Next, we’ll suggest a few ways to work on these challenges. We’ll then round off by suggesting the first step to get you started.

For the first episode, we’ll kick off with one of the biggest problems: analyzing student trends and needs is often ineffective. But what exactly do we mean by that? Universities try to collect data all year round to determine how satisfied students are with their university experience. Is their work-life balance in order? Are they coping with all the exam stress? Is there enough help for students who need a little bit of extra support?

Firstly, while working with numerous universities across Europe, we have noticed that it can take anywhere from several months to a year for universities to collect, clean, analyze, and respond to data. For instance, at some universities, students are asked to fill in a survey in the summer, but the results are not readily available until December. By the time the data is analyzed, many of the problems it describes may already be outdated. Not only might the university spend resources on solving problems that no longer exist, but students who were vulnerable and needed support at the time of filling in the survey may have already dropped out or developed a need for even more support.

Secondly, we have noticed that student feedback is requested only infrequently, often no more than once or twice per year. For example, in the Netherlands, it is quite common for higher education institutions to ask students for feedback just before the summer holidays. However, measuring student sentiment only at certain times of the year can introduce bias. In our example, because most universities collect data near the summer, problems may not come across as strongly: students are happy that their summer vacation is approaching. Collecting data in the spring or fall may reflect entirely different perceptions (especially in recent years, when there were more lockdowns during those times of year). On top of that, most universities are only able to reach a small portion of their students for feedback. Several universities have indicated that their survey response rates never exceed 8% of the total student population. This relatively small sample, combined with the lengthy analysis period, leads to the next problem.

Lastly, asking students to provide feedback but not addressing the problems (in time) may only result in more negative feedback. If a university asks students where they would like to see improvements but is consistently unable to deliver, those students are more likely to lose trust in the university, creating a vicious cycle in which feedback becomes more negative every year.

To address these issues, we offer the following four recommendations:

  1. Collect data all year round. The data stream should never run dry: continuous collection helps you detect recurring yearly trends and other interesting patterns.
  2. Don’t rely only on focus groups and surveys (which can take a very long time); also measure behavioral data on other platforms, such as Facebook (e.g., the number of comments and likes as a measure of engagement), your university website (e.g., the number of students visiting the student psychologist page), and attendance tools (e.g., how many students attend their lectures on campus). See the sketch after this list for one way to track such a signal.
  3. Keep in mind that some of the students who are struggling the most are also the hardest to reach (e.g., they do not visit the campus very often). These are often the students experiencing the biggest problems.
  4. Work together with student societies to collect data. Student societies often have more influence on students than the university itself, and they have more direct access to students.
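
If your team has access to exported data from platforms like these, even a small script can make a behavioral signal visible over time. Below is a minimal sketch in Python, assuming a hypothetical CSV export (psychologist_page_visits.csv) with a "date" column and a daily "visits" count for the student psychologist page; the file name, column names, and the 1.5x spike threshold are illustrative assumptions, not features of any particular tool.

```python
import pandas as pd

# Hypothetical export: one row per day with the number of visits
# to the student psychologist page on the university website.
df = pd.read_csv("psychologist_page_visits.csv", parse_dates=["date"])
df = df.sort_values("date").set_index("date")

# Smooth out daily noise with a 4-week rolling average so that
# recurring patterns (e.g., exam periods) stand out.
df["visits_4wk_avg"] = df["visits"].rolling(window=28, min_periods=7).mean()

# Flag days where the smoothed signal runs well above the long-term
# average: a possible early sign that students need extra support.
baseline = df["visits"].mean()
spikes = df[df["visits_4wk_avg"] > 1.5 * baseline]
print(spikes.head())
```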

To get started with analyzing student trends more effectively, we propose the following first step.

Create an overview of where you currently receive student experience data from. Include how often you receive the data, how reliable it is, and what portion of the student population it represents. This will give you a first understanding of what data is being collected, when, how, and from whom.
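
As an illustration, the first rows of such an overview might look like this (the sources and figures below are made up for the example, apart from the survey timing and response rate mentioned above):

  - Annual student survey: once a year (June), results available in December, high reliability, reaches ~8% of students
  - Course evaluations: at the end of each term, medium reliability, covers students in evaluated courses
  - Website analytics (e.g., the student psychologist page): continuous, directional rather than precise, covers all website visitors
  - Attendance tools: continuous, high reliability, covers students who check in on campus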

The sooner you understand students, the sooner you can help them (which is what we’ll be covering in the next few episodes).
