It has again been ages since I posted something here. Life gets in the way, I guess.

But, drum roll, after a very long journey that started in 2019(!), I am very glad to report that our draft paper on ‘binning of midpoints’ has found a home at Studies in Higher Education. There is much more to it and for you to read (here), but in a nutshell: we warn against arbitrarily grouping the midpoint in Likert scales and then generating rankings. Although it is very common to produce headline results like ‘x% of students agree’, this is problematic because we could equally have produced the headline ‘x% of students disagree’, for example. If we then proceed to create rankings of courses or institutions based on % agreed, we might reach different conclusions depending on whether or not we have excluded the midpoint from that % calculation. The overall rankings might look similar, but individual courses/institutions can show (very) large shifts up or down, depending on how the calculations were done… So we caution against taking a slice of the data (e.g., the top two response categories from a Likert scale) and then using it in further analyses. Even though we make the point based on student satisfaction ratings, it should be clear that arbitrarily binning response categories in Likert scale questions is a pretty bad idea. We are not the first to make that point, by the way, and it also generalises to other settings: with some rare exceptions, arbitrarily creating bins in your data is a practice best avoided.
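To see why the midpoint matters, here is a tiny sketch in Python with entirely made-up response counts for two hypothetical courses on a 5-point Likert item. The course names and numbers are invented for illustration only; the point is just that ‘% agree’ computed over all responses can rank the courses one way, while ‘% agree’ computed after dropping the midpoint from the denominator ranks them the other way.

```python
# Hypothetical response counts on a 5-point Likert item, in order:
# (strongly disagree, disagree, neither, agree, strongly agree).
# These numbers are invented purely to illustrate the rank flip.
counts = {
    "Course A": [5, 5, 60, 15, 15],
    "Course B": [20, 20, 10, 25, 25],
}

def pct_agree(c, drop_midpoint=False):
    """% in the top-two categories, with or without the midpoint in the denominator."""
    agree = c[3] + c[4]
    denom = sum(c) - (c[2] if drop_midpoint else 0)
    return 100 * agree / denom

for name, c in counts.items():
    print(f"{name}: {pct_agree(c):.1f}% agree "
          f"(midpoint dropped: {pct_agree(c, drop_midpoint=True):.1f}%)")
```

With the midpoint included, Course B (50.0%) ranks above Course A (30.0%); with the midpoint dropped from the denominator, Course A (75.0%) overtakes Course B (55.6%). Same data, opposite league table.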