Despite providing support to other areas of the College undertaking assessment projects, our office, like many IR offices, had yet to fully engage with its own assessment. Sure, we had articulated goals for our office some time ago, and had reflected on our success on the simpler, numeric ones, but only this year (as part of increased College-wide efforts) did we begin to grapple seriously with how to assess some of the more complex ones.
One of our take-aways from the Tri-College Teagle-funded Assessment project that we’ve been involved with was that designing rigorous direct assessment is key, and the thinking that goes into that design may sometimes be even more important than the measurement itself. It’s one thing to observe this, and quite another to experience it firsthand.
As Alex and I looked at our goals, each phrase cried out for clarification! We don’t want to just conduct research studies, we want to conduct effective studies. But what do we mean by “effective”? How do we meet our audience’s needs? How do we identify our audience’s needs? What is the nature of the relationship between the satisfaction of a consumer of our research and the quality of the research? And on and on. Before even identifying the sorts of evidence of our effectiveness that we should look for, we had already identified at least a dozen ways to be more proactive in our work.
The other revelation, which should not have been a surprise because I have said it to others so often, is that there are seldom perfect measures. Therefore, we need to look for an array of evidence that will provide confidence and direction. There are many ways to define “effective” research – what is most important (and manageable) to know?
And finally, it’s easy to get caught up in these discussions and in setting up processes to collect more and more evidence. We could spend a lot of time exploring how a research study informed someone’s work and decision-making, but that is time that could instead have been used in expanding the study. We have to find a balance, so that we do assessments that are most helpful, and don’t end up distracting ourselves from the very work we’re trying to improve.