Comments on: Inchworm
https://blogs.swarthmore.edu/burke/blog/2015/10/02/inchworm/
Culture, Politics, Academia and Other Shiny Objects

By: Alice
Fri, 30 Oct 2015 03:33:46 +0000
https://blogs.swarthmore.edu/burke/blog/2015/10/02/inchworm/comment-page-1/#comment-72988

So, I teach in a high school, which is clearly different from college. But a lot of the concerns you raise apply in that setting, and there are also many similarities between a well-endowed boarding school and a well-endowed residential college.

We do a lot of qualitative assessment. My department has an external review starting next week. Three teachers (at the high school and college level) are going to come by, watch some of our classes, talk to us, talk to our students, read documentation that we prepared, etc. and give us feedback based on that. I’m a new teacher and I have a mentor who watches me teach and gives me written and verbal feedback every day.

I guess I wanted to say that I think there is absolutely a feasible middle ground between hiding our practice and submitting to Science, and that I find that middle ground very helpful for my growth as a teacher and for maintaining common purpose across a department.

By: Shallot
Thu, 22 Oct 2015 19:06:36 +0000
https://blogs.swarthmore.edu/burke/blog/2015/10/02/inchworm/comment-page-1/#comment-72987

I'm a Swarthmore grad who got a PhD in history and then moved into administration, and I'm now working in assessment. I see my training as a historian as good preparation for thinking about assessment, because I try to frame it as being about "evidence" rather than "data," a term I try to avoid using in most situations.

I understand many of the frustrations faculty members have with assessment, and I share some of them myself. I've seen institutions implement assessment requirements in ways that are inflexible and onerous, and that end up making assessment a bureaucratic requirement without meaning. Some assessment leaders are coming to this realization as well, arguing against the "compliance culture" that has developed and seeking to make learning assessment meaningful. (See the recent book from the National Institute for Learning Outcomes Assessment, Using Evidence of Student Learning to Improve Higher Education.) Middle States has recognized this as well, and its new standards offer more flexibility in how student learning outcomes are assessed.

One of my main frustrations is that many faculty have little understanding of the history of assessment and accreditation and the relationship between them. Much of the recent emphasis on assessing student learning came as a reaction to the report of the Spellings Commission, which seemed to suggest that mass standardized testing, using tests like the CLA, was likely to be adopted as a way to evaluate student learning gains in higher education. Regional accreditors, along with much of the higher education community, opposed this, especially given the wide variety of missions and types of institutions in the U.S. The assessment requirements that all regional accreditors have developed were a way to establish faculty at each institution as the only ones who could determine what students should learn and assess that learning. I think it's important to recognize this. Accrediting requirements could be so much worse in terms of assessing student learning, and I think failing to acknowledge this is an example of the lack of discrimination faculty sometimes show in attacking initiatives. The fact that faculty are supposed to be the ones leading assessment is also why assessment can be a burden to faculty – we administrators cannot do the work for you. At the institutions I've worked at, the increase in service work from assessment has been a greater source of faculty resistance than philosophical opposition.

Hopefully I am an administrator who is a partner to faculty, working with them to figure out how to incorporate assessment into work they already do and to make it meaningful and useful for them. I'm not looking for small, incremental progress, and I value stories and narratives as part of the assessment process, as long as a couple of anecdotes are not the sole evidence of student learning. At the heart of assessment I see the asking of questions about student learning and the attempt to answer those questions through some sort of systematic inquiry. This is similar to what faculty do in their own work. Unfortunately, in practice, many institutions do have reporting requirements that make it difficult to approach assessment this way, but, at least in the Middle States region, I don't think it has to be set up that way.

By: Contingent Cassandra
Mon, 05 Oct 2015 22:48:09 +0000
https://blogs.swarthmore.edu/burke/blog/2015/10/02/inchworm/comment-page-1/#comment-72984

This rings true to me. I've long thought that one of the things lost with the increased use of faculty for whom what we usually call "service" is not an official part of the job is a kind of localized, ongoing, often just-in-time research that takes place when faculty within a department or program talk to each other about what their students need, how that is changing, and how to craft and re-craft curricula accordingly. At this point, even in schools with still relatively robust tenure-track systems and faculty governance, it's often the case, especially with the core/intro curriculum, that the faculty who teach under average conditions have very little voice in shaping the curriculum, and the faculty who shape the core curriculum as part of their service work have little experience teaching those courses under average (i.e. adjunct/contingent load) conditions.

This is not a good thing, I’ve found, to point out to data-oriented tenure-track colleagues. It’s even worse to point out that some people are increasingly making their livings telling other people how to teach, based on “research” that is mostly divorced from the local classroom context, while doing very little teaching themselves. It *is* a good (or at least conciliatory) thing, in such conversations, to claim that one’s approach is “evidence-based,” even if one came up with the approach first and looked for the support later (which tends to be the way most “evidence-based” pedagogy arguments are born, I rather suspect).

It’s also not a good idea to point out that most quantitative data has its origins in questions posed using language, with all the possibilities for multiple interpretations, misinterpretation, unconscious steering, and so on that language involves.

But you’ve described all of the above, and more, far better than I can.

By: Timothy Burke
Sun, 04 Oct 2015 21:23:01 +0000
https://blogs.swarthmore.edu/burke/blog/2015/10/02/inchworm/comment-page-1/#comment-72983

Folks have got to understand that qualitative data is about putting the tricky, shifting world of human meaning back into the picture and insisting that it can't be understood better by stripping out all of its semantic character. Across a very broad front, this is where the data-fetishists keep getting mugged by human reality, and their usual response is to try to make people act more like information and code. It is for me a familiar and dreary aspect of virtual worlds: designers want players to act a certain way; players don't; designers do their best to stamp out all the ways that the players might break the design. What's left is a kind of ludic version of that scene in "Metropolis" where the man gets trapped in the machine, both comic and horrifying (all the more so because people are doing it for the sake of fun).

By: Sam Zhang
Sun, 04 Oct 2015 04:38:15 +0000
https://blogs.swarthmore.edu/burke/blog/2015/10/02/inchworm/comment-page-1/#comment-72982

One bizarre possible outcome is if a changing of the guard happens halfway through implementing a more qualitative assessment system. Suddenly a panic erupts around the lack of quantitative data, and the administration calls in Natural Language Processing experts to attempt to predict how well a teacher did based on this stack of essays. They might scramble to apply some labels to how well teachers did, then train a model on text features like length, vocabulary, and so forth (drawn both from the teachers' materials and from the student feedback).
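
To make that scenario concrete, here is a minimal, purely hypothetical sketch of the kind of crude model such a scramble might produce. The feedback strings, the 0/1 labels, and the choice of scikit-learn are all my own illustrative assumptions, not anything from the post; the point is only how quickly qualitative writing gets flattened into word counts and a score.

    # Hypothetical sketch: flattening free-text feedback into features and a score.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented student comments, hastily labeled 1 = "taught well", 0 = "did not"
    feedback = [
        "The discussions pushed me to rethink my assumptions every week.",
        "Lectures mostly repeated the reading; I rarely felt engaged.",
        "Generous written comments on every draft helped me revise.",
        "Hard to tell what the goals of the course were.",
    ]
    labels = [1, 0, 1, 0]

    # Bag-of-words weights (vocabulary, word frequencies) feeding a linear classifier
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(feedback, labels)

    # "Predict" how a teacher did from a new pile of comments
    print(model.predict(["Thoughtful feedback, though the pacing dragged at times."]))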

Of course that isn’t a reason to avoid this course of action. It’s just interesting to think about how qualitative data doesn’t stay “qualitative” without a constant interpretative effort.

There are a variety of reasons why I think feeding assessments to machines could be disastrous (but perhaps appear magically efficient for the first year or two). But that might be a fight for another day — once I’m arguing against someone other than the devil in my brain.
