I get the sense occasionally that some of my colleagues see me as an evangelist for the use of new technologies in the classroom and in academic research. If I’m a technological missionary, though, my faith and, more importantly, my knowledge of what I preach are pretty weak.
There are so many things I would like to know how to do, so many technologies that seem or sound interesting to me. But I have the same basic self-protecting instinct that my colleagues have: in many cases, it’s better not to know about a technological possibility at all than to know just a little bit. Knowing a little bit becomes an obligation to know more and more, to break out your pitons and ice axes and start climbing a steep learning curve. Expressing an interest in something off-handedly can be a signal that you’re seeking resources, training, access. And maybe you are, but not now. At the same time, though, you can’t just leap on people with support capabilities in late May and say, “NOW NOW NOW”. (As if I had the energy to do that in late May anyway.)
It’s also that the acquisition of new technological competencies is impossible (at least for me) if I’m not putting what I’m learning to immediate and repeated use. The immediate spur to this post was a reminder from a colleague in our Information Technology department that there were opportunities for me to further explore GIS. I’ve been generously supported to attend a GIS workshop in the past. I learned a lot, but most of the hands-on competency I acquired over those three days I would now have to re-acquire, because I wasn’t able to put it to immediate use.
The problem is also that I learned that to use GIS in a classroom in the way I imagined using it (largely to talk about spatial and visual information as historical knowledge, or as part of a class on using virtual worlds as a tool for representing and teaching history), I would almost certainly have to devote a significant portion of the class to instruction in the technology. That’s a pedagogical problem that isn’t confined to information technology: when you try to teach both a technique or a literacy and some kind of content, you often do a bad job at both. But if you try to teach some form of literacy by itself, it’s often so arid and mind-numbing that students tune out. If you try to teach content that presupposes a kind of literacy, it’s equally unsatisfying. I think you end up having to limit yourself to exploring a single technological or technical skill in a content-centric course. In my History of Play and Leisure class next year, I’d like to get the students to make short machinima, but I have to balance that aspiration against the basic literacies needed to explore computer and video games, which I don’t think you can assume all students already have.
In a lot of cases, even when you’re very certain about the usefulness and intellectual power of a new technology, the cost-to-benefit ratio of actually using it is not very favorable. It might be too time-consuming to learn. It might be a technological platform that will shortly go obsolete. It might be too difficult to teach to students, or rely on lower-level competencies that you’ll also have to teach. It might be that you’re too busy or harried right now. Or your available time to learn a new technology may be unpredictable: I know I’m tremendously inhibited about calling someone up and saying, “Hey, I happen to have two hours free now, could you train me?” Other people have schedules and demands, too. Plus, once again, learning something in those serendipitous hours does you no good unless you’ve got a plan for using it right away. And if you had a plan, you’d be able to schedule training at the right moment.
The schizoid awareness of the usefulness and power of new tools, set against the fear of the costs to my time and energies, leads me, at least, to cultivate a deliberately hazy attitude towards information technology in general. I keep my fingers on the pulse as much as I can, but try not to become too acutely aware of opportunities or facilities that might compel me to respond at this very moment. Just yesterday I got to see our current faculty resource room for the first time in quite a while, and it is unbelievably awesome. It was also kind of painful to see, because now I feel this incredible, burdensome desire to do something more ambitious than the modest immediate project I had in mind for some of this equipment. It’s techno-guilt: the machines are like a beloved relative I don’t call nearly as often as I should.
The further behind the bleeding edge you get, the more acute the anxiety becomes. I’ll often criticize the technological literacy of academics, especially when it comes to research tools and publication tools, but there is a point at which it may be entirely rational to simply shut down all awareness of change. The ways in which most institutions support faculty retraining or the acquisition of new literacies are interstitial and voluntaristic. You do it when you can and when the institution can. Which means, much of the time, that you don’t do it at all. To really do it right, in a way, faculty and staff would almost have to shut down all of our ongoing business and spend concentrated and dedicated time on acquiring new literacies or technical skills.
My first thought was that you could split off the basic technical instruction into its own class, and make that a prerequisite for the class you *really* want to teach. But of course the problem there is that you don’t want to create one class whose sole purpose for existing is to feed another. Ideally, what you want is to find other professors (preferably in your department) also interested in using GIS in their classes, and make the technical course a prerequisite for all of them.
If you don’t have any such professors, of course, then you have a problem: you can either abandon your efforts, or begin yet *another* course, this one informal, to evangelise GIS to them. And there go the 4-8 hours a week (at *least*) you’ll spend helping them out, hours you can’t use for class prep or simply for keeping up with your journals.
It’s not just you, though: even in IT there’s the problem of what happens when a guy on your team learns about this l33+ new program or language or what-have-you. We have to make the exact same sort of tradeoff: do we introduce technology that may perturb an existing system (but eventually leave it in a better state), or do we leave alone something that ain’t broken? I suspect the major difference is that we face the question more often, but you’d be surprised (or perhaps not) at how often the answer is, “Yeah, that is very cool tech. Now drop it and go back to something that won’t break the world as we know it.”
(Disclosure: I’m in IT at Tim’s institution, but not the person referenced in the post.)
Tim–
Plenty of other faculty find ways of dealing with the problem without having to do everything themselves. It sounds like the main obstacle is that you think you have to do all the hands-on work yourself, rather than planning with skilled partners to develop curriculum. Courses in video production, visual ethnography, etc. have gone on for years at our institution without the instructor having to do all of the “lab” work. How do faculty in the sciences manage experimental/applied components of instruction?
Many of the things you dream up could be pulled off, but would require a lot of advance planning with others. For the most part, you’re used to being a solo act, right?
What about taking advantage of competencies that might exist within a class group but not be evenly distributed? That might work for something like your machinima example, where the more technically inclined in the group might show examples or lead the group work.
You shouldn’t feel guilted into using technology just because you’re generally interested in it. You’re capable of teaching successful low-tech courses. But if you really think there’s value in using digital applications, simulations, experiences, etc. to teach a subject, it can be done, provided you’re willing to adapt your workstyle a little bit.
Yeah, you’re absolutely right that this is about a kind of neurotic inclination on my part to think about this in solo terms. But that’s the way I think about all of my pedagogy and, for that matter, my research methodology: until I’ve got an experiential mapping of what I’m doing, I’m not comfortable imagining what parts of it could be done by others. Don’t forget that faculty in the lab sciences are comfortable entrusting lab instruction to others precisely because their own research practices and training center on laboratory or experimental work. Whereas for me, my laboratory has only recently become more IT-centric; it starts as an archival and library practice. There’s a difference between a biologist sending his students off to the lab to learn to use a centrifuge and me, in the most ambitious case, sending people off to an IT support context to learn base-level skills for modelling a 3d environment. It’s not just that it’s easy to call for improbable levels of skills acquisition; it’s also possible to mislocate the competencies you need in a fundamental way.
This is also about an anxiety about imposing on others, coupled with the fact that externalizing what I have in mind is actually a lot of work for me. This is one reason, for example, that co-teaching is hard not just for me but for everyone I know. Collaborating with people in a teaching environment is generally not less work (which is what outsiders think) but more work, because it means the collaborators have to explain and externalize much of what might otherwise be implicit and on-the-fly. Let’s take the machinima case. If I practice a bit with FRAPS and other tools over the summer on my own, I can probably explain to students how to do what I have in mind well enough for them to take a shot at it, as long as I’m not too hung up on the quality or proficiency of the results. If I get more ambitious or more demanding in my expectations, however, and I turn to a partner in ITS, then I have to make time early in the semester to sit down and explain what I have in mind. I need to review with my collaborator what we have available. We may have to take several passes at it to find out what’s going to work. The results will be far better, but the time and planning involved are considerably greater. (This is kind of commonsensical: better planning, better results. But on the other hand, there’s the danger of the perfect being the enemy of the good.)
There’s also the problem that someone who is only sporadically literate about IT, the way I am, has big holes in his knowledge base. Sometimes you know what you don’t know; sometimes you don’t even know what you don’t know. In either case, that can really slow things down, or, if the knowledge seems too basic, it can feel terribly embarrassing. I know almost nothing, for example, about the UNIX structures that underpin OSX, and sometimes that’s actually pretty relevant to things I might want to do. (I was struggling the other day with a permissions issue, for example.)
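Just to illustrate the kind of base-level literacy I mean, here is a minimal sketch (with a made-up file path, not the one I was actually fighting with) of how even a little scripting exposes the Unix permission machinery sitting underneath OSX:

```python
import os
import stat

# Hypothetical path -- a stand-in for the file I was actually wrestling with.
path = "/Users/Shared/course_materials/notes.txt"

info = os.stat(path)

# The low octal digits are the classic Unix permission bits,
# e.g. 0o644 means owner read/write, group and others read-only.
print(oct(stat.S_IMODE(info.st_mode)))

# Numeric owner and group IDs -- often the real culprit in "permission denied" puzzles.
print(info.st_uid, info.st_gid)

# Loosen the bits so the group can write too (owner and group read/write, others read).
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IWGRP | stat.S_IROTH)
```

None of which untangles a real permissions problem by itself, but it’s exactly the sort of thing I only half-know and have to relearn every time.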
There’s a discipline in programming known as “pair programming.” The idea is simple: have two people sitting at the same computer at the same time. In its most rigorous form, one tells the other what to type, but is not allowed to touch the keyboard himself. In looser variations, both contribute code and ideas. As you point out with your lesson planning, it takes longer, but studies have shown that while productivity varies (one study I saw put pair-programming overhead at 41% relative to a single programmer, though most say it adds about 50%), readability and quality are significantly higher.
The downside, of course, is that you have to work with another person, and most programmers have a hard time doing that at first (after they try it, most like it). I’m not sure why I’m mentioning this, other than that I think it’s interesting that programmers and professors have similar problems, with similar solutions, and similar barriers to adoption.