One thing I did notice at the NITLE meeting is a big variation, even within the universe of small liberal arts colleges, in the level of institutional interest and investment in collaboration through digital media. I’m willing to wager that some of the most engaged institutions are spending about as much on information technology, in raw dollar terms, as some of the least interested, but that in some cases all of that money is just going to keeping the basic core services up and running rather than into innovation in instruction, research, publication or library services. That difference doesn’t necessarily have to do with IT or library staff and their level of engagement and knowledge, though in some cases it might. I think in a lot of cases, it’s a difference in the local faculty culture and in the extent to which senior administrative staff care about or even know about information technology. The more that senior staff see information technology as an expensive obligation rather than a place where interesting and open-ended innovations in the core mission of higher education are happening, the more the money spent is likely to be largely a matter of making the hamster wheels continue to turn.
It has to be about internal motivation because there is not a lot of external pressure for innovation. As I noted in my comments in the NITLE panel that was devoted to some really interesting work that Wesleyan University has pursued in the last eight years, students don’t really know what they’re missing if faculty are teaching well with traditional materials. I think most of the stuff about the current generation of college students being “digital natives” is complete hooey, to be honest. Yes, they’ve grown up with computers and online media being natural presences in their lives, but for most of them, their use of digital tools is pragmatic and limited rather than exploratory and creative. Students at elite liberal arts colleges may be even less oriented towards digital tools and media than most in their generation: these institutions seem to me to have a vaguely antiquarian appeal that draws students who imagine their intellects and avocations in slightly “old-fashioned” ways.
So the students are not going to complain if there are cobwebs growing on the IT infrastructure and the pedagogy of most professors until and unless that gets in the way of core functionalities like email or accessing digitized readings kept in a content-management system. Via 11D, I read an interesting article that suggests a slow change in the prestige professions in upper-middle class life, but I think mostly parents are not going to push educational institutions to make creative, assertive use of information technology. Certainly most faculty are not going to build an institution’s use of IT into the way they calculate reputation capital.
So an institution like Wesleyan can make an expensive, interesting, and very skilled push into innovation in this area and find itself out a lot of money and with zero external recognition of what they’ve done. They can end up making resources that K-12 and community college students around the country (and their counterparts around the world) put to productive use, resources that nonetheless go practically unrecognized at the peer level. (Hey, all you people into social justice: that strikes me as a bigger achievement than kicking Coke…)
But at the same time, the tipping point that maybe moves one institution with comparable resources into a much more forward-thinking and creative profile strikes me as very easily reached: it really only takes a small number of people in several staff areas, a small number of faculty, some appreciative students, and some support at the top to get things rolling.
What are the changes Wesleyan is making? And I assume you are referring to the Wesleyan in CT, not the various Wesleyan schools around the nation? I somewhat feel that, while I’m pretty much as plugged into technology as you can get at Swarthmore, I don’t really have a sense of what I’m missing in comparison to other schools. Perhaps this is because I don’t feel like I have any idea of how technology is being used in academia, beyond the painful implementations I saw in high school and here—Blackboard, SMART Boards, TeacherWeb, etc.—which all feel five years late to the party.
I think I found the report this discussion was based on (here), and I was particularly struck by how small a percentage of the respondents were from social studies … only 12%?!
I’ll find you in person to talk about this, but are there steps you think SCCS should take (once we get over the stability problems) or software it could provide that would help in this project? One of the real issues the report highlights is image databases—and it wouldn’t be hard to install some gallery programs—or maybe some kind of internal academic wiki? Though I guess professors are looking more for large-scale inter-school sharing software?
Should there be basic required computer-use courses for students that, in a couple of hours, look at basic image editing using Photoshop, adding to things like wikis, setting up blogs (SCCS is working to install WordPress Multi User, btw), using iMovie, etc.?
Wesleyan sank a lot of money, most of it soft (e.g., from grants), into creating very sophisticated multimedia “learning objects” (follow some of the links from the entry on the Wesleyan presentation) and into dedicated support for IT innovations. What they found is that most of what they created was primarily used externally, not by faculty inside the institution.
I think you’ve heard me talk before about the issue of incorporating pedagogy *about* information technology into most subjects. It’s a struggle to find time to do it well, and I wouldn’t compliment myself by saying I’ve figured out how to do it consistently. But I think that’s part of it: making digital and technological literacy a basic part of what the liberal arts are dedicated to.
Part of it is also making the shift towards seeing some of what faculty might do in online contexts as being as important a priority as analog publication of research findings, even when it takes a form that is very different from analog formats. On my own panel, the Pomona professor Kathleen Fitzpatrick had some really crucial points about the need to figure out how to do “long-form” scholarly work in digital environments that isn’t just an attempt to translate the book or codex into those environments, but is net-native in some respect. There are institutions that at least see this as an interesting or important or significant idea, and then there are institutions that basically see any faculty messing around on that terrain as having a quaint little hobby.
The kludginess of some instructional technology is also an issue, as you note. I think many institutions, including Swat, have historically had such low standards for the installed base of software and systems that we accept a lot of awkward or inefficient ones, largely because we haven’t had the staff to look for, maintain and support anything more elaborate, or the money that would make it a priority to do so. We’re far from alone in that respect, but judging from this meeting, a few institutions in our peer group have put more money or staff or senior administrative leadership behind academic computing.
I don’t think a single required computer-use class is a good idea, and maybe not something like a “W” class either. The key thing at a place like Swarthmore is to make this kind of material intellectually substantive rather than narrowly skills-oriented. So rather than a class where we just do wikis, blogs, etc., better we have classes where the students have to discuss whether to do wikis and blogs (and to do them as a part of evaluating their worth or value).
I guess I’m curious if there are basic skills—really simple HTML, basic image manipulation, etc.—that might form some kind of basic prerequisite for really having an opportunity to use online resources. I generally feel that most students at Swarthmore could pick those basic skills up quickly, and I think that it might encourage them to actually use those tools.
So I guess I wasn’t thinking of a for-credit semester long course looking at technology, but instead a couple of hours giving Swatties some kind of foundation to build up from. Show how to create PDFs, how to get a blog going, how to use a news reader—it might lead to some interesting results.
And then discussions of their merits and value might actually happen.
Well, here’s the heart of it. What’s the most atomic unit of digital literacy? I don’t think it’s HTML; PDFs, starting a blog, and RSS are better suggestions. But you still want to leave people with flexible and adaptable skills rather than fixed ones. That’s the problem with most technology training: it’s about applications when it ought to be more about the underlying grammar of use.
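To make the “grammar, not applications” point concrete: an RSS feed, for instance, is just structured XML, so someone who understands structured markup can read it with any generic parser rather than depending on one particular news-reader application. Here’s a minimal sketch in Python (the sample feed below is invented for illustration):

```python
# An RSS feed is just structured XML; a generic parser, not any particular
# news-reader application, is enough to read it. The feed is made up.
import xml.etree.ElementTree as ET

sample_feed = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Easily Distracted</title>
    <item>
      <title>Notes From NITLE</title>
      <link>http://example.edu/nitle-notes</link>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(sample_feed)
for item in root.iter("item"):
    # Each <item> is one post; title and link are child elements.
    print(item.findtext("title"), "->", item.findtext("link"))
```

The transferable skill here is recognizing the tree structure, which carries over to HTML, to podcast feeds, to almost any web format, long after any one reader application is obsolete.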
While some professors do make an effort to address digital literacy, most don’t. More importantly, most can’t.
So if Swarthmore decides that it wants to make a campus-wide push towards digital literacy tomorrow (for the students), where do they start? A few classes—like yours—might welcome such a challenge. But I have professors who still refuse to use _email_, let alone any technology invented in the past fifteen years.
I agree flexible and adaptable skills would be best, but teaching students to be flexible requires, in my opinion, particularly talented and knowledgeable teachers. I don’t think there is anyone at our institution who is really in a position to start teaching that to the whole community.
Honestly, while I think Swarthmore tries, I don’t even think the school manages to teach flexible and adaptable _writing_ skills particularly well, and the school has theoretically been trying to for 150+ years now.
Maybe the best step is to create some kind of technology discussion forum more like the WA program than anything else. A place where students can come for advice, lessons, and tutorials on their own projects—whether they are for class or not. A lot of people come to SCCS wanting to know how to make websites, for example, and we just don’t have the man-hours to really do the job well. But with institutional backing, I’m sure it could be developed.
And if such an organization existed, maybe it would encourage professors to get their students to do more technically, because they wouldn’t have to worry about each student’s level of experience and familiarity with tech as much.
A lot of schools are starting the kind of center you suggest in your last comment. At Penn, for example, they created the Academic Commons, which put the Writing Center, Study Skills center, and a Multimedia Lab all in the same place. It serves both faculty and students, and students can easily go from working on a paper to working on the video that will be embedded in it. Increasingly, no one needs to know HTML, RSS, etc. One only needs to know the tools that manipulate them and/or the underlying concepts. Broadly, what we’re talking about here is a kind of literacy, of understanding that images, videos, and sounds have meaning in the same complex (more complex?) way that text does.
I don’t think faculty need to have these skills in order to accept projects from students that contain multimedia elements, but they do need to have a sense of what’s possible and encourage students to pursue those possibilities. And I also think it’s more about an approach to teaching and learning that goes against the current culture at most higher education institutions.
Honestly, the place I’ve seen that’s made the most progress in this area is the University of Mary Washington. They have a great Ed Tech staff, a CIO who “gets it,” and faculty and students who are collaborating with them to pursue interesting avenues of teaching and learning. Check out what’s happening on the blog, a collection of over 600 blogs from students, faculty and staff: http://umwblogs.org/
“any digital archive is a good digital archive”
I’m following your reports with some admittedly biased interest, as someone caught in the gears of ‘old school’ IT, wondering as I continually do how to improve the research database I manage, which heretofore has been an unwieldy behemoth continually being tweaked and prodded into something more along the lines of a Wikipedia or a Google. To its credit, and if you’ll excuse my puffery, so-called ‘full-text’ linking from our metadata via OpenURL over to JSTOR and other archives is proving useful. But I’m noticing the ever-widening gap between the newer web-based collaborative model for academic research and the older ‘legacy’ paradigm of research published only in prestigious journals.
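For readers who haven’t worked with it: OpenURL-style full-text linking amounts to serializing a database’s own citation metadata as key/value pairs and handing them to a link resolver, which decides where the full text actually lives (JSTOR or elsewhere). A rough Python sketch of building such a link, following the OpenURL 1.0 (Z39.88-2004) key/encoded-value format; the resolver address and the citation itself are hypothetical:

```python
# Build an OpenURL 1.0 (KEV) link for a journal-article citation.
# The resolver base URL and the sample citation are invented.
from urllib.parse import urlencode

RESOLVER = "http://resolver.example.edu/openurl"  # hypothetical link resolver

def openurl_for_article(atitle, jtitle, issn, volume, spage, date):
    """Serialize citation metadata as OpenURL 1.0 key/value pairs."""
    params = {
        "url_ver": "Z39.88-2004",                      # OpenURL version
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # journal metadata format
        "rft.genre": "article",
        "rft.atitle": atitle,
        "rft.jtitle": jtitle,
        "rft.issn": issn,
        "rft.volume": volume,
        "rft.spage": spage,
        "rft.date": date,
    }
    return RESOLVER + "?" + urlencode(params)

print(openurl_for_article(
    "The Uses of Digital Archives", "Journal of Example Studies",
    "1234-5678", "12", "45", "2007"))
```

The point of the design is that the source database never has to know which archive holds the full text; the resolver makes that decision per institution.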
In any case, I’m hoping that anyone in the process of setting up their own researcher/collaborationist sites (for whatever purpose: archival, recreational, lexical) will anticipate at least some of the unintended consequences of pursuing any fixed model. Just two trivial examples: I find Google and Wikipedia useful, for sure, but know that any search results that pop up appear in the order Google wants me to see them for the purpose of generating ad revenue—its not-so-hidden agenda, as it were—and not because Google is truly collaborating with me; in Wikipedia the problem is subtler, and I can’t put my finger on it. Maybe it’s that, precisely because anyone may collaborate on an interactive website, at some level it can’t be trusted. And this isn’t just an elitist argument against a public free-for-all; it’s a very practical concern with potentially disastrous real-life consequences.
I guess that stakeholders in an IT network, the collaborators, already have to have a range of acceptable likely outcomes in mind. Which brings up kind of a paradox: before they can do anything interesting and useful collaboratively on the web, as everywhere else, they already have to know its use intuitively.
And all of the above is just a sideshow in keeping my legacy system—built as it is on privileged publications and on protocols like MARC—up to the demands of the newer collaborative paradigm.
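For a sense of just how far that legacy paradigm sits from anything web-native: bibliographic data in such systems lives in MARC records, field-and-subfield structures that long predate the web. A minimal sketch of reading them with the pymarc library (the filename is hypothetical):

```python
# Read MARC bibliographic records and print the main title from each.
# "records.mrc" is a hypothetical file of binary MARC records.
from pymarc import MARCReader

with open("records.mrc", "rb") as fh:
    for record in MARCReader(fh):
        if record is None:
            continue  # pymarc yields None for records it cannot parse
        # Field 245 is the title statement; subfield $a is the main title.
        for field in record.get_fields("245"):
            for title in field.get_subfields("a"):
                print(title)
```

Even this tiny example shows the gap: numbered fields and one-letter subfield codes are a cataloger’s grammar, not a collaborator’s, and bridging the two is exactly the tweaking-and-prodding the comment describes.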
On some level, wasn’t this always true of scholarship, at least since the 19th Century? To collaborate across time and space using footnotes, indices, bibliographies, all the dense intertextual devices of scholarly writing, always took “knowing already” how to do it, what it was for. Scholars were trained to emulate the practices of scholarship before knowing the purposes of those practices, in many cases.
But part of the problem here is that scholars, experts and authorities are largely staring with incomprehension, lassitude or horror at a Web 2.0 information world. They understood how scholarship interacted with the wider world of print and public discourse, but most understand almost nothing of the interaction now, or where (if anywhere) there is an avenue for authority-driven creation and accessing of information. I think that there is, but it’s going to take some very new ways of thinking, not just an attempt to move existing practices lock, stock, and barrel into new environments.