Telling Stories

Storybooks on a shelf
Last week I participated in a workshop sponsored jointly by the Center for Digital Storytelling (CDS) and Swarthmore College.  It was an intense three-day experience, in which about a dozen participants were taught the basics of constructing an effective narrative using images, music, and voice.  The folks from CDS (Andrea Spagat, Lisa Nelson-Haynes) were just wonderful – skilled, patient, experienced – as were our ITS staff members who supported the workshop (Doug Willens, Michael Jones, and Eric Behrens).

I had wanted to learn more about this technology to see if it might be a useful way for IR to share information with the community.  I can envision short, focused instructional vignettes, such as tips on constructing surveys, everyday assessment techniques, or even how to interpret a particularly vexing factbook table.  (Generally, a table that requires instructions ought to be thrown out!)  We may try one of these and see how it goes.

I learned about the technology, but I also learned some amazing stories about my Swarthmore colleagues who participated with me.  These stories often reflected important personal experiences that could have been difficult to share in a less supportive environment.  An unexpected outcome of the workshop is that a group of colleagues got to know each other a lot better!

Catching our breath…

A lynx resting
photo by Tambako the Jaguar

It’s hard to believe that this semester is finally drawing to a close.  The multitudes of followers of our blog may have noticed our sparse posts this spring…  Shifting responsibilities, the timing of projects, and just the general “stuff” of IR have left us little time to keep up.

Part of my own busy-ness has been due to an increased focus on assessment, as mentioned in an earlier post.  This spring, the Associate Provost and I met with faculty members in each of our departments to talk about articulating goals and objectives for student learning.  In spite of our being there to discuss what could rightly be perceived as another burden, these were wonderful meetings in which the participants inevitably ended up discussing their values as educators and their concerns for their students’ experiences at Swarthmore and beyond.  Despite the time it took to plan, attend, and follow up on each of these meetings, it has been an inspiring few months.

Spring “reporting” is mostly finished.  Our IPEDS and other external reports are filed, our Factbook is printed, and our guidebook surveys have been completed (although we are now awaiting the “assessment and verification” rounds for US News).  Soon we will capture our “class file” – data reflecting this year’s graduates and their degrees – which closes out the year for data freezes and most basic reporting of institutional data.

We also are fielding two major surveys this spring, our biennial Senior Survey (my project) and a survey of Parents (Alex’s project).    Even though we are fortunate to work within a consortium that provides incredibly responsive technical support for survey administration, the projects still require a lot of preparation in the way of coordinating with others on campus, creating college-specific questions, preparing correspondence, creating population files, trouble-shooting, etc.  The Senior Survey is closed, and I will soon begin to prepare feedback reports to others on campus.   The Parents Survey is still live, and will keep Alex busy for quite some time.

As we turn to summer and the hope of having a quieter time in which to catch up, we anticipate focusing on two faculty grant-funded projects.  We don’t normally work on faculty projects – only when they are closely related to institutional research.

We are finishing our last year of work with the Howard Hughes Medical Institute (HHMI) grant.  IR supports the assessment of the peer mentor programs (focusing on the Biology and Mathematics and Statistics Departments) through analysis of institutional and program experience data, and surveys of student participants.  We will be processing the final year’s surveys, and then I will be updating and finalizing a comprehensive report on these analyses that I prepared last summer.

Alex is IR’s point person for the multi-institutional Sloan-funded CUSTEMS project, which focuses on the success of underrepresented students in the sciences.  Not only does he provide our own data for the project, but he will be working with the project leadership on “special studies,” conducting multi-institutional analyses beyond routine reporting to address special research needs.

I wonder if three months from now I’ll be writing… “It’s hard to believe this busy summer is finally ending!”

Autonomy and Assessment

Swarthmore presents an interesting mix of uniformity and decentralization.  As a residential, undergraduate liberal arts institution, it is easy to summarize.  Our size is small, and retention and graduation rates are very strong so that enrollment is very predictable from year to year (about 1500).  There are no graduate students.   Generating enrollment projections can be downright boring!   Standards are high for students coming in and going out.  Our faculty is heavily reliant on tenure lines.  There are no separate schools creating the silos that are so vexing to my counterparts trying to do institutional research at larger colleges and universities.

But due to a history and culture of very strong faculty governance, our departments are among the most autonomous that I’ve seen, even at very similar institutions.  The most important decisions are made with considerable input by and deference to the faculty, if not by the faculty itself.    On one hand that means that members of the administration are generally regarded in a collegial manner, and that once decisions are made, they are truly made.  On the other hand it can be a delicate matter to introduce change, especially change necessitated by external forces.  Though occasionally frustrating (and quite slow), I think this is generally an excellent thing.  (It does, however, take quite a toll on our faculty in terms of their workload.)

When Swarthmore, like most institutions, was first “dinged” by our accrediting agency for not doing enough formal assessment (2004), the initial response was understandable indignation.  Self-reflection and evaluation are what we do best.  We talk endlessly about what we do, how we do it, and how we could do it better – in committees, in hallways, with our students, alumni, and each other.  I have never met anyone here who doesn’t care deeply about serving our students.

But upon gathering ourselves to address this concern, the faculty designated an ad hoc committee – composed entirely of faculty.  This group considered the criticism, looked at what we do and what we might do better, and in 2006 recommended a plan.  The plan was discussed by the entire faculty, modified, and finally approved, and it stands as our foundational document for academic assessment.  It’s an elegant document.  The committee took a thoughtful and measured approach, included key elements and ideas that we’ll use and build on for years, and the best part is that the faculty owns it.

We are now at a stage of identifying places where we need to bolster our efforts in assessment.  My position has been modified so that I now report one-third time to the Provost’s Office to work with faculty on this process.  It was my privilege to participate in meetings with chairs in each division this past fall, and at those meetings I was struck again by the autonomy of our faculty, departments, and programs.  As an outsider, I find it a little scary.  Though no one here is interested in creating a uniform approach or in any way dictating to departments what they should do for assessment, particular steps ought to be taken for the process to be meaningful, and I wonder how that will happen.  But then I remember what it is that the faculty are fiercely protecting – it’s not about turf, it’s about students and the experiences the department is providing them.  Since assessment is itself about student learning, I have no doubt that the members of the faculty will make it work.

Presently Presenting

Present wrapped in red ribbon
In preparing to make a presentation at Swarthmore’s Staff Development Week next month, I thought it would be a good time to review some rules of thumb for making presentations that I’ve learned and discovered over the years.  Because institutional researchers generally have such a range of people in our audiences, it can be tricky!

PowerPoint – This package is both a curse and a blessing.  As a presenter I like having a visual reminder of key points and a way to frame for the audience where I’m going.  As an audience member I know too well that slides full of text are deadly boring.  Because I am a tactile learner, I have found that I like to organize my presentation by making a text-rich PowerPoint slide show, but then not actually showing much of it!  The advantage is that I can share the full document later as the version that “includes speaker notes.”  For the actual presentation, I try to use slides primarily for simple charts, illustrations, examples, and a minimal number of bullet points (with minimal associated text).  I want people to engage with what I’m saying, not read ahead.

Tell what you’re going to tell – The importance of giving your audience a simple outline of the presentation was impressed on me by Alex.  After a particularly boring talk we’d attended, he convinced me of how much better it could have been if we’d simply been able to follow its logic, which an outline would have provided.

Tables – Avoid all but the simplest tables of data in a presentation, and make sure, if you want them to be read, that they are indeed legible from the back of the room.  If I am showing a table primarily to present a layout, I make clear as soon as the slide appears that it is not meant to be read.  (This is not uncommon for Institutional Researchers, who share strategies and techniques – sometimes it’s not the data we want to see, but how it’s presented!)

Graphs – I personally love a graph that contains a ton of information on one page.  I could stare at it for hours, like someone else might stare at a painting and glean layers of meaning.  Alas, I try not to make such a graph for others!  In general, the wider the audience, the simpler the chart should be.  Avoid ratios, or even percentages that aren’t immediately grasped.  And be sure to use colors.  A simple, attractive chart that reveals an important relationship can convey meaning to even the most staunchly anti-data audience.

Involvement – Whenever possible, I try to involve the audience either through humor (but DON’T overdo it – I’m an institutional researcher rather than a comedian for a very good reason) or by engaging them with an exercise or activity.  At a faculty lunch presentation a number of years ago, before I began speaking (during lunch itself), I left displayed a chart reflecting faculty opinions about the adequacy of their sleep by career stage.  It certainly piqued interest – people love to hear about themselves!

Certainly none of this is new, but I find it helpful to review these rules and remind myself of them before starting a new project.  As I look back through some of my past presentations, I see that I haven’t always followed my own rules as well as I’d wish!  But presentations can be a powerful tool for accomplishing a primary goal of Institutional Research: getting information to people who need it.  And so it’s something I continue to try to learn about and work on.

 

A few of my favorite things…

Red Tree
Photo by Will.Hopkins

In a recent post I mentioned one of the things that amused me about Swarthmore when I first started working here. That got me to thinking about all the things that I found, then and now, to be so charming.  So in this Thanksgiving season, I thought I’d share a few of them …

  • Candy or snacks in all of the student services offices, as well as many academic department offices.
  • The occasional frisbee flying into my office (when I was on the third floor) from the adjacent wing of Parrish – which is a men’s residence hall.
  • Former Dean Bob Gross’s springer spaniel, Happy, roaming the hallways looking for the dog treats available to him in all the offices.    And all the other dogs around campus – George and Ali, the bookstore dogs, Dobby, and the rest.
  • Jake Beckman’s (’04) artwork – the big chair on Parrish lawn, the giant sneakers hanging off a chimney of Parrish, and the giant light switch on McCabe Library.
  • The tin of candy that one of my colleagues brings to meetings she attends, for sharing.  Round and round the table it goes…  sweet!
  • The fact that so few people refer to their own titles when introducing themselves – just their office.  (A little confusing at first, perhaps, but that’s alright.)
  • The Swarthmore train station (regional rail) at the end of Magill walkway.   In the snow.  It’s like a postcard.
  • The beautiful portrait (painted by Swarthmore’s Professor of Studio Art Randall Exon) in the entryway of Parrish of Gil Stott with his cello.
  • Discovering the hidden talents and passions of people who work here.  There are singers, actors, stargazers, songwriters, woodworkers, animal activists, knitters, world travelers – it’s amazing!
  • The “honker,” which is the Swarthmore fire station’s version of a siren.  Of course I’m not happy to think there might be a tragedy – I just enjoy its uniqueness.
  • The labels on all the trees and plantings, because the College grounds are the awesomely gorgeous Scott Arboretum.

I’m sure there are many things I’ve missed.  I’d love to hear about others’ favorites!

The End

Goal posts
Photo by DB-2

“Begin with the end in mind” is Stephen Covey’s advice that I’ve always found useful.  Some people ask what you would want to have written on your tombstone.  (Writing this post on Halloween may be influencing my choice of images here!)  But in making many decisions I’ve found it helpful to think about what path I might wish I had chosen if I looked backwards from the future.  Many of us wrote “Histories of the Future” as part of our thinking about Swarthmore’s strategic planning.  Envisioning what you would like to see is a way of thinking through and clarifying your goals and what you might need to do to get to them.

Good assessment takes the same first step.  Rather than thinking about what things you could most easily measure, or how to prove the worth of your activities to an external audience, you start by articulating what results you would like your activity to achieve.  What are the key things that I want my students to have learned when they finish this course?  What should a student who majors in my department know and be able to do when they graduate?  For an administrative department, what should be the result for someone working with my office?  What are the key outcomes that should be accomplished by this project?

This exercise is valuable before you ever start thinking about capturing information.  Having a conversation about goals with departmental colleagues can be challenging, but very rewarding, because so many of our goals are implicit.  Trying to capture them in words and hearing others’ thoughts make us think about them in new ways.  Explicitly identifying the goals of an activity can put a different frame around it.  As part of our tri-college Teagle Foundation grant, “Sustainable Departmental Level Assessment of Student Learning,” one faculty member remarked that going through the exercise of stating goals and objectives had already changed the way she approached teaching her course.  It sounds hokey, but it really can be transformative.

If you’re just starting to think about this, look for places where you’ve described what you do.  How have you described yourself on your web site, in printed materials, or even in your job ads?  These sorts of descriptions often reflect our priorities and goals.  Does your professional association offer any guidance on student learning outcomes, or on best practices?  These are all great starting points for this important work.  Only after articulating goals – and, based on them, more specific objectives – does it make sense to begin thinking about collecting information that might reflect them.

 

Rank

Pond Scum
photo by Max F. Williams

“Happiest Freshmen?!”  OK, time to get in on the action – let’s start a new ranking!  First, we’ll need some data.  That’s an easy one – most institutions post their “Common Data Set” online, and that’s a really great source.  It has data on admissions, retention, enrollments, degrees, race, gender, you name it.  This is what institutions send to publishers of other admissions guidebooks and rankings – why don’t we get in on the free data?  The top three places to find them on an institution’s website are probably the Undergraduate Admissions, Institutional Research, or About areas.

Or we can go to publicly available sources, such as the U.S. government’s National Center for Education Statistics (NCES), the National Science Foundation’s “WebCASPAR,” and others.  The advantage there is that we can download data for many institutions en masse.  Also, no one can claim that the data misrepresent them – hey, they provided it to the agency, right?  So what if the data are a little outdated?  We’re not building a rocket, just a racket.

Or we could send each institution a questionnaire.  Not exactly sure what to ask for, or how?  Don’t worry – those folks are experts.  We’ll just send a general question, and they’ll call other folks on their campus, hold meetings, jump through all kinds of hoops to be helpful, and eventually send us something that we can then decide whether to use.  The kids at Yale have been doing this for years with their “Insider’s Guide.”  Well, off and on for years (when they think of it).

Maybe we could start a web site and ask people to come enter data about the institutions they attend, or attended in the past, and then use that information for each institution.  That’s what RateMyProfessors.com did, and they got covered by CBS MoneyWatch and others!  True, I spotted at least three Swarthmore instructors who have not been with us for some time among those ranked, and a few others I had never heard of (with 175 regular faculty members, how could I possibly have heard of everyone?), but that’s the beauty of it, right?  Low maintenance!  And PayScale.com has become a force to be reckoned with.  Sure, their “average income” data for Swarthmore represent only about 2% of the alumni (estimating generously), but nobody bothers to dig that deep.  It doesn’t stop well-known publications like Forbes from using it.

OK, so that’s where we can get data for our ranking, now what data should we use, and what shall we call it?   We can take a lesson from the Huffington Post story about the “Happiest Freshmen.”   Now that’s clever!  And I’ll bet it generated a ton of visits, because it sure got attention from a lot of people.  The only data used in that ranking was retention rates – brilliant!  One number, available anywhere, call it something catchy (or better yet, controversial) and let ‘er rip!  (Shhh..  as far as I can tell, it was the press that provided the label – the folks crunching the data didn’t even have to think of it!)

I propose that we pull zip codes from NCES, sort in descending order, and do a press release about the “Zippiest institutions ever!”  No that’s no good – if it’s not something that changes every year, how will we make money from new rankings?!    Any ideas?

Surveys and Assessment

I’ll be talking a lot about Assessment here, but one thing I’d like to get off my chest at the outset: assessment does not equal doing a survey.  I’m thinking of writing a song about it.  So many times when I’ve talked to faculty and staff members about determining whether they’re meeting their goals for student learning or for their administrative offices, the first thought is, “I guess we should do a survey!”  I understand the inclination – it’s natural; what better way to know how well you’re reaching your audience than to ask them!  But especially in the case of student learning outcomes, surveys generally provide only indirect measures, at best.  In the Venn diagram:

Venn diagram shows little overlap between Assessment and Surveys.

 

(Sorry, I’ve been especially amused by Venn diagrams ever since I heard comedian Eddie Izzard riffing on Venn…)

Surveys are great for a lot of things, and they can provide incredibly valuable information as a piece of the assessment puzzle, but they are often overused and, unfortunately, poorly used.  While it is sometimes possible to construct them carefully enough to yield direct assessment (for example, with questions that provide evidence of the knowledge the instruction was attempting to convey – like a quiz), more often they are used to ask about satisfaction and self-reported learning.  If your goal was for students to be satisfied with your course, that’s fine.  But probably your goals had more to do with particular content areas and competencies.  To learn the extent to which students have grasped these, you’d want more objective evidence than the students’ own gut reactions.  (Those, too, may be useful to know, but they are not direct evidence.)

I would counsel people to use surveys minimally in assessment – and to get corroborating evidence before making changes based on survey results.

What can you do instead?  Stay tuned (or for a simple preview, see our webpage on “Alternatives”)…

 

“Freeze” time

Snow covered pumpkin.
photo by billhd

It’s getting to be that time of year.  I’m not talking about frost on the ground or midterms.    It’s time to “freeze” the data!

Institutional Researchers report data about our students to many constituencies, and use those data for our research.  We must have data that are accurate and consistent – across reports, across research projects, and over time.  When students are enrolling or dropping out at different points during the term, how do we keep track of it all?  We don’t!  Along with our Registrars, we select a single point in time early in the semester that best reflects our student body, and we essentially download a copy of relevant data about the population that is actively enrolled on that date.  We call it a “snapshot” of the data, a “census,” or a “freeze.”  This is what our institution looks like at that point, and we will use these data forever to reflect our students in this term.  If someone drops out after the freeze date, our data will still reflect that student.  If a student enrolled but left before the freeze date, they will not be counted for general reporting or research purposes.

The date selected is typically far enough along after the start of the semester so that students have sorted themselves out.  Many institutions use a particular number of days from the start of classes.  The IPEDS* default language suggests October 15.  Swarthmore has always used October 1 for our fall freeze of student data.  (We have another date for freezing employee data.)  That’s why, if you ask us for the number of students enrolled in September, we’ll ask you to come back in October.
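If it helps to picture the mechanics, here is a minimal sketch in Python (using pandas) of what a freeze can look like.  The file name, column names, and dates are hypothetical, and in practice the extract is pulled from the student information system with the Registrar’s status codes rather than from a flat file:

```python
# A minimal sketch of a census "freeze": keep only the students who were
# actively enrolled as of the freeze date, then save a dated copy that all
# later reporting will use. File and column names here are made up.
import pandas as pd

FREEZE_DATE = pd.Timestamp("2011-10-01")  # illustrative fall census date (October 1)

enrollments = pd.read_csv(
    "enrollment_extract.csv",
    parse_dates=["enroll_date", "withdraw_date"],
)

# "Actively enrolled on the freeze date" means enrolled on or before it,
# and either never withdrawn or withdrawn only after that date.
frozen = enrollments[
    (enrollments["enroll_date"] <= FREEZE_DATE)
    & (enrollments["withdraw_date"].isna() | (enrollments["withdraw_date"] > FREEZE_DATE))
]

# This dated copy is what reporting and research for the term are built on.
frozen.to_csv("fall_student_census_frozen.csv", index=False)
```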

Leading up to the freeze, the Registrar’s office is busy tracking down students to make sure their status is accurate, and IR is checking with other offices (especially IT) to make sure programs are ready to run and new coding hasn’t been introduced which might affect the data extraction process.   (We hate it when that happens – always give your IR shop a heads up about new codes!)

One of the interesting things about Swarthmore that is different from other institutions in which I’ve worked is that the default status for students who haven’t graduated assumes that they return each term.  If they don’t return, their status must be switched to “Inactive” before the freeze date so that we don’t accidentally count them.  In my other experiences, the default coding each term indicated that students were inactive, and their status must be switched to “Active” if they did return.  It certainly makes sense to do it this way here, as most students continue until they graduate.  It was just one of the many little things that charmed me when I first started working here.

 

*IPEDS stands for the Integrated Postsecondary Education Data System, the reporting system used by the National Center for Education Statistics (NCES) of the U.S. Department of Education.  All institutions in the country that participate in any kind of Title IV funding programs (federal student financial aid) must participate in this reporting.

 

It’s the Number 1 time for Rankings – Part II

As promised in the first part of this post, here is a description of US News’ ranking procedure, for non-IR types.

US News sends out five surveys every year – three that go to the IR office of every college and university, one that goes to the Presidents, Provosts, and Admissions Deans at every college, and one that goes to High School guidance counselors.  The surveys that go to H.S. Guidance Counselors and to the college Presidents, Provosts, and Deans are very similar, and are called the “Reputation Survey” and the “Peer Assessment,” respectively.  They list all of the institutions in a category (Swarthmore’s is National Liberal Arts Colleges), and ask the respondent to rate the quality of the undergraduate program at each institution on a one-to-five scale.  There is an option for “don’t know.”  Responses on these two surveys comprise the largest, and most controversial, component of the US News ranking, the “Academic Reputation” score.  It’s the beauty contest.

The three surveys that are sent to the IR office ask questions about 1) financial aid; 2) finances; and 3) everything else.  This year, these three surveys included 713 questions.  I wish that were a typo.  We consult with other offices, crunch a lot of data, do an awful lot of checking and follow-up, and many, many hours and days later, submit our responses to US News.  Then there are several rounds of checks and verifications, in which US News flags items that seem odd based on previous years’ responses, and we must tell them either “oops – please use this instead” or “yes, it is what I said it is.”  Of those 700-plus items, US News uses about a dozen or two in their rankings, and the rest go into other publications and products – on which I’m sure they make oodles of money.  Here are the measures that are used for ranking our category of institution, and the weights assigned to them in computing the final, single score on which we are ranked:

Each category’s weight in the total score is listed first, followed by its measurements and their weights within the category:

  • 22.5% Academic Reputation
      - 67% Avg Peer Rating on “Reputation Survey”
      - 33% Avg H.S. Counselor Rating on Rep Survey
  • 15% Student Selectivity
      - 10% Acceptance Rate
      - 40% Percent in Top 10% of HS class
      - 50% SAT / ACT
  • 20% Faculty Resources
      - 35% Ranked Faculty, Avg Salary+Fringe (COLA)
      - 15% % FT Faculty with PhD or Terminal Degree
      - 5% Percent Faculty who are Full-time
      - 5% Student/Faculty Ratio
      - 30% Small Classes (% < 20)
      - 10% Big Classes (% > 50)
  • 20% Graduation and Retention
      - 80% 6-yr Graduation Rate
      - 20% Freshman Retention Rate
  • 10% Financial Resources
      - 100% Expenditures per Student
  • 7.5% Graduation Rate Performance
      - 100% Actual rate minus rate predicted by formula
  • 5% Alumni Giving Rate
      - 100% # Alumni Giving / # Alumni of Record (Grads)

The percentages next to the individual “measurements” reflect each measure’s contribution to the category it belongs to.  So, for example, the Student Selectivity score is affected least by acceptance rate, which accounts for only 10% of that category’s score.  The percentage next to each category reflects its weight in the overall final score.  As I mentioned, the Academic Reputation score counts the most.

The way that US News comes up with a single score is by first converting each measure to a z-score (remember your introductory statistics?), which is a standardized measure that reflects a score’s standing among all the scores in the distribution, expressed as a proportion of the standard deviation: z = (score – mean) / standard deviation.  If an institution had a 6-year graduation rate that was one standard deviation above the average for all institutions, its z-score would be 1.0.

This transformation is VERY important.  With z-scores at the heart of the method, one cannot guess whether an improvement – or a drop – in a particular measure will result in an improved ranking.  It is our standing on each measure that matters.  If our average SAT scores increased, but everyone else’s went up even more, our position in the distribution would actually drop.

So then they weight, combine, weight again, combine (convert to positive numbers somewhere in there, average a few years together somewhere else, an occasional log transformation, …),  and out pops a final score, which is again rescaled to a maximum value of 100.  (I always picture the Dr. Seuss star-belly sneetch machine.)  One single number.
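For anyone who wants to see the mechanics, here is a minimal sketch of that kind of computation.  The institutions and raw values are hypothetical, only three measures are included for brevity (using weights from the table above), and the final rescaling is illustrative – the real method layers on the additional adjustments mentioned above:

```python
# A minimal sketch of a z-score-based, weighted ranking. Institutions and raw
# values are hypothetical; the rescaling at the end is only one way to force a
# maximum of 100.
from statistics import mean, stdev

# Hypothetical raw measures for three institutions.
data = {
    "College A": {"grad_rate": 0.94, "retention": 0.97, "sat": 1450},
    "College B": {"grad_rate": 0.88, "retention": 0.93, "sat": 1380},
    "College C": {"grad_rate": 0.91, "retention": 0.95, "sat": 1420},
}

# Each measure's weight within its category times the category's weight in the
# total (e.g., the 6-yr grad rate is 80% of a category worth 20% of the total).
weights = {"grad_rate": 0.20 * 0.80, "retention": 0.20 * 0.20, "sat": 0.15 * 0.50}

def z_scores(values):
    """z = (score - mean) / standard deviation, computed across institutions."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

names = list(data)
combined = {name: 0.0 for name in names}
for measure, weight in weights.items():
    zs = z_scores([data[name][measure] for name in names])
    for name, z in zip(names, zs):
        combined[name] += weight * z

# Rescale so the top institution lands at 100 (an illustrative min-max rescale).
top, bottom = max(combined.values()), min(combined.values())
final = {name: 100 * (score - bottom) / (top - bottom) for name, score in combined.items()}

for name, score in sorted(final.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1f}")
```

Even this toy version shows why standing matters: change one institution’s inputs and every z-score shifts, so the final ordering can move in ways no single school controls.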

But there are a couple of other features of the method worth mentioning.  One is that the average faculty compensation for each institution is weighted by a cost-of-living index, which US News doesn’t publish because it is proprietary (they purchased it from Runzheimer).  It is also very outdated (2002).  As Darren McGavin said when opening the leg lamp box in A Christmas Story, “Why, there could be anything in there!”  Another unique feature is the “Graduation Rate Performance” measure, which compares our actual graduation rate with what US News predicts it ought to be, given our expenditures, students’ SAT scores and high school class standing, and our percentage of students who are Pell grant recipients.  Their prediction is based on a regression formula that they derive using the data submitted to them by all institutions.  Did I mention the penalty for being a private institution?  Yes, private institutions have higher graduation rates, so if you are a private institution, your predicted rate is higher too.
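Here is a rough sketch of how a “predicted versus actual” measure like that can be computed.  The institutions, predictor values, and the simple least-squares model are hypothetical stand-ins for US News’ own regression and data:

```python
# A rough sketch of a "graduation rate performance" style measure: regress
# actual graduation rates on a few predictors across institutions, then score
# each institution by actual minus predicted. All values below are made up.
import numpy as np

# Hypothetical institution-level predictors:
# [average SAT, expenditures per student ($000s), proportion Pell recipients]
X = np.array([
    [1450, 85, 0.14],
    [1380, 60, 0.22],
    [1300, 45, 0.30],
    [1500, 95, 0.12],
    [1350, 55, 0.25],
    [1420, 70, 0.18],
])
actual = np.array([0.94, 0.86, 0.75, 0.95, 0.82, 0.90])  # actual 6-yr grad rates

# Ordinary least squares with an intercept column.
design = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(design, actual, rcond=None)

predicted = design @ coef
performance = actual - predicted  # positive means beating the model's prediction
print(np.round(performance, 3))
```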

Institutions are ranked within their category, based on that final single score, and with much fanfare the rankings are released.