Keeping Score

President Obama announced the new “College Scorecard” in his State of the Union address, and the interactive online tool was released the next day.  The intended purpose of the tool is to provide useful information to families about affordability and student success at individual colleges.  Since then, the IR community has been buzzing.  Much of the data in the tool is reported via the IR offices, and many of us are already being asked to explain the data and the way it is presented.  Several of our listservs became quite busy as my colleagues compared notes on glitches in the lookup feature of the tool (zip code searches were problematic early on) and the accuracy of the data, and debated the clarity of the labels and the wisdom of the simple presentation.

This project is an example of a wonderful goal that is incredibly hard to execute well.   Seeing all the press coverage (both mainstream and higher ed press) and hearing from my colleagues, I think about the balance of such a project.   It seems reasonable that after thorough development and testing, there would be a point at which the best course of action is to just move forward and release it even though it is not perfect.   But where is that point?  One could argue whether this was the correct point for the Scorecard project, but all of the attention is creating increased awareness by the public, as well as pressures on the designers for improvement, and on colleges for accuracy and accountability.


I wonder how many people remember the clunky online tool, COOL (the College Opportunities On Line), from the early 00’s, and the growing pains that it went through as it evolved into the College Navigator, a pretty spiffy – and very useful – tool for families to find a wealth of information about colleges?  These things evolve, and if they aren’t useful and effective, they won’t survive.  The trick is not doing more harm than good while the kinks are worked out.

What’s in the Scorecard and where did it come from?   The Scorecard has six categories of information:  Undergraduate Enrollment, Costs, Graduation Rates, Loan Default Rate, Median Borrowing, and Employment.   Information about the data and its sources can be found at the Scorecard website, but it takes a little work!   Click on the far right square that says “About the Scorecard” on the middle row of squares.  From the text that spins up, click “Here”, which opens another window (not sure if these are “pop-ups” or “floating frames”), and that’s where the descriptions are.

The data for the first three items come from our reporting to the federal government through the IPEDS (Integrated Postsecondary Education Data System), which I have posted about before.   Here is yet another reason to make sure we report accurately!  The next two categories, Loan Default Rate and Median Borrowing, get their data from federal reporting through the National Student Loan Data System (NSLDS).   The last item, Employment, provides no actual data, but rather a sly nudge for users of the system to contact the institutions directly.

While each of these measures creates its own challenge to simplicity and clarity of explanation, one of the more confusing, and hence controversial, measures is the “Cost.”  The display says “Net price is what undergraduate students pay after grants and scholarships (financial aid you don’t have to pay back) are subtracted from the institution’s cost of attendance.”  This is an important concept, and we all want students to understand why they should not just look at the “sticker price” of a college, but at what students actually pay after accounting for aid.  Some very expensive private colleges can actually cost less than public institutions once aid is factored in, and this is a very difficult message to get out!

But the more precise definition behind the scenes (that floating frame!) says “the average yearly price actually charged to first-time, full-time undergraduate students receiving student aid at an institution of higher education after deducting such aid.”  The first point of confusion is that this net price is calculated only for first-time, full-time, aided students, rather than averaged across all students.  The second is the actual formula, which takes some more digging.  It uses the “cost of attendance,” which is tuition, fees, room, and board, PLUS a standard estimate of the cost for books, supplies, and other expenses.  The aid dollars include Pell grants, other federal grants, state or local government grants (including tuition waivers), and institutional grants (scholarship aid that is not repaid).  And the third point that may cause confusion is, of course, the final, single figure itself, which is an average, while no one is average.
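
For the quantitatively inclined, here is a minimal sketch of that calculation in R, as I read the definitions behind that floating frame; the function and variable names are mine for illustration, not official IPEDS fields:

# Scorecard "net price," as I read it: cost of attendance minus grant aid,
# computed per first-time, full-time aided student, then averaged.
# (Variable names here are my own shorthand, not official field names.)
net_price <- function(tuition_fees, room_board, books_supplies_other,
                      pell, other_federal, state_local, institutional) {
  coa    <- tuition_fees + room_board + books_supplies_other     # cost of attendance
  grants <- pell + other_federal + state_local + institutional   # aid that is not repaid
  mean(coa - grants)                                             # the single figure shown
}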

Will a family dig that deep?   Would they understand the terminology and nuances if they did?   Would they be able to guess whether their student would be an aid recipient, and if so, whether they’d be like the average aid recipient?   The net price presentation that already exists in the College Navigator has an advantage over the single figure shown in the Scorecard, because it shows the value for each of a number of income ranges.   While aid determinations are based on much more than simple income, at least this presentation more clearly demonstrates that the net price for individuals varies – by a lot!

Transition

Just after the winter holidays, Alex shared with me the wonderful and sad news that he would be moving on to another position outside the College.  It’s a great opportunity for him to advance, and also to be closer to his family.  But we’ll miss him a lot!  Alex’s last day was January 25th.  His departure is a loss to the office and to the College.

Check out Alex’s new gig!
Passaic County Community College – Institutional Research and Planning

Walking the Walk

[photo: walking feet, by :::mindgraph:::]

Despite providing support to other areas of the College undertaking assessment projects, our office, like many IR offices, had yet to fully engage with its own assessment.  Sure, we had articulated goals for our office some time ago, and had reflected on our success on the simpler, numeric ones, but only this year (as part of increased College-wide efforts) did we begin to grapple seriously with how to assess some of the more complex ones.

One of our take-aways from the Tri-College Teagle-funded Assessment project that we’ve been involved with was that the effort to design rigorous direct assessment is key, and the work that goes into thinking about this may sometimes be even more important than the measurement itself.   It’s one thing to observe this, and quite another to experience it firsthand.

As Alex and I looked at our goals, each phrase begged clarification!  We don’t want to just conduct research studies; we want to conduct effective studies.  But what do we mean by “effective”?  How do we meet our audience’s needs?  How do we identify our audience’s needs?  What is the nature of the relationship between the satisfaction of a consumer of our research and the quality of the research?  And on and on.  Before even identifying the sorts of evidence of our effectiveness that we should look for, we had already identified at least a dozen ways to be more proactive in our work.

The other revelation, which should not have been a surprise because I have said it to others so often, is that there are seldom perfect measures.   Therefore, we need to look for an array of evidence that will provide confidence and direction. There are many ways to define “effective” research – what is most important (and manageable) to know?

And finally, it’s easy to get caught up in these discussions and in setting up processes to collect more and more evidence.  We could spend a lot of time exploring how a research study informed someone’s work and decision-making, but that time could instead have been used to expand the study.  We have to find a balance, so that we do assessments that are most helpful, and don’t end up distracting us from the very work we’re trying to improve.

Fast zip code map


I’ve recently been playing around with the ggmap package in R and was able to quickly put together a bubble chart version of student home zip codes.  As you can see from the two legends, the size and color both reflect the number of students in these zip codes.

I will certainly be playing around with ggmap some more, as this map required only two lines of code (after the ggmap library was loaded).

R CODE:

library(ggmap)  # provides qmap() and the ggplot2 mapping layers used below

# grab a US base map from OpenStreetMap tiles
usmap <- qmap('united states', zoom = 4, source = 'osm', extent = 'panel')

# one bubble per zip code; size and color both encode the student count
usmap + geom_point(aes(x = X, y = Y, size = COUNT, color = COUNT), data = DATA, alpha = .5)
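
For the record, DATA above is nothing fancy: just a data frame with a longitude (X), latitude (Y), and student COUNT for each zip code.  Something along these lines would build it – a hypothetical sketch that assumes a students$zip vector and uses the zipcode package for the latitude/longitude lookup (any geocoding source would do):

library(zipcode)                                  # lookup table of US zip codes with lat/long
data(zipcode)

zip_counts <- as.data.frame(table(students$zip))  # tally students per zip (students$zip is assumed)
names(zip_counts) <- c("zip", "COUNT")

DATA <- merge(zip_counts, zipcode, by = "zip")    # attach latitude/longitude to each zip
DATA$X <- DATA$longitude                          # qmap/ggplot put longitude on x
DATA$Y <- DATA$latitude                           # and latitude on y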

HAPPY HOLIDAYS!

“Optimal” Faculty to Staff Ratio

An article in the Chronicle today reports on a study by two economists about the optimal faculty to staff ratio.  The study is focused on Research 1 and 2 public institutions, but I couldn’t stop myself from applying the simple math formula to a small liberal arts college, such as Swarthmore, to see what would happen.

We are actually freezing our employee data today, and so I don’t yet have current numbers, but based on last year’s data we had 944 employees – 699 full-time.  The study identifies the optimal ratio as 3 tenure-track faculty to each full-time professional administrator.  Using IPEDS reporting definitions, we had 162 tenured and on-track faculty members last year, and 242 full-time professional administrators (Executive/ Administrative/ Managerial, and Other Professional).  That’s a conservative estimate of “professional administrators,” because it’s unclear to me from the paper which categories are included in the final equation.  All non-faculty staff are considered at different points in their modeling.

So if that 3 to 1 ratio were desirable here, we would need to add 564 tenure-track faculty.   I don’t know how the 242 administrators would manage all the new buildings and infrastructure we’d need.   And our student to faculty ratio would drop to about 2:1.   Alternately, we could get rid of about 188 professional administrators to drop their total to 54.   In that case our 162 faculty would have to start managing housing, administering grants, raising funds, supporting IT, doing IPEDS reporting, etc., in addition to all their regular responsibilities.  I’m sure they’d enjoy that.
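
The back-of-the-envelope arithmetic is easy enough to check in a few lines of R, using the counts reported above:

faculty <- 162          # tenured and tenure-track faculty (IPEDS, last year)
admins  <- 242          # full-time professional administrators

admins * 3 - faculty    # 564 more faculty needed to reach 3:1
faculty / 3             # 54 administrators supportable by our current faculty
admins - faculty / 3    # about 188 administrator positions to cut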

Guess I’ll just have to wait until these researchers tackle this issue for liberal arts colleges.

Time for a prediction

[photo: crystal ball, by Cillian Storm]

I don’t know, is it me?   I think it gets quieter and quieter each year after US News releases its rankings.   Has the publication that all of higher education loves to hate lost its impact?  I saw very little press yesterday, and not even much buzz on the IR listservs, in response to the release of US News’ annual rankings.   Maybe it’s all the bratty little upstart rankings that have begun to get more attention, or that we’ve just reached a point of rankings saturation and there’s nothing more to say.

I’m not big on making predictions.  In fact, whenever anyone asks me to predict what our rank will be, I make a lame joke about leaving my dice at home.  But US News depends heavily on these rankings in its business model, and I wonder if they’re missing the press they used to get.  What they need is some controversy!  I predict that it’s time for US News to “tweak” its methodology, which will result in some upsets in the rankings and presto!  More press!  They could even just update their Cost of Living Adjustment on the Faculty Salary measure – as far as I can tell, they’ve been using the same index since 2002.  That would certainly be defensible, and could have the effect of shaking things up.  But mark my words, SOMETHING will change next year!