National Student Clearinghouse: Degree Title Categories

I enjoy working with National Student Clearinghouse (NSC) return data, but the differences in the way schools report Degree Titles can be frustrating.  For example, here are just a few of the ways “juris doctor” can appear:

[Image: a few of the ways “JD” can appear in the Degree Title field]

I’ve worked on a few projects where it was necessary to work with the type of degree that was earned.  For example, as part of Access & Affordability discussions, it was important to examine the additional degrees that Swarthmore graduates earned after their Swarthmore graduation, by student need level, to determine whether graduates in any particular need categories were earning certain degrees at higher or lower rates than others.

[Image: example data showing degree categories by need category]

In order to do this, I first had to recode degree titles into degree categories.

The NSC does make available some crosswalks, which can be found at https://nscresearchcenter.org/workingwithourdata/

The Credential_Level_Lookup_table can be useful for some projects.  However, my particular project required more detail than the table provides (for example, Juris Doctor degrees are listed as “Doctoral-Professional,” and I needed to be able to separate out this degree), so I created my own syntax.

I’m sharing this syntax (below) as a starting point for your own projects. This is not a comprehensive list of every single degree title that has been submitted to the NSC, so always check your own data to see what you need to add to the syntax.

While I have found this to be rare, there are occasional degrees that come through without a title in any of the records for that degree.  I’ve therefore also included a bit of syntax at the top that codes records with Graduated=“Y” but a blank Degree Title to “unknown.”  If you choose to handle those records differently, you can comment out that syntax.
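To give a sense of what that looks like, here is a minimal sketch in SPSS syntax.  The variable names (DegreeTitle, Graduated, DegreeTitleCat) are placeholders for whatever your NSC return file uses, and the “juris doctor” pattern matching is just one illustrative example; the full syntax linked at the bottom of this post is far more complete.

* Create a string variable to hold the degree category.
STRING DegreeTitleCat (A30).

* Graduated records with a blank Degree Title are coded to "Unknown".
* Comment out this line to handle blank titles differently.
IF (Graduated = "Y" AND DegreeTitle = " ") DegreeTitleCat = "Unknown".

* Example: catch several of the ways "juris doctor" can appear.
IF (CHAR.INDEX(UPCASE(DegreeTitle), "JURIS") > 0
    OR CHAR.INDEX(UPCASE(DegreeTitle), "JD") > 0) DegreeTitleCat = "JD".
EXECUTE.

The real syntax simply repeats that kind of pattern matching (or uses DO IF blocks) for each category, so adding a newly encountered title variant is usually a one-line change.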

Once you have created your new degree category variable(s), you can select one record per person and run it against your institutional data.  One option is to keep, for those who have graduated, the highest degree earned.  You can use “Identify Duplicate Cases” to Define Matching Cases by ID and then Sort Within Matching Groups by DegreeTitleCatShort (or any other degree title category variable you’ve created).  Be sure to select Ascending or Descending based on your project and whether you want the First or Last record in each group to be Primary.
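If you prefer pasting syntax to using the dialog, here is a rough sketch of that step by hand, assuming a hypothetical ID variable named ID, a flag named PrimaryLast, and a category variable coded so that the highest degree sorts last; it does roughly what the Identify Duplicate Cases dialog generates for you.

* Sort so that the highest degree category is the last record within each person.
SORT CASES BY ID (A) DegreeTitleCatShort (A).

* Flag the last record in each ID group as the primary record.
MATCH FILES /FILE=* /BY ID /LAST=PrimaryLast.
EXECUTE.

* Keep only the flagged (highest degree) record for each person.
SELECT IF (PrimaryLast = 1).
EXECUTE.

Whether you sort ascending or descending, and keep the first or the last case in each group, depends on how your category variable is coded – just as it does in the dialog.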

Hope this helps you in your NSC projects!

SPSS syntax:  Degree Title Syntax to share v3

Why IR is hiring

With an increasing amount of my time for the past two years spent with the Provost’s Office and the College in general helping to guide our assessment efforts, the IR Office has been struggling mightily to keep up with our work.   In January we were approved for an additional limited term position in the IR office to help offset the loss of my time.  The need for the position will be reevaluated in three years, which corresponds to the term of the new (second) Associate Provost position.   This is not an accident.  Since both positions are intended to relieve overloads caused, at least in part, by our needs for assessment and institutional effectiveness leadership, it makes sense to review this new infrastructure a few years down the road to see how it is serving us, as our assessment processes improve and our work becomes more routine.

With this additional IR position and Alex’s departure, I’ll be in a way replacing both Alex and myself, as I continue focusing more on assessment and on our upcoming accreditation mid-term report.   But while Alex and I shared much of the responsibility for IR reporting and research in the past, I’ll be structuring the two positions to reflect these two key roles more distinctly.   A Data and Reporting Officer will have primary responsibility for data management, and routine and ad hoc reporting for internal and external purposes.  An Institutional Research Associate (the limited term position) will focus more on special studies, and is expected to provide the advanced analytic skills for our projects.   These two positions, and mine, will share somewhat in responsibilities and have just enough overlap to serve us in those many “all hands” moments.   It should be an exciting time for Institutional Research – and for assessment!

Walking the Walk

[Image: walking feet (photo by :::mindgraph:::)]

Although we provide support to other areas of the College undertaking assessment projects, like many IR offices, ours had yet to fully engage with our own assessment.  Sure, we had articulated goals for our office some time ago, and had reflected on our success on the simpler, numeric ones, but only this year (as part of increased College-wide efforts) did we begin to grapple seriously with how to assess some of the more complex ones.

One of our take-aways from the Tri-College Teagle-funded Assessment project that we’ve been involved with was that the effort to design rigorous direct assessment is key, and the work that goes into thinking about this may sometimes be even more important than the measurement itself.   It’s one thing to observe this, and quite another to experience it firsthand.

As Alex and I looked at our goals, each phrase begged for clarification!  We don’t want just to conduct research studies; we want to conduct effective studies.  But what do we mean by “effective”?   How do we meet our audience’s needs?   How do we identify our audience’s needs?   What is the nature of the relationship between the satisfaction of a consumer of our research and the quality of the research?   And on and on.   Before even identifying the sorts of evidence of our effectiveness that we should look for, we had already identified at least a dozen ways to be more proactive in our work.

The other revelation, which should not have been a surprise because I have said it to others so often, is that there are seldom perfect measures.   Therefore, we need to look for an array of evidence that will provide confidence and direction. There are many ways to define “effective” research – what is most important (and manageable) to know?

And finally, it’s easy to get caught up in these discussions and in setting up processes to collect more and more evidence.   We could spend a lot of time exploring how a research study informed someone’s work and decision-making, but that time could instead have been used to expand the study.   We have to find a balance, so that the assessments we do are most helpful and don’t end up distracting us from the very work we’re trying to improve.

Catching our breath…

[Image: a lynx resting (photo by Tambako the Jaguar)]

It’s hard to believe that this semester is finally drawing to a close.   The multitudes of followers of our blog may have noticed our sparse posts this spring…   Shifting responsibilities, the timing of projects, and just the general “stuff” of IR have left us little time to keep up.

Part of my own busy-ness has been due to an increased focus on assessment, as mentioned in an earlier post.  This spring, the Associate Provost and I met with faculty members in each of our departments to talk about articulating goals and objectives for student learning.   In spite of our being there to discuss what could rightly be perceived as another burden, these were wonderful meetings in which the participants inevitably ended up discussing their values as educators and their concerns for their students’ experiences at Swarthmore and beyond.  In spite of the time it took to plan, attend, and follow up on each of these meetings, it has been an inspiring few months.

Spring “reporting” is mostly finished.  Our IPEDS and other external reports are filed, our Factbook is printed, and our guidebook surveys have been completed (although we are now awaiting the “assessment and verification” rounds for US News).  Soon we will capture our “class file” – data reflecting this year’s graduates and their degrees – which closes out the year for freezing and most of the basic reporting of institutional data.

We also are fielding two major surveys this spring, our biennial Senior Survey (my project) and a survey of Parents (Alex’s project).    Even though we are fortunate to work within a consortium that provides incredibly responsive technical support for survey administration, the projects still require a lot of preparation in the way of coordinating with others on campus, creating college-specific questions, preparing correspondence, creating population files, trouble-shooting, etc.  The Senior Survey is closed, and I will soon begin to prepare feedback reports to others on campus.   The Parents Survey is still live, and will keep Alex busy for quite some time.

As we turn to summer and the hope of having a quieter time in which to catch up, we anticipate focusing on our two faculty grant-funded projects.  We don’t normally work on faculty projects – only when they are closely related to institutional research.

We are finishing our last year of work with the Howard Hughes Medical Institute (HHMI) grant.  IR supports the assessment of the peer mentor programs (focusing on the Biology and Mathematics and Statistics Departments) through analysis of institutional and program experience data, and surveys of student participants.   We will be processing the final year’s surveys, and then I will be updating and finalizing a comprehensive report on these analyses that I prepared last summer.

Alex is IR’s point person for the multi-institutional Sloan-funded CUSTEMS project, which focuses on the success of underrepresented students in the sciences.  Not only does he provide our own data for the project, but he will be working with the project leadership on “special studies,” conducting multi-institutional analyses beyond routine reporting to address special research needs.

I wonder if three months from now I’ll be writing… “It’s hard to believe this busy summer is finally ending!”

Autonomy and Assessment

Swarthmore presents an interesting mix of uniformity and decentralization.  As a residential, undergraduate liberal arts institution, it is easy to summarize.  Our size is small, and retention and graduation rates are very strong so that enrollment is very predictable from year to year (about 1500).  There are no graduate students.   Generating enrollment projections can be downright boring!   Standards are high for students coming in and going out.  Our faculty is heavily reliant on tenure lines.  There are no separate schools creating the silos that are so vexing to my counterparts trying to do institutional research at larger colleges and universities.

But due to a history and culture of very strong faculty governance, our departments are among the most autonomous that I’ve seen, even compared with those at very similar institutions.  The most important decisions are made with considerable input from and deference to the faculty, if not by the faculty itself.    On one hand that means that members of the administration are generally regarded in a collegial manner, and that once decisions are made, they are truly made.  On the other hand it can be a delicate matter to introduce change, especially change necessitated by external forces.  Though the process is occasionally frustrating (and quite slow), I think this is generally an excellent thing.  (It does, however, take quite a toll on our faculty in terms of their workload.)

When Swarthmore, like most institutions, was first “dinged” by our accrediting agency for not doing enough formal assessment (2004), the initial response was understandable indignation.  Self-reflection and evaluation are what we do best.  We talk endlessly about what we do, how we do it, and how we could do it better – in committees, in hallways, with our students, alumni, and each other.  I have never met anyone here who doesn’t care deeply about serving our students.

But upon gathering ourselves to address this concern, the faculty designated an ad hoc committee – composed entirely of faculty.  This group considered the criticism, looked at what we do and what we might do better, and in 2006 recommended a plan.  The plan was discussed by the entire faculty, modified, and finally approved, and it stands as our foundational document for academic assessment.  It’s an elegant document.  They took a thoughtful and measured approach, included key elements and ideas that we’ll use and build on for years, and the best part is that the faculty owns it.

We are now at a stage of identifying places where we need to bolster our efforts in assessment.  My position has been modified so that I now report one third time to the Provost’s Office to work with faculty on this process.  It has been my privilege to participate in meetings with chairs in each division this past fall, and at those meetings I am struck again by the autonomy of our faculty, departments, and programs.  As an outsider, it is a little scary.  Though no one here is interested in creating a uniform approach or in any way dictating to departments what they should do for assessment, particular steps ought to be taken for the process to be meaningful, and I wonder how that will happen.  But then I remember what it is that the faculty are fiercely protecting – it’s not about turf, it’s about students and the experiences the department is providing them.   Since assessment is itself about student learning,  I have no doubts that the members of the faculty will make it work.

The End

[Image: goal posts (photo by DB-2)]

“Begin with the end in mind” is advice from Stephen Covey that I’ve always found useful.  Some people ask what you would want to have written on your tombstone.  (Writing this post on Halloween may be influencing my choice of images here!)   But in making many decisions I’ve found it helpful to think about what path I might wish I had chosen if I looked backwards from the future.  Many of us wrote “Histories of the Future” as part of our thinking about Swarthmore’s strategic planning.  Envisioning what you would like to see is a way of thinking through and clarifying your goals and what you might need to do to reach them.

Good assessment takes the same first step.  Rather than thinking about what things you could most easily measure, or how to prove the worth of your activities to an external audience, you start by articulating what results you would like your activity to achieve.  What are the key things that I want my students to have learned when they finish this course? What should a student who majors in my department be able to know and do when they graduate?  For an administrative department, what should be the result for someone working with my office?  What are the key outcomes that should be accomplished by this project?

This exercise is valuable before you ever start thinking about capturing information.  Having a conversation about goals with departmental colleagues can be challenging, but very rewarding, because so many of our goals are implicit.  Trying to capture them in words and hearing others’ thoughts make us think about them in new ways.  Explicitly identifying the goals of an activity can put a different frame around it.  As part of our tri-college Teagle Foundation grant “Sustainable Departmental Level Assessment of Student Learning,” one faculty member remarked that going through the exercise of stating goals and objectives had already changed the way she approached teaching her course.  It sounds hokey, but it really can be transforming.

If you’re just starting to think about this, look for places where you’ve described what you do.  How have you described yourself on your web site, in printed materials, or even in your job ads?  These sorts of descriptions often reflect our priorities and goals.  Does your professional association offer any guidance on student learning outcomes, or on best practices?   These are all great starting points for this important work.   Only later, after articulating goals and, based on them, more specific objectives, does it make sense to begin thinking about collecting information that might reflect them.

Surveys and Assessment

I’ll be talking a lot about Assessment here, but one thing I’d like to get off my chest at the outset is that assessment does not equal doing a survey.   I’m thinking of writing a song about it.  So many times when I’ve talked to faculty and staff members about determining whether they’re meeting their goals for student learning or for their administrative offices, the first thought is, “I guess we should do a survey!”  I understand the inclination; it’s natural – what better way to know how well you’re reaching your audience than to ask them!  But especially in the case of student learning outcomes, surveys generally provide only indirect measures, at best.  In the Venn diagram:

[Image: Venn diagram showing little overlap between Assessment and Surveys]

(Sorry, I’ve been especially amused by Venn diagrams ever since I heard comedian Eddie Izzard riffing on Venn…)

Surveys are great for a lot of things, and they can provide incredibly valuable information as a piece of the assessment puzzle, but they are often overused and, unfortunately, poorly used.  While it is sometimes possible to construct them carefully enough to yield direct assessment (for example, with questions that provide evidence of the knowledge you were attempting to convey – like a quiz), more often they are used to ask about satisfaction and self-reported learning.  If your goal was for students to be satisfied with your course, that’s fine.  But probably your goals had more to do with particular content areas and competencies.  To learn about the extent to which students have grasped these, you’d want more objective evidence than the students’ own gut reactions.  (That, too, may be useful to know, but it is not direct evidence.)

I would counsel people to use surveys minimally in assessment – and to get corroborating evidence before making changes based on survey results.

What can you do instead?  Stay tuned (or for a simple preview, see our webpage on “Alternatives”)…