Data Disconnect

Sadly, Thursday’s “Headcount” blog at the Chronicle reports on another institution misrepresenting data used in the US News rankings. While there is plenty in this story to be upset about, I found myself annoyed by this statement:

Nevertheless, replacing hard-and-fast numbers with mere estimates involves a conscious choice, and, it’s fair to assume, an intent to polish the truth.

Certainly there are situations where the intent is to “polish the truth,” and I have no idea whether that was the case at this institution, but I actually think it’s UNfair to assume the intent. The assumption reflects a fundamental misunderstanding of data systems and how they get used for reporting. Most data are stored by institutions for operational purposes – admissions data, for instance, are first stored for the purpose of making admissions decisions. It’s easy to imagine estimates being made and fields being populated with less-than-perfect data if those estimates might be helpful in the admissions review process. The person making the estimates may not be aware that down the road those numbers get used for other purposes, such as reporting. In an ideal world, those recording the data would be aware of all the uses to which the data will be put, and those using the data would know exactly how the data got there. But there are thousands of fields in these databases, and as anyone in IR can attest, disconnects happen.

Published by

Robin Huntington Shores

Currently the Director of Institutional Research and Assessment at Swarthmore College, Robin has worked in Institutional Research for over 20 years at a range of institutions.