Along with data collected by the federal government via IPEDS (the Integrated Postsecondary Education Data System), information compiled through the Common Data Set (CDS) forms much of the basis for the metrics that power the college ranking industry.
These same numbers and statistics populate any number of guides, websites, and search engines sold to college-bound students and their families, who have come to rely on them when making decisions about “fit” and likelihood of admission.
But sometimes numbers lie. In the absence of reliable systems to prevent fraud, individual data points are subject to interpretation and can be manipulated by college administrators anxious to improve their standing among peers.
And this is true as much for data collected by the feds as
it is for the CDS. The difference is
that the folks overseeing
the CDS use the information to make money—lots of money.
Yet for years, the brain trust behind the CDS (the College Board, Peterson’s, and US News) has steadfastly resisted calls to reform the way it does business.
Robert Morse, the US News rankings guru, insists that if colleges and universities are willing to lie to the federal government, they’ll lie to any organization attempting to rank or otherwise describe an institution based on information those institutions freely provide.
And in the face of a series of scandals involving deliberate misreporting, Brian Kelly, also of US News, assured Boston Magazine that “Ninety-nine point nine percent of the schools are treating this seriously and reporting with integrity.”
But that claim is disputed by college administrators surveyed by Inside Higher Ed, 90 percent of whom believe “other” institutions falsify data to make themselves look better in the eyes of the public.
Kelly also told Boston Magazine, “It’s not up to us to solve problems.
We’re just putting data out there.”
It’s a little like a greengrocer who refuses to take responsibility for selling bad fruit. The grocer packages the fruit, displays it, and sells it at a significant profit. Yet when anyone complains about the quality of the fruit, the grocer suggests they take it up with the growers.
And in this case, the fruit can be questionable if not outright rotten.
Take, for example, the question of how test-optional colleges report standardized test scores. This has recently been an issue on the CDS “listserv,” through which “crowdsourced” technical assistance is provided to college-based staff responsible for completing CDS forms.
Question C9 on the CDS asks colleges to report a range of standardized test scores for the ACT and SAT. For test-optional colleges, the question arises as to which scores should be reported, particularly in cases where students submit scores but request that they not be used in making an admissions decision.
When asked whether all scores or only a subset should be reported, helpful colleagues (in the absence of a more formal system of technical assistance) responded in a number of different ways.
The first response came from a relatively new test-optional college: “…we only report on test scores used in the admission decision.”
The next response came from a college that is not test-optional but that insisted, “…the directions do not say submit scores only used in the admissions decisions. It says use all scores submitted.”
Another college asked whether colleges were submitting ALL scores received for each student or only the highest score on each test, as the CDS neither suggests nor recommends superscoring submitted scores.
After more debate about the policies in use at different schools, Robert Morse, whose official title is chief data strategist at US News, finally intervened with his interpretation: schools should submit scores for all enrolled students who submitted them, with no differentiation as to whether the scores were used in admissions or not.
This raised the question of what constitutes “submitted.” Does it mean scores simply self-reported on the application, or only those sent officially by the testing agency? And if a student asks that scores not be considered for admissions, are the scores appearing on the form actually “submitted”?
The entire debate finally came down to the fact that the CDS has chosen, for whatever reason, not to make its definitions crystal clear or to communicate exactly which scores are to be included: those used in admissions decisions or all scores submitted.
And more importantly, the CDS has failed to provide an
explicit directive to ensure that schools do not exclude the “submitted” scores
of certain groups of applicants—legacy, international, athlete, AND
test-optional.
To the outside world, this debate may seem like counting the number of angels dancing on the head of a pin. But for applicants and those who advise them, the standardized test score ranges a college reports can affect judgments about “fit” at a particular college.
These scores also have a very real impact on a college’s US News rank, which may explain why some schools want to report only the scores students ask to have used for admissions, as those would likely be higher. And of course, colleges want to superscore their reports, providing numbers that reflect only the highest scores submitted by individual applicants.
But this is just one of many data points subject to
interpretation. It’s only when the “interpretation” strays too
far from the acceptable that scandals are revealed and administrators lose
their jobs.
With all the money made by the College Board, Wintergreen Orchard House, US News, and Peterson’s, surely some could be set aside to develop clear guidelines and to provide annual training or more consistent technical assistance for college-based staff.
At a minimum, college administrators should be asked to
“certify” the accuracy of the data provided.
Sanctions for falsified data should be clear. And while it would be more invasive (and is not likely to happen in this lifetime), random data audits or spot checks should be conducted every year.
Because at the end of the day, organizations profiting from the data need to take
some responsibility for its accuracy.
This is Part 2 of a two-part series. Part 1 may be found here.