I was pleased to see the release of Education Sector’s report, “Debt to Degree: A New Way of Measuring College Success,” by Kevin Carey and Erin Dillon. They created a new measure, a “borrowing to credential ratio,” which divides the total amount of borrowing by the number of degrees or credentials awarded. Their focus on institutional productivity and dedication to methodological transparency (their data are made easily accessible on the Education Sector’s website) are certainly commendable.
That said, I have several concerns with their report. I will focus on two key points, both of which pertain to how this approach would affect the measurement of performance for 2-year and 4-year not-for-profit (public and private) colleges and universities. My comments are based on an analysis in which I merged IPEDS data with the Education Sector data to analyze additional measures; my final sample consists of 2,654 institutions.
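For readers who want to follow along, here is a rough sketch of that kind of merge. It is not the actual code used for the analysis; the file names and column names ("unitid", "btc_ratio", etc.) are placeholders I am assuming for illustration.

```python
# A minimal sketch of merging Education Sector's debt-to-degree data with IPEDS.
# File names and column names are hypothetical placeholders.
import pandas as pd

ed_sector = pd.read_csv("education_sector_debt_to_degree.csv")  # hypothetical file
ipeds = pd.read_csv("ipeds_characteristics.csv")                # hypothetical file

# Merge on the shared IPEDS unit ID, keeping institutions present in both files
merged = ed_sector.merge(ipeds, on="unitid", how="inner")
print(len(merged))  # the analysis described above ends up with 2,654 institutions
```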
Point 1: Use of the suggested "borrowing to credential" ratio has the potential to reduce college access for low-income students.
The authors rightly note that flagship public and elite private institutions appear successful on this metric because they enroll a lower percentage of financially needy students and have more institutional resources (thus reducing the incidence of borrowing). These high-performing institutions also enroll students who are easier to graduate (e.g., those with higher entering test scores and better academic preparation), which increases the denominator of the borrowing to credential ratio.
Specifically, the correlation between the percentage of Pell Grant recipients (averaged over the 2007-08 and 2008-09 academic years from IPEDS) and the borrowing to credential ratio is 0.455 for public 4-year institutions and 0.479 for private 4-year institutions, compared to 0.158 for 2-year institutions. In other words, the more Pell recipients an institution enrolls, the worse it tends to perform on this ratio.
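A sketch of how such sector-by-sector correlations could be computed is below, assuming the merged data frame from the earlier sketch and placeholder columns "sector", "pct_pell" (the two-year average Pell share), and "btc_ratio".

```python
# Pearson correlation between Pell share and the borrowing to credential ratio,
# computed separately within each sector (column names are placeholders).
for sector, group in merged.groupby("sector"):
    r = group["pct_pell"].corr(group["btc_ratio"])
    print(f"{sector}: r = {r:.3f}")
```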
Even though Carey and Dillon focus on comparing similar institutions in their report (for example, Iowa State and Florida State), it is very likely that in real life (i.e., the policy world) the data will be used to compare dissimilar institutions. The expected unintended consequence is “cream skimming,” in which institutions have an incentive to enroll either high-income students or low-income students with a very high likelihood of graduation. (Sara and I have previously raised concerns about “cream skimming” of Pell Grant recipients in other work.)
The graphs below further illustrate the relationship between the percentage of Pell recipients and the borrowing to credential ratio for each of the three sectors.
There is also a relationship between a university’s endowment (per full-time-equivalent student) and the average borrowing to credential ratio. Among public 4-year universities, the correlation between per-student endowment and the borrowing to credential ratio is -.134, suggesting that institutions with higher endowments tend to have lower borrowing to credential ratios. The relationship at private 4-year universities is considerably stronger, with a correlation of -.346. For example, Princeton, Cooper Union, Caltech, Pomona, and Harvard are all in the top 15 for lowest borrowing to credential ratios.
The relationship between borrowing to credential ratios and standardized test scores is even stronger. The correlations for four-year public and private universities are -.488 and -.589, respectively. This suggests that low borrowing to credential ratios are in part a function of student inputs, not just factors within an institution’s control. In other words, the metric does not solely measure college performance.
It is critical to note that the average borrowing to credential ratio should be lower at institutions that have more financial resources and that enroll more students who can afford to attend college without borrowing. However, institutions that enroll a large percentage of Pell recipients should not be let off the hook for their borrowing to credential ratios. These two examples highlight the importance of input-adjusted comparisons, in which statistical adjustments allow institutions to be compared based more on their value-added than on their initial level of resources. The authors should be vigilant in making sure their work is used for input-adjusted rather than unadjusted comparisons. Otherwise, institutions with fewer resources will be much more likely to be punished even when they are successfully graduating students with relatively low levels of debt.
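One simple way to do this kind of input adjustment (by no means the only way, and not something the report itself proposes) is to regress the ratio on student and resource inputs and compare institutions on the residual. The column names below are the same placeholders used in the earlier sketches.

```python
# Illustrative input adjustment: regress the ratio on inputs, then compare
# institutions on the residual rather than on the raw ratio.
import statsmodels.formula.api as smf

model = smf.ols(
    "btc_ratio ~ pct_pell + endowment_per_fte + median_test_score",
    data=merged,
).fit()

# Positive residual: more borrowing per credential than inputs would predict;
# negative residual: less than predicted, i.e., better input-adjusted performance.
merged["input_adjusted_residual"] = model.resid
```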
Point 2: The IPEDS classification of two-year versus four-year institutions does not necessarily reflect a college’s primary mission.
IPEDS classifies a college as a 4-year institution if it offers at least one bachelor’s degree program, even if the vast majority of students are enrolled in 2-year programs. Think of Miami Dade College, where more than 97% of students are in 2-year programs but the institution is classified as a 4-year institution.
For the purposes of calculating a borrowing to credential ratio, the Carnegie basic classification system is more appropriate. Under that system an institution is classified as an associate’s college if bachelor’s degrees make up less than ten percent of all undergraduate credentials. The Education Sector report classifies 60 institutions as four-year colleges that are Carnegie associate’s institutions.
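The ten-percent rule is easy to apply to the merged data. Here is a sketch, assuming hypothetical columns counting bachelor's degrees and total undergraduate credentials awarded.

```python
# Reclassify institutions using the Carnegie-style ten-percent rule
# (column names are hypothetical placeholders).
import numpy as np

share_bachelors = merged["bachelors_awarded"] / merged["undergrad_credentials_awarded"]
merged["carnegie_group"] = np.where(
    share_bachelors < 0.10, "associates", "bachelors_or_above"
)
```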
This classification decision has important ramifications for the borrowing to credential comparisons. The average borrowing to credential ratio by sector is as follows:
Two-year colleges, Carnegie associate’s: $6,579 (n=942)
Four-year colleges, Carnegie associate’s: $13,563 (n=60)
Four-year colleges, Carnegie bachelor’s or above: $23,166 (n=1,421)
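These sector averages can be reproduced with a simple grouped summary, assuming the "carnegie_group" column from the previous sketch and a hypothetical "ipeds_level" column recording whether IPEDS treats the institution as 2-year or 4-year.

```python
# Average borrowing to credential ratio (and institution count) by IPEDS level
# and Carnegie grouping; column names are placeholders.
summary = (
    merged.groupby(["ipeds_level", "carnegie_group"])["btc_ratio"]
    .agg(["mean", "count"])
)
print(summary)
```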
Ten of the top 12 and 20 of the top 40 four-year colleges with the lowest borrowing to credential ratios are classified as Carnegie associate’s institutions. For example, Madison Area Technical College is 54th on the Education Sector’s list of four-year colleges, but would rank 564th among the 1,002 associate’s-granting institutions. These two-year institutions with a small number of bachelor’s degree offerings should either be grouped with the other two-year institutions or placed in a separate category. Otherwise, anyone who ranks institutions based on this classification will be comparing apples to oranges.
In conclusion: the effort in this report to measure institutional performance is a laudable one. But developing and using metrics is challenging precisely because of their potential for misuse and unintended consequences. Refining the proposed metrics as described above may make them more useful.