It’s been interesting to watch the great debate unfold as the federal government intervenes to provide students and their families with a more rational sense of the “fit” between college applicants and colleges and universities. Enter the “College Scorecard,” whose principal data points include average net price, loan default rate, six-year graduation rate, median borrowing amount and, eventually, career data.
Critics of the federal effort argue persuasively that the value proposition of the Scorecard is principally its return on investment, measured by employment and earnings. They note that outcomes assessment – historically where colleges are weakest – is a form of measurement that does not account for differences among students, institutional mission, and academic quality, among numerous other factors.
Both are compelling arguments whose origins are embedded deep in the history of American higher education, and that history is where the roots of the current debate begin.
American higher education is a decentralized system, originally a collection of private colleges onto which state and federal governments grafted a public college and university system in the 19th century. As land-grant institutions took hold, so too did Americans’ belief that access and choice should be the foundation of postsecondary education. The GI Bill, the creation of Pell Grants, and numerous additional state and federal initiatives complemented growing commitments to upper-division research institutions made “in the national interest.”
In fact, it’s the very diversity of mission, programs, and outcomes that Americans celebrate that makes it so difficult to rationalize and regularize a “one set of metrics fits all” approach to higher education.
Hard as it may be to admit, colleges and universities have not made their case effectively because they have been late to develop a rational system of outcomes measurement. While we have a thoughtful understanding of how colleges judge admissions fit – and it varies widely by type, mission, and size – we know far less about how colleges value outcomes. Just look at how much money many institutions commit to attracting students compared with the resources applied to ensure quality counseling, mentorship, and career centers.
For the federal government, the effort to score colleges and universities does not square with the history of how higher education developed. Do we expect the same outcomes from an Ivy League university that we might from an inner-city university with a very different mission? Should students be judged by a standardized set of metrics that does not fully recognize institutional purpose, resources, and student demographics and preparedness?
Let’s agree that some schools do a terrible job and should close or merge. The danger, however, is that less-resourced schools that do heroic work with diverse populations of all types might be pushed into the endangered category. Will Harvard make itself available to large numbers of “at-risk” students once the College Scorecard metrics effectively “thumb down” other good places that have served America well?
Let’s use the example of a Boston program – College Bound Dorchester – to illustrate this point.
The mission of College Bound Dorchester is to increase college attendance and graduation rates among low-income students in order to transform local communities. Its proponents developed a place-based model in the Bowdoin-Geneva section of Dorchester, home to 12,000 young men and women, of whom 6,000 are at-risk or proven-risk youth. Among them, 70 percent have dropped out of school, 20 percent are academically off-track or display socio-emotional risk factors, and 10 percent face significant language barriers.
College Bound Dorchester promotes a “core influencer” model that intentionally recruits students who exhibit high risk factors, whether based on attitude, behavior, or academic aptitude. It currently enrolls about 400 off-track youth aged 14 to 27 in its College Connections program, targeting “core influencers” in Bowdoin-Geneva. Students of color represent 95 percent of those enrolled, and 92 percent come from households with incomes below $35,000; 53 percent of the latter are from families that earn less than $14,900 a year.
College Bound Dorchester encourages students to complete high school and, equally significant, to pursue an associate or bachelor’s degree. Staff members provide targeted assistance, including academic counseling and mentoring groups. Today, 63 of the program’s students are enrolled in college, and its college retention rate is 61 percent, significantly higher than national retention rates for similar populations.
College Bound Dorchester illustrates the complexity of the factors that can affect College Scorecard rankings. It’s an intensive program with a built-in expectation that many will try and fail. But the program saves souls one at a time and can potentially change the neighborhood’s culture as “core influencers” become positive role models.
This brings us to the basic national policy question. In an effort to score well, will college administrators – fearful that federal officials might turn off the funding spigots if they rank poorly – still add College Bound Dorchester students to their admissions pipeline? Will they even take a chance on good, effective grassroots programs like College Bound Dorchester when doing so might hurt their College Scorecard rankings?
What a tragedy it would be if federal policy meant to inform consumers effectively forced changes in admission practices that squeezed out local innovation. Should the bureaucratic “metrics” design of the College Scorecard – itself a good-faith effort to help students and families – impair the community/college partnerships that potentially make the most difference in the neighborhoods across America where so many disadvantaged students live?