This is what Jamienne Studley, a deputy under secretary at the Education Department, told a group of college presidents who were meeting to talk about President Obama's plan to rate colleges, with the apparent aim of driving out of business schools that don't meet the administration's definition of success, as reported by The New York Times:
“It’s like rating a blender. This is not so hard to get your mind around.”
And this is what Cecilia Muñoz, director of the White House Domestic Policy Council, said in the same article about whether it is possible for anybody to persuade the administration that their plan is a terrible one for many reasons, including the fact that rating a college is not really like rating a blender:
For those who are making the argument that we shouldn’t do this, I think those folks could fairly have the impression that we’re not listening. There is an element to this conversation which is, “We hope to God you don’t do this.” Our answer to that is: “This is happening.”
And there you have it. It doesn’t seem to matter what anybody else thinks. Though there are many definitions of success, the Obama administration is going to use its own to develop the rating system no matter how many people oppose it. They know better. Just ask them.
In this case, we are talking about a plan to rate (not rank) colleges on criteria that could include average tuition and how much graduates earn even though many higher education leaders have said it is a terrible idea.
The administration says it will rate colleges by "mission" as well as institutional type, and wants to link federal student aid to the ratings, giving more to schools that score highly, and thus ultimately driving out schools that do poorly on the ratings system. (The federal student aid piece involves congressional approval, which isn't likely.) How much is it going to cost? Not known.
The administration thinks this will serve students well by revealing important data to families so they can better make college decisions. Critics say that all rating systems present a limited view of any institution and that the government already publishes a mountain of information on institutions of higher education. (See below for other problems with the plan.)
One of the critics is Janet Napolitano, president of the University of California system, who had been Obama's U.S. homeland security secretary; she said last December that she is "deeply skeptical that there are criteria that can be developed that are in the end meaningful." The administration, apparently, doesn't care much what its own former Cabinet member has to say.
This may seem painfully obvious but, for the record: Blenders mix things together. That’s it. They may do it on different speeds, but mixing things is what they do. Colleges do countless things for students, and people go to them for many different reasons, with many different goals. The administration’s focus seems to be on financial rewards after college, but that’s not why everybody goes.
Yes, some students want to go to Wall Street and make a fortune. But some want to go to college to become teachers and not make a fortune. Some students want to be poets, engineers, sociologists, urban planners, nurses, etc. Some go for a religious education. Some go without knowing what they want to be but want to expand their understanding of the world and develop analytical thinking, which, incidentally, can be done in just about any area, not simply the sciences but also philosophy and music and the whole range of humanities.
Schools are highly complicated institutions with countless moving parts. Unlike a blender.
Besides, there are a host of problems associated with the plan. The Education Department asked for public comments about its plan, and the National Association for College Admission Counseling responded with some of the most interesting. I've published some of these comments before, but here they are again:
* Ratings and rankings can be skewed by the methodology used to create them.
* The federal government has major constraints in its ability to oversee data submission from colleges and “as a result, it may take years before institutions are held accountable for violating program integrity standards, including reporting false data to the Department of Education.”
At a minimum, a college ratings system in the current environment of program integrity enforcement would suffer from inaccurate and potentially misleading information if unscrupulous institutions are able to avoid accountability for reporting inaccurate information. At worst, decisions about the allocation of federal student aid will be made on information that has been manipulated to ensure continued eligibility for federal student aid programs, with little or significantly delayed corrective action.
* A rating system could create incentives for schools "to focus disproportionate resources on data elements that can change rankings without necessarily changing the quality of the institution."
* It is “virtually impossible to develop a ratings system that includes affordability as an input variable without also making an evaluation of the state funding mechanisms for higher education.”
* The administration suggested that colleges and universities would be classified in its ratings system by “mission” as well as institutional type, but schools “differ widely within some of these categories.”
* Using as a data point the number of students from low-income families with Pell grants is problematic because “some institutions that enroll the largest number of Pell grants are also the institutions with the worst track record for serving students.”