When President Obama initially declared the Department of Education would issue a ratings system for colleges and universities to help parents and prospective students find the best “value” for the money they spend on education, the ambitious idea raised eyebrows across the higher education landscape.
Some observers said a ratings system would indeed help the public sort out value at a time of rising college costs and mounting piles of glossy marketing pitches about which school or career is best.
Skeptics argued that “value” is in the eye of the beholder and cannot be measured in dollars and cents alone. Critics asserted the proposed ways of measuring “value”—drawing data from several federal education surveys—were especially unfair to institutions focused on liberal arts careers, which don’t pay salaries comparable to those in engineering and the sciences.
Also of concern was the proposed use of graduation rates. Critics asserted that institutions that historically enroll a large percentage of first-generation students would not fare well, since many of those students do not start and complete college within the traditional four-to-six-year window used to measure graduation rates.
This fall, the final Obama plan was quietly issued one Saturday morning in September. The ratings approach to “value” had been dropped. Instead, the president announced a College Navigator scorecard offering a boatload of information about college programs, costs, degrees offered and what certain jobs pay. It offers no fixed opinion about the value of one college over another.
In the weeks since its unveiling, the College Navigator scorecard has rarely been touted by federal education officials. It is not widely used as a reference by college enrollment and recruitment officers, and it has been panned by critics as much as praised.
“People who need it most aren’t using it,” said Arlene Cash, vice president for enrollment management at Guilford College in North Carolina. Her thoughts were widely shared by officials at four-year colleges and universities.
“It’s got no bite,” Cash said, explaining that the scorecard is sending mixed signals about who and what is best where. “It’s hard to measure success,” she said.
Among leaders of two-year community colleges, the reception has been just as negative.
An analysis of the scorecard issued last month by the American Association of Community Colleges (AACC) left no doubt that the organization found it wanting.
“Much” of the scorecard data “are not pertinent” to community colleges, the AACC said in a statement that called other parts of the scorecard “irrelevant” to those institutions.
Topping the AACC’s shopping list of items that it says essentially render the scorecard useless is the reliance on federal enrollment and graduation data in the long-standing IPEDS reports. Those reports exclude a majority of the nation’s community college students, the organization said, since 55 percent of them receive no federal student aid and thus are not counted.
The community college group also notes that the scorecard’s SAT/ACT section does not apply to community colleges or other institutions that do not require admissions test scores.
Last month, The Washington Post cited a study by two academic analysts asserting that “nearly one in four community colleges are missing” from the Navigator. The study, by Phil Hill and Russ Poulin, also faults the Navigator for omitting conservative religious institutions, most of which do not appear in major federal data surveys because they do not accept certain federal funds.
The Navigator’s graduation rate information is also misleading, the authors say, because it counts only full-time students enrolled in college for the first time. As a result, the University of Maryland University College, a largely online, adult-learning institution, shows a 4 percent graduation rate under the new system, they say, while the university’s own measures put its graduation rate at 20 to 60 percent.
“With students transferring to other institutions before earning an associate degree, community colleges also fare poorly on this measure,” Hill and Poulin asserted.
The president and his Department of Education knew they were risking negative feedback long before issuing the College Navigator, said one higher education funding executive, who reasoned that the administration knew it had to launch at some point, even if the effort was not yet as good as it might eventually become.
“The (Obama) administration went out and issued a scorecard knowing what the response would be,” says Johnny Taylor, president and chief executive officer of the Thurgood Marshall College Fund. The negative reception the Navigator is getting from much of academia stems from the fact that it is not “perfect,” says Taylor, who came to academia two years ago after nearly two decades in business and finance.
“Academia likes to hold ideas until they are ‘perfect,’” Taylor says, noting the scorecard would never have been issued if it were held to academic standards.
The Obama administration instead took a business approach, Taylor says: the business way of innovating is to release the best available version at some point and make running changes to the product as real-time results come in.
For example, Apple has released more than six versions of its iPhone and it’s still not perfect, Taylor says. Academia doesn’t approach change that way, he adds.
Indeed, the new College Navigator may require a lot of overhauling and fine-tuning to gain broader acceptance and credibility, say several higher education observers and college officials who have tracked the ratings idea from its early days. In its current form, they say, the Navigator scorecard is akin to a car missing some tires and other needed parts.
On the front lines of recruitment and enrollment, the Navigator is barely on the radar.
“I haven’t heard anybody talking about it,” says Dr. Ontario Wooden, an associate vice chancellor at North Carolina Central University, echoing others on the front lines. “Information continues to be a challenge even with the scorecard,” he says, adding that students are still selecting institutions “based on cost and based on fit…”
Wooden offered a close-to-home comparison to illustrate a sentiment shared by several peers who question weighing the “value” of one college against another. In North Carolina, he says, the scorecard on its face would imply that attending North Carolina A&T University offers a better dollar return, because the earnings of its graduates reflect its high percentage of engineering degrees, while the earnings data for his institution reflect lower salaries because many of its graduates are in the liberal arts.
That compensation difference says little about the overall value of an education at one school or the other, Wooden says, echoing officials in other states who offered similar examples.
“We’ll continue to improve the scorecard,” President Obama said in announcing the rollout of the College Navigator.
Wooden and others already have some suggestions. He agrees that as much information as possible should be provided to parents and prospective students, especially families of first-generation college students. In that regard, he suggests the Department of Education target Upward Bound and TRIO, two federally assisted programs where the College Navigator is not being promoted.
Meanwhile, other schools are moving forward with their own scorecards.
This fall, Montgomery County Community College, in suburban Washington, D.C., issued a “Student Success Scorecard,” a first of its kind. The Montgomery scorecard focuses on arrival, progression and graduation at the school, which serves some 60,000 students in full- and part-time classes. It is meant to offer a publicly accessible view of key data the college hopes will help it assess its own performance, says college spokesman Marcus Rosanno.