
College Blackout

By Amy Laitinen and Clare McCann, New America Foundation
March 11, 2014
Ever-rising college costs, more than $1 trillion in outstanding federal student loan debt, and graduates doubtful that they’ll be able to earn enough to repay their loans have driven college value to become a major concern for most prospective students. Yet students, families, and policymakers are finding their questions can’t be answered—because the higher education lobby has fought to keep it that way.

Key Questions: Education Policy in the President's Fiscal Year 2015 Budget

March 4, 2014
President Barack Obama submitted his fiscal year 2015 budget request to Congress on March 4, 2014. The proposal includes $1.014 trillion in appropriations spending, slightly exceeding the $1.012 trillion limit that Congress passed and the president signed into law earlier this year; the difference comes from an Opportunity, Growth, and Security Initiative fund that would provide additional funding offset by revenue increases or spending cuts.

Colleges Are Supposed to Report Pell Graduation Rates -- Here's How to Make Them Actually Do It

October 30, 2013
Since 2008, the federal government has spent nearly $200 billion on the Pell Grant program. We know that this sizeable investment has bought a 50 percent increase in the number of people getting these awards. But how many graduates did these funds produce? What percentage of these recipients graduate? And which schools are doing the best with the lowest-income students?

Congress wanted to know the answer to all these questions. That’s why it included requirements in the 2008 reauthorization of the Higher Education Act (HEA) that required colleges to disclose the graduation rates of Pell Grant recipients, of students who did not receive Pell but got a Subsidized Stafford Loan, and of individuals who got neither type of aid. But Congress only asked institutions to disclose this information, on their websites or potentially only upon request, not to proactively report it to the Department of Education. The results have gone over about as well as a voluntary broccoli-eating contest with toddlers. A 2011 survey of 100 schools by Kevin Carey and Andrew Kelly found that only 38 percent even complied with the requirement to provide these completion rates, in many cases only after repeated phone calls and messages.

Absent institutional rates, the only information of any sort we have about Pell success comes as often as the Olympics, when the National Center for Education Statistics (NCES) within the Department updates its large national surveys. These data are great for broad, sweeping statements, but they cannot report results for individual institutions, something that’s especially important given the variety of outcomes different schools achieve. Instead, these surveys can only provide information about results by either the sector or the Carnegie type of institution. And the surveys are too costly to operate more frequently.

Fortunately, there’s a chance to fix this problem and get colleges to report this completion data. The Department is currently accepting comments on its plans for data collection under the Integrated Postsecondary Education Data System (IPEDS) for the next several years (see here to submit a comment, here for the notice announcing the comment request, and here for the backup documentation of what the Department wants to do). This means there’s an opportunity for the public to suggest, before the comment period closes on November 14, what additional information IPEDS should include.
To be clear, a lot of what the Department is already proposing to add to IPEDS through this collection will help us get a significantly better understanding of student outcomes in postsecondary education. First, it would implement some recommendations from the Committee on Measures of Two-Year Student Success, which Congress called for in the 2008 HEA reauthorization to capture students who are missing from the federal graduation rate because they are not full-time students attending college for the first time. The committee’s recommendations, which are being implemented here, aim to capture those missing students by requiring colleges to report on the success rates of three additional groups: (1) students who are enrolled part-time and attending for the first time, (2) those who are enrolled full-time and have attended college elsewhere, and (3) those who are enrolled part-time and have attended college elsewhere. Colleges would then report how many of these students received an award, are still enrolled, transferred, or dropped out after six and eight years. And this reporting would start retroactively, so the public won’t have to wait until 2023 to find out the first results.

Other proposed changes to IPEDS are smaller-scale but also important. Colleges would be asked to provide information on the use of veterans benefits on their campuses. And the way for-profit colleges report their financial data would be better aligned with the way public and private non-profit colleges provide this information.
But these changes still leave us without one obvious set of completion information—rates disaggregated by socioeconomic status. Sure, attending full-time can be a proxy for a student’s financial circumstances, but not as definitively as getting a Pell Grant.

The Institute for College Access and Success and others have already argued that the Department should add these data into IPEDS. In response, NCES has noted that improvements to the federal student aid database may make it possible to calculate completion rates for Pell students. But that’s an incomplete solution. That database is legally prohibited from collecting information on students that don’t get federal student aid, so there’s no way to produce the HEA-mandated graduation rate for students who received neither Pell Grants nor subsidized Stafford loans.

Of course, you can’t bring up any discussion of data reporting without running into the “B” word: burden. But remember, this isn’t new burden—colleges are legally required by an act of Congress to provide these graduation rates. Any huge encumbrance these requirements represent (and I’d argue it’s probably not much, since you would just be taking a subset of an existing cohort with easy-to-identify characteristics based on student aid receipt) has already occurred. In fact, U.S. News and World Report is already getting some schools to provide this information, but it won't share the raw data.

In an ideal world, we would not have to beg and plead with colleges to tell us whether they are successfully using the more than $30 billion they receive each year to educate low-income students. Instead, we would have a student unit record system capable of processing all this information without adding burden to colleges or forcing them to rely on costly alternatives like the National Student Clearinghouse. But thanks to Virginia Foxx (R-N.C.) and the college lobby (primarily the private institutions), we don’t live in that world. Instead, we’re left with IPEDS where these data should be.  

The Dark Side of Enrollment Management

October 28, 2013
The dark side of enrollment management keeps rearing its ugly head.

Last week, The George Washington University was forced to admit that it had been lying for years about its admissions policies. While the school has long claimed to be “need blind,” it turns out that a student’s ability to pay is factored into its admissions decisions. The best way to get off a wait list at GWU (and other colleges and universities that follow the high-tuition/high-aid model) isn’t to list your latest achievement or write another essay, but to say you don’t need to be considered for financial aid. This is enrollment management at its darkest—the university enrolls rich students to maximize its revenue, while leaving students from low- and moderate-income families out of luck simply because they lack the resources to pay full freight.

That’s bad enough. But today, we learned about another trick that enrollment managers have up their sleeves. According to Inside Higher Ed, “Some colleges are denying admissions and perhaps reducing financial aid to students based on a single, non-financial, non-academic question that students submit to the federal government on their [FAFSA].”  The FAFSA asks students to identify the colleges they wish to attend. Colleges then get that information and can see the order in which they were listed by the student.

The problem is that enrollment managers and management firms like Noel Levitz have discovered that students tend to list colleges in preferential order. In an example from Inside Higher Ed, Augustana College found that 60 percent of the students who list the school first on the FAFSA end up enrolling, compared with only 30 percent of those who list it second and 10 percent of those who list it third. In a world of maximizing revenues and yield, why admit, or offer a generous financial aid package to, someone who lists your institution third? Don’t forget that the FAFSA also contains a family’s financial information and Expected Family Contribution—data that allow a college to better understand just how needy a student is. So if a Pell-eligible student lists Augustana third, honestly, tough luck for that student.

Apparently, this behavior has been going on for a while. But this type of policy should never be the industry standard. It makes the admissions and financial aid process even more opaque to students, especially first-generation college-goers who have no idea that this policy even exists. Such a policy takes choice away from students. It takes away their ability to freely list the colleges they’d like to attend, without fear of repercussion. It assumes that students only care about their first choice school.

When I worked with students at The College Planning Center in Boston, I saw firsthand that low-income, first-generation students did list their first-choice college first on the FAFSA. But oftentimes the difference between their first, second, and third choices was negligible. They were excited to be going to college, period. For them, the financial aid package was more important than whether they got into their first-choice school. This policy prevents students from receiving financial aid offers that would help them choose a college that meets their needs both academically and financially.

It’s hard to know how many low- and moderate-income students have fallen victim to this policy, but there is an easy solution. The FAFSA should either not allow institutions to see where students have applied, or it should list the institutions in alphabetical order. The College Board and ACT should follow suit with the score reports they send to institutions, which also list institutions in the order chosen by students. The admissions process is already opaque enough, putting low- and moderate-income students at a disadvantage.

It’s becoming increasingly obvious that “need-blind” and “need-aware” policies rarely exist in their truest form. Instead, they allow institutions to hide behind a policy that sounds welcoming to low- and moderate-income families, when really all they’re doing is trying to maximize their revenues and yield rates.

Cohort Default Rates Provide Insights into Outstanding FFEL Loans

October 23, 2013

Updated 10/24/2013 6 PM: This post was updated to include a better description of the Asset Backed Commercial Paper conduit program.

Hidden amidst the shutdown furor was the annual release by the U.S. Department of Education of new student loan default rates. The data measure how many borrowers who entered repayment in a single year defaulted on their federal student loans within two or three years. This year, the data show that 10 percent of borrowers default within two years of entering repayment and 14.7 percent do so within three years. As has historically been true, for-profit and community colleges had the highest default rates, well above those at public or private non-profit 4-year schools.

The overall trend here is not pretty. This is the sixth consecutive year in which two-year default rates increased, and they are now at the highest they’ve been since 1995. And with the growth in borrowing, significantly more people are entering repayment and defaulting. More than 1.1 million more borrowers entered repayment in fiscal year 2011 compared to two years prior, and 10 percent defaulted, as compared to 8.8 percent in fiscal year 2009—an increase of more than 230,000 defaulters. Over those two years, enrollment in postsecondary education also increased, by more than 590,000 students, while the number of borrowers who entered repayment skyrocketed by 1.8 million students. See the chart below for more specific default rate figures.


[Chart omitted. Source: U.S. Department of Education]

But beyond the school-based cohort default rates, the Department of Education also released some other interesting default rates: those for guaranty agencies and lenders under the Federal Family Education Loan (FFEL) Program.

FFEL is the now-defunct program replaced by the Direct Loan Program. Vestiges of the program remain, however, in the form of more than $400 billion in outstanding loans issued before the change. Under FFEL, government-backed loans were issued through a set of lenders, and 35 private non-profit organizations called guaranty agencies performed various administrative tasks, including providing federal default insurance to the lenders.

Default rates for lenders don’t carry much weight – there are no sanctions associated with high default rates. Each lender has a calculated two-year and three-year default rate, both for loans they originated and for loans they currently hold. Current lender two-year default rates range from 0 percent for over 500 lenders, including many who don’t hold any loans anymore, to a shocking 89 percent for Citibank, which still holds about 2,000 loans. Among the largest FFEL loan-holders (the 28 companies that hold 10,000 or more loans), rates average about 7 percent. Sallie Mae, the largest FFEL lender, has a default rate of 4.1 percent on the nearly 27,000 loans totaling almost $20 million it still holds from this cohort.

And the Department holds one set of loans with a very high default rate. During the financial crisis, in order to help FFEL lenders continue to make new loans, the Department of Education set up a financing vehicle called the Asset Backed Commercial Paper conduit program. The Department purchased some of the participants' FFEL loans, including all loans that were more than 210 days delinquent, as required by the contract. Those loans, now held by the Department but purchased through the conduit, carry a two-year default rate of 51.7 percent and a three-year rate of 56.6 percent. The requirement that the Department purchase those delinquent loans explains the abnormally high default rate.

The guaranty agency default rates provide another way of judging the results in the FFEL program. When a FFEL borrower defaults, the lender can file a claim to a guaranty agency to recover most of the outstanding loan balance. Then the guaranty agency—a true middleman—uses federal money to reimburse the lender, and the Department of Education reimburses those costs (this is known as “reinsurance”). But guaranty agencies with high default rates can’t receive the full amount of reinsurance reimbursement. If guaranty agency rates are below 5 percent, they get a 95 percent reimbursement; for rates between 5 percent and 9 percent, 85 percent; and for default rates that are 9 percent or higher, 75 percent.
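To make the tiered reimbursement schedule concrete, here is a minimal sketch in Python. The thresholds come from the rules described above; the function name and the dollar figures in the example are ours, purely for illustration:

```python
def reinsurance_rate(default_rate):
    """Return the share of a lender's claim that the Department
    reimburses to a guaranty agency, given that agency's cohort
    default rate (in percent). Tiers as described above:
      below 5 percent        -> 95 percent reimbursement
      5 up to 9 percent      -> 85 percent reimbursement
      9 percent or higher    -> 75 percent reimbursement
    """
    if default_rate < 5.0:
        return 0.95
    elif default_rate < 9.0:
        return 0.85
    else:
        return 0.75

# A hypothetical agency with a 6.2 percent default rate
# filing a $10,000 claim would recover:
print(reinsurance_rate(6.2) * 10_000)  # 8500.0
```

So an agency crossing the 9 percent line sees its reimbursement on every claim drop by ten percentage points, which is the financial teeth behind these rates.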

As it turns out, at least when it comes to two-year cohort default rates, five of the reported guaranty agency default rates exceeded 9 percent for the 2011 cohort – Student Loan Guarantee Foundation of Arkansas, Texas Guaranteed Student Loan Corporation, Higher Education Assistance Authority (Alabama and Kentucky), Florida Department of Education, and Oklahoma College Access Program. Still, in every one of those states except Oklahoma, the statewide student two-year and three-year cohort default rates are even higher than the guaranty agency two-year default rate.

And although some guaranty agencies are private non-profit organizations, while others are state-based and may receive some state resources, there doesn’t seem to be much difference in their performances. The non-profits’ average default rate is 6.2 percent – effectively identical to the 6.3 percent default rate among state-based guaranty agencies.

Two-year cohort default rates don’t set a particularly high bar, as they stand, either for guaranty agencies and lenders or for students. Guaranty agencies are not held accountable for their borrowers’ defaults. Schools are – for rates at or above 25 percent three years in a row, or higher than 40 percent in one year, schools lose eligibility for Title IV federal financial aid – but not as much as they once were. The last time rates reached about 10 percent, in 1995, more than 200 schools were sanctioned by the Department of Education. Since then, the number of schools subject to sanctions has dropped precipitously – to just 8 colleges for the 2011 cohort. The 2010 cohort – the most recent with both a two-year and a three-year rate – illustrates the limitations of the measure: schools’ two-year default rates jumped from 9.1 percent to 14.7 percent when a third year was included in the window. And default rates in a cohort (unsurprisingly) continue to grow every year – even outside the two-year or three-year window.
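The school sanction test described above is a simple pair of threshold checks. Here is a rough sketch in Python, assuming only the thresholds stated in this paragraph (the function name is ours):

```python
def loses_title_iv_eligibility(rates):
    """Check a school's recent cohort default rates (in percent,
    ordered oldest to newest) against the sanction thresholds
    described above: a school loses Title IV eligibility if its
    rate is 25 percent or higher for three consecutive years,
    or higher than 40 percent in any single year.
    Illustrative sketch only, not the Department's actual code.
    """
    # Single-year trigger: any rate above 40 percent
    if any(r > 40 for r in rates):
        return True
    # Multi-year trigger: three consecutive years at or above 25 percent
    for i in range(len(rates) - 2):
        if all(r >= 25 for r in rates[i:i + 3]):
            return True
    return False

print(loses_title_iv_eligibility([26.0, 25.5, 27.1]))  # True
print(loses_title_iv_eligibility([24.0, 41.2, 12.0]))  # True
print(loses_title_iv_eligibility([24.0, 24.9, 24.5]))  # False
```

The third example hints at why so few schools are sanctioned: a school can sit just under 25 percent indefinitely without tripping either test.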

Thanks to a change enacted in the 2008 Higher Education Act reauthorization, cohort default rates will get moderately stronger next year, as the Department finally transitions to relying on three-year rates to determine whether a disconcertingly large share of a school’s students are unable to pay their loans. This year, over 130 schools would be in danger of facing sanctions if their default rates did not change in the third year of calculations (to date, only two official sets of three-year default rates have been calculated). The hope is that a longer window will be harder for schools to game using temporary measures such as deferment or forbearance to delay default past the edge of the two-year window.

Default rates are by no means a perfect measure of a school’s value to students, but they are part of a scaffolding of restrictions on colleges – a sort of baseline quality metric to help students avoid low-value schools and to avert wasted taxpayer dollars. The numbers released by the Department offer valuable insights into students’ struggles.

What to Think About the DC IMPACT Study

October 17, 2013

Few teacher evaluation reforms have been as contentious as the IMPACT system in D.C. Public Schools. But a new study published by Thomas Dee and James Wyckoff provides the first empirical evidence that the controversial policy could be encouraging effective teachers to stay in the classroom – and improve their practice.

Dee and Wyckoff examined teachers who scored on the cusp of various IMPACT performance levels – namely, teachers just above and just below the cutoffs for effective and highly effective (HE) ratings. The idea is that teachers near the cut points share similar characteristics, regardless of their final rating. By examining these teachers’ outcomes in subsequent years, researchers can isolate the effect of IMPACT’s incentives on teacher behavior. Do teachers who barely receive an HE rating fare differently than those who just missed the distinction? And do minimally effective (ME) teachers close to the effective cut point respond differently than teachers who barely cleared the effective hurdle?

Turns out, they do. The incentive structure within IMPACT had significant effects on retention and performance, particularly after the second year of implementation (2010-11) when IMPACT gained credibility. At that time, teachers with two ME ratings became eligible for termination and those with two HE ratings earned permanent salary increases, not just bonuses. Teachers that received their first ME rating after the 2010-11 year were significantly more likely to leave DCPS (over 10 percentage points) than teachers that scored just above the cut point. Further, the threat of dismissal improved the performance of ME teachers that chose to stay for the 2011-12 year – their scores improved by 12.6 IMPACT points compared to teachers that just received an effective rating, an increase of five percentile points. Similar effects were seen for teachers that could become eligible for increases in their base pay if they remained HE – their 2011-12 IMPACT scores improved by nearly 11 points compared to teachers that missed the HE cutoff, an increase of seven percentile points.

So what do these results tell us about IMPACT and teacher evaluation reform overall? Is this a moment for cautious – or all-out – optimism?

1. Evaluation systems like IMPACT don’t necessarily improve the performance of teachers across the effectiveness spectrum. We can’t tell from this study, because Dee and Wyckoff only examined a narrow band of DCPS teachers: those scoring right at the cut points between ratings. These teachers are the most likely to be influenced by the incentives built into IMPACT – say, when the ratings affect job security. Instead, the research demonstrates the effect of certain incentives on a certain group of teachers. Those incentives worked – and worked well – but we still don’t know how the performance of most teachers changed in response to the new evaluation system.

2. That said, the research is rigorous, and the results are encouraging. There is evidence that the district’s teacher workforce improved overall. Some ME teachers voluntarily chose to leave DCPS, and the newly hired teachers that replaced them in the 2011-12 year had higher IMPACT scores, on average. And there is no evidence that highly effective teachers were pushed out of the system by IMPACT. Further, many ME and HE teachers tended to improve on IMPACT when they remained with DCPS.

However, more research is needed to determine what interventions were most effective in helping these teachers improve – and to determine whether other teachers (not just those near the cut points) saw similar outcomes. Evaluation systems must define what effective teaching is, and also provide the knowledge and support for teachers to meet these expectations. We know far more about identifying effective teachers than we know about what to do next.

Of course, that brings up another important caveat: improvements in performance here are measured based on changes in IMPACT scores. The authors don’t link these results to student learning explicitly – another area for future research.

3. Finally, while the results are positive and provide some of the best evidence to date on the success of IMPACT, the research may not be widely applicable to other districts and states. IMPACT and DCPS remain outliers in many respects:

  • IMPACT uses value-added data to measure an individual teacher’s contribution to student learning, which many evaluation systems have eschewed.
  • IMPACT includes not one, not two, but five observations of classroom practice over the course of the year. Further, two of these observations are conducted by master educators, rather than school principals. Hiring and training objective observers takes time, capacity, and resources that many states and districts do not have – or are unwilling to dedicate – for evaluation.
  • IMPACT’s improvement and incentive structures are also well-developed and supported. DCPS has made a concerted effort to improve the quality of its coaching and professional development and link it to IMPACT. Further, the bonuses and salary increases for highly effective teachers are substantial, thanks in part to foundation funding. While this external support may raise questions of sustainability, these incentives have been institutionalized in the district’s contract with the Washington Teachers Union.
  • In a way, IMPACT operates at both a state- and district-level. Some of the lessons learned from IMPACT may not be applicable in states, which face additional layers of governance and greater heterogeneity. On the flip side, IMPACT may not be a model for other districts, where administrators could have less autonomy to develop, implement, and revise evaluation systems.

In other words, the results from D.C. are encouraging, but there is still much to learn. More worrisome, as teacher evaluation reform takes hold across the country as part of Race to the Top and states’ ESEA waiver plans, these positive results may prove to be a one-off. IMPACT is as rigorous and comprehensive as teacher evaluation systems get – especially compared to the rudimentary, half-baked, and vague evaluation systems described in many states’ waiver requests. While it is important for states to follow through with their promises to implement new evaluation systems, the quality of this implementation should be of equal – or even greater – concern to policymakers, educators, and advocates moving forward. 

Our Long National Nightmare…Will Return Shortly

October 17, 2013

This post originally appeared on our sister blog, Ed Money Watch.

Last night, as the 16th day of the federal government shutdown drew to a close, the House and Senate approved, and President Obama signed into law a budget deal that restored funding for federal agencies and brought the nation back from the brink of a debt default. But celebrations will be short-lived. The temporary spending bill will expire again on January 15, and the increased debt ceiling will run out again on February 7 – evidence that the last month of congressional debate had virtually no long-term implications.

The shutdown began just over two weeks ago, with House Republicans insisting on defunding or at least delaying a portion of the Affordable Care Act, the healthcare law President Obama pushed through Congress in 2010. But Senate leadership and President Obama remained dead-set against the changes. (Only one, relatively minor change to “Obamacare” was made in this latest deal, requiring the Department of Health and Human Services to verify the incomes of those applying for tax credits or cost reductions under the law.) So instead, the debate morphed into one over a more workable issue: spending levels.


Under a law passed by Congress in 2011, known as the Budget Control Act (BCA), lawmakers established a congressional “supercommittee” to create a framework for $1.5 trillion in deficit reduction. When it failed to do so, the law reverted to Plan B: spending limits for fiscal years 2012 through 2022. In early 2013, the White House was required to sequester a portion of that year’s spending with across-the-board cuts, but an eleventh-hour deal in Congress (the American Taxpayer Relief Act) pushed a portion of the cuts off to fiscal year 2014 instead. This year, then, the spending cap drops by another $18 billion.

That is a key point that has been lost in the debate: the “second sequester” was not part of the original Budget Control Act as passed in 2011. It came later, through the American Taxpayer Relief Act of 2012 (the law that extended most of the Bush-era tax policies), which Congress passed with overwhelming, bipartisan support in January 2013.

The trouble is, the spending bill passed last night, like both the House and Senate proposals that came out ahead of the shutdown, continues funding the government at 2013 post-sequester levels (about $985 billion this year, instead of $967 billion as required under the BCA as modified in early 2013). That means another sequester will hit federal programs on January 15 – the same date that funding expires under this plan.

That’s no accident. Senate Majority Leader Harry Reid (D-NV) wanted to push the deadline for the continuing resolution up against the deadline for sequestration to force the issue further. He hopes to use the next debate over funding the government in just a few short months to press Republicans to provide federal agencies with flexibility to implement the sequester, rather than to apply it evenly to all programs, or even to cancel the sequester entirely. (The proposal to give agencies flexibility was discussed during these budget negotiations, but was ultimately left out of the final bill.)

And Senate Democrats effectively queued up this situation when they passed their budget earlier this year, ignoring sequestration and setting spending at $1.058 trillion instead of at the House Republicans’ approved (and the Budget Control Act’s mandated) $967 billion. (President Obama followed suit, with his budget clocking in at $1.057 trillion.)

Republicans, meanwhile, have little incentive to alter sequestration – and got cold feet when it came time to actually draft an education spending bill that meets the new spending caps. Efforts earlier this year to bolster funding for the Department of Defense by substantially cutting funding for the Departments of Labor, Health and Human Services, and Education failed because of internal dissent among House Republicans about the size of the reduction. But spending cuts remain a major priority for most GOP lawmakers, and the political will doesn’t yet exist—among Republicans or some Democrats—to cancel sequestration.

Another provision of last night’s agreement, though, would attempt to end such “governing by crisis” in favor of a return to regular order in Congress. A bicameral, bipartisan budget conference committee will begin meeting soon to attempt to reach an agreement on government funding – undoubtedly, with a focus on altering or eliminating sequestration in favor of more targeted cuts.

Sound familiar? That’s because the supercommittee whose failure spurred the implementation of sequestration in the first place was tasked with a similar goal of reaching a broad deal on budget policy. Some of the conference committee’s appointees – Rep. Jim Clyburn (D-SC), Rep. Chris Van Hollen (D-MD), Sen. Patty Murray (D-WA), Sen. Rob Portman (R-OH), and Sen. Pat Toomey (R-PA) – even served as supercommittee members a few years ago.

It seems unlikely that enough has changed politically to spark much agreement. And if that’s the case, we’ll be right back in the same place, facing a potential government shutdown (and soon after, another possible government default), by mid-January. 

Obama Administration Should Stop Punting on For-Profit College Job Placement Rates

October 17, 2013

[This post is largely adapted from a previous post that ran on Higher Ed Watch in October 2011.]

Last week I argued that the U.S. Department of Education needs to develop a single, national standard that for-profit colleges would be required to use when calculating job placement rates. Department officials could go a long way in achieving this by revisiting a proposal they offered in the summer of 2010 that would have established a standard methodology to use when determining these rates.

Currently, the federal government leaves it up to accrediting agencies and states to set the standards that for-profit schools must use to calculate the rates, and to monitor them. The only exception is for extremely short-term job training programs, which must have employment rates of at least 70 percent to remain eligible to participate in the federal student loan program.

In June 2010, as part of a package of draft regulations aimed at improving the integrity of the federal student aid programs, the administration proposed extending the standards that short-term programs are required to use to all for-profit college and vocational programs that are subject to the Gainful Employment rules. The proposal was met with a firestorm of protest from for-profit college officials, as the federal methodology is much stricter than those used by accreditors and state agencies.

For example, under the Education Department’s requirements, students are only considered to be successfully placed if they have been employed in their field or a related one for at least 13 weeks within the first six months after graduating. In comparison, some accreditors and state agencies apparently allow schools to consider a graduate to be successfully placed if they work in their field for as little as a day.
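The gap between these two standards is easy to see in a calculation. Below is a minimal sketch, in Python with hypothetical data and simplified date handling, of the 13-weeks-within-six-months rule described above; it is an illustration, not the Department's actual methodology, and the function and variable names are invented for this example:

```python
from datetime import date, timedelta

def placed_under_federal_rule(grad_date, employment_spells):
    """Return True if a graduate counts as placed under the rule described
    above: employed in their field (or a related one) for at least 13 weeks
    within the first six months after graduating.

    employment_spells: list of (start_date, end_date) tuples for in-field
    employment. The six-month window is approximated as 182 days; the
    actual regulation defines these terms in more detail.
    """
    window_end = grad_date + timedelta(days=182)
    employed = timedelta(0)
    for start, end in employment_spells:
        # Count only the portion of each spell inside the six-month window.
        overlap_start = max(start, grad_date)
        overlap_end = min(end, window_end)
        if overlap_end > overlap_start:
            employed += overlap_end - overlap_start
    return employed >= timedelta(weeks=13)

def placement_rate(graduates):
    """Share of graduates placed; graduates is a list of
    (grad_date, employment_spells) pairs."""
    if not graduates:
        return 0.0
    placed = sum(placed_under_federal_rule(g, spells) for g, spells in graduates)
    return placed / len(graduates)
```

Under the one-day standard that some accreditors and state agencies reportedly allow, a graduate employed in-field for a single day would count as placed; under the sketch above, that graduate would not, which is why the two approaches can yield very different rates for the same program.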

Meanwhile, the Education Department has established a strict regulatory regime to make sure the rates are not rigged (the extent to which the agency actually holds short-term programs to these standards is unclear). Institutions are required to provide documentation proving that each of the graduates included in their rates is employed in the field in which he or she trained. According to the Department’s rules, acceptable documents “include, but are not limited to, (i) a written statement from the student’s employer; (ii) signed copies of State or Federal income tax forms; and (iii) written evidence of payments of Social Security taxes.” 

To be fair, for-profit colleges were not the only institutions that objected to the proposal. Community colleges and state universities that have training programs that fall under the Gainful Employment requirements also complained that the plan was too stringent. These institutions may have found these requirements to be especially daunting since they generally have not had to track job placements before.

A Recipe for Failure

How did the Education Department’s political leaders respond to this criticism? They punted. Instead of sticking to their guns or devising an alternative proposal, they kicked the issue to the National Center for Education Statistics (NCES). Under the final program integrity regulations, which were released in October 2010, the Department directed the NCES to convene a Technical Review Panel “to develop a placement rate methodology and the processes necessary for determining and documenting student placement” that schools would be required to use to fulfill this mandate.

But putting NCES in charge of developing a federal standard for calculating these rates turned out to be a major blunder. First, this was not an assignment that NCES had sought out or is typically asked to perform. After all, the Department was not just asking the center to provide technical assistance in devising a new methodology but to take the reins in setting new federal policy in this highly contentious area. Second, the Technical Review Panel that the Department chose to carry out the assignment included a number of representatives from schools that were opposed to the effort.

All of this was a recipe for failure. So it was hardly a surprise that, after two days of discussions on this topic in March, the review committee was not able to reach an agreement. The panel suggested in a final report on its deliberations that “the topic be explored in greater detail by the Department of Education.” Translation: This is a job for the Department, and not NCES.

The Education Department's hands have been tied ever since, because the final regulations explicitly require schools to use “a methodology developed by the National Center for Education Statistics, when that rate is available.” In the meantime, the job placement rates that for-profit colleges are required to disclose under the new rules are the same ones they report to accreditors and state regulatory agencies. As I've written previously, the methodologies that for-profit schools use to calculate these rates vary state by state and accreditor by accreditor, making them impossible to compare. And because neither accreditors nor state regulators have historically put much of an effort into verifying these rates, the schools don’t seem to have any qualms about gaming them.

As Department officials rewrite the Gainful Employment rules, they need to revisit this issue. Otherwise, prospective students will have to continue relying on faulty information when choosing whether to attend a for-profit college.

Reporting Burden in Higher Education: The Case of the Clery Act

October 16, 2013
[Image: University of Denver campus safety badge]

Members of both political parties have decried two seemingly contradictory things in higher education. They want better information to inform students, families, taxpayers, and policymakers – but they also want fewer burdens on institutions, which some say increase costs, stifle innovation, and move schools’ focus away from the primary mission of educating students. While both are laudable goals, they appear, at face value, to call for action in opposite directions. Students and institutions are left with the worst of both worlds—too much data, reporting, and burden and not enough usable information.

To escape this seeming contradiction between reporting burden and access to information, public discourse and debate should shift away from talking about burden in the generic, abstract sense to the specific ways in which it affects institutions and policy makers. So, let’s look at one of the most heavily cited sources of burden: consumer disclosures. In a 2013 GAO report, this category, which includes campus safety and security reports, was the most frequently cited as burdensome in interviews of experts and higher education officials.

The campus safety component of these disclosures stems from the Jeanne Clery Disclosure of Campus Security Policy and Campus Crime Statistics Act, first passed in 1990 as the Student Right-to-Know and Campus Security Act. The law requires colleges to annually report campus security statistics, maintain a public log of recent crime, and provide timely warnings of ongoing threats to students.

The provision grew out of campus safety advocacy efforts led by Connie and Howard Clery, who founded Security On Campus, Inc. (now the Clery Center for Security on Campus) after the brutal and shocking 1986 rape and murder of their daughter Jeanne in her freshman dorm at Lehigh University. The subsequent investigation revealed lapses in security oversight by the university. Her murderer, Josoph M. Henry, a fellow student she did not know, was able to gain access to her dorm by passing through three automatically locking doors that had been propped open with boxes for convenience.  The Clerys also discovered that there had been 38 violent crimes on campus over the prior three years, but no laws at the time required the university to report them to students or prospective students.

After the passage of multiple state laws, the 1990 federal bill was introduced in Congress by Representative William Goodling (R-PA) in response to the Clerys’ advocacy efforts. In introducing the bill, Goodling testified: “This resolution will ensure the Department of Education gives priority status to this important responsibility [of protecting students].... Colleges are trying to hide [crime incidents] because they're in a very competitive business. There's no question they are putting students in danger if they try to cover up the crime that's going on in order to recruit students.” In 1998, Senator Arlen Specter (then R-PA) sponsored legislation tightening the reporting requirements and officially renaming the law after Jeanne Clery. At the time of the bill’s passage, Specter spoke at a conference with the Clerys at which he emphasized the importance of campus safety and the lives the bill would save.

The evidence on whether the act has actually led to a decrease in campus crime in the decades since its passage is mixed. There were no reliable figures before the legislation, and the crime rate fell broadly across the U.S. over the same period. And although a significant percentage of senior safety and security officials in one study said the law helped bring about improvements to their policies and procedures, most did not see the law as specifically related to a decrease in crimes in and around campus. More importantly, students and prospective students do not appear to be actually using the specific reports and information the law requires. Previous studies and surveys show that the majority of students were not aware of the law and had not read the annual report it requires, and only 10 percent of students said they had factored campus crime statistics into their choice of school.

But colleges and universities that don’t meet the law’s stringent disclosure requirements do face significant penalties. Each violation is punishable by a fine of up to $35,000, and an institution can lose its Title IV eligibility. In 2008, Eastern Michigan University was fined $350,000, at that point the largest-ever penalty for violating the law, for failing to quickly and accurately issue warnings after the murder of student Laura Dickinson in her dorm room. Other institutions, including USC, have been accused of reporting incidents inaccurately to lower the overall numbers of violent crimes appearing in the log and reports mandated by Congress.

The Clery Act was a strong response by lawmakers to a personal and shocking tragedy. Support for the bill was overwhelming – it passed the House without objection, and the Senate on a voice vote. The law is not likely to disappear anytime soon – in fact, members of Congress have only added more and more requirements to it. For example, the 1998 reauthorization required institutions to report off-campus crimes that occurred in close proximity to the institution. This led to concern among some institutions about where exactly to draw the line of “close proximity,” given that any tragic event near campus but outside the specified area could bring further negative attention to a school’s policies. Industry organizations also complain that the frequent changes to the law (four in the 10 years following its passage) made it nearly impossible to systematically collect and accurately report the information.

Despite the burden and the mixed evidence on its utility to students and their families, then, the Clery Act seems deeply entrenched as a key reporting requirement. Yet key higher education questions for students, families, and the nation – for example, accurate graduation rates, complete student debt figures, and students’ post-education employment prospects – still can’t be answered, because lawmakers have resisted asking schools to report those outcomes, hiding behind the generic guise of burden.

The difference is that campus crime advocates like the Clerys have an evocative story, a powerful movement, and personal champions on Capitol Hill behind them. That combination was enough in this case to overcome the higher education lobby’s pleas for relief from reporting burden. Meanwhile, students’ voices and their families’ interest in the unknowable information about students’ outcomes are drowned out by lobbyists. That’s why the reporting requirements under the Clery Act will be reliably maintained – while other critical questions about the value of college are shoved to the back.
