Pre-K is Win-Win, Concludes a New Report

October 23, 2013

Early education is one of the most powerful ways to close the achievement gap between low-income and minority children and their more-advantaged peers. But all too often, pre-K advocates cite the same, decades-old research studies – the Abecedarian Project and the HighScope Perry Preschool Study, in particular – to prove the value of high-quality programs. A new report, Investing in Our Future: The Evidence Base on Preschool Education, published by the Society for Research in Child Development and the Foundation for Child Development earlier this month, offers an updated view of the research, and a path forward for scaled-up pre-K programs.

Researchers were on hand for an event at the New America Foundation last week to answer some questions (click here for the event video, or here to see a Storify summary of the Twitter conversation). Here are the report’s headline findings:

Cohort Default Rates Provide Insights into Outstanding FFEL Loans

October 23, 2013

Updated 10/24/2013 6 PM: This post was updated to include a better description of the Asset Backed Commercial Paper conduit program.

Hidden amidst the shutdown furor was the annual release by the U.S. Department of Education of new student loan default rates. The data measure how many borrowers who entered repayment in a single year defaulted on their federal student loans within two or three years. This year, the data show that 10 percent of borrowers default within two years of entering repayment and 14.7 percent do so within three years. As has historically been true, for-profit and community colleges had the highest default rates, well above those at public or private non-profit 4-year schools.

The overall trend here is not pretty. This is the sixth consecutive year in which two-year default rates have increased, and they are now at their highest level since 1995. And with the growth in borrowing, significantly more people are entering repayment and defaulting. More than 1.1 million more borrowers entered repayment in fiscal year 2011 than two years prior, and 10 percent defaulted, compared with 8.8 percent in fiscal year 2009—an increase of more than 230,000 defaulters. Over those two years, enrollment in postsecondary education also increased, by more than 590,000 students, while the number of borrowers entering repayment skyrocketed by 1.8 million. See the chart below for more specific default rate figures.


Source: U.S. Department of Education

But beyond the school-based cohort default rates, the Department of Education also released some other interesting default rates: those for guaranty agencies and lenders under the Federal Family Education Loan (FFEL) Program.

FFEL is the now-defunct program replaced by the Direct Loan Program. Vestiges of the program remain, however, in the form of more than $400 billion in outstanding loans issued before the change. Under FFEL, government-backed loans were issued through a set of lenders, and 35 private non-profit organizations called guaranty agencies performed various administrative tasks, including providing federal default insurance to the lenders.

Default rates for lenders don’t carry much weight – there are no sanctions associated with high default rates. Each lender has a calculated two-year and three-year default rate, both for loans they originated and for loans they currently hold. Current lender two-year default rates range from 0 percent for over 500 lenders, including many who don’t hold any loans anymore, to a shocking 89 percent for Citibank, which still holds about 2,000 loans. Among the largest FFEL loan-holders (the 28 companies that hold 10,000 or more loans), rates average about 7 percent. Sallie Mae, the largest FFEL lender, has a default rate of 4.1 percent on the nearly 27,000 loans totaling almost $20 million it still holds from this cohort.

And the Department holds one set of loans with a very high default rate. During the financial crisis, in order to help FFEL lenders continue to make new loans, the Department of Education set up a financing vehicle called the Asset Backed Commercial Paper conduit program. The Department purchased some of the participants' FFEL loans, including all loans that were more than 210 days delinquent, as required by the contract. Those loans, now held by the Department but purchased through the conduit, carry a two-year default rate of 51.7 percent and a three-year rate of 56.6 percent. The requirement that the Department purchase those delinquent loans explains the abnormally high default rate.

The guaranty agency default rates provide another way of judging the results in the FFEL program. When a FFEL borrower defaults, the lender can file a claim to a guaranty agency to recover most of the outstanding loan balance. Then the guaranty agency—a true middleman—uses federal money to reimburse the lender, and the Department of Education reimburses those costs (this is known as “reinsurance”). But guaranty agencies with high default rates can’t receive the full amount of reinsurance reimbursement. If guaranty agency rates are below 5 percent, they get a 95 percent reimbursement; for rates between 5 percent and 9 percent, 85 percent; and for default rates that are 9 percent or higher, 75 percent.
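The tiered reimbursement schedule above is easy to express in a few lines of Python. This is an illustrative sketch of the tiers as described, not the Department's actual claim-processing logic, and the function name is hypothetical:

```python
def reinsurance_share(default_rate):
    """Share of a default claim the Department of Education reimburses
    to a guaranty agency, given the agency's cohort default rate.
    Tiers follow the schedule described above."""
    if default_rate < 0.05:
        return 0.95  # below 5 percent: 95 percent reimbursement
    if default_rate < 0.09:
        return 0.85  # 5 to 9 percent: 85 percent
    return 0.75      # 9 percent or higher: 75 percent

# A guaranty agency with a 9.4 percent default rate filing a
# $10,000 claim would recover $7,500 from the Department.
print(reinsurance_share(0.094) * 10_000)  # 7500.0
```

The cliff at each threshold means an agency crossing from 8.9 to 9.0 percent loses a full 10 percentage points of reimbursement on every claim.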

As it turns out, at least when it comes to two-year cohort default rates, five of the reported guaranty agency default rates exceeded 9 percent for the 2011 cohort – Student Loan Guarantee Foundation of Arkansas, Texas Guaranteed Student Loan Corporation, Higher Education Assistance Authority (Alabama and Kentucky), Florida Department of Education, and Oklahoma College Access Program. Still, in every one of those states except Oklahoma, the statewide student two-year and three-year cohort default rates are even higher than the guaranty agency two-year default rate.

And although some guaranty agencies are private non-profit organizations, while others are state-based and may receive some state resources, there doesn’t seem to be much difference in their performance. The non-profits’ average default rate is 6.2 percent – effectively identical to the 6.3 percent rate among state-based guaranty agencies.

Two-year cohort default rates don’t set a particularly high bar as it stands, either for guaranty agencies and lenders or for schools. Guaranty agencies are not held accountable for their borrowers’ defaults. Schools are – for rates at or above 25 percent three years in a row, or higher than 40 percent in one year, schools lose eligibility for Title IV federal financial aid – but not as much as they once were. The last time rates reached about 10 percent, in 1995, more than 200 schools were sanctioned by the Department of Education. Since then, the number of schools subject to sanctions has dropped precipitously – to just eight colleges for the 2011 cohort. The 2010 cohort – the most recently available class of students – illustrates the limitations of the default rate. Consider that schools’ two-year default rates jumped from 9.1 percent to 14.7 percent when a third year was included in the window. And default rates in a cohort (unsurprisingly) continue to grow every year – even outside the two-year or three-year window.
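The school-level sanction rule, loss of Title IV eligibility for rates at or above 25 percent three years in a row or above 40 percent in a single year, can be sketched as a short check. This is an illustration of the rule as stated, not the Department's actual implementation, and the function name is invented:

```python
def loses_title_iv(cohort_rates):
    """True if a school's recent cohort default rates trigger loss of
    Title IV eligibility: 25 percent or higher for three consecutive
    years, or above 40 percent in any single year."""
    if any(rate > 0.40 for rate in cohort_rates):
        return True
    consecutive = 0
    for rate in cohort_rates:
        consecutive = consecutive + 1 if rate >= 0.25 else 0
        if consecutive == 3:
            return True
    return False

print(loses_title_iv([0.26, 0.25, 0.27]))  # True: three straight years >= 25%
print(loses_title_iv([0.26, 0.24, 0.27]))  # False: one good year breaks the streak
```

Note how one year dipping below 25 percent resets the clock, which is exactly what temporary measures like deferment and forbearance can accomplish.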

Thanks to a change enacted in the 2008 Higher Education Act reauthorization, cohort default rates will get moderately stronger next year, when the Department finally transitions to relying on three-year rates to determine whether a disconcertingly large share of a school’s students are unable to pay their loans. This year, over 130 schools would be in danger of facing sanctions if their default rates did not change in the third year of calculations (to date, only two official three-year default rates have been calculated). The hope is that a longer window will be harder for schools to game by using temporary measures such as deferment or forbearance to push defaults just past the edge of the two-year window.

Default rates are by no means a perfect measure of a school’s value to students, but they are part of a scaffolding of restrictions on colleges – a sort of baseline quality metric to help students avoid low-value schools and to avert wasted taxpayer dollars. The numbers released by the Department offer valuable insights into students’ struggles.

What the College Board Trends Reports Won't Tell You

October 22, 2013

Today, the College Board released its annual trends reports, one on college pricing and one on student aid. Dense, chart-filled works, the documents tell a story of what today’s postsecondary students are facing. But each report typically carries a message with it, one that often tries to dampen the sense of unabated cost escalation.

This year’s desired headline is 2.9 percent. That’s the change in published tuition and fees at four-year public institutions from last academic year to this one in current dollars. Though an increase, it’s described as the smallest percentage increase in the last 30 years.

But herein lies the difficulty with percentage increases and college costs. One of the benefits of decades’ worth of uninterrupted price increases is that eventually the same-size price hike leads to a smaller percentage change. And sure enough, that 30-year low in percentage terms is actually a $247 increase in published tuition—the 19th lowest in the past three decades (or 12th highest, if you want to look at it in a more pessimistic light). In fact, it’s larger in real terms than any single-year increase that families at public four-year colleges felt from 1971-72 to 2000-01.

In fairness, that $247 increase is the lowest that families have faced in current dollars since the 2000-2001 year. But following on the heels of over a decade of stark increases, it means the base price families are paying is $5,400 more in current dollars than it was at the turn of the century. In that regard, the $247 feels like relief only when compared to some theoretical higher price schools could have charged.

Private nonprofit 4-year colleges provide an even better illustration of the wonders of the percentage-increase bait and switch. From 2011-12 to 2012-13, published prices in current dollars went up 4.0 percent. But this year, they went up only 3.8 percent. A victory for families, right? Hardly. Published prices went up exactly $1 less than they did the year before—$1,105 this year versus $1,106 last year. But thanks to prior jumps, that 3.8 percent increase was the third lowest in 30 years, even though the dollar change was the sixth highest in 30 years.
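The dynamic at work is easy to demonstrate with made-up numbers: hold the dollar hike constant, and the percentage increase shrinks every year simply because the base keeps growing. The figures below are purely illustrative, not the College Board's:

```python
# A constant $250 hike on a growing tuition base produces an
# ever-smaller percentage increase, year after year.
tuition = 8_000
hike = 250
for year in range(1, 6):
    pct = hike / tuition * 100
    print(f"year {year}: ${tuition:,} + ${hike} = {pct:.2f}% increase")
    tuition += hike
```

By year five the identical $250 increase registers as a noticeably smaller percentage than in year one, which is the "30-year low" effect in miniature.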

Understanding the dollar versus percentage dynamic is especially important for interpreting charts like the one below. What it shows is the average annual change in tuition and fees over a ten-year period, adjusted for inflation. So from 1983-84 to 1993-94, the average real increase in tuition and fees at public four-year colleges was 4.3 percent. By contrast, in the past decade, which we tend to think of as a time of excessively high cost increases, the average annual change at public four-year colleges was just 4.2 percent. If it’s about the same as historical trends, then we’re not seeing bad behavior. It’s just how things go—death, taxes, college costs, as the cliché goes.

But again, smaller absolute changes on a lower base lead to higher percentage increases than they would on a larger amount. And sure enough, this chart essentially lets colleges off the hook precisely because of their own past increases. Here’s the same chart recreated below, only instead of percentage changes, it shows how much tuition and fees changed when measured relative to the base year of 1983-84. In other words, if the base year is 100 and the following year is 103, then the change is 3 points. And each type of college has its own base year, so a change of 6 points for a community college is still going to be less of a dollar change than 6 points for a nonprofit college.
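The index used for the recreated chart is simple to compute: divide each year's price by the base-year price and multiply by 100. The prices below are hypothetical, chosen only to show how identical point changes hide very different dollar changes:

```python
def to_index(prices):
    """Rescale a price series so its first year equals 100; the
    year-over-year differences are then 'points' on that base."""
    base = prices[0]
    return [price * 100 / base for price in prices]

# Two series with identical point changes but very different
# dollar changes, because their base years differ in size.
community = to_index([3_000, 3_180])    # [100.0, 106.0] -> +6 points ($180)
nonprofit = to_index([30_000, 31_800])  # [100.0, 106.0] -> +6 points ($1,800)
```

Both series move exactly 6 points, yet the dollar change behind those points differs by a factor of ten.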

Suddenly that last decade does not look quite so rosy. Rather, it shows that the amount by which costs have risen at public 4-year schools actually exceeds earlier decades by a good bit. The chart below makes the same point framed a different way, by showing the change in the cost of tuition and fees from the start to the end of each decade. These figures also are measured against the base year of 100 for 1983-84, which represents a different dollar amount for each type of school.

The last decade has not been a good time for families. Incomes are down and have not really recovered except for those at the top of the income spectrum. Meanwhile, state budget struggles, unabated spending at private nonprofit colleges, and a host of other reasons have collaborated to keep college tuition on a steady upward path. While this year's figures show that the dollar change is lower in the public sector than it has been in the past couple of years, it's still greater than it was 12 months ago and still above the rate of inflation. That's not good news. That's just less bad news than usual. And we should not be desensitized by price increases to the point where that's acceptable.

Can New Accreditation Standards Improve Teacher Preparation?

October 22, 2013

Teacher preparation programs have come under fire in recent years for poorly preparing new teachers to meet the needs of today’s students and the demands of education reforms. Most recently, the National Council on Teacher Quality released its survey of about 1,200 prep programs. (Spoiler alert: Only four programs made the top tier.)

What to Think About the DC IMPACT Study

October 17, 2013

Few teacher evaluation reforms have been as contentious as the IMPACT system in D.C. Public Schools. But a new study published by Thomas Dee and James Wyckoff provides the first empirical evidence that the controversial policy could be encouraging effective teachers to stay in the classroom – and improve their practice.

Dee and Wyckoff examined teachers who scored on the cusp of various IMPACT performance levels – namely, teachers just above and just below the cutoffs for effective and highly effective (HE) ratings. The idea is that teachers near the cut points share similar characteristics, regardless of their final rating. By examining these teachers’ outcomes in subsequent years, researchers can isolate the effect of IMPACT’s incentives on teacher behavior. Do teachers who barely receive an HE rating fare differently than those who just missed the distinction? And do minimally effective (ME) teachers close to the effective cut point respond differently than teachers who barely cleared the effective hurdle?

Turns out, they do. The incentive structure within IMPACT had significant effects on retention and performance, particularly after the second year of implementation (2010-11) when IMPACT gained credibility. At that time, teachers with two ME ratings became eligible for termination and those with two HE ratings earned permanent salary increases, not just bonuses. Teachers that received their first ME rating after the 2010-11 year were significantly more likely to leave DCPS (over 10 percentage points) than teachers that scored just above the cut point. Further, the threat of dismissal improved the performance of ME teachers that chose to stay for the 2011-12 year – their scores improved by 12.6 IMPACT points compared to teachers that just received an effective rating, an increase of five percentile points. Similar effects were seen for teachers that could become eligible for increases in their base pay if they remained HE – their 2011-12 IMPACT scores improved by nearly 11 points compared to teachers that missed the HE cutoff, an increase of seven percentile points.

So what do these results tell us about IMPACT and teacher evaluation reform overall? Is this a moment for cautious – or all-out – optimism?

1. Evaluation systems like IMPACT don’t necessarily improve the performance of teachers across the effectiveness spectrum. That’s because Dee and Wyckoff only examined a narrow band of DCPS teachers: those scoring right at the cut points between ratings. These teachers are the most likely to be influenced by the incentives built into IMPACT – say, when the ratings affect job security. Instead, the research demonstrates the effect of certain incentives on a certain group of teachers. Those incentives worked – and worked well – but we still don’t know how the performance of most teachers changed in response to the new evaluation system.

2. That said, the research is rigorous, and the results are encouraging. There is evidence that the district’s teacher workforce improved overall. Some ME teachers voluntarily chose to leave DCPS, and the newly hired teachers that replaced them in the 2011-12 year had higher IMPACT scores, on average. And there is no evidence that highly effective teachers were pushed out of the system by IMPACT. Further, many ME and HE teachers tended to improve on IMPACT when they remained with DCPS.

However, more research is needed to determine what interventions were most effective in helping these teachers improve – and to determine whether other teachers (not just those near the cut points) saw similar outcomes. Evaluation systems must define what effective teaching is, and also provide the knowledge and support for teachers to meet these expectations. We know far more about identifying effective teachers than we know about what to do next.

Of course, that brings up another important caveat: improvements in performance here are measured based on changes in IMPACT scores. The authors don’t link these results to student learning explicitly – another area for future research.

3. Finally, while the results are positive and provide some of the best evidence to date on the success of IMPACT, the research may not be widely applicable to other districts and states. IMPACT and DCPS remain outliers in many respects:

  • IMPACT uses value-added data to measure an individual teacher’s contribution to student learning, which many evaluation systems have eschewed.
  • IMPACT includes not one, not two, but five observations of classroom practice over the course of the year. Further, two of these observations are conducted by master educators, rather than school principals. Hiring and training objective observers takes time, capacity, and resources that many states and districts do not have – or are unwilling to dedicate – for evaluation.
  • IMPACT’s improvement and incentive structures are also well-developed and supported. DCPS has made a concerted effort to improve the quality of its coaching and professional development and link it to IMPACT. Further, the bonuses and salary increases for highly effective teachers are substantial, thanks in part to foundation funding. While this external support may raise questions of sustainability, these incentives have been institutionalized in the district’s contract with the Washington Teachers Union.
  • In a way, IMPACT operates at both a state- and district-level. Some of the lessons learned from IMPACT may not be applicable in states, which face additional layers of governance and greater heterogeneity. On the flip side, IMPACT may not be a model for other districts, where administrators could have less autonomy to develop, implement, and revise evaluation systems.

In other words, the results from D.C. are encouraging, but there is still much to learn. More worrisome, as teacher evaluation reform takes hold across the country as part of Race to the Top and states’ ESEA waiver plans, these positive results may prove to be a one-off. IMPACT is as rigorous and comprehensive as teacher evaluation systems get – especially compared to the rudimentary, half-baked, and vague evaluation systems described in many states’ waiver requests. While it is important for states to follow through with their promises to implement new evaluation systems, the quality of this implementation should be of equal – or even greater – concern to policymakers, educators, and advocates moving forward. 

Our Long National Nightmare…Will Return Shortly

October 17, 2013

This post also appeared on our sister blog, Early Ed Watch.

Last night, as the 16th day of the federal government shutdown drew to a close, the House and Senate approved, and President Obama signed into law a budget deal that restored funding for federal agencies and brought the nation back from the brink of a debt default. But celebrations will be short-lived. The temporary spending bill will expire again on January 15, and the increased debt ceiling will run out again on February 7 – evidence that the last month of congressional debate had virtually no long-term implications.

The shutdown began just over two weeks ago, with House Republicans insisting on defunding or at least delaying a portion of the Affordable Care Act, the healthcare law President Obama pushed through Congress in 2010. But Senate leadership and President Obama remained dead-set against the changes. (Only one, relatively minor change to “Obamacare” was made in this latest deal, requiring the Department of Health and Human Services to verify the incomes of those applying for tax credits or cost reductions under the law.) So instead, the debate morphed into one over a more workable issue: spending levels.

Under a law passed by Congress in 2011, known as the Budget Control Act (BCA), lawmakers established a congressional “supercommittee” to create a framework for $1.5 trillion in deficit reduction. When it failed to do so, the law reverted to Plan B: spending limits for fiscal years 2012 through 2022. In 2013, the White House was required to sequester a portion of that year’s spending through across-the-board cuts, though an eleventh-hour deal in Congress (the American Taxpayer Relief Act) pushed a portion of the cuts off to fiscal year 2014 instead. This year, then, the spending cap drops by another $18 billion.

That is a key point that has been lost in the debate: The “second sequester” was not part of the original Budget Control Act as passed in 2011. It came later, through the American Taxpayer Relief Act of 2012 (the law that extended most of the Bush-era tax policies), which Congress passed with overwhelming bipartisan support in January 2013.

The trouble is, the spending bill passed last night, like both the House and Senate proposals that came out ahead of the shutdown, continues funding the government at 2013 post-sequester levels (about $985 billion this year, instead of $967 billion as required under the BCA as modified in early 2013). That means another sequester will hit federal programs on January 15 – the same date that funding expires under this plan.

That’s no accident. Senate Majority Leader Harry Reid (D-NV) wanted to push the deadline for the continuing resolution up against the deadline for sequestration to force the issue further. He hopes to use the next debate over funding the government in just a few short months to press Republicans to provide federal agencies with flexibility to implement the sequester, rather than to apply it evenly to all programs, or even to cancel the sequester entirely. (The proposal to give agencies flexibility was discussed during these budget negotiations, but was ultimately left out of the final bill.)

And Senate Democrats effectively queued up this situation when they passed their budget earlier this year, ignoring sequestration and setting spending at $1.058 trillion instead of at the House Republicans’ approved (and the Budget Control Act’s mandated) $967 billion. (President Obama followed suit, with his budget clocking in at $1.057 trillion.)

Republicans, meanwhile, have little incentive to alter sequestration – and got cold feet when it came time to actually draft an education spending bill that met the new spending caps. Efforts earlier this year to bolster funding for the Department of Defense by substantially reducing funding for the Departments of Labor, Health and Human Services, and Education failed because of internal dissent among House Republicans about the size of the reduction. But spending cuts remain a major priority for most GOP lawmakers, and the political will doesn’t yet exist—among Republicans or some Democrats—to cancel sequestration.

Another provision of last night’s agreement, though, would attempt to end such “governing by crisis” in favor of a return to regular order in Congress. A bicameral, bipartisan budget conference committee will begin meeting soon to attempt to reach an agreement on government funding – undoubtedly, with a focus on altering or eliminating sequestration in favor of more targeted cuts.

Sound familiar? That’s because the supercommittee whose failure spurred the implementation of sequestration in the first place was tasked with a similar goal of reaching a broad deal on budget policy. Some of the conference committee’s appointees – Rep. James Clyburn (D-SC), Rep. Chris Van Hollen (D-MD), Sen. Patty Murray (D-WA), Sen. Rob Portman (R-OH), and Sen. Pat Toomey (R-PA) – even served as supercommittee members a few years ago.

It seems unlikely that enough has changed politically to spark much agreement. And if that’s the case, we’ll be right back in the same place, facing a potential government shutdown (and soon after, another possible government default), by mid-January. 

Storify: Too Much Evidence to Ignore

October 16, 2013

This week, New America's Early Education Initiative hosted an event reviewing the research on pre-K, published in a new report, “Investing in Our Future: The Evidence Base on Preschool Education," from the Foundation for Child Development and the Society for Research in Child Development.

Reporting Burden in Higher Education: The Case of the Clery Act

October 16, 2013

Members of both political parties have decried two seemingly contradictory things in higher education. They want better information to inform students, families, taxpayers, and policy makers – but they also want fewer burdens on institutions, which some say increase costs, stifle innovation, and distract schools from their primary mission of educating students. While these are both laudable goals, they appear, at face value, to call for action in opposite, conflicting directions. Students and institutions are left with the worst of both worlds—too much data, reporting, and burden, and not enough usable information.

To escape this seeming contradiction between reporting burden and access to information, public discourse and debate should shift away from talking about burden in the generic, abstract sense to the specific ways in which it affects institutions and policy makers. So, let’s look at one of the most heavily cited sources of burden: consumer disclosures. In a 2013 GAO report, this category, which includes campus safety and security reports, was the most frequently cited as burdensome in interviews of experts and higher education officials.

The campus safety component of these disclosures stems from the Jeanne Clery Disclosure of Campus Security Policy and Campus Crime Statistics Act, first passed in 1990 as the Student Right-to-Know and Campus Security Act. The law requires colleges to annually report campus security statistics, maintain a public log of recent crime, and provide timely warnings of ongoing threats to students.

The provision grew out of campus safety advocacy efforts led by Connie and Howard Clery, who founded Security On Campus, Inc. (now the Clery Center for Security on Campus) after the brutal and shocking 1986 rape and murder of their daughter Jeanne in her freshman dorm at Lehigh University. The subsequent investigation revealed lapses in security oversight by the university. Her murderer, Josoph M. Henry, a fellow student she did not know, was able to gain access to her dorm by passing through three automatically locking doors that had been propped open with boxes for convenience.  The Clerys also discovered that there had been 38 violent crimes on campus over the prior three years, but no laws at the time required the university to report them to students or prospective students.

After the passage of multiple state laws, the 1990 federal bill was introduced in Congress by Representative William Goodling (R-PA) in response to the Clerys’ advocacy efforts. In introducing the bill, Goodling testified: “This resolution will ensure the Department of Education gives priority status to this important responsibility [of protecting students].... Colleges are trying to hide [crime incidents] because they're in a very competitive business. There's no question they are putting students in danger if they try to cover up the crime that's going on in order to recruit students."  In 1998, Senator Arlen Specter (then R-PA) sponsored legislation tightening the reporting requirements and officially renaming it after Jeanne Clery. At the time of the bill’s passage, Specter spoke at a conference with the Clerys in which he emphasized the importance of campus safety and the lives that would be saved by the bill.

The evidence on whether the act has actually led to a decrease in campus crime in the decades since its passage is mixed. There were no reliable figures before the legislation, and the crime rate fell broadly across the U.S. over the same time period. And although a significant percentage of senior safety and security officials in one study said the law helped bring about improvements to their policies and procedures, most did not see the law as being specifically related to a decrease in crimes in and around campus. More importantly, it does not seem that students and prospective students are actually using the specific reports and information the law requires. Previous studies and surveys show that the majority of students were not aware of the law and had not read the annual report that it requires, and only 10 percent of students said that they had factored campus crime statistics into their choice of school.

But colleges and universities that don’t meet the law’s stringent disclosure requirements do face significant penalties. Each violation is punishable by a fine of up to $35,000 and possible loss of Title IV eligibility for the institution. In 2008, Eastern Michigan University was fined $350,000, at that point the largest-ever penalty for violating the law, for failing to quickly and accurately issue warnings after the murder of a student, Laura Dickinson, in her dorm room. Other institutions, including USC, have been accused of reporting incidents inaccurately to lower the overall numbers of violent crimes appearing in the log and reports mandated by Congress.

The Clery Act was a strong response by lawmakers to a personal and shocking tragedy. Support for the bill was overwhelming – it passed the House without objection, and the Senate on a voice vote. The law is not likely to disappear anytime soon – in fact, members of Congress have only piled more and more requirements onto the law. For example, the 1998 reauthorization required institutions to report off-campus crimes that occurred in close proximity to the institution. This led to concern from some institutions about where exactly to draw the line of “close proximity,” given that any tragic event near campus but outside the specified area could bring further negative attention to a school’s policies. Industry organizations also complain that the frequent changes to the law (four in the 10 years following its passage) made it nearly impossible to systematically collect and accurately report the information.

Despite the burden and the mixed evidence on its utility to students and their families, then, the Clery Act seems deeply entrenched as a key reporting requirement. And yet key higher education questions for students, families, and the nation – for example, accurate graduation rates, complete student debt figures, and students’ post-education employment prospects – still can’t be answered, because lawmakers have resisted asking schools to report those outcomes, hiding behind the generic guise of burden.

The difference is that campus crime advocates like the Clerys have an evocative story, a powerful movement, and personal champions on Capitol Hill behind them. That combination was enough in this case to overcome the higher education lobby’s pleas for relief from reporting burden. Meanwhile, students’ voices and their families’ interest in currently unknowable information about student outcomes are drowned out by lobbyists. That’s why the reporting requirements under the Clery Act will be reliably maintained – while other critical questions about the value of college are shoved to the back.

Shutdown Got Your Data? Check Out Our Federal Education Database

October 15, 2013

The federal government has been officially shut down for over two weeks now, and the impact has been real: furloughed employees across the country, Head Start programs shut down (and some reopened), and confusion and delays in many federal programs. But for education experts and data geeks, another issue has been highly inconvenient, if less severe: the disabling of federal education data websites.

Fortunately, Higher Ed Watch’s sister initiative, the Federal Education Budget Project, maintains one of the most comprehensive federal education databases in the country for every state, school district, and institution of higher education. The data are collected from state and federal sources and updated regularly. The higher education data cover more than 7,500 institutions and all 50 states, plus Washington, D.C. and Puerto Rico, and include:

  • Tuition and fees, price, endowment, and net price for all and for low-income students;
  • Federal finance data on student loan recipients and disbursements for schools, as well as Pell Grant and other federal aid data;
  • Student demographics, including full-time, part-time, and graduate student enrollment, as well as racial subgroups;
  • Outcomes as defined by graduation rates, retention rates, student loan default rates, and repayment rates; and
  • The share of students receiving federal, state, and local financial aid, as well as the average award size.

Check it out now, and even after the shutdown is over! For some background on the data and on other education policy topics, check out our Background & Analysis pages.
