Introduction

This paper presents a brief history of how voluntary nongovernmental accreditation first emerged around the turn of the 20th century and how it evolved, in order to highlight and explain the erosion of accreditation’s most salient quality assurance features in recent decades. A study of the early history clearly indicates that accreditation has always been controlled by the very institutions it oversees and that its effectiveness has been a function of the mission and intended goals of the institutions that formed and governed it. The early effectiveness of voluntary nongovernmental accreditation in the collegiate sector,[1] manifested in the meteoric rise of American institutions on the world stage, was so impressive that federal policymakers chose not to devise their own quality assurance framework when they began funneling ever-greater subsidies into higher education after World War II. While the 1992 reauthorization of the Higher Education Act did attempt to push accrediting bodies to be more accountable, fundamental macro-level changes in higher education frustrated those attempts, mainly because institutions continued to control their accreditors.[2] That control was and remains enshrined in the continued acceptance of such contradictory dualities as the statutory sanctioning of “voluntary” accreditation as a mandatory requirement for eligibility or the accepted practice of claiming “separate and independent” status for accreditors even as most of their board members are college executives.

This paper suggests that accreditation, controlled by institutions as it is, began to be undermined near the end of the last century because changes in the higher education ecosystem—primarily consisting of vastly greater tuition dependency in both the public and the private non-profit sectors—have significantly altered institutional motivations and attenuated (if not eviscerated)[3] their mission-driven aspirations in favor of revenues. And, even though an increasing share of tuition dollars that flow to institutions comes in the form of federal grants and loans, federal policies have failed to catch up with the changing nature of quality assurance in the “market” they are supposed to be regulating.

Preface: Understanding Quality in Higher Education

Before outlining the history of accreditation, a prefatory discussion of quality in higher education is required. Like the very concept of education itself, academic quality is an equivocal and amorphous notion. In most policy conversations, it is asserted as an inextricable attribute of the educational process but without the requisite philosophical rigor that such an assertion would typically entail. The lack of rigor is not due solely to modern tendencies against absolute assertions and transcendental beliefs of the kind that allowed Plato’s Socrates to distinguish philosophy from sophistry and the teaching of truths from spreading falsehoods. Rather, the diversity of the disparate fields of study, the rapid pace of advancements in virtually every discipline, and the different long-term expectations of their outcomes render a universal definition not only difficult but also of little value for practical use. In the absence of a single all-encompassing definition of “the thing itself,” attention has rightly shifted to its external manifestations through the identification of observable indicators or markers of quality in various fields. These have historically included systematic assessments of inputs, internal processes, and outcomes as well as more ambiguous (and related) criteria such as reputation, prestige, selectivity (for admission of students and appointment of faculty), student ratings, and external rankings.

Beyond understanding quality as a multifaceted notion that designates nuances and related meanings across the multiple dimensions of the higher education enterprise,[4] it is crucial to understand the term ‘quality’ in two distinct ways, regardless of its intended sense. The first is quality as a teleological and aspirational ideal that all institutions, regardless of their particular level of quality, should continually strive for. The second and more practical task for public policy and governmental purposes is to define an acceptable threshold for quality without engaging in a quest for idealized perfection. This latter task is a pragmatic necessity for the purposes of recognizing certain entities as institutions of higher education and for allocating public subsidies.

The fundamental challenge in determining the quality of educational institutions lies in the fact that education and training are not what economists refer to as ‘search goods,’ i.e., goods or services whose quality consumers can inspect and evaluate before purchase. Their assessment may not be perfectly accurate, and there may be a time delay before any final judgment about the purchase can be made, but, in general, consumers have access to much greater information about products or services categorized as search goods. The broad alternative category to search goods is ‘experience goods.’ These are goods and services that consumers can evaluate only after they purchase and use the product. Because adequate information about the quality of the product is unavailable before consumption, consumers of many experience goods end up relying on such highly manipulable proxies for quality as branding and reputation. Restaurants and cruise vacations are examples of experience goods.[5]

As challenging as the evaluation of the quality of experience goods may be, education falls into a third and even more opaque category of “credence goods”: products or services whose quality or necessity consumers find difficult or impossible to assess even after consumption. Credence goods typically have prolonged periods of expected benefits that make it impossible for consumers to know, sometimes for years after the purchase, whether they received the value they were promised. Annuities and education, which impose significant upfront costs and are expected to provide decades of future benefits, are examples of credence goods.

In the case of services like education, which transcend straightforward financial transactions and require significant preparation and effort by the consumer for successful outcomes, even post facto evaluations of educational quality by consumers are highly susceptible to error. The time lag between the decision to enroll and the accrual of future benefits, uncertainty regarding the allocation of relative praise or blame for outcomes to the student and the institution, and the fundamental information asymmetry between institutions and prospective students all combine to make consumer assessments of educational quality highly unreliable, if not entirely impossible. Transactions involving credence goods are significantly trust-based and are extremely prone to misrepresentation and fraud. To judge the quality of credence goods, consumers often must rely on a combination of factors, including the expertise and honesty of the seller and third-party validation or seals of approval. Credence goods markets present unique challenges in economics and consumer protection. Regulations, professional certifications, and reputation systems are often used to mitigate the inherent risks in these markets.[6]

To varying degrees worldwide, governments address the challenge of educational quality by directly establishing and operating institutions that they themselves fund. The quality of these institutions is deemed inherently acceptable to the governments that fund them as creatures of their own making. In the U.S., for example, the federal government directly established and operates five military service academies and several other federal institutions of higher education, including the National Defense University, the National Intelligence University, the Uniformed Services University of the Health Sciences, and the Naval Postgraduate School. In addition, the States have established numerous public institutions that satisfy their respective quality requirements. Government-sponsored colleges may voluntarily seek accreditation for licensure purposes and to ensure recognition of their degrees by other institutions, as is the case with all federal institutions in the U.S., but their funding is not legally dependent on accreditation.

In general, most countries address educational quality in two distinct stages: a recognition or authorization process that allows certain entities to legally operate as institutions of higher learning, and a quality assurance process through which institutions not directly controlled by them[7] become eligible for various forms of governmental support. In some cases, such as in Australia, the two functions may be assigned to the same entities, but it is more typical for the recognition and quality assurance functions to be delegated to different (often governmental) bodies.[8]

In the U.S., the recognition function is primarily reserved for the states,[9] which establish or “authorize” postsecondary institutions and grant them permission to award degrees and certificates. Prior to the late 1800s, the recognition process—which established the majority of institutions through state action, along with a handful of colleges authorized through federal or colonial charters—was the sole public oversight mechanism for colleges and universities. In general, public scrutiny of private institutions was limited to formal, if not entirely perfunctory, reviews of their organization and practices and did not intrude into deeper questions of quality. Even the New York Board of Regents’ oversight of institutions, for example, was limited to annual visits to colleges to ensure they operated as schools.

The prevailing pre-20th-century view of quality bifurcated such judgments into two distinct categories. The first of these addressed the legitimacy and efficacy of internal institutional practices, which were thought to be the proper domain of funders, the states in the case of public colleges, and the various religious denominations or donors and patrons in the case of private ones.[10] The presumption was that financial support from third parties, sometimes with little regard to the views of even the faculty,[11] was the best mark of at least the adequacy, if not excellence, of the education provided. The second category of quality assessment was a market-based and transactional approach to higher education that equated success in attracting students with institutional validation. In that laissez-faire ecosystem, broader quality distinctions among institutions were entirely informal, often class-driven, and mainly based on funders and the vagaries of reputation and prestige. This informal sorting regime might have functioned well enough with a small student population attending fewer than 500 institutions of varying quality, but it could not satisfy the needs of a fast-expanding higher education ecosystem of hundreds of real and fake colleges and tens of thousands of students.

An Historical Perspective

As has been documented elsewhere, perhaps the most essential motivation for the emergence of accreditation in the early 20th century was to temper the power of external forces—funders (state governments and philanthropists) and the excesses of market-based consumerism—and their internal institutional allies, the administrators whose primary priority was to secure resources to maintain institutions as going concerns.[12] In this sense, accreditation was predicated on a realistic concern that funders, while essential to the very existence of institutions, could, for a variety of ideological or financial reasons, distort the substance of education and undermine its purposes by promoting their preferred vision in lieu of scholarly consensus. Autonomous disciplinary standards and scholarly consensus needed an enforcement mechanism that was sufficiently independent of institutions and their funders’ preferences and could validate the integrity and reliability of the credentials they conferred. Voluntary accreditation served that validation role and began indirectly enforcing the academic community’s constructivist consensus on quality upon gaining broad acceptance as an industry seal of approval. The assertion of academic standards, moreover, applied not only against the power of money but also against popular opinion.

With the unfolding of the Second Industrial Revolution, basic educational credentials, even below high school diplomas, began to gain value in the labor markets. This created strong incentives for subpar schools, public and private, and for out-and-out diploma mills that easily attracted eager consumers. One of the fundamental changes that today’s policymakers have failed to address is that accreditors have, in effect, switched sides. As guaranteed sources of revenues have diminished and tuition dependency has increased, accreditors have shifted their primary allegiance to administrators struggling to ensure institutional survival through whatever means necessary. The loss of accreditation’s role as a reliable external ally of the faculty is probably the single most determinative cause of its inability to perform either of its currently assigned functions—quality improvement and quality assurance—today.

Some observers wistfully celebrate the early chaos of American higher education, with its lack of steady funding, heavy dependence on tuition and alumni donations, and absence of central planning, as the main drivers of its stunning success in the 20th century.[13] But such triumphalist perspectives overlook the critical role that the emergence of accreditation, as both an external quality assurance and an external quality improvement mechanism, played in producing that success. America’s decentralized and consumer-oriented approach to higher education succeeded not just because institutions were leaner, nimbler, and more diverse but also because they embraced academic standards that served as guardrails against overreach by funders and consumers. The resulting arrangement was a tenuous equilibrium that balanced the influence of diverse funders, faculty, and students. That equilibrium generally held through the 20th century, although it faced periodic challenges ranging from postwar McCarthyism to the Vietnam antiwar student movement.

One of the several oversimplified but reasonable ways of understanding how accreditation is failing would be to underscore the dynamics that have significantly undermined the critical weight of the faculty and, by extension, disciplinary standards in governing institutional practices. The eclipse of the faculty has predictably strengthened the ability of funders (most notably state policymakers) and students, who increasingly demand to be treated as customers.[14] In addition, accreditation as a federal funding eligibility requirement has all but eviscerated the voluntary nature of the accreditation process by turning it into a survival precondition. This has, in turn, introduced a strong purpose of evasion into a process that relies on the veracity of the representations and unverified claims of the institutions it oversees. Furthermore, the continued insistence of both the accreditors and the accredited that the process is voluntary is not harmless fiction: it introduces a fundamental conflict in the basic purpose of accreditation by assigning the increasingly incompatible goals of quality assurance and quality improvement to the same entities.[15] In practical terms, accreditors have turned to the latter function as the perpetual excuse for failing to deliver on the former.

Late 19th and Early 20th Centuries: The Emergence of Accreditation

The last three decades of the 19th century witnessed a rapid expansion of higher education in the United States. The enactment of the first Morrill Act in 1862, establishing land grant colleges, provided an enormous federal boost to the pre-existing efforts of the states to expand and improve higher education. The mostly unregulated free-for-all environment in which diploma mills and legitimate educational institutions operated side by side began to prove inadequate to the needs of a rapidly industrializing nation aspiring to greater socioeconomic parity with Europe. Colleges and universities felt the impact of governmental noninterference in education most acutely in their admissions policies. The lax and, in many cases, nonexistent oversight of high schools made admissions judgments extremely time-consuming and unreliable. This was a difficult challenge for newer public colleges, which, unlike well-established private eastern institutions, did not have the luxury of feeder boarding schools.

In the absence of meaningful secondary education standards or regulations, the University of Michigan pioneered the process of voluntary review of high schools in 1871 by accrediting high schools in the state,[16] a practice that the University of Wisconsin followed and labeled high school “certification” in 1877.[17] The high school certification model consisted of annual site visits by a faculty inspector, who reviewed facilities and administrative resources as well as the high school’s academic curriculum and instructional practices. Graduates of “certified” high schools would gain automatic admission to public universities if recommended by their principals. The certification model was quickly adopted by other land grant universities and even some private institutions as a workable means of ensuring the academic preparation of their incoming students. Two critical features of the high school certification practice, dubbed “accreditation” by the early years of the 20th century, are worth noting here. First, the party relying on certification—the higher education institution—was so deeply invested in the integrity of the process that it chose to administer it itself at its own cost. Second, higher education’s accreditation of high schools was a quality assurance process and external audit: its explicit focus was to determine the academic adequacy of the high schools’ programs, and it did not assume formal responsibility for quality improvements.[18] Both factors—reliance on college faculty to conduct the accreditation reviews and the predictable consequences of colleges’ quality assurance judgments on high schools and their communities—led to the decline and ultimate demise of the practice in the 1930s. The process proved too expensive for colleges and too arbitrary and “undemocratic” for the high schools and their communities. The vacuum that had caused colleges to step into the high school quality assurance role was recognized as a governmental responsibility and taken over by state public education authorities.

The emergence of accreditation for colleges and universities was a natural progression from the high school certification process. It was also very much in keeping with the progressive spirit of the times. The chaotic proliferation of fake or questionable colleges credentialing virtually any paying customer and the absence of any overarching framework for higher education proved untenable for the several hundred public and private institutions that were genuinely committed to the advancement of learning and the education of their students. The post-Civil War industrialization of the economy and the exponential growth of scientific knowledge in fields ranging from agriculture and medicine to mining and engineering made reliable educational credentials all the more critical. Voluntary accreditation was a collective response by colleges to the clear need for standardization and quality assurance in a rapidly expanding, diverse, but under-regulated higher education system. The most salient feature of the academic oversight regime that emerged was that, while it was organized by collegiate leaders, it relied on the judgment of practicing faculty (“visitors”) in making accreditation determinations as well as in the recommendations it made for quality improvement. This stands in stark contrast to today’s practice, in which accrediting bodies are disproportionately dominated by administrators with a direct interest in enrollments and revenues.

In 1885, the New England Association of Schools and Colleges was founded as the first regional accrediting association. Other regional associations followed: the Middle States Association of Colleges and Schools in 1887, and both the North Central Association of Colleges and Schools and the Southern Association of Colleges and Schools in 1895. By 1924, when the Western Association of Schools and Colleges was founded, the basic contours of today’s accreditation system had fully emerged. The founding characteristics of U.S. accreditation—its voluntary, collegial, self-regulating, and holistic approach to quality—have certainly survived, though some only in name, to the present day. However, the system’s failure to adapt to a series of fundamental changes in the higher education sector over the intervening decades has, ironically, turned its original quality improvement features into flaws that now accommodate subpar and questionable practices and outright fraud.

While the non-governmental approach to quality was—like its predecessor, the original high school certification system—the higher education sector’s response to the limited federal role and the absence of national standards, the tradition of federal non-interference with quality was not entirely an organic and inevitable outcome. As early as the 1910s, as part of the movement to replace patronage appointments with merit-based selection through Civil Service reforms, the federal government felt the need for a binding federal higher education standard for federal employment. Kendrick Babcock, Chief of the Division of Higher Education at the Bureau of Education from 1910 to 1913, made a valiant attempt at establishing a federal standard for acceptable college credentials. The most salient feature of his proposed federal classification was the academic nature of the judgment it rendered. His approach sought to identify institutions whose graduates were adequately prepared for graduate academic study:

In 1911, Kendrick Babcock, acting for the Bureau of Education, prepared a classification of American colleges on the basis of the extent to which their graduates were able to complete graduate work without remediation. The impending publication caused a political furor, for it was apparently inconceivable that any institution could be held up by the federal government as anything less than stellar. President Taft ordered publication to be withheld, and President Wilson declined to rescind the order. (Babcock, as noted, later published the list under the imprimatur of the Association of American Universities.) Thereafter, the Bureau of Education ceased any effort to classify or pass judgment upon the quality of collegiate institutions. Instead, it remained content to publish a directory of accredited institutions. To be listed, the institution had to be “accredited or approved by a nationally recognized accrediting agency, a State department of education, a State university, or operating under public control . . ..” Institutions not meeting any of these requirements were listed “if their credits are accepted as if coming from an accredited institution by not fewer than three fully accredited institutions.” These lists, and the Office of Education itself, were consulted by the Civil Service Commission, the Department of Defense, and other government agencies to determine the bona fides of the educational credentials of applicants for government employment, of chaplains in the military, or for the educational placement of federal personnel.[19]

The successful pushback against Babcock’s proposed federal standards is reminiscent of the Obama Administration’s substantially more limited attempt a century later at rendering a comparatively milder federal judgment on colleges and universities. In both cases, the proposed federal systems would have relied on empirical data about collegiate outcomes without reference to internal practices or institutional efforts at quality improvement. The decisive rejection of Babcock’s quality assurance model, though its utility was limited only to qualification for federal employment, checkmated virtually all subsequent federal efforts to directly address quality. The de facto historical tradition of mandatory federal agnosticism about quality is particularly striking today because of the vastly expanded role of the federal government in financing higher education institutions whose quality it is prohibited from addressing.

Whether one celebrates or bemoans the demise of Babcock’s proposed federal standard, it would be unfair to dismiss as sheer obstructionism the institutional opposition that defeated the idea. Babcock himself went on to play an important role in the development of collegiate accreditation, and accreditation did, in fact, function for several decades exactly as promised and intended. Remarkably, mere decades after the first nascent steps toward national higher education standards, American higher education did begin to rival its British and German counterparts, due in no small part to the earnest efforts of institutions to promote quality, protect academic freedom, and set ever higher standards for themselves.

Several aspects of the design of collegiate accreditation during this formative period are worth highlighting. Unlike the high school version of accreditation, in which the collegiate visitors served as outside inspectors and reviewers, collegiate accreditation was devised as a peer-review process from its very inception. Also, whereas high schools derived the specific benefit of admissibility of their students to college by being accredited, the benefits of collegiate accreditation were more amorphous and primarily consisted of the prestige of being a “member of the club.”[20] In addition, while colleges’ external reviews of high schools did influence academic and administrative practices on both sides, they were designed for quality assurance rather than as a formal quality improvement scheme. The collegiate version, in contrast, focused on assuring a minimum level of quality while also formally serving as a collegial quality improvement mechanism. The duality of serving as both an enforcer and a collaborator may not have seemed contradictory at the time, but it has certainly become one of the more problematic and controversial features of accreditation today.[21]

The importance of private philanthropy’s role in facilitating a coherent national quality assurance system at the time cannot be overemphasized. In the absence of robust federal oversight, private philanthropy played a critical part in expanding and institutionalizing the role of accrediting bodies that had already emerged under the leadership of more prestigious universities in different regions of the country. The establishment of the Carnegie Foundation for the Advancement of Teaching in 1905 proved especially effective in structuring and standardizing important characteristics of the higher education system. The foundation’s initial policy foray into higher education was to provide retirement pensions for college professors, but it soon expanded its scope to include research and pedagogical studies related to higher education. In 1906, it introduced the vaunted “Carnegie Unit,” a standardized measure of academic credit based on the number of hours spent in the classroom. This unit became widely adopted and helped to create a more uniform system of academic measurement across different institutions. Unlike previous philanthropic practices, which typically focused on support for specific institutions, the Foundation’s work sought to organize the U.S. higher education ecosystem, addressing issues such as admissions, the curriculum, and inter-institutional transfer of credit. Accrediting bodies quickly adopted important features of the new standards and served as mechanisms for their promulgation and enforcement. In any case, the confluence of private philanthropy’s interest in the promotion of standards, the expansion of the higher education sector in terms of the number of institutions and their enrollments, and the needs of a modernizing society had already produced, despite a debilitating decade of national economic woes, a fully developed and functioning accreditation system by the outbreak of World War II.

The American victory in World War II demonstrated, among other things, that the U.S. higher learning system, funded nearly entirely by the states, philanthropy, or private money, was world-class in its scientific and technical output. America’s success in the war owed much to its scientists, engineers, linguists, and medical experts, most of whom were educated at American institutions. In addition, the scale of the industrial mobilization for the war effort indicated the availability of numerous trained trade and technical workers. Prior to World War II, in addition to dedicated vocational high schools, training in various trades, especially in agricultural and industrial fields, was available at many regular high schools as well. While the federal government had provided some support for technical education in the Smith-Hughes Act of 1917, most training was sponsored by industry and provided through apprenticeships and on-the-job training.[22] America’s higher education system, modestly overseen by the states and nongovernmental accreditors, and its technical-vocational training pipeline, primarily managed by industry, the unions, and sometimes city governments, were both functioning well enough to win the war through scientific breakthroughs and massive industrial productivity. However, this high-functioning ecosystem had significant shortcomings. Racism and gender bias limited or denied opportunities for many. Despite the broad American belief in the Horatio-Alger rhetoric of mobility, access to higher education was not equally available to all.

The immediate postwar period inaugurated a series of radical changes in the postsecondary sector that gradually shifted responsibility for funding higher education and vocational training away from the states and employers, placing more of the burden on the federal government and individuals. The primary driver of quality in the higher education landscape until that time was the broadly shared view of postsecondary education and training as a public and necessary good. The proximity of the third-party funders of postsecondary education (i.e., the states and employers) to the outcomes, and their reliance on those outcomes, ensured administrative oversight in exchange for steady funding, while faculty control of accreditation drove the curriculum and instruction without much concern about recruitment or finances. The postwar growth of federal subsidies tied to enrollments slowly eroded the overwhelming dominance of “proximate” funders of postsecondary education and increasingly delegated all benefits and much of the judgment about institutions to individuals, most of whom relied on accreditation as an independent seal of approval that gave them access to various federal subsidies. The ensuing market competition among institutions made accreditation an absolute necessity for institutional survival and therefore a target of tighter institutional (as opposed to faculty) control. Accreditation’s failure to adapt to some of these changes is the main driver of its shortcomings today.

Postwar: The Expansion of Federal Funding

The federal government that came out of the Second World War would have been unrecognizable to previous generations of Americans. The New Deal had already greatly expanded federal powers and the national government’s role in the economy, but the nation’s postwar status as a global superpower and its Cold War struggles against its archrival, the Soviet Union, mandated an even more potent central government. On the higher education front, the enactment of the GI Bill in 1944 and the report of the President’s Commission on Higher Education (“the Truman Commission”) in 1947 were pivotal moments in the expansion of higher education and the democratization of access to a college education, which had generally been the exclusive domain of the wealthy elites until then. Historically dominated by the states and a handful of elite private institutions fueled by old money or Gilded Age riches, higher education policy became a national concern worthy of federal money and attention.

In sheer monetary terms, the sudden infusion of federal cash into higher education after World War II through the original GI Bill (officially the Servicemen’s Readjustment Act of 1944) amounted to approximately $14.5 billion between 1944 and 1956. That amount would put the cost of the original GI Bill roughly between $200 and $250 billion in 2024 dollars. Even more far-reaching than the direct funding were the policy signals from the federal government, which began to actively shape American higher education’s hitherto organic evolutionary trajectory. The Truman Commission, for example, examined the role of higher education in American democracy and its responsibilities in fulfilling social and economic needs. Its key recommendations included a call to double college enrollment by 1960, expand public education through establishing a network of public community colleges, eliminate racial and religious discrimination in college admissions, and extend free public education through the first two years of college. The Commission’s report popularized the term “community college” and advocated for a system of public community colleges that would charge little or no tuition, serve as cultural centers, be comprehensive in their program offerings, and serve the area in which they were located. The report led to a significant expansion of the community college system in the United States. The states used the commission’s recommendations as a blueprint for the expansion and development of their higher education systems.
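As a rough check on that conversion, the back-of-the-envelope sketch below deflates the nominal total with consumer price index ratios. The CPI-U values are approximate annual averages supplied here for illustration, not figures from this paper, and the bracketing simply reflects that the outlays were spread across the 1944-1956 window.

```python
# Rough CPI adjustment of the original GI Bill's nominal outlays.
# The CPI-U annual averages below are approximations for illustration only.
CPI = {1944: 17.6, 1950: 24.1, 1956: 27.2, 2024: 313.7}

nominal_billions = 14.5  # total education outlays, 1944-1956

# Spending was spread across the period, so bracket the estimate by
# deflating from both ends of the spending window.
high = nominal_billions * CPI[2024] / CPI[1944]  # as if all spent in 1944
low = nominal_billions * CPI[2024] / CPI[1956]   # as if all spent in 1956
mid = nominal_billions * CPI[2024] / CPI[1950]   # midpoint year

print(f"2024-dollar estimate: ${low:.0f}B to ${high:.0f}B (midpoint ~${mid:.0f}B)")
# -> roughly $167B to $258B, a bracket consistent with the $200-250B cited above
```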

Increased federal funding through the GI Bill’s voucher system played a crucial role in expanding higher education opportunities in the postwar period, but it also created predictable side effects, most notably the proliferation of subpar and low-quality programs designed to cash in on the new federal program. The problem was then almost entirely concentrated in the for-profit vocational sector.[23] The federal response to the worsening situation sidestepped the more direct approach of attempting to set actual quality standards for eligible programs—eminently possible since these were often vocational or technical programs subject to licensure standards—and focused instead on a procedural safeguard. In 1952, the 85/15 rule for the GI Bill was created as part of the Veterans’ Readjustment Assistance Act, also known as the Korean War GI Bill. That act also tied federal funding for student veterans to accreditation of eligible institutions, marking the beginning of the federal government’s reliance on accreditation for determining eligibility for financial aid.[24] The Veterans Administration’s 85/15 rule mandated that no more than 85% of students in a given program could receive GI Bill benefits. Thus, rather than attempting to inspect or evaluate the programs, the federal government delegated quality assessment to the individual judgments of the 15% of students who were presumably paying out-of-pocket, as illustrated in the sketch below. While one may quibble about whether the hard-earned benefits of the 85% can be safely delegated to the opinions and consumer choices of the 15%, the approach does have some merit, but only as a procedural failsafe to an at least minimal regime of direct oversight. The 85/15 rule proved predictive of future federal initiatives to safeguard the government’s growing postsecondary education and training investments. Henceforth, federal powers would be circumscribed to administrative and procedural measures, “market-based” metrics, and other circuitous methods of addressing quality.
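The rule’s arithmetic is simple enough to state as a minimal sketch; the 85% threshold is the statutory share described above, while the function name and enrollment figures are invented for illustration.

```python
# Minimal sketch of the 85/15 test: a program passes only if no more than
# 85% of its enrolled students are supported by GI Bill benefits, leaving
# at least 15% who presumably judged the program worth paying for.
def meets_85_15(gi_bill_students: int, total_students: int) -> bool:
    """Return True if GI Bill recipients are at most 85% of enrollment."""
    if total_students == 0:
        return False  # an empty program has no paying 15% to vouch for it
    return gi_bill_students / total_students <= 0.85

# Hypothetical programs (figures invented for illustration):
print(meets_85_15(gi_bill_students=84, total_students=100))  # True
print(meets_85_15(gi_bill_students=90, total_students=100))  # False
```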

Postwar quality concerns were by no means limited only to for-profit vocational programs. The rapid expansion of higher education, with hundreds of new institutions and millions of new students, led to inevitable challenges. The quality-quantity trade-off phenomenon so vividly described by Tocqueville certainly manifested itself in the vastly expanded universe of postwar higher education.[25] But while the issues that led to the implementation of the 85/15 rule were driven by for-profit providers’ intentional cash-grab schemes, quality problems in the traditional collegiate sector were primarily driven by available resources. Given the mission-driven character of the entire collegiate sector in the four decades immediately after the War, it would be fair to say that relative differences in quality directly correlated to financial factors. Without significant direct federal funding beyond the GI Bill, the traditional collegiate system relied on a combination of state support, philanthropy, and tuition revenues. Not surprisingly, resources were not evenly distributed among the various institutions. Public colleges, which derived most of their funding from the states, were stratified into a three-tier system with descending levels of per-capita support and, ergo, per-student expenditures, consisting of flagships, regional publics, and community colleges. Private institutions were likewise ordered by affluence and available resources.[26]

During much of this period, accreditation of traditional colleges (as distinct from the subpar vocational programs that the 85/15 rule sought to govern) still operated as intended. It provided a reliable seal of collective approval and fulfilled its parallel mission of improving institutional quality at the many new or expanded colleges across the nation. The environment in which accrediting bodies served their function was collegial, collaborative, and remarkably free of institutional manipulation or purpose of evasion. In a setting where intentional malfeasance or abuse was almost unheard of, accrediting bodies could rely on the good-faith representations of institutional leaders and on the veracity of institutional self-studies that were backed by tenured faculty who exercised significant, if not exclusive, control over the academic policies of their institutions. While institutional quality was uneven, institutional integrity was a given, and the output of the higher education ecosystem was sufficiently robust to propel the U.S. economy through the postwar economic boom, provide millions with access to middle-class living standards, and render the U.S. higher education system the one much of the rest of the world viewed as a role model. In many ways, this was the golden age of effective and reliable voluntary accreditation that today’s advocates claim is still intact. In reality, the vast majority of traditional higher education institutions are now as dependent on enrollments as their for-profit counterparts, and they are, regrettably but perhaps not surprisingly, engaged in administrative and academic enrollment management practices that would have been unthinkable in previous decades.

Postwar Impact on Tuition Dependency and Quality

Changes came quickly, with a rapid expansion of higher education and increasingly heavy institutional dependence on tuition revenues. The implications for quality education were complicated. For starters, the larger student bodies of the postwar era brought uneven academic preparation and necessitated additional and costly support services at many institutions, leading to diseconomies of scale that drove up, rather than reduced, per capita costs. Furthermore, colleges’ growing dependence on tuition revenues required greater spending on marketing and recruitment and, over time, lower academic admissions requirements. This latter factor, in turn, mandated new expenditures on remediation and support services that often required additional tuition hikes, further escalating the recruiting and marketing arms race among institutions. The infusion of enormous amounts of federal subsidies compounded these complications in additional and sometimes unexpected ways. Public subsidies, for example, are designed to increase demand by lowering prices. In higher education, federal subsidies accomplished the former by vastly increasing participation in postsecondary education. Paradoxically, and for reasons that range from the real and perceived needs of the information economy to the predictable displacement of state support, they did not succeed in lowering prices in a sustained manner. While the price hikes in the proprietary sector were driven by profit maximization, tuition increases in the collegiate sector had more complex dynamics and reflected the escalating expenses of providing quality education, state disinvestment, and the commodification of higher education.

1960s-1980s: The Emergence of Broad-based Federal Student Aid

It is worth noting that, before the GI Bill, federal support for higher education[27] was mediated through the states. The GI Bill, in contrast, was set up as a voucher program because it provided benefits to individuals based on their veteran status. In the moral panic that followed the launch of the Soviet Union’s Sputnik in 1957, the federal government continued the well-established historical practice of funneling its funds for elementary and secondary education through the states. However, the centerpiece of the new federal initiative in higher education, the National Defense Student Loan program, was set up as a campus-based program in which the federal government provided capital to eligible institutions for loans. While federal regulations established student eligibility criteria, institutions were expected to provide some matching funds and were responsible for the administration of their respective portfolios, including collections. This same approach informed the creation of the Work-Study program in the Economic Opportunity Act of 1964. In creating the Guaranteed Student Loan program as a student-centered and portable aid mechanism, the Higher Education Act of 1965 (HEA) set a precedent for using federal vouchers in higher education. The decision to configure the federal government’s investments in higher education through direct student aid rather than mediating it through state or institutional partnerships would prove consequential in the coming decades. The 1972 reauthorization of the HEA was a particular watershed event in this regard, as it not only created the Basic Educational Opportunity (the future Pell) Grant program but also expanded institutional eligibility for the new aid programs to for-profit schools.

The decision in favor of a voucherized system was not made in a vacuum. There was considerable debate about the wisdom of funding students rather than states or institutions, as there was about reliance on loans. The policy choice to directly fund students appears to have been based on the assumption that the programs would be more likely to be politically sustainable with millions of Americans as beneficiaries.[28] In contrast, the decision to finance the expansion of educational opportunities with loans was more of an economic judgment based on assumptions that were perhaps reasonable at the time but that have proven erroneous.[29] These included the belief that wages would continue their postwar increases, that the states would continue to fund public higher education to maintain low tuition even as enrollments grew, and that high standards of quality and integrity would be maintained despite the obvious lessons of past abuses of the GI Bill by for-profit schools. Although not knowable then, 1973 proved to be the peak year for real wages, which began to diverge from productivity and would stagnate for the next 50 years.[30] And the passage of California’s Proposition 13 five years later signaled the power of an anti-tax movement at the state and local levels that severely limited the states’ fiscal capacity to fund the increasing burdens of public priorities they viewed as more compelling than higher education.[31]

The federal government’s expanded role in covering college costs through student aid suffered from multiple design flaws that directly affected quality.

By opting for a portable voucher system, federal policy sought to empower needy students to attend colleges and schools of their choice. The dominant neoliberal economic consensus of the time was that individual consumers would be best positioned to make rational choices in a well-regulated marketplace. As developments of subsequent decades would prove, however, federal money did create a marketplace, but it was anything but well-regulated. The first of several successive waves of Title IV fraud sprang up in the 1980s and almost destroyed the guaranteed student loan structure. While fraud was generally concentrated in the for-profit sector, the other side-effect of vouchers, tuition inflation, began to register in that same decade.

Throughout the 1980s, as greater amounts of federal dollars began to flow, the expansion of federal aid programs, especially student loans, began to disrupt the forces that had organized and maintained a stable ecosystem in the postsecondary education and training sector.[32] Prior to the emergence of student aid, the states served as both operators and financiers of their respective public institutions and stood to gain the benefits or suffer the losses associated with unacceptable outcomes. They had, in other words, the much-talked-about “skin in the game” that has re-emerged as a solution to runaway debt financing in recent years. The broad availability of free or low-tuition, good-quality public venues, furthermore, functioned as a powerful price/quality discipline mechanism (in the form of competition) for private providers, whether they were non-profit colleges or for-profit trade schools. The growth of federal aid gradually undermined price discipline in all sectors, primarily by displacing state investments with debt-fueled tuition hikes.[33] Rising public tuition costs, in turn, enabled private institutions to increase tuition. The vicious cycle would then repeat as the states justified shifting more of the costs of public colleges to families based on tuition at comparable private colleges. In an era when college credentials were viewed as a secure pathway to middle-class status, demand for higher education proved price-inelastic despite demographic changes, as older “non-traditional” students filled seats that would otherwise have gone empty because of the drastic drop in the number of traditional-age students.[34]

1990s to Present: Erosion of Trust

By the early 1990s, a mere two decades after the inception of federal aid, admissions—once an academic (and arguably class-based) gatekeeping system—began to gradually morph into marketing and sales at many institutions, albeit under the genteel title of enrollment management.[35] Over the course of ensuing decades, as institutions became more dependent on tuition revenues, an increasingly fierce, and for some, desperate, competition for students ensued. By the end of that decade, the gold standard of need-blind admission/need-based aid vanished at all but the wealthiest institutions as colleges continued to recruit federally aided students whose full financial need they failed to meet. As the gap between stagnating family incomes and escalating college costs continued to increase, federal policy enabled institutions to maintain enrollments through a massive expansion of debt financing.[36]

In short, the 1980s and 1990s inaugurated a race for enrollments that gradually chipped away at academic quality over the next few decades. Admissions, hitherto a strictly academic and non-financial function, began to commingle with financial aid: it morphed into enrollment management at more selective institutions and turned into something akin to outright sales at many non-selective, tuition-dependent colleges.

This growth in the availability of significant federal dollars, combined with lax oversight during the 1980s, led to massive and costly abuses and outright fraud in the for-profit sector. The extent of Title IV fraud became difficult to ignore mainly because, under the cash basis of government accounting then in use, the cost of defaulted guaranteed student loans registered on the budget when guaranty agencies submitted the loan papers to the Education Department for reinsurance.[37] The escalating costs of the loan program led the George H.W. Bush administration to develop the practice of tracking cohort default rates, which was later enacted into law in the 1989 and 1990 budget reconciliation bills. The surge in defaults, more importantly, caught the attention of the Permanent Subcommittee on Investigations of the Senate Committee on Governmental Affairs, whose Chairman, Senator Sam Nunn of Georgia, held a series of hearings that documented structural flaws in the design and operations of the federal aid system.[38]
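In broad terms, a cohort default rate is the share of borrowers who enter repayment in a given fiscal year and default within a defined tracking window (two fiscal years in the measure’s early form; later lengthened to three). The sketch below illustrates that ratio with hypothetical borrower records; the record layout, function name, and example figures are invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical borrower records for illustrating a cohort default rate (CDR).
@dataclass
class Borrower:
    repayment_fy: int           # fiscal year the borrower entered repayment
    default_fy: Optional[int]   # fiscal year of default, or None if none

def cohort_default_rate(borrowers, cohort_fy: int, window: int = 2) -> float:
    """Share of the cohort entering repayment in cohort_fy that defaults
    within `window` fiscal years."""
    cohort = [b for b in borrowers if b.repayment_fy == cohort_fy]
    if not cohort:
        return 0.0
    defaulted = [b for b in cohort
                 if b.default_fy is not None
                 and b.default_fy < cohort_fy + window]
    return len(defaulted) / len(cohort)

# Invented example: four borrowers entering repayment in FY1989.
borrowers = [Borrower(1989, None), Borrower(1989, 1990),
             Borrower(1989, 1989), Borrower(1989, 1993)]
print(f"FY1989 two-year CDR: {cohort_default_rate(borrowers, 1989):.0%}")  # 50%
```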

The “Nunn hearings” exposed a litany of design defects and poor oversight practices in the basic framework of the aid system, chief among them inadequate if not nonexistent quality assurance by accreditors of for-profit schools, insufficient oversight of recruitment practices, and poor regulation of eligible for-profit providers. The hearings painstakingly documented inadequate facilities, unqualified instructors, and shoddy and mislabeled programs at fully accredited for-profit schools. Deceptive marketing practices ranged from false and misleading advertising to commissioned recruitment of students by “admission counselors.” Many for-profit schools openly used such fraudulent recruitment tactics as busing people from homeless shelters to sign grant applications and student loan promissory notes to enroll in nonexistent programs in exchange for a cash award. Others targeted state welfare and unemployment offices to prey on needy or unemployed people with promises of training for lucrative future jobs. Many such schools enrolled entire cohorts of “zero EFC,” i.e., the neediest students, and set their tuition levels to the exact dollar amount of the maximum Pell Grant plus the maximum student loan limit. Unlike traditional colleges with real assets and physical campuses, most for-profit schools participated with minimal private investment capital by their owners, who typically took as much of the incoming federal cash out of the till as possible in the form of profits. The Nunn report found that, even when a reluctant and timid Education Department attempted to act on the mounting evidence of egregious misconduct, it faced administrative and judicial hurdles due to a lack of explicit statutory authority. In any case, even if consumer complaints or the toxicity of a school’s reputation caused the worst of the schools to close, owners would regroup and set up a new shop to continue scamming students and taxpayers. In retrospect, it is impressive to note the prescience of the Nunn hearings and their diagnosis of the underlying dynamics of waste, fraud, and abuse in the aid programs.

With regard to accreditors and their inaction on quality, even in the most obvious and catastrophic cases of outright fraud, the Nunn report noted a feature that is still a point of contention in today’s critiques of accreditation, i.e., that accreditors were run by the very entities they accredit. While making that case today requires some discussion, the phenomenon was directly observable back then because the various accreditors of proprietary schools were run by the industry’s lobbying trade groups. While there had been various attempts by proprietary and vocational schools to emulate the collegiate sector’s accreditation practices in the early decades of the 20th century, these were piecemeal, generally ineffective, and unserious attempts that served mainly as marketing gimmicks. The enactment of the Higher Education Act and the subsequent expansion of aid eligibility to proprietary schools, however, made federally recognized accreditation a prerequisite for schools seeking to participate in the new programs. The for-profit sector quickly organized accrediting agencies, literally run by the schools through their trade groups, and ritualistically adopted such collegiate accreditation practices as institutional self-study, peer review, and accreditation team visits. But, without the collegiate sector’s crucial elements of qualified tenured faculty and shared governance, the proprietary accreditors were eviscerated simulacra of the real thing, and their main function was to facilitate access to federal funds. The Nunn report’s criticism resulted in the 1992 HEA Amendments’ requirement that accrediting bodies be “separate and independent” from trade associations or membership organizations.[39] While this provision removed formal control of accreditors by schools, executives of participating institutions continue to dominate accreditation by holding most executive and board positions at accrediting bodies, creating an informal “soft-capture” culture in which accreditors can be viewed as extending the benefit of every doubt to the schools they oversee. The informal institutional control of accreditors by the very entities they oversee, along with the existential threat posed to institutions by decisive accreditation actions, has led to an enforcement ethos that requires proof beyond a reasonable doubt before accreditors take meaningful action. Most accreditation decisions thus end up seeming performative, or they arrive so late in the financial death spiral of institutions as to be inconsequential. Ending the de facto capture of accreditors and replacing college executives with pedagogical and disciplinary experts would go a long way toward restoring accreditation’s focus on quality and would be an absolute necessity for any federal quality assurance mechanism.

In addition to criticizing accreditors, the Nunn report also took on the states and the U.S. Education Department for failing to protect federal dollars. Congress attempted to address the most serious issues flagged in the Nunn hearings in the 1992 reauthorization of the Higher Education Act, which, among other significant reforms, came close to severing the link between accreditation and eligibility for Title IV programs.[40]

The Nunn hearings of the early 1990s laid bare the extreme risks posed by subpar and bad-actor programs and the serious need for a better gatekeeping system, and they did lead to a significant legislative overhaul of federal aid focused on meaningful quality assurance and consumer protection. However, the decisive bipartisan consensus that produced those reforms quickly fell victim to intense lobbying by institutions, which chipped away at its most effective features amid an escalating political polarization in Congress that rendered evidence-based bipartisan policies increasingly untenable.

The 1992 Reauthorization of the Higher Education Act

In crafting the 1992 Amendments[41] to the Higher Education Act, Congress faced the dual task of providing more funding for constituents seeking help to pay for ever-increasing college costs at a time when the integrity of the federal aid programs was being seriously questioned. The House and Senate authorizing committees had initially dismissed the Nunn Committee’s foray into student aid as unwelcome meddling. The sensational coverage of the hearings in the press and the force of the Committee’s final report, however, proved politically potent and induced the education committees to overhaul the program integrity provisions of the Higher Education Act quite significantly. The 1992 reauthorization, passed by a Democratic-controlled Congress and signed by a Republican President, sought to radically redesign and strengthen the three legs of the gatekeeping “triad” in Title IV: state authorization, accreditation, and federal certification. However, it failed to have a lasting effect: its state authorization requirements were repealed two years later, and the Department of Education failed to effectively implement the new legislative authority it had been granted. The bill’s accreditation provisions, in retrospect its least radical reforms, did eliminate direct control of accreditors by schools but fell short of the kind of structural reform that might have prevented subsequent failures of accrediting bodies.

The accreditation reforms enacted in 1992 for Title IV purposes significantly expanded federal requirements for recognition. As noted above, this was a policy reaction to the scandals of the 1980s and the Nunn investigation. The scandals had so outraged the public that the congressional committees momentarily contemplated ending federal reliance on accreditors altogether. Intense lobbying by traditional colleges, however, persuaded the committees to keep but reform accreditation’s role in Title IV. The 1992 legislative changes vastly expanded the scope of accreditors’ responsibilities without contemplating the enormous new costs that proper execution of those responsibilities would entail for accrediting bodies. Leaving accreditors financially dependent on institutions and allowing school executives to run the accrediting bodies were two of the gravest shortcomings of the new law. Compounding the problem was the fact that accreditors could still purport to be voluntary organizations engaged in collegial quality improvement even as they served as quality assurance agents for federal purposes. The bill took no notice of accrediting agencies’ reliance on the unverified representations in institutional self-studies (significantly controlled by administrators and executives) or of the waning power of the faculty to institute and maintain academic quality. This, along with the amorphous nature of the legislative evaluation requirements, produced essayistic accreditation standards that were not (and arguably were not intended to be) susceptible to clear verification. Ironically, the vast expansion of federal recognition obligations combined with these flaws to make accreditation less substantive and much more procedural. As long as accreditors said the right things in their applications for recognition, the fact that they had neither the will, nor the expertise, nor the resources to enforce their standards could be overlooked.

A proper assessment of the accreditation legislation that emerged from the 1992 Amendments must factor in the significant changes the law made to the other two components of the triad, i.e., state authorization and federal certification.

For the first time since state authorization was built into the federal gatekeeping framework, federal law spelled out specific requirements for the state systems that could qualify colleges and schools for Title IV. To this end, the states would enter into formal agreements with the federal government, which would reimburse them for the costs of the oversight responsibilities it assigned to their State Postsecondary Review Entities (SPREs). These included initial approvals and continuing risk-based reviews of institutions with high default rates, high dependency on federal aid, and high rates of student complaints. The review criteria were expansive and remarkably comprehensive, covering the financial and administrative capacity of the schools; their marketing and recruitment practices; the appropriateness of program lengths and credit requirements; completion, placement, and licensure pass rates for vocational programs; and the relationship between tuition and fees and the earnings of graduates of vocational programs. The state authorization provisions of the 1992 reauthorization were summarily repealed a mere two years later by the newly elected Republican Congress after a furious campaign by private nonprofit colleges and proprietary schools. The repeal turned that important triad component into a pro forma activity in many states, and even in states that voluntarily treated the function as a critical responsibility, underfunding diminished the efficacy of state authorizers. The elimination of mandatory upfront state oversight shifted much of the on-the-ground gatekeeping function to accreditors, to which some states, most notably California, simply delegated the task by decree.

In addition to a substantially more meaningful state role, Congress provided the Department of Education with virtually unrestricted authority to regulate postsecondary institutions’ financial and administrative practices, allowed the Department to define borrower defenses and closed-school discharges and to regulate commissioned sales, and even set a limit on Title IV dependency through a floor amendment that incorporated a financial version of the GI Bill’s 85/15 rule into Title IV. Despite the broad discretion it was granted, the Department failed to fully capitalize on this grant of authority. Its greatest regulatory failure was its inability to create a proper framework for evaluating the financial viability of participating institutions. This fundamental flaw still afflicts the federal financial aid system and unfairly burdens accreditors with a financial gatekeeping responsibility that they are neither qualified nor funded to undertake. While the regulations emanating from the 1992 reauthorization did improve some upfront gatekeeping metrics, they would ultimately prove ineffective against waste, fraud, and abuse. Worse yet, they created a financing ecosystem in which even legitimate institutions would begin to gravitate toward diminished quality, and they created a path of least resistance for tuition hikes in a changing higher education environment of internal cost escalation, state disinvestment, and privatization.[42]

The years immediately following the enactment of the 1992 Amendments would have a lasting and formative impact on the federal approach to program integrity and would shape accreditation through the present time. The 1990s, it should be remembered, marked the beginning of the widespread adoption of enrollment management strategies and the end of need-blind admissions. Institutions that had previously used internal resources to cover prospective students’ unmet need began to deploy those resources as merit aid to entice more affluent students to enroll. The rise of rankings created a heated race among the various tiers of higher education for prestige, resources, and enrollments. Enormous sums began to be redirected to recruitment and advertising, even at the wealthiest and most prestigious institutions, in pursuit of these goals, which administrators justified as necessary for survival. Despite, or perhaps because of, significant increases in federal student aid funding, the inexorable downward trend in state funding shifted more of the cost burden to students and families and forced many public universities to chase out-of-state students for their higher tuition dollars. While institutions continued to proclaim idealistic goals as a compelling rationale for increases in federal and state funding, their own practices began to resemble business strategies designed to extract as much revenue as possible while minimizing costs. Both of these tendencies directly affected quality: marketing and recruitment took resources away from the academic side of the house, and cost minimization entailed reducing faculty power and over-relying on underpaid adjuncts. In this Darwinian environment, federal aid funds were an indispensable foundation of survival, without which the vast majority of institutions would go under. The existential consequences of losing access to aid meant that institutions did their very best to resist effective rules that put them at risk. With robust state authorization and meaningful federal regulation dispensed with, accreditation remained the last hurdle between institutions and federal funds.

The 1990s were also the decade of globalization, when the decline of manufacturing and the rise of the information economy significantly eroded employment opportunities and wages for those without postsecondary credentials. Higher education began to be perceived as a necessary condition of individual success in precisely the period when institutions became more dependent on significant tuition hikes, which were increasingly financed with federal aid, particularly loans. Even as ever-higher shares of institutional revenues came from tuition, most institutions struggled to keep up with escalating costs, which they attempted to control primarily by replacing tenured faculty with adjuncts, a move that undermined shared governance and decisively contributed to the erosion not only of instructional quality but of academic policy itself.

Lessons for Today

This lengthy historical overview of accreditation highlights some important lessons. The shift from state-mediated funding to individual vouchers fundamentally changed the economics of higher education, and the decision to make aid “portable” through student vouchers rather than institutional funding had long-lasting consequences for cost control and for quality. Critical assumptions proved incorrect: policymakers assumed that wages would continue their post-war growth and that states would maintain strong funding, and they underestimated the potential for abuse by for-profit institutions despite the clear precedent set in the wake of the GI Bill. There were also unintended market effects: the voucher system undermined the natural price controls that existed when states had more skin in the game, and rising public tuition removed competitive price constraints. Institutional behavior changed too: colleges shifted from academic gatekeeping to aggressive enrollment management, competition for students intensified as colleges became increasingly dependent on tuition, and only wealthy institutions could maintain need-blind admissions. The history also reveals a dispiriting cycle in which federal aid displaced state funding, which enabled public tuition increases, which allowed private colleges to raise prices, which in turn gave states justification for further funding cuts offset by tuition increases approaching private tuition levels. Finally, quality control proved poor: the 1980s saw waves of Title IV fraud, the assumption that consumer choice would ensure quality proved overly optimistic, and making for-profit colleges eligible for federal student aid in 1972 brought negative implications for quality. In essence, expanding access to higher education through individual aid created perverse incentives that ultimately contributed to rising costs, decreased state support, and quality control failures; institutional behavior and market dynamics were not properly apprehended or accounted for.

Most of all, this historical account highlights the inescapable fact that, despite cosmetic changes to federal recognition standards, today’s accreditation regime operates in a higher education landscape radically different from the one in which its most fundamental features were first formulated. The unchanged assumptions, policies, procedures, and practices of the current accreditation system—chief among them the claims that accreditation is voluntary, that as such it should be governed by its members, and that it can simultaneously serve as a continuous improvement and a quality assurance mechanism—were devised for a bygone higher education sector whose incentives and motivations differed markedly from those that prevail today. Future reform efforts should take sober cognizance of these changes and devise an improved quality assurance configuration suited to the postsecondary sector as it now exists.

Severing the Connection Between Voluntary Accreditation and Federal Quality Assurance

As the cyclical school scandals of the past few decades indicate,[43] conditioning eligibility for federal funding on institutional accreditation has proven insufficient, if not ineffective, for federal quality assurance purposes. Equally troubling, it has also overwhelmed the quality improvement aspirations of accrediting organizations, as evident in the generally stagnant performance of accredited institutions on measurable outcomes metrics such as graduation rates. As the history of the rise of American higher education attests, traditional peer-review accreditation worked best when institutions were less dependent on marketing for recruitment and when accreditation was voluntary and non-governmental. During that period, which gradually faded away in the late 1970s and early 1980s, accreditation was a form of self-governance by the higher education sector, but its judgments reflected the collective academic views of the faculty without much concern for the financial implications of its decisions for institutions. The very act of basing eligibility for federal programs on accreditation created powerful incentives that altered accreditation as it had existed until then. With billions in federal funding at stake, accreditation, dominated as it has always been by institutional interests, gradually ceased to be the genuinely voluntary and independent collegial continuous-improvement catalyst it once was. Instead, it became a requirement for accessing federal funds that was substantively and procedurally controlled by the very entities seeking that access.

[1] Unless otherwise noted, references to accreditation throughout the paper pertain to historically regional institutional accreditors of collegiate degree-granting institutions. In most meaningful respects, national accreditation in the for-profit sector was devised primarily as a means of enabling access to federal subsidies by for-profit colleges without the competing pressures of abstract scholarship or prestige that might serve as a counterbalance to financial incentives. Today’s policy problem with accreditation is the historical convergence of the two regimes, in which national accreditors have adopted the external rituals of collegiate practices while formerly regional accreditors have increasingly prioritized continued institutional survival of their members over such ephemeral criteria as quality, scholarship, or even outcomes.

[2] The 1992 reauthorization represented a strong response to the waste, fraud, and abuse of Title IV programs throughout the 1980s. The 2008 reauthorization was passed in a more polarized Congress and attempted to at least partially address the Spellings Commission’s critique of accreditation while simultaneously protecting its vestigial features. Neither reauthorization managed to change the fundamental dynamics of substantive control of accrediting bodies by the collective will of their institutional members. However, the anemic requirements of the law have irked institutional advocates, who have criticized federal accountability requirements for being intrusive, burdensome, and unnecessary. See, for example, Eaton, Judith S. “Accreditation and the Federal Future of Higher Education.” Academe, vol. 96, no. 5, Assessing Assessment (September-October 2010), pp. 21-24.

[3] Blinder, Alan. “Students Paid Thousands for a Caltech Boot Camp. Caltech Didn’t Teach It.” The New York Times, September 29, 2024.

[4] These include epistemological attributes such as academic rigor and research productivity, as well as public accountability expectations such as student outcomes, proper governance and stewardship, financial resources and institutional sustainability, community and civic contributions, and equity and access.

[5] Nelson, Phillip (1970). “Information and Consumer Behavior”. Journal of Political Economy, 78(2), 311-329. Also see Stiglitz, Joseph E. (2000). “The Contributions of the Economics of Information to Twentieth Century Economics”. The Quarterly Journal of Economics, 115(4), 1441-1478.

[6] Darby, Michael R. and Karni, Edi (1973). “Free Competition and the Optimal Amount of Fraud”. The Journal of Law and Economics, 16(1), 67-88. Also see Dulleck, Uwe and Kerschbamer, Rudolf (2006). “On Doctors, Mechanics, and Computer Specialists: The Economics of Credence Goods”. Journal of Economic Literature, 44(1), 5-42.

[7] It should be noted that the use of the term “public” to describe state-funded institutions in the U.S. obscures the fact that these institutions are not controlled by the federal government and are, therefore, external entities seeking access to federal subsidies like their private counterparts. Greater trust and deference have always been extended to such public institutions on the assumption that, as recipients of state and local financial support, they would be more likely to be accountable social institutions. Gradual state disinvestment, escalating costs, and vastly increased tuition dependency have, however, raised significant questions about the continued validity of that assumption at least for some of the programs marketed by public colleges.

[8] In the U.K., universities must be recognized by the government through an Act of Parliament or by the Privy Council. The Office for Students (OfS) is the regulatory body for higher education in England, while Scotland, Wales, and Northern Ireland have their own regulatory bodies. In Germany, universities must be recognized by the state (Länder) in which they are located. The Standing Conference of the Ministers of Education and Cultural Affairs (Kultusministerkonferenz) coordinates educational policies across the states. In France, higher education institutions must be recognized by the Ministry of Higher Education, Research, and Innovation. The High Council for the Evaluation of Research and Higher Education (Hcéres) is responsible for evaluating universities and research institutions. In Japan, universities must be approved by the Ministry of Education, Culture, Sports, Science, and Technology (MEXT). The National Institution for Academic Degrees and Quality Enhancement of Higher Education (NIAD-QE) is responsible for accreditation and quality assurance.

[9] For a variety of historical reasons, a number of private institutions also operate under federal charters. Examples include Gallaudet, Howard, Georgetown, George Washington, American, and Catholic Universities in Washington, D.C.; Morehouse College in Georgia; and Carnegie Mellon University in Pennsylvania. Private institutions with federal charters still operate under the authorization requirements of their respective state or jurisdiction. The University of Guam is the sole federally chartered public land-grant institution.

[10] This was in keeping with the free-market political orientation of the young nation, and was driven by the Supreme Court’s Dartmouth College decision, which significantly limited state control of private institutions.

[11] So extreme was funders’ control of institutions that it was not until the early 20th century, with the establishment of the American Association of University Professors in 1915, that the contours of academic freedom, tenure, and shared governance began to emerge as essential features of postsecondary institutions.

[12] Shireman, Robert. “Academic Freedom Is Under Attack. College Accreditors May Be the Best Line of Defense.” The Century Foundation, 2024.

[13] See, for example, Labaree, David F. A Perfect Mess: The Unlikely Ascendancy of American Higher Education. University of Chicago Press, 2017.

[14] Saul, Stephanie. “At N.Y.U., Students Were Failing Organic Chemistry. Who Was to Blame?” The New York Times, October 3, 2022.

[15] Neal, Anne D., and Armand Alacbay. “Fixing a Broken Accreditation System.” Accreditation on the Edge, edited by Susan D. Phillips and Kevin Kinser, Johns Hopkins University Press, 2018.

[16] Kelchen, Robert. Higher Education Accountability. Johns Hopkins University Press, 2018.

[17] Gough, Robert J. “High School Inspection by the University of Wisconsin, 1877–1931.” History of Education Quarterly, vol. 50, no. 3, 2010, pp. 263–97. JSTOR, http://www.jstor.org/stable/25703601.

[18] The review process did, however, improve communication between colleges and high schools and better align high school curricula and academic practices with collegiate requirements.

[19] Finkin, Matthew W. “The Unfolding Tendency in the Federal Relationship to Private Accreditation in Higher Education.” Law and Contemporary Problems, vol. 57, no. 4, 1994, pp. 89-120.

[20] Transferability of credits and admissibility to graduate programs were the more concrete advantages of collegiate accreditation, but transfer and graduate education were both comparatively rare phenomena and had operated through informal bilateral arrangements between established institutions even before accreditation.

[21] Neal, Anne D., and Armand Alacbay. “Fixing a Broken Accreditation System.” op. cit.

[22] Kliebard, Herbert M. Schooled to Work: Vocationalism and the American Curriculum, 1876-1946. Teachers College Press, 1999.

[23] Whitman, David. “The Cycle of Scandal at For-Profit Colleges: Truman, Eisenhower, and the First GI Bill Scandal.” The Century Foundation, 2017.

[24] But it was not until 1968 that a formal process for federal recognition of accrediting agencies was established.

[25] “When none but the wealthy had watches, they were almost all very good ones; few are now made that are worth much, but everybody has one in his pocket. Thus, the democratic principle not only tends to direct the human mind to the useful arts, but it induces the artisan to produce with great rapidity many imperfect commodities, and the consumer to content himself with these commodities.” Alexis de Tocqueville, Democracy in America, Volume III, Part I, Chapter 11. For a seminal economic paper on the topic see “The Quality-Quantity Trade-Off” by Oz Shy, published in the American Economic Review, 1988.

[26] Non-financial resources, such as affiliation with a denomination or, in the case of Catholic colleges, various orders, could provide significant cost-reduction benefits to some private institutions and allow them to be financially sustainable with lower revenues.

[27] Most notably the first and the second Morrill Acts, the Hatch Act of 1887, and the Smith-Lever Act of 1914.

[28] Gladieux, Lawrence E., and Thomas R. Wolanin. Congress and the Colleges: The National Politics of Higher Education. Lexington Books, 1976.

[29] Mitchell, Josh. The Debt Trap: How Student Loans Became a National Catastrophe. Simon & Schuster, 2021.

[30] Mishel, Lawrence, E. Gould, and J. Bivens. “Wage Stagnation in Nine Charts.” Economic Policy Institute, 2015. Real wages have increased since the mid-2010s but in unequal patterns for different income levels.

[31] While Prop 13, which reduced property tax revenues for local governments and school districts, did not directly mandate cuts to college funding, it set in motion a series of budgetary pressures. Reduced local government revenues imposed greater responsibility for funding K-12 education on the state, ultimately reducing state support for higher education relative to historical levels and other state priorities. The effects were gradual and intertwined with other economic and political factors over the decades following its passage. This pattern would repeat itself in even more amplified fashion over the coming decades as Medicaid costs grew and were stacked on top of state K-12 expenses nationwide.

[32] The two signal developments in connection with the expanded role of federal aid are the enactment of the Middle Income Student Assistance Act (MISAA) in 1978 and the Omnibus Budget Reconciliation Act of 1981. In an era of high unemployment and inflation, MISAA removed income caps on student loans and expanded eligibility for BEOG, enabling millions of students to qualify for federal aid. The opening salvo in the Reagan administration’s remaking of the economy, the 1981 OBRA rescinded MISAA’s generous provisions. Even as it sought to reduce federal expenditures, the bill revamped eligibility for loans, which had prior to 1978 been tied to income, by basing it on “need,” which it defined as the cost of attendance minus available family resources. This change had the effect of accommodating cost-shifting from the states and institutional future tuition hikes, which were increasingly covered by student loans with progressively harsher terms.

[33] As the gap between wages and college costs widened into a chasm, federal loans grew in volume and in the number of borrowers, and the loan programs expanded in other ways to cope with tuition-driven demand for financing. To contain costs, Congress first created higher-interest parental loan programs and added a new program of Supplemental Loans for Students. It later bifurcated student loans into “subsidized” and “unsubsidized” varieties, with harsher terms and higher interest costs for borrowers. As the wage stagnation that afflicted families continued to characterize the post-attendance earnings of borrowers (for those who graduated, but even more drastically for dropouts and victims of poor gatekeeping), Congress prolonged repayment terms and added a variety of income-based options to make repayment more manageable. There were multiple efforts to reduce borrowing costs, most notably the 1993 creation of the direct loan program and its subsequent expansion in 2010, but none of them improved affordability, as they were continuously outstripped by escalating costs.

[34] While the availability of federal dollars did enable tuition hikes by creating a path of least resistance for families to cover increased costs, it would be simplistic to view them as the only, or even the main, driver of tuition inflation. There have been excellent studies of some of the other factors pushing collegiate expenditures and, therefore, their need for greater revenues, higher. See, for example, Archibald, Robert B., and David H. Feldman. Why Does College Cost So Much? Oxford University Press, 2011; and Clotfelter, Charles T. Buying the Best: Cost Escalation in Elite Higher Education. Princeton University Press, 1996.

[35] Burd, Stephen J. Lifting the Veil on Enrollment Management. Harvard Education Press, 2024.

[36] Loans served as a lifeline for colleges and a political device to satisfy upfront “affordability” for students seeking higher education. The far-fetched assumption behind over-reliance on debt was that, contrary to the evidence of the previous few decades, most borrowers would graduate with quality degrees and that wage enhancements would more than offset the ever-escalating burden of debt.

[37] As default costs exceeded the available premiums at guaranty agencies, the largest one, the Higher Education Assistance Foundation (HEAF), became insolvent, sending destabilizing ripples through the entire student lending system. Ensuing investigations into HEAF and other student loan intermediaries revealed another layer of abuse by financial players in the aid system. See Committee on Banking, Housing, and Urban Affairs. Hearings on Implications of the Failure of the Higher Education Assistance Foundation. U.S. Senate, 1990.

[38] Senate Committee on Government Affairs. Abuses in the Federal Student Aid Program. U.S. Senate, 1991.

[39] See HEA Section 498(a)(3)(A).

[40] Despite this, two decades after the Nunn hearings, another Senate investigation, this time by the Senate Committee on Health, Education, Labor, and Pensions, chaired by Senator Tom Harkin, would find the same patterns of waste, fraud, and abuse and once again raise fundamental questions about the efficacy of Title IV gatekeeping and the role of accreditors. See Committee on Health, Education, Labor, and Pensions. For Profit Higher Education: The Failure to Safeguard the Federal Investment and Ensure Student Success. U.S. Senate, 2012.

[41] S.1150, Public Law 102-325.

[42] In what would prove to be another reform that backfired, Congress also mandated that the Department engage in negotiated rulemaking to develop the enabling regulations implementing the legislative changes to Title IV. This was partly motivated by the Department’s failure under the Reagan and George H. W. Bush administrations to promulgate regulations for the 1986 reauthorization, which the Bush Department of Education finally issued in 1992 after the enactment of the 1992 bill. More to the point, mandatory negotiated rulemaking reflected the continuing bipartisan trust of the traditional collegiate sector as mission-driven and like-minded partners committed to the intended goals of federal policy. The assumption behind the decision was that the mandatory participation of a disproportionate number of representatives of traditional colleges and universities would improve and strengthen the regulations rather than weaken them. To this end, multiple tiers of collegiate administrative positions were written into the law. Ironically, the one group whose interests would continue to be aligned with those of students and taxpayers, the faculty, was left out and has never participated in the regulatory negotiations process.

[43] Whitman, David. “The Cycle of Scandal at For-Profit Colleges: The For-Profit College Story: Scandal, Regulate, Forget, Repeat.” The Century Foundation, 2017.
