Changing the Way We Account for College Credit

Our system of certifying credit based on seat time rather than on learning no longer makes sense in an era in which college costs are skyrocketing and nontraditional students have become the majority.

For centuries, the United States has been the envy of the world for its higher education system. But now we are largely coasting on a bygone reputation, obscuring the fact that high-quality, affordable college credentials are not getting into the hands of the students who need them most.

One of the greatest assets of America’s higher education system is that we try to provide broad access to college credentials. Instead of remaining content to have a handful of private institutions that largely served the elite, President Abraham Lincoln signed the Morrill Land Grant Act in 1862, providing support for the creation of our public land-grant universities. A century later, hundreds of community colleges were created to ensure open access for all who wanted to enroll in higher education after high school. As awareness grew about the prohibitive expense of a college education, we made sure that all students who wanted to attend could afford it by providing generous state subsidies and federal support such as the GI Bill and Pell Grants. These investments allowed unprecedented numbers of Americans to enjoy the benefits of higher education and helped make us the most college-educated country in the world.

But the tide is turning for our great system. We are slipping, fast. Once first among the countries of the Organisation for Economic Co-operation and Development in the share of young adults with college degrees, the United States now ranks 14th. Whereas other nations’ young people are far more likely to have college degrees than their parents, the United States is on the verge of having an older generation that is better educated than the younger one. This couldn’t come at a worse time. Technological development and structural shifts in the global economy mean that nearly two-thirds of U.S. jobs will require some form of postsecondary education within the next five years: one-third will require a bachelor’s degree or higher, and about 30% will require some college or an associate degree.

It’s not just the young who need higher education. Those who have seen their blue-collar jobs and membership in the middle class disappear are also yearning to learn the postsecondary skills essential to their economic success. As routine work becomes increasingly automated, employers need workers with the skills necessary to handle complex and changing tasks and situations. A college credential is currently the easiest, if not necessarily the most accurate, proxy for those skills.

Yet even as college is becoming more essential, it is also becoming much more expensive. Tuition and fee increases have outpaced even health care costs, rising by more than four times the rate of inflation during the past 25 years. Students and families are covering those increases with student loan debt. Two-thirds of today’s students graduate with student loans, owing an average of $26,600. The nation’s collective student loan debt is more than $1 trillion, exceeding even our collective credit card debt.

Part of the problem is that our concepts of what colleges and college students look like have not kept pace with the realities. The collegiate archetype—a well-prepared 18-year-old ready to move into a dorm and study full time at the same college for four years, all of it paid for by mom and dad—is now the exception, not the rule. And as for the bucolic residential campus experience? That, too, is an exception. About 80% of today’s students are commuters. Nearly 45% of undergraduates attend community colleges. Nearly 60% attend two or more institutions before they graduate. More and more students are taking some or all of their courses online. In sum, students today are more likely to be older, working, attending part time, and learning outside of traditional credit-bearing classrooms than students in the past. Their lives demand a much different and much better kind of education.

Because many of today’s students are juggling work and family, higher education needs to be more responsive to these scheduling and financial constraints. It also needs to recognize the different experiences and goals that these students bring to the table. Right now, most colleges treat all students the same, as empty vessels that can only be filled within the confines of the college classroom. Compare two hypothetical students pursuing a bachelor’s degree in criminal justice. The first is an 18-year-old with no work experience whose interest in the criminal justice system comes from watching Law & Order reruns. The second is a 31-year-old high-school graduate who has hit a degree ceiling at the law firm where she has worked for 12 years. Colleges treat them the same; that is to say, largely like an 18-year-old.

The antiquated images of college and college students also rest on a false distinction between education and training, further harming students pursuing education outside the framework of a residential college. More than 40% of students at community colleges are enrolled in noncredit courses, many of which are workforce-training courses requested by and developed in conjunction with employers. Many of these courses include highly sophisticated and complex subject matter, which is then assessed and certified by industry-normed exams. Although employers and students benefit from this training, students do not receive college credit and thus miss out on the permanent, portable units on which the widely recognized and remunerated college certificates and degrees are built. This limits students’ ability to enter career paths and adjust to cyclical and structural changes in the economy over time.

Unfortunately, the existing incentives, the most powerful of which is federal financial aid, do little to address the needs of these students. Federal financial aid pays largely for time served rather than learning achieved in for-credit courses at established, accredited institutions. Granting credit based on seat time instead of learning awards credit where it shouldn’t be awarded and fails to recognize learning that happens outside of classroom walls. This hampers students’ acquisition of valuable degrees and credentials. It also creates structural disincentives for developing new means of giving students the kind of flexible, affordable, and effective education and credentials they need. Institutions are rewarded largely for inputs rather than outcomes. What are the grade-point averages or SAT scores of incoming students? How much have faculty published? Rarely is the question of what students are learning asked, let alone answered.

Why is this the case and what can we do about it? To answer, we need only invoke the old adage “you get what you pay for.” Right now, we are paying for time, not learning. In order to change that, we have to address underlying problems with the basic currency of higher education: the credit hour. This way of measuring student learning is putting our nation’s workforce and future prosperity at risk. That’s because when the credit hour was developed at the turn of the 20th century, it was never meant to measure student learning.

The curious birth of the credit hour

American secondary schools expanded dramatically around the turn of the 20th century, swelling the ranks of high-school graduates. But the extreme variation in high-school practice left many college admissions officers unsure as to what a high-school diploma meant. The National Education Association endorsed a “standard unit” of time students spent on a subject as an easy-to-compare measure. But the idea didn’t stick until later, when Andrew Carnegie set out to fix a problem that had nothing to do with high school: the lack of pensions for college professors.

As a trustee of Cornell University, Carnegie was troubled by poor faculty compensation. Professors made too little to prepare for retirement, leaving many to work far longer than was productive for them or their students. Carnegie decided to create a free pension system for professors, administered by the nonprofit Carnegie Foundation for the Advancement of Teaching. Not surprisingly, colleges were eager to participate. The foundation leveraged this excitement to promote another one of its goals—high-school reform—by requiring participating pension colleges to use the “standard unit” for college admission purposes. Colleges had nothing to lose and free pensions to gain, so the time-based standard unit, which became known as the Carnegie Unit, became the de facto standard for determining high-school graduation and college admissions requirements.

Carnegie’s pension system also spurred higher education to convert its own course offerings into time-based units, which were used to determine the faculty workload threshold for qualifying for the pension program. Using the Carnegie Unit as a model, the foundation deemed faculty members who taught 12 credit units, with each unit equal to one hour of faculty-student contact time per week over a 15-week semester, eligible for full-time pension benefits.
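
A back-of-the-envelope calculation, using only the figures above, shows what that threshold amounted to in classroom time:

\[
12 \text{ units} \times 1 \text{ contact hour per week per unit} = 12 \text{ contact hours per week},
\]
\[
12 \text{ hours per week} \times 15 \text{ weeks} = 180 \text{ contact hours per semester}.
\]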

Soon, the credit hour would become the fundamental building block of college courses and degree programs. The time-based unit, however, was never intended to measure student learning. The Carnegie Foundation made this quite clear when discussing its “unit” in its 1906 annual report, which stated explicitly that in the counting, the fundamental criterion was the amount of time spent on a subject, not the results attained.

But colleges did not heed this caveat, and it’s easy to understand why. The standardized nature of credit hours makes them convenient for a number of critical administrative functions, including determining state and federal funding, setting faculty workloads, scheduling, recording coursework, and determining whether students are attending college full time. The problem is that over the years, the credit hour has also come to serve as a proxy for learning itself. Most importantly, college degrees came to represent the accumulation of credit hours, typically 120 to earn a bachelor’s degree.

More time does not equal more learning

College degrees are still largely awarded based on time served rather than learning achieved, despite recent research suggesting that shocking numbers of college students graduate having learned very little. The 2011 study Academically Adrift, by sociologists Richard Arum and Josipa Roksa, found that 45% of students completing the first two years of college and 36% completing four years of college showed no statistically significant improvement over time on a test of critical thinking, complex reasoning, and communication skills. A U.S. government study found that the majority of college graduates could not do basic tasks such as summarize opposing newspaper editorials or compare credit-card offers with varying interest rates.

Perhaps time could still be a meaningful measure if students put in the hours the credit hour assumes: two hours of work outside of class for every hour in class. The reality is quite different. In 1961, two-thirds of students spent at least 20 hours a week studying outside of class; by 2003, the share had dropped to 20%. In theory, colleges supplement the credit-hour count of how much time students have spent in and outside of class with an objective measure of how much they have learned: grades. But it is hard to reconcile today’s grades with the research suggesting that poor learning outcomes are widespread. Whereas 15% of undergraduate course grades were A’s in 1961, today almost half are A’s. Nearly two-thirds of provosts and chief academic officers think grade inflation is a serious problem. Either college graduates have become much smarter over time—a possibility contradicted by all available research—or grades have lost their power to meaningfully differentiate and reward student learning.

Given these sobering findings, it is not surprising that employers are not particularly impressed with recent college graduates. Only one-third of employers say that college graduates are prepared to succeed in entry-level positions at their companies, and only about one-quarter say that colleges and universities are doing a good job of preparing students for the challenges of today’s global economy. There is a curious disconnect between the widely held belief that American universities are great and the growing recognition that their graduates are not.

When an hour isn’t an hour

Perhaps the strongest evidence of the credit hour’s inadequacy in measuring learning can be found in the policies and choices of colleges themselves. If credit hours truly reflected a standardized unit of learning, they would be fully transferable across institutions. After all, an hour in Massachusetts is still an hour in Mississippi. But colleges routinely reject credits earned at other colleges, underscoring their belief that credit hours are not a reliable measure of how much students have learned.

Many students, however, believe that the credit hour is a standardized currency and assume that their credits will transfer from one school to the next. This is an unfortunate and costly assumption. Take the case of Louisiana community college students. Until recently, students with an associate degree typically lost between 21 and 24 credits when transferring to a four-year state school. That’s a year of time and money lost. Given that nearly 60% of students in the United States attend two or more colleges, the nontransfer of credits has huge individual, state, and national implications.

Yet millions of credits are awarded that lead to degrees where very little, if any, learning is demonstrated. It is no wonder that the federal government has recently begun to weigh in on the credit hour. Because the cornerstone of federal financial aid, the credit hour, doesn’t represent learning in a consistently meaningful way, it is hard for the government to ensure that taxpayer dollars are well spent and that students are getting degrees of value. The problem has been exacerbated by two intertwined trends: the steady increase of online education and the growth of the for-profit higher education industry.

Seat time becomes more difficult to measure when students aren’t in actual seats. From 2002 to 2010, the percentage of students taking at least one online class rose from under 10% to 32%. Online education fueled the growth of for-profit colleges most dramatically; the number of students enrolled in for-profit programs increased more than 300% between 2000 and 2010. Federal policy helped drive this boom by removing a requirement that at least one-half of an institution’s students be enrolled in face-to-face courses for the institution to be eligible for financial aid.

One of the primary appeals of online classes is the flexibility they provide to students juggling work and family responsibilities. Online classes are often asynchronous, meaning that students don’t all gather in front of their monitors at the same time each week. Nor do students need to spend the same amount of time in front of their monitors; in many cases, students work at their own pace, going quickly through concepts they have mastered and lingering on those they have not. Although a boon to working students, online courses fit awkwardly with the seat-time basis of the classic credit hour and have become increasingly problematic for education regulators, particularly for the federal government.

Many online and for-profit colleges, as well as colleges of all kinds, are heavily financed by federal student financial aid dollars. In 2012, the U.S. Department of Education gave out more than $187 billion in grants, loans, and other forms of student financial aid, an increase of more than $100 billion in annual aid in just the past 10 years. And the building block of this aid is the credit hour.

Until recently, credit-hour determination was left entirely up to colleges and their peer accreditors. Institutions assigned a number of credit hours to a course and accreditors reviewed the process of determining credits. If the accreditors signed off, the department would open its financial aid coffers to students at the institution.

This began to change in 2009, when the department’s inspector general found inadequate accreditor oversight of the credit-hour assignment process. Although colleges typically offer three credits for a 15-week course, the inspector general highlighted an institution that granted nine credits for a 10-week course. When the institution’s accreditor flagged what seemed to be an excessive number of credits, the institution simply broke up the course into two five-week, 4.5-credit courses. The accreditor called the policy egregious, but approved the institution anyway.
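
Using only the figures above, a rough per-week comparison shows why the accreditor balked and why splitting the course changed nothing:

\[
\text{standard course: } \frac{3 \text{ credits}}{15 \text{ weeks}} = 0.2 \text{ credits per week}; \qquad
\text{flagged course: } \frac{9 \text{ credits}}{10 \text{ weeks}} = 0.9 \text{ credits per week};
\]
\[
\text{after the split: } \frac{4.5 \text{ credits}}{5 \text{ weeks}} = 0.9 \text{ credits per week}.
\]

The restructured courses awarded credit at the same rate as before, roughly four and a half times the conventional pace; only the packaging changed.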

In response to the inspector general’s report, and to a growing concern over poor quality controls for federal financial aid, the Department of Education decided there was a need for a consistent, standard definition of a credit hour. This would be tricky: Define the credit hour too tightly, and risk reifying seat time and stifling innovation; define it too loosely, and students and taxpayers could be taken for a ride.

After much discussion and controversy, the department released its final regulation in 2010. The definition used the time-based measure devised nearly a century earlier to determine eligibility for Carnegie’s pension plan. But it also provided alternatives based on evidence of student work or student learning. Unfortunately, the alternative parts of the definition were largely overlooked, in part because the credit-hour definition was just one piece of a series of regulations designed to reduce fraud and abuse in the financial aid program. Thus, many in the industry still believe that their safest and easiest bet is to do what they have always done: use time, rather than learning, to determine credits.

The 15-week, one-hour-in-class-and-two-hours-out definition of a college course is not just easy to measure; it is a long-established practice and convention. The credit hour may be an illusion—studies suggest that typical students work nothing close to two hours out of class for every hour in—but it is an illusion that everyone understands and agrees to believe. This is in stark contrast to agreements about learning outcomes. Although colleges and their accreditors claim that learning outcomes are an integral part of an institution’s DNA, the research findings on poor learning outcomes and rampant grade inflation, combined with the difficulty of credit transfer, tell a different story.

Learning from others

Fortunately, some institutions have long used learning, rather than seat time, to award credits and degrees, and they have been joined by more recent efforts to measure learning directly. In the late 1960s and early 1970s, the Carnegie Foundation emerged again as a central player in new approaches to higher education. As adults supported by the GI Bill and more women entered or returned to school, it became clear that existing time- and place-dependent colleges were often ill-suited to serving these learners. Carnegie produced a series of reports emphasizing that adults were not simply older 18-year-olds; they had skills, knowledge, and educational needs that traditional students did not. Institutions needed a different approach: one that started with recognizing, measuring, and awarding credit for the high-level knowledge and skills adults had acquired through life and work experience.

Several new programs and institutions were created in the early 1970s to address the needs of adult learners. Ewald Nyquist, New York State’s commissioner of education and president of the University of the State of New York, proposed a Regents external degree program to give those unable to attend traditional college courses the opportunity to earn a degree. The program used exams and the validation of credits earned at other institutions to help students earn their degrees more quickly and inexpensively. Learning outcomes and degree requirements were made clear, and students could demonstrate what they already knew and then spend their time learning what they did not. In 1972, the program’s first associate degrees were awarded.

The program soon became a college and eventually became Excelsior College, whose motto is “What you know is more important than where or how you learned it.” Over the years, Excelsior has broadened the ways in which students can earn credits and degrees, adding demonstration of prior learning through a portfolio of projects and in-person or online classes. In early 2012, it announced a modern and inexpensive twist to its competency-based programs. For less than $10,000, students can earn a bachelor’s degree by using free online courses and materials and demonstrating their mastery of the subjects on exams designed by experts from across the country. Excelsior has the largest nursing program in the country, and its graduates do as well as those from traditional time-based programs on national licensure exams.

Exams are commonplace in many industries: Lawyers must pass the bar before being allowed to practice, doctors must pass their boards, and many professions have specific licensure exams that certify that individuals have the minimum competencies necessary to be credentialed in a given field (for example, Cisco information technology certifications). In higher education, however, learning standards and assessments are largely devolved to the level of the individual course. Individual professors often set their own standards, deliver instruction, and then measure their students against those standards. And although grades may be high, evidence suggests learning is scant. This is not to suggest that higher education can and should be measured by one big test. But it needs to do a far better job of identifying and objectively measuring what students are expected to do and can actually do.

Exam-based credits already exist in higher education, albeit in a limited scope; they just happen to be geared toward high-achieving, traditional students. In 2012, high-school students took 3.7 million Advanced Placement (AP) exams in the hope of testing out of courses whose material they had already mastered. Although students may have difficulty transferring credits from one college to another, they are likely to face less resistance in receiving credit for AP courses, because institutions know what AP scores mean and are more likely to trust them. These credits are trusted because they are based on learning, not time.

The financial aid hurdle

Despite these innovations, Excelsior and the handful of similar institutions that started in the 1970s remain relatively unknown commodities. This is in large part because students who enroll in competency-based programs typically have not had a key benefit available to students at most other accredited institutions and programs: access to federal financial aid. Although students in Excelsior’s online classes are eligible, students in its competency-based exam programs are not. According to the federal government, these programs are considered independent study programs because they lack regular faculty-student interaction. This concept has been at the heart of many federal aid policies, largely to protect students and taxpayers from unscrupulous diploma-mill operators. If we can’t measure the time that distance-education students spend in class, the thinking goes, at least we can measure the time they interact with faculty.

In the 1990s, a new institution, Western Governors University (WGU), found a way to overcome the financial aid hurdle. WGU was started by the Western Governors Association, a nonpartisan group of governors from 19 states who were grappling with how to prepare their residents to meet the states’ workforce needs. Residents scattered across sparsely populated stretches of the West, as well as those in rapidly growing urban areas in states such as Nevada and Arizona, needed much greater access to higher education. Creating hundreds of new brick-and-mortar institutions was not financially feasible, nor was expecting working adults to leave their jobs and families to attend institutions hundreds of miles away. The answer was a fully online institution.

But access alone was not enough. The governors heard employer complaints about the quality of college graduates and wanted to be sure students learned what employers needed. The key was competency. Groups of faculty, scholars, and industry experts would define the competencies that students would need to demonstrate for each degree program. Graders unconnected to the students would then evaluate the competencies. This approach not only provided a consistent benchmark for the quality of the degree, it also allowed students to move through the material at their own pace. New students are assessed for competencies they already have, and a learning plan is created to help them master the rest. Students pay a flat rate of less than $3,000 for six months, during which time they can move through as many competencies as they are able. The average graduate earns a bachelor’s degree in 30 months and pays a total of about $14,000. Employers are pleased with WGU graduates: According to a survey conducted by Harris Interactive, 98% rate the graduates as equal to or better than those of other universities, and 42% rate them as better.
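
The degree cost cited above follows from the flat-rate pricing. As a rough check, treating $3,000 as a ceiling on the term rate:

\[
\frac{30 \text{ months}}{6 \text{ months per term}} = 5 \text{ terms}; \qquad
5 \text{ terms} \times \$3{,}000 \text{ per term} = \$15{,}000 \text{ at most}.
\]

An average graduate paying somewhat less than that ceiling each term lands near the reported $14,000 total, and students who master competencies faster finish in fewer terms and pay less.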

Although WGU was originally built to serve students in the western states, today it serves students across the country. States are contracting to create state-branded versions of WGU, and enrollment is growing by 35% a year. This growth is made possible largely by the fact that students at WGU are eligible for federal financial aid.

It may be surprising to learn that WGU uses credit hours to receive federal aid. It wasn’t supposed to. Given the problems that seat time would pose to WGU’s competency-based model, Congress created an alternative way for institutions to receive financial aid, one that used the “direct assessment” of student learning. But WGU never used this authority, choosing instead to work creatively within the confines of the credit hour. Students are required to master 120 competencies, not coincidentally the standard number of credit hours required for a bachelor’s degree. WGU has regular faculty-student interaction, but its faculty members don’t teach. They function as mentors, helping students access the instructional resources they need to learn on their own. WGU’s creative use of the credit-hour requirement and faculty-student interaction has helped it access federal dollars critical to its growth.

Promoting new education models

Although Excelsior and WGU have broken away from seat time, the vast majority of colleges have not. Indeed, current government policies, and the misperceptions of these policies, have made it difficult for them to do so. If the United States is to reclaim its position as the most-educated nation in the world, then federal policy needs to shift from paying for time to paying for learning.

Many of the tools needed to make this shift are available to federal policymakers right now. Surprisingly, the first tool the government should use to help move away from time is its own recent definition of the credit hour. Many institutions and accreditors either don’t recognize the flexibilities in the new definition, or they don’t know how to use them. The government must be much more active in encouraging competency-based education and highlighting the competency models that use or could use the credit hour to receive financial aid.

The second tool is the direct assessment provision created for WGU. WGU hasn’t used it, nor has anyone else. That may soon change. In October 2012, Southern New Hampshire University, which is creating an entirely competency-based associate degree priced at $5,000, became the first institution to apply to use this provision. Freed from the history and practice of the credit hour, direct assessment could help institutions think more creatively about measuring student learning.

Although government can and should help push the boundaries of what is possible, it will not change the fact that measuring time is easy and measuring learning is not. This poses a real danger to innovation, because policymakers may conclude that there are too many unknowns to safely broaden access to financial aid.

Fortunately, the third tool is the authority to test policies with small, voluntary groups of institutions. The government could ask a series of questions and try to answer them in controlled experiments. Should financial aid pay to assess learning that happened outside of the classroom? For learning achieved on the job? For learning achieved in massive open online courses (MOOCs)? How much should be paid to assess this learning? What should be the proof of student learning? Successful experiments could yield answers that would pave the way for wide-scale adoption.

These three tools offer a tremendous opportunity to move away from seat time. But a high bar must be set lest we recreate the grade inflation and weak academic standards of the existing time-based system or open the floodgates, sending billions of dollars in federal aid to unscrupulous operators. Demonstrated learning outcomes are the key to this endeavor. The government should provide guidelines that are broad enough to support innovation yet stringent enough to prevent abuse. At a minimum, these guidelines should include transparent, externally validated learning outcomes.

But although these three policy tools could be extremely valuable in accelerating the completion of meaningful, learning-based degrees, they have limits. No matter what eventually might be covered by these tools, they apply only to accredited institutions. This means that noninstitutional providers of learning, no matter how good the outcomes, will remain ineligible. A biotech company could create a high-quality, work-based training program whose “graduates” have learned more than most students with an associate degree in science, but unless this training is attached to an accredited institution, the learning outcomes won’t count towards a credential.

If we accept that college-level learning can occur outside of traditional institutions, then why shouldn’t we accept that college-level credit could be granted outside of traditional institutions? For now, the law is very clear on who can grant credit and who can receive federal financial aid: institutions of higher education only. Perhaps after a few rounds of experimentation with the credit hour, direct assessment, and experimental sites, policymakers will see value in awarding credit for learning, irrespective of how long it took, where it happened, or who provided it.

As higher education becomes increasingly necessary and expensive, measuring time, rather than learning, is a luxury that students, taxpayers, and the nation can no longer afford. Moving to transparent, competency-based education would shed light on what students are learning in the classroom. It would also help millions of students capture and validate learning from outside the classroom, meeting students where they are and taking them where they need to go. Students and taxpayers can no longer afford to pay for a time-based measurement designed to help professors qualify for pensions. Paying for what students learn and can do, rather than for how or where they spent their time, would go a long way toward providing students and the nation with the more affordable, better-quality degrees they so desperately need.
