Let me begin by saying how much I welcome the challenge of addressing the question "how accreditation might be different." The topic is in many ways a dream assignment and might easily be taken as license to say things that are truly outrageous. While I intend to take advantage of this opportunity to some extent, my remarks will also be tempered by some constraints. First and most important is the fact that accreditation already is different. For better or worse, the majority of accreditation agencies-both institutional and specialized-are right in the middle of re-thinking their approaches, and the degree to which this is happening is unprecedented. Second, I must confess to a pre-determined point of view about what some alternatives might look like-shaped largely through my engagement in a Pew-funded effort to help stimulate new "prototypes" for quality assurance. Clear recognition that change is being acted out by most of you already, together with my own commitment to an important but necessarily limited set of concrete alternatives, will likely temper the degree to which I can propose anything really radical this morning-or at least anything that you probably haven't already considered.
Given these caveats, I'd like to concentrate on two main topics. The first, prerequisite for any realistic conversation about alternatives, is why accreditation must be different. Addressing this question requires us to examine systematically some of the principal imperatives that are driving change in higher education itself and, as a necessary consequence, in the processes we establish to regulate and govern the enterprise. Not surprisingly, I believe that technology and distance education provide significant instances of these underlying environmental pressures. But they do not comprise their essence. The second topic consists of the specific alternatives to the way we currently do accreditation's business. And here, I think, we need to squarely recognize the fact that we have evolved a pretty standard set of practices that don't vary much from agency to agency, while the rest of the world employs approaches that are quite diverse in character.
A number of important drivers of change are, I believe, forcing us to re-think what we are doing. To introduce them, I'd like to note some recent personal experiences in working with a couple of regional accreditors that vividly illustrate why we cannot continue operating as we have been. In both cases, the indelible impression I took away with me was that staff had a virtually unmanageable workload. Colleges and universities, together with their associated aspects and appendages, are becoming increasingly complex. And with this growing complexity, the need to look at more and more things in a review process is growing exponentially. Almost every accrediting staff with whom I speak confirms this phenomenon. To me, this situation echoes growing faculty complaints about not having enough hours in the day to cover the escalating demands of teaching, research, and service. On the one hand, experience suggests that situations like these will solve themselves, if only because they cannot continue. On the other, they make it imperative for us to engage in some fundamental re-examination of the continuing relevance of what we do, lest we be saddled with "solutions" that we do not like.
At bottom, I think, three major trends in higher education's environment are forcing us toward serious stock-taking. The first and most fundamental is a decisive change in the nature of teaching and learning itself which is affecting both individual college classrooms and how institutions "organize themselves" for learning. A second somewhat associated trend is the manner in which postsecondary teaching and learning is becoming "de-institutionalized"-becoming both the product of multiple institutions in the case of a given student and increasingly the province of providers beyond the academy. The third and final force is the pressure for public and "customer" engagement in the process of quality assurance, both to ensure that relevant information about institutional performance is collected and to address wider concerns about lack of accountability in higher education. All three of these trends, of course, have been with us for a while. Together and separately, though, their impacts have become impossible to ignore any longer.
1. The Revolution in Teaching and Learning
Although it poses significant challenges for review, the first trend is a very positive development. In fact, we appear to be right in the middle of a major renaissance of interest in the nature of collegiate learning itself and with it, in the things that institutions can do to occasion and create collegiate learning. On the matter of "deep learning" itself, more and more research suggests that current modes of instruction that emphasize information transmission are relatively ineffective. In parallel, calls are increasing to shift the axis of what colleges do from "delivering content" to providing students with multiple and diverse opportunities to actively engage in knowledge-construction and skills building on their own.
Many of the specifics of what some observers have called this "paradigm shift from teaching to learning" are already familiar. Learner-centered and self-paced instructional designs in which individual students proceed to master particular bodies of material independently and at their own pace are among the most prominent. This approach, which the Australians usefully term "resource-based learning" (and is probably most prominently practiced by the British Open University), does not always entail "distance" delivery; its most visible growth area right now, in fact, is as an adjunct to regular college classes. Closely related are "active learning" approaches which radically re-configure how faculty members interact with students. Laboratory or "studio" classes in which students engage directly in complex class-based research exercises and problem-solving scenarios under the guidance of faculty mentors are perhaps the most compelling examples. Both are often combined with collaborative learning approaches in which students work in teams to address posed problems or, at the very least, are organized into intentional "learning communities" taking several classes in common and engaging in mutual support. All of these techniques, research suggests, produce superior learning gains; more surprisingly, perhaps, all three can also be more efficient than traditional "instructional delivery" because of the ways they re-configure the use of expensive faculty time. Technology, of course, is a visible part of these developments and is often seen as a driver for change in itself. While partly the case, it is important to recall that every one of these methods-and their demonstrable effectiveness-predates the widespread application of technology. This leads me to the conclusion that the role of technology is not revolutionary in itself. Rather, it renders the use of alternative instructional approaches far more feasible and efficient than in the past and makes their consequences for institutions unavoidable.
This "revolution in teaching and learning" has at least three specific implications for accreditation. First, it makes it even more important to focus on outcomes in the accreditation process. Certainly this is not news and, to be fair, most accreditors have been moving in the direction of assessment for over a decade now. But their coverage of outcomes has largely been in addition to activities already in place instead of substituting assessment-based approaches for more traditional curricular inspection. As new approaches to instruction proliferate, though, the question of how good they are in comparison to known approaches will increasingly arise. We see this already in distance education, where the focus of most review attention is placed on ensuring that off-campus or media-delivered classes are "as good as" those delivered in more familiar ways. Because of radical differences in modality, a comparison of outcomes is really the only way to determine this. The very same differences in modality-once a variety of instructional approaches is in place-render learning outcomes the only viable "coin of the realm" for looking at quality in the first place. Instructional time-our traditional way of counting and judging such things-simply becomes irrelevant.
Beyond outcomes, a second important implication for accreditation is increased attention on the degree to which institutions are organizationally aligned with the purpose of learning. Current accreditation practices attempt to comprehensively examine all aspects of institutional or program functioning by looking at such things as the overall adequacy of total resources or the quality of curricular and program design. Focusing instead on how the institution or program is "organized for learning" does not deny the importance of resources or design. But it does suggest that far greater attention be paid to how disparate aspects of the institution or program actually fit together to achieve intended learning goals. Most institutions are notoriously scattered in this respect with important functions like classroom instruction, library and information resources, student affairs, and the campus environment functioning independently. Current review processes, in parallel, tend to look at the "quality" of each of these components independently-usually through an implicit division of labor among review-team members. Focusing instead on the relationships among these various functions in the light of their intended and actual contributions to student learning might be both more revealing and more efficient.
A final implication of this line of reasoning is the need to examine the institution's own capacity for "learning." Indeed, the drive toward assessment as an emphasis in accreditation in recent years is as much about building an institutional capacity for self-examination as it is about assuring the quality of learning outcomes in themselves. An important implication here-and one that might also appropriately narrow the scope of examination-might be to concentrate most of a review on an institution's internal academic management information infrastructure and how information is actually used to guide institutional decisions.
2. The "De-Institutionalization" of Learning
As learning resources and processes become increasingly student-centered and consequently time-and-place independent, though, reviews of single institutions operating in isolation from one another become increasingly problematic. A first problem here is the fact that a majority of students no longer attend a single institution. According to figures recently compiled by the Department of Education for the graduating high-school class of 1988, for example, some 54% of those ultimately attaining a baccalaureate degree had attended more than one institution and almost a fifth had attended more than two. The notion of "transfer" has also become increasingly volatile, with most large community college districts reporting as many students coming in from neighboring four-year institutions as moving to such institutions as part of a "normal" pattern of progression. Technology and distance delivery, of course, amplify this condition because students can access multiple providers simultaneously. Under such conditions, attributing the "cause" of any particular learning outcome becomes extremely difficult because most of them are joint products of multiple systems of instruction.
More than a technical difficulty, this phenomenon raises the fundamental question of what to accredit. Focusing directly on the integrity of the credential granted by any given institution or program-wherever the learning came from-through a rigorous examination of outcomes is certainly part of the solution. But there remain plenty of good reasons to still look at institutions and programs. For one thing, these need to be examined not only in themselves but also as part of a larger system. Following Robert Zemsky's recent description of the majority of higher education institutions as "convenience stores," one would probably want to look at such outlets in terms of both "truth in advertising" and the basic "nutritional content" of the items that they stock-even if most never serve their customers a "whole meal."
As this analogy suggests, "de-institutionalization" has some easily-describable implications for the practice of accreditation. First, it suggests a need for greater cooperation among quality-assurance agencies because all may have an interest in a particular institution. Certainly the emergence of multi-campus "national" institutions like the University of Phoenix (and even more strikingly, the Western Governors University which has already occasioned an unprecedented example of inter-agency cooperation in the form of the Inter-Regional Accreditation Committee) demonstrates vividly how the need for such cooperation can arise and how it can be handled. Parallel instances of interchange between regional and specialized accreditors are equally compelling when program components are "brokered" among several institutions-a situation that is also beginning to happen.
A second implication of "de-institutionalization" is the need for accreditation to pay special attention to how the "hand-off" of students from one institution to another is managed. The ability to transfer credits, of course, has always been of concern and was part of the original business of accreditation. But growing volumes of activity in this realm-together with a less and less homogeneous range of institutional players-have rendered it a complicated and controversial matter these days. Particularly important here are transfers among purely vocational and more traditional institutions where the notion of "equivalent curricular content" is problematic. Such situations again force greater attention to outcomes. As repeated studies have demonstrated, vocational and academic curricula may differ radically in structure and pedagogy yet yield similar levels of achievement in such cross-cutting skills as communications, quantitative reasoning, and problem-solving. Equally important is the need to ensure proper student advising in multi-institutional enrollment situations. Indeed, if simultaneous or sequential participation becomes the norm for postsecondary enrollment-as it already is for so many students-attending to this function may be among the most important contributions to overall quality assurance for the nation's higher education system that accreditors can make.
3. The Need for Public Engagement
The third challenge, of course, is no stranger to accreditation. Calls to "open up" the process of peer review have been heard frequently since the mid-1980s when public accountability for higher education first became a major national preoccupation, and concrete proposals to make review results more public were a prominent part of the agenda advanced by the National Policy Board for Institutional Accreditation (NPB) in the early 1990s. Since that time, the accreditation community has reacted both positively and effectively to these admonitions. In fact, much of the considerable adaptation and experimentation now occurring in the accreditation community can be ascribed directly to the need to recapture public credibility for the process. Despite this progress, there is a long way to go. In higher education more generally-as in virtually every professional sector in society (including such stalwarts as health care and the law)-both the definition and oversight of "quality" are no longer the exclusive province of "experts." For accreditation, this development means at least two things.
The first is a growing demand for direct involvement in the process on the part of multiple stakeholders including employers, students, and public representatives. At minimum, "involvement" means participation in determining what counts as "quality," and therefore what specifically ought to be examined in a review. As any examination of increasingly-popular state-level performance indicator systems reveals, different stakeholder perspectives yield quite distinctive objects of attention. More significant, perhaps, is growing public interest in using the information derived from quality assurance processes to help inform consumer choice. Rightly or wrongly, many policymakers believe that widening choice through the "marketplace" is a better route to quality improvement in public higher education than traditional financial and regulatory policies. Accreditation's particular role in this new policy arena has yet to be determined. But it surely should be a matter of serious consideration and deliberate choice rather than-as the current U.S. News and Money Magazine examples epitomize-one of simply allowing others to fulfill this function by default.
At the other end of the spectrum, there may be a direct role for stakeholders in the conduct of reviews. This is already, in part, a feature of some specialized accreditation processes-especially those that have a clearly-identifiable employment community. In regional accreditation, though, the practice of including "outsiders" on review teams remains quite rare-even for institutions with strongly-established ties to a regional economy or an employment community. Several experiments are currently under way to explore such involvement (including one Pew-supported consortium of urban public institutions), and the practice is increasingly common in other nations. Indeed, the multi-national program review recently undertaken in the field of Electrical Engineering under the auspices of the European Union included both employer and student members of review teams.
A second implication of public engagement is broad and detailed access to the results of review. Again, at least in regional accreditation, this is an old debate. Among the many reform proposals originally advanced by the NPB, for example, were several that addressed this issue. One is simply the extent of disclosure-in particular, the fact that the general public typically knows only the "yes/no" result of any accreditation process and learns nothing about the details of strength and weakness that lie behind a particular grant of recognition. Strongly related, of course, was the NPB's original call for three "tiers" of recognition. Although strongly rejected by the academic community, the notion of some kind of rating retains strong appeal in public circles. Here again, it is useful to point out that many European and Commonwealth systems-despite considerable structural differences in the specific review processes that they employ-do "grade" institutions in terms of three or more categories. Assignments of institutions to categories, moreover, are typically made on multiple dimensions of institutional or program performance and are generally not combined to yield a single "bottom line." To policymakers, information on dimensional performance of this kind, of course, is eminently suited for driving a "market for quality" through consumer choice. At the same time, it provides immediate public feedback to institutions and programs about the things that they can be proud of and what they need to fix.
Reflection on these three major drivers for change in higher education leads directly to some alternatives to our current ways of accrediting both institutions and programs. To begin this discussion, it is important to recognize that we have established some pretty standard ways of doing things despite considerable diversity across agencies. This established paradigm has two principal hallmarks: a periodic standards-based self-study completed by the entity under review followed by a relatively open-ended, multi-day site visit by a team composed of peer-practitioners whose task is then to prepare a report that documents strengths and weaknesses and contains a recommendation about re-affirmation. Despite many changes in the content of accreditation standards, the fundamentals of this mode of operating have been relatively unaltered over the past half-century. Variations on it, in contrast, can best be described under three intersecting headings: what is reviewed, how a review is done, and who does it. In the baseline model, for instance, the answer to the first question is "everything." The familiar self-study/team-visit combination provides our current answer to the second question, while "academic peers" constitute the traditional way of dealing with the third. Each established "answer," though, is only one of many possibilities, most of which have already been proven elsewhere.
1. What is Reviewed.
A first design dimension for accreditation practice is the focus of inquiry itself-what, in particular, should be looked at when a given college or university is examined? In current U.S. practice, the answer to this question is embodied most visibly in the standards of each accrediting body. Ideally, these standards are intended to be "comprehensive." That is, questions are posed and reviewers are expected to make judgements about virtually every aspect of a given institution's assets and operations. Critics of this approach, and there have been many, contend that it inevitably forces excessive attention to the details of resources and processes- many of which lie outside the real core of the institution's educational mission. Recently, of course, accreditors have responded by enacting new standards on educational outcomes or institutional effectiveness. But as noted earlier, this "new look" has for the most part been implemented in addition to established standards organized on traditional "functional" lines, including "mission," "governance," "faculty," "instructional programs," "student services," "library and physical resources," and "finance."
This "comprehensive" approach to peer review contrasts strongly with similar practices in other countries and, indeed, in other sectors in the U.S. (e.g. health care organizations). In these cases, quality assurance procedures tend to be considerably more specific in what they examine-concentrating particularly on key processes or attributes deemed "essential" or "indicative" of institutional or program quality. Recommendations for reforming U.S. practice along similar lines have also been made in recent years-most notably three years ago in the debates accompanying the work of the NPB. All, however, can be usefully grouped under three main headings:
2. The Conduct of Reviews.
A second design dimension for peer review centers on the nature of the evidence considered and exactly how the review process is conducted. Current U.S. practice in this regard, as noted, is remarkably consistent. One component of the typical body of evidence is a comprehensive self-study produced every five to ten years by the institution or program under review. This generally consists of an extensive narrative, structured around the standards and intended to be both self-analytical and supportive of the institution's case for quality. Following submission and review of the self-study, a second typical element is a site-visit, the bulk of which consists of face-to-face interviews with various campus constituencies. Both components have recently been the object of increasing campus-level discontent, largely because they are seen by many institutions as both costly and unable to "add value" to local institutional planning processes. Partly as a result, there has been considerable flux of late in the manner in which these components are configured, at least among regional accreditors. All now allow substantial modifications to the traditional formula-principally by allowing institutions the latitude to "focus" their self-studies on specific topics of interest under defined conditions so long as they simultaneously can demonstrate basic compliance with Commission-established standards.
Few of these variations, however, substantially alter the basic types of evidence that are examined and how the examination takes place. Again, experience elsewhere suggests some quite different approaches to assembling and reviewing evidence that might be considered. These include:
3. Who Participates in a Review?
The traditional answer here, of course, is members of the academic community who can pass as "peers" of the institution or program under review. But if quality assurance based on the review of peers is to have real meaning, the institutions and programs participating must to some extent be of "like type." At the very least, minimal commonality allows appropriate comparisons across institutions to be made-presumably by individual peer professionals who are deeply familiar with how particular kinds of institutions and programs function. At the same time, it allows meaningful review standards to be developed-broad enough to be widely applied, but sufficiently specific to support real judgements about institutional condition and performance. "Like type," however, can be defined in many ways. At the same time, the issue of peer definition itself might be reconsidered to include different kinds of stakeholders in addition to academic insiders. Taken together, the following variations on who participates in reviews seem worth considering:
Other ways of establishing the appropriate dimensions, conduct and participants in a review can easily be imagined and the alternatives I have discussed are not mutually exclusive. But they do make more explicit the kinds of choices that must be considered in order to render accreditation systems more intentional and capable of meeting the demands created by new environmental pressures. Certainly, making such choices is difficult. The established approach still seems to work in most cases and we already know how to do it. Under such circumstances, it is easy to dismiss as fanciful things that haven't been tried. But we must remind ourselves again that workable prototypes for all these alternatives are not just theoretical. Elements of each are in place now, either in other nations to provide quality assurance for higher education or in other sectors of societal activity here in the U.S. Because many of these practices appear to me admirably adapted to meeting emerging conditions, I look forward to actively exploring them with you in the coming years. For now and in conclusion, I can only applaud CHEA for helping to occasion and to shape this dialogue through meetings like this one.
Council For Higher Education Accreditation
One Dupont Circle NW, Suite 510
Washington DC, 20036-1135
Last Modified: December 8, 1998