

Examining a Brave New World:
How Accreditation Might Be Different

Peter T. Ewell
National Center for Higher Education Management Systems
(NCHEMS)


 

CONTENTS

A. A Shifting Context for Accreditation

1. The Revolution in Teaching and Learning

2. The "De-Institutionalization" of Learning

3. The Need for Public Engagement

 

B. What Some Alternatives Might Look Like

1. What is Reviewed

2. The Conduct of Reviews

3. Who Participates in a Review?

Let me begin by saying how much I welcome the challenge of addressing the question "how accreditation might be different." The topic is in many ways a dream assignment and might easily be taken as license to say things that are truly outrageous. While I intend to take advantage of this opportunity to some extent, my remarks will also be tempered by some constraints. First and most important is the fact that accreditation already is different. For better or worse, the majority of accreditation agencies, both institutional and specialized, are right in the middle of re-thinking their approaches, and the degree to which this is happening is unprecedented. Second, I must confess to a pre-determined point of view about what some alternatives might look like, shaped largely through my engagement in a Pew-funded effort to help stimulate new "prototypes" for quality assurance. Clear recognition that most of you are already acting out such change, together with my own commitment to an important but necessarily limited set of concrete alternatives, will likely temper the degree to which I can propose anything really radical this morning, or at least anything that you probably haven't already considered.

Given these caveats, I'd like to concentrate on two main topics. The first, prerequisite for any realistic conversation about alternatives, is why accreditation must be different. Addressing this question requires us to examine systematically some of the principal imperatives that are driving change in higher education itself and, as a necessary consequence, in the processes we establish to regulate and govern the enterprise. Not surprisingly, I believe that technology and distance education provide significant instances of these underlying environmental pressures. But they do not constitute their essence. The second topic consists of specific alternatives to the way we currently do accreditation's business. And here, I think, we need to recognize squarely that we have evolved a pretty standard set of practices that don't vary much from agency to agency, while the rest of the world employs approaches that are quite diverse in character.

 

A. A Shifting Context for Accreditation

A number of important drivers of change are, I believe, forcing us to re-think what we are doing. To introduce them, I'd like to note some recent personal experiences in working with a couple of regional accreditors that vividly illustrate why we cannot continue operating as we have been. In both cases, the indelible impression I took away with me was that staff had a virtually unmanageable workload. Colleges and universities, together with their associated aspects and appendages, are becoming increasingly complex. And with this growing complexity, the number of things that must be looked at in a review process is growing exponentially. Almost every accrediting staff with whom I speak confirms this phenomenon. To me, this situation echoes growing faculty complaints about not having enough hours in the day to cover the escalating demands of teaching, research, and service. On the one hand, experience suggests that situations like these will solve themselves, if only because they cannot continue. On the other, they make it imperative for us to engage in some fundamental re-examination of the continuing relevance of what we do, lest we be saddled with "solutions" that we do not like.

At bottom, I think, three major trends in higher education's environment are forcing us toward serious stock-taking. The first and most fundamental is a decisive change in the nature of teaching and learning itself, one that is affecting both individual college classrooms and how institutions "organize themselves" for learning. A second, somewhat associated, trend is the manner in which postsecondary teaching and learning is becoming "de-institutionalized": increasingly the product of multiple institutions in the case of any given student, and increasingly the province of providers beyond the academy. The third and final force is the pressure for public and "customer" engagement in the process of quality assurance, both to ensure that relevant information about institutional performance is collected and to address wider concerns about lack of accountability in higher education. All three of these trends, of course, have been with us for a while. Together and separately, though, their impacts have become impossible to ignore any longer.

 

1. The Revolution in Teaching and Learning

Although it poses significant challenges for review, the first trend is a very positive development. In fact, we appear to be right in the middle of a major renaissance of interest in the nature of collegiate learning itself and, with it, in the things that institutions can do to occasion and support such learning. On the matter of "deep learning" itself, a growing body of research suggests that current modes of instruction that emphasize information transmission are relatively ineffective. In parallel, calls are increasing to shift the axis of what colleges do from "delivering content" to providing students with multiple and diverse opportunities to engage actively in knowledge-construction and skill-building on their own.

Many of the specifics of what some observers have called this "paradigm shift from teaching to learning" are already familiar. Learner-centered and self-paced instructional designs, in which individual students proceed to master particular bodies of material independently and at their own pace, are among the most prominent. This approach, which the Australians usefully term "resource-based learning" (and which is probably most prominently practiced by the British Open University), does not always entail "distance" delivery; its most visible growth area right now, in fact, is as an adjunct to regular college classes. Closely related are "active learning" approaches, which radically re-configure how faculty members interact with students. Laboratory or "studio" classes, in which students engage directly in complex class-based research exercises and problem-solving scenarios under the guidance of faculty mentors, are perhaps the most compelling examples. Both are often combined with collaborative learning approaches in which students work in teams to address posed problems or, at the very least, are organized into intentional "learning communities" taking several classes in common and engaging in mutual support.

All of these techniques, research suggests, produce superior learning gains; more surprisingly, perhaps, all three can also be more efficient than traditional "instructional delivery" because of the ways they re-configure the use of expensive faculty time. Technology, of course, is a visible part of these developments and is often seen as a driver of change in itself. While this is partly the case, it is important to recall that every one of these methods, and its demonstrable effectiveness, predates the widespread application of technology. This leads me to the conclusion that the role of technology is not revolutionary in itself. Rather, technology renders the use of alternative instructional approaches far more feasible and efficient than in the past and makes their consequences for institutions unavoidable.

This "revolution in teaching and learning" has at least three specific implications for accreditation. First, it makes it even more important to focus on outcomes in the accreditation process. Certainly this is not news and, to be fair, most accreditors have been moving in the direction of assessment for over a decade now. But their coverage of outcomes has largely been in addition to activities already in place instead of substituting assessment-based approaches for more traditional curricular inspection. As new approaches to instruction proliferate, though, the question of how good they are in comparison to known approaches will increasingly arise. We see this already in distance education, where the focus of most review attention is placed on ensuring that off-campus or media-delivered classes are "as good as" those delivered in more familiar ways. Because of radical differences in modality, a comparison of outcomes is really the only way to determine this. The very same differences in modality-once a variety of instructional approaches is in place-render learning outcomes the only viable "coin of the realm" for looking at quality in the first place. Instructional time-our traditional way of counting and judging such things-simply becomes irrelevant.

Beyond outcomes, a second important implication for accreditation is increased attention to the degree to which institutions are organizationally aligned with the purpose of learning. Current accreditation practices attempt to comprehensively examine all aspects of institutional or program functioning by looking at such things as the overall adequacy of total resources or the quality of curricular and program design. Focusing instead on how the institution or program is "organized for learning" does not deny the importance of resources or design. But it does suggest that far greater attention be paid to how disparate aspects of the institution or program actually fit together to achieve intended learning goals. Most institutions are notoriously scattered in this respect, with important functions like classroom instruction, library and information resources, student affairs, and the campus environment operating independently. Current review processes, in parallel, tend to look at the "quality" of each of these components independently, usually through an implicit division of labor among review-team members. Focusing instead on the relationships among these various functions, in the light of their intended and actual contributions to student learning, might be both more revealing and more efficient.

A final implication of this line of reasoning is the need to examine the institution's own capacity for "learning." Indeed, the drive toward assessment as an emphasis in accreditation in recent years is as much about building an institutional capacity for self-examination as it is about assuring the quality of learning outcomes in themselves. An important implication here, and one that might also appropriately narrow the scope of examination, might be to concentrate most of a review on an institution's internal academic management information infrastructure and on how information is actually used to guide institutional decisions.

 

2. The "De-Institutionalization" of Learning

As learning resources and processes become increasingly student-centered, and consequently time- and place-independent, though, reviews of single institutions operating in isolation from one another become increasingly problematic. A first problem here is the fact that a majority of students no longer attend a single institution. According to figures recently compiled by the Department of Education for the graduating high-school class of 1988, for example, some 54% of those ultimately attaining a baccalaureate degree had attended more than one institution, and almost a fifth had attended more than two. The notion of "transfer" has also become increasingly volatile, with most large community college districts reporting as many students coming in from neighboring four-year institutions as moving on to such institutions as part of a "normal" pattern of progression. Technology and distance delivery, of course, amplify this condition because students can access multiple providers simultaneously. Under such conditions, attributing the "cause" of any particular learning outcome becomes extremely difficult, because most such outcomes are joint products of multiple systems of instruction.

More than a technical difficulty, this phenomenon raises the fundamental question of what to accredit. Focusing directly on the integrity of the credential granted by any given institution or program, wherever the learning came from, through a rigorous examination of outcomes is certainly part of the solution. But there remain plenty of good reasons to continue looking at institutions and programs. For one thing, these need to be examined not only in themselves but also as parts of a larger system. Following Robert Zemsky's recent description of the majority of higher education institutions as "convenience stores," one would probably want to look at such outlets in terms of both "truth in advertising" and the basic "nutritional content" of the items that they stock, even if most never serve their customers a "whole meal."

As this analogy suggests, "de-institutionalization" has some easily describable implications for the practice of accreditation. First, it suggests a need for greater cooperation among quality-assurance agencies, because all may have an interest in a particular institution. Certainly the emergence of multi-campus "national" institutions like the University of Phoenix (and, even more strikingly, the Western Governors University, which has already occasioned an unprecedented example of inter-agency cooperation in the form of the Inter-Regional Accreditation Committee) demonstrates vividly how the need for such cooperation can arise and how it can be handled. Parallel instances of interchange between regional and specialized accreditors are equally compelling when program components are "brokered" among several institutions, a situation that is also beginning to occur.

A second implication of "de-institutionalization" is the need for accreditation to pay special attention to how the "hand-off" of students from one institution to another is managed. The ability to transfer credits, of course, has always been of concern and was part of the original business of accreditation. But growing volumes of activity in this realm, together with a less and less homogeneous range of institutional players, have rendered it a complicated and controversial matter these days. Particularly important here are transfers between purely vocational and more traditional institutions, where the notion of "equivalent curricular content" is problematic. Such situations again force greater attention to outcomes. As repeated studies have demonstrated, vocational and academic curricula may differ radically in structure and pedagogy yet yield similar levels of achievement in such cross-cutting skills as communications, quantitative reasoning, and problem-solving. Equally important is the need to ensure proper student advising in multi-institutional enrollment situations. Indeed, if simultaneous or sequential participation becomes the norm for postsecondary enrollment, as it already is for so many students, attending to this function may be among the most important contributions that accreditors can make to overall quality assurance for the nation's higher education system.

 

3. The Need for Public Engagement

The third challenge, of course, is no stranger to accreditation. Calls to "open up" the process of peer review have been heard frequently since the mid-1980s, when public accountability for higher education first became a major national preoccupation, and concrete proposals to make review results more public were a prominent part of the agenda advanced by the National Policy Board for Institutional Accreditation (NPB) in the early 1990s. Since that time, the accreditation community has reacted both positively and effectively to these admonitions. In fact, much of the considerable adaptation and experimentation now occurring in the accreditation community can be ascribed directly to the need to recapture public credibility for the process. Despite this progress, there is a long way to go. In higher education more generally, as in virtually every professional sector in society (including such stalwarts as health care and the law), both the definition and oversight of "quality" are no longer the exclusive province of "experts." For accreditation, this development means at least two things.

The first is a growing demand for direct involvement in the process on the part of multiple stakeholders, including employers, students, and public representatives. At minimum, "involvement" means participation in determining what counts as "quality," and therefore what specifically ought to be examined in a review. As any examination of increasingly popular state-level performance indicator systems reveals, different stakeholder perspectives yield quite distinctive objects of attention. More significant, perhaps, is growing public interest in using the information derived from quality assurance processes to help inform consumer choice. Rightly or wrongly, many policymakers believe that widening choice through the "marketplace" is a better route to quality improvement in public higher education than traditional financial and regulatory policies. Accreditation's particular role in this new policy arena has yet to be determined. But it surely should be a matter of serious consideration and deliberate choice rather than, as the current U.S. News and Money Magazine examples epitomize, one of simply allowing others to fulfill this function by default.

At the other end of the spectrum, there may be a direct role for stakeholders in the conduct of reviews. This is already, in part, a feature of some specialized accreditation processes, especially those that have a clearly identifiable employment community. In regional accreditation, though, the practice of including "outsiders" on review teams remains quite rare, even for institutions with strongly established ties to a regional economy or an employment community. Several experiments are currently under way to explore such involvement (including one Pew-supported consortium of urban public institutions), and the practice is increasingly common in other nations. Indeed, the multi-national program review recently undertaken in the field of Electrical Engineering under the auspices of the European Union included both employers and students as members of review teams.

A second implication of public engagement is broad and detailed access to the results of review. Again, at least in regional accreditation, this is an old debate. Among the many reform proposals originally advanced by the NPB, for example, were several that addressed this issue. One concerned simply the extent of disclosure: in particular, the fact that the general public typically knows only the "yes/no" result of any accreditation process and learns nothing about the details of strength and weakness that lie behind a particular grant of recognition. Closely related, of course, was the NPB's original call for three "tiers" of recognition. Although strongly rejected by the academic community, the notion of some kind of rating retains considerable appeal in public circles. Here again, it is useful to point out that many European and Commonwealth systems, despite considerable structural differences in the specific review processes that they employ, do "grade" institutions in terms of three or more categories. Assignments of institutions to categories, moreover, are typically made on multiple dimensions of institutional or program performance and are generally not combined to yield a single "bottom line." To policymakers, dimensional performance information of this kind is eminently suited for driving a "market for quality" through consumer choice. At the same time, it provides immediate public feedback to institutions and programs about the things that they can be proud of and the things that they need to fix.

 

B. What Some Alternatives Might Look Like

Reflection on these three major drivers of change in higher education leads directly to some alternatives to our current ways of accrediting both institutions and programs. To begin this discussion, it is important to recognize that, despite considerable diversity across agencies, we have established some pretty standard ways of doing things. This established paradigm has two principal hallmarks: a periodic, standards-based self-study completed by the entity under review, followed by a relatively open-ended, multi-day site visit by a team of peer-practitioners whose task is then to prepare a report that documents strengths and weaknesses and contains a recommendation about re-affirmation. Despite many changes in the content of accreditation standards, the fundamentals of this mode of operating have remained relatively unaltered over the past half-century. Variations on it, in contrast, can best be described under three intersecting headings: what is reviewed, how a review is done, and who does it. In the baseline model, for instance, the answer to the first question is "everything." The familiar self-study/team-visit combination provides our current answer to the second question, while "academic peers" constitute the traditional answer to the third. Each established "answer," though, is only one of many possibilities, most of which have already been proven elsewhere.

 

1. What is Reviewed

A first design dimension for accreditation practice is the focus of inquiry itself: what, in particular, should be looked at when a given college or university is examined? In current U.S. practice, the answer to this question is embodied most visibly in the standards of each accrediting body. Ideally, these standards are intended to be "comprehensive." That is, questions are posed and reviewers are expected to make judgements about virtually every aspect of a given institution's assets and operations. Critics of this approach, and there have been many, contend that it inevitably forces excessive attention to the details of resources and processes, many of which lie outside the real core of the institution's educational mission. Recently, of course, accreditors have responded by enacting new standards on educational outcomes or institutional effectiveness. But as noted earlier, this "new look" has for the most part been implemented in addition to established standards organized on traditional "functional" lines, including "mission," "governance," "faculty," "instructional programs," "student services," "library and physical resources," and "finance."

This "comprehensive" approach to peer review contrasts strongly with similar practices in other countries and, indeed, in other sectors in the U.S. (e.g. health care organizations). In these cases, quality assurance procedures tend to be considerably more specific in what they examine- concentrating particularly on key processes or attributes deemed "essential" or "indicative" of institutional or program quality. Recommendations for reforming U.S. practice along similar lines have also been made in recent years-most notably three years ago in the debates accompanying the work of the NPB. All, however, can be usefully grouped under three main headings:

  • Examining the "integrity of the degree."
    This focus is consistent both with the original purposes of accreditation in the U.S. and with peer-based quality assurance processes conducted on a discipline-by-discipline basis under government sponsorship in such countries as the Netherlands and the U.K. To be credible, this approach first demands considerable attention to learning outcomes. In contrast to current accreditation requirements for assessment, moreover, it must examine such outcomes in the light of standards of attainment external to the institution. In some cases, these may be standards of achievement set by the wider academic community (as, for example, in the External Examiner system in the U.K.). In others, they might be standards set by principal client groups (professional bodies or employers, for example). A second implication of this focus is that reviews should concentrate specifically on the ways the institution or program is organized to foster the learning outcomes that it believes are important. While this to some extent demands attention to resources and processes, it is their configuration and use in specific support of what is intended, rather than their sheer extent or ascribed "quality," that is important.
  • Examining "core processes" of [undergraduate] teaching and learning.
    This focus also constrains the object of review to the institution's teaching/learning function, but from a slightly different perspective. In this case, what is under review is the actual manner in which the institution conceives of, designs, and delivers experiences intended to promote learning, principally through its curriculum and instructional activities. Rather than simply looking at overall curricular design, as is typical in current accreditation practice, a review organized along these lines would likely "audit" selected courses in considerable detail (including the processes used in their design and review) and would look at pedagogy directly through mechanisms like the peer review of teaching. Again, some measure of these activities does in fact occur in U.S. higher education through the mechanisms of regional accreditation and, to a somewhat greater degree, through professional accreditation. But far better models of focused activity can be found in European practice (for instance, in the reviews that used to be conducted in the U.K. under the auspices of the HEFCE or the CNAA) or in the kinds of "core process reviews" that are occurring with increasing frequency in business settings.
  • Examining academic quality assurance processes.
    This focus has perhaps the greatest rhetorical resemblance to current U.S. accreditation practice, an espoused purpose of which is to "validate" an institution's self-study. Really concentrating on this function, though, would entail a much more thorough examination of the specific processes by means of which colleges and universities define and monitor the quality of their academic processes and products. The adoption of "academic audits" of this kind has recently been advocated as an answer to the declining credibility of institutional accreditation, and the process has considerable appeal in the light of the enormous diversity that characterizes American higher education. Use of this alternative is also supported by a considerable body of experience elsewhere, including Europe, Australasia, and Hong Kong, as well as by more generic quality-assessment approaches like ISO 9000 or the annual Malcolm Baldrige National Quality Award. Unlike the prior two emphases, where most academic peer reviewers can be counted on to readily recognize the actual objects of interest, this one may require a good deal of additional training to turn academics into credible "auditors." Recent accreditor experience in conducting institutional reviews that include an explicit examination of "assessment" or "institutional effectiveness" sustains this point: lay teams are in most cases not sufficiently prepared to assess institutions on these dimensions. This is, of course, precisely the reason why both offshore "academic audits" and the domestic Baldrige process require considerable reviewer training to make them work.

 

2. The Conduct of Reviews

A second design dimension for peer review centers on the nature of the evidence considered and exactly how the review process is conducted. Current U.S. practice in this regard, as noted, is remarkably consistent. One component of the typical body of evidence is a comprehensive self-study produced every five to ten years by the institution or program under review. This generally consists of an extensive narrative, structured around the standards and intended to be both self-analytical and supportive of the institution's case for quality. Following submission and review of the self-study, a second typical element is a site visit, the bulk of which consists of face-to-face interviews with various campus constituencies. Both components have recently been the object of increasing campus-level discontent, largely because many institutions see them as both costly and unable to "add value" to local institutional planning processes. Partly as a result, there has been considerable flux of late in the manner in which these components are configured, at least among regional accreditors. All now allow substantial modifications to the traditional formula, principally by giving institutions the latitude to "focus" their self-studies on specific topics of interest under defined conditions, so long as they can simultaneously demonstrate basic compliance with Commission-established standards.

Few of these variations, however, substantially alter the basic types of evidence that are examined and how the examination takes place. Again, experience elsewhere suggests some quite different approaches to assembling and reviewing evidence that might be considered. These include:

  • Inspections or audits.
    Traditional accreditation visits are relatively non-directive with respect to how a given team conducts its work. Generally, however, agency standards are used as a rough guide, with team members breaking up into smaller groups in an attempt to "cover" the standards through interviews or document review. This approach contrasts strongly with that typical of an inspection or audit. In the Baldrige review process, for instance, team members are not only thoroughly trained but also employ detailed review guides based on clearly established protocols to help them in their work. Review processes for health care accreditation also use structured review protocols to examine identified key processes when a team arrives on site, often in considerable detail. Similarly, the "academic audits" conducted in other countries, noted earlier, rely on well-elaborated review criteria and guidelines. At present, the periodic program reviews and on-site inspections conducted by the USDOE to assure continuing eligibility for federal student aid programs constitute the only U.S. example of a protocol-based audit of this kind.
  • Documentary and statistical evidence.
    Traditional accreditation visits also tend to confine their attention to what can be learned through interview and direct observation. While institutions generally make a range of supporting documents available to back up assertions made in the self-study, these are rarely examined in detail. Nor are such documents typically sent to the team in advance for detailed inspection and analysis. Similarly, with the exception of budgetary and financial information, statistical data on institutional condition and performance is spotty, non-standard, and supplied at the institution's discretion. In contrast, direct examination and analysis of "authentic" documents and standard statistical datasets have always provided professional evaluators with powerful tools for examining an organization's condition and performance. As experience in social program evaluation shows, both can be organized in standard ways that facilitate review and analysis. Because such evidence consists of the natural products of ongoing operations, moreover, compiling it places a good deal less burden on the institution or program than preparing a self-study. These advantages have already occasioned several experiments with the use of "institutional portfolios" in accreditation, the most prominent probably being a WASC initiative with the University of California and a review process currently being piloted by AALE with support from Knight and Pew. With respect to statistical data on academic condition and performance, moreover, initial models already exist in the indicator systems for public colleges and universities that have been launched in a number of states. At the very least, and consistent with an initiative that CHEA is already pursuing, accreditors could begin to require institutions to report standard sets of descriptive statistics that have already been developed for federal (IPEDS) reporting and/or recommended for common accountability purposes by such bodies as the Joint Commission on Accountability Reporting (JCAR).
  • The "rhythm" of review.
    Typical accreditation visits occur infrequently and consist of a single multi-day review period. While the intensity of this process provides some advantages (which could probably be multiplied if teams employed protocols or explicit review strategies), significant changes in institutions or programs may occur unexamined within a given review cycle. Large-scale comprehensive reviews, as noted, also represent a major commitment of institutional time and resources. Both conditions might be addressed by restructuring the relationship between the institution and the review team to emphasize a more frequent, lower-intensity set of interactions. Again, some experiments consistent with this direction have already been conducted, the most prominent being a SACS approach that allows separate "compliance" and "deep engagement" reviews and several WASC experiments that involve multiple visits over time. Coupled with document review and the use of statistical evidence, moreover, changing the "rhythm" of review to allow teams to examine evidence in advance and to identify specific focused topics to examine on site at a later point might provide far more opportunities for teams to engage in active dialogue with the institutions they review, to the benefit, one hopes, of both.

 

3. Who Participates in a Review?

The traditional answer here, of course, is members of the academic community who can pass as "peers" of the institution or program under review. But if quality assurance based on the review of peers is to have real meaning, the institutions and programs participating must to some extent be of "like type." At the very least, minimal commonality allows appropriate comparisons across institutions to be made, presumably by individual peer professionals who are deeply familiar with how particular kinds of institutions and programs function. At the same time, it allows meaningful review standards to be developed: broad enough to be widely applied, but sufficiently specific to support real judgements about institutional condition and performance. "Like type," however, can be defined in many ways. Beyond this, the issue of peer definition itself might be reconsidered to include different kinds of stakeholders in addition to academic insiders. Taken together, the following variations on who participates in reviews seem worth considering:

  • New ways of defining peers.
    Current regional accreditation practice has established groupings of institutions on the basis of pure geography. Largely a matter of historical accident, these aggregations are often justified (when they are justified at all) by appeals to shared culture and tradition. Carrying this logic further, though, other and perhaps more defensible regional groupings might be considered, such as institutions within a definable socio-economic region like an SMSA or a multi-state economic area that shares similar industries. Such groupings might be especially useful if the focus of "quality" addressed such matters as contributions to regional civic and workforce development, service to particular regional student clienteles, or how well the institution functions as part of a wider group of providers that constitute an integrated higher education "system." Current accreditation practice also attempts to recognize appropriate physical, structural, or mission differences in the types of institutions or programs under review. Usually this is done informally, by selecting members of review teams from colleges and universities that "match" key characteristics of the institution in question. In some cases, associational boundaries reflect differences in institutional type: for example, in the case of single-purpose professional schools (e.g., Bible colleges or chiropractic colleges) or in the distinction between the Junior and Senior College Divisions of the Western Association of Schools and Colleges (WASC). Reform proposals have suggested that such distinctions go much further because of significant presumed differences among institutional types. Principal candidates for such groupings are probably major (nationally-ranked) research universities, community colleges, and selective liberal arts colleges: cases in which groups of institutions closely resemble one another on many dimensions.
  • Processes that include various stakeholder groups.
    With regard to involving "outsiders," current practice has been quite circumspect. Stakeholders like students, employers, or members of the surrounding community may be used occasionally (and usually informally) as sources of evidence by a review team during a site visit, but they have no independent voice and are almost never involved directly in determining what a team should find and report. As noted earlier, strong pressure to reverse this condition is evident in prominent state accountability initiatives and in an increasingly active set of third-party players like media-based ranking systems. Given these already-apparent alternatives, whether accreditors should directly involve such stakeholders in their own review processes remains debatable. But we should again be reminded that they are included in processes that resemble institutional and program accreditation elsewhere. Again, the best illustration is provided by the Dutch, whose national discipline-by-discipline reviews include both employers and students as full members of review teams.
  • Processes that involve multiple quality assurance agencies.
    Finally, normal accrediting practice is based on individual quality-assurance agencies acting more or less independently. This situation, more than any other, has led to complaints on the part of institutions about "accreditation burden." Within the accreditation community, of course, steps are already being taken to alleviate this condition. In the realm of institutional accreditation, the primary stimulus has been distance education and the rise of multi-site "national" institutions, situations that increasingly require regional accreditors to work together. At the very least, regional accreditors are seeing the need to share commonly-defined bodies of information about the institutions that they recognize. Growing cooperation between institutional and specialized accreditors is also becoming apparent, and several experiments using joint or coordinated visits are already under way. Such initiatives, though, could be considerably extended. Reviewers selected jointly by regional and specialized accreditors, for example, might staff up to half of a single "all-purpose" institutional accreditation team. Less well-exploited to date, but equally needed, are joint ventures between accreditors and government authorities. Following the new political ideology of deregulation, many state governments have already, in essence, delegated to accreditors primary responsibility for stimulating and monitoring internal institutional assessment activities, an area in which many states were quite active a decade ago. Most needed here, perhaps, are more formal and visible ways of delineating an appropriate division of labor with respect to accountability that avoid substantial duplication. At minimum, this requires greater information sharing. More radically, it might imply joint participation in the review process itself.

Other ways of establishing the appropriate dimensions, conduct, and participants of a review can easily be imagined, and the alternatives I have discussed are not mutually exclusive. But they do make more explicit the kinds of choices that must be considered in order to render accreditation systems more intentional and capable of meeting the demands created by new environmental pressures. Certainly, making such choices is difficult. The established approach still seems to work in most cases, and we already know how to do it. Under such circumstances, it is easy to dismiss as fanciful things that haven't been tried. But we must remind ourselves again that workable prototypes for all of these alternatives exist: elements of each are in place now, either in other nations, where they provide quality assurance for higher education, or in other sectors of societal activity here in the U.S. Because many of these practices appear to me admirably adapted to meeting emerging conditions, I look forward to actively exploring them with you in the coming years. For now and in conclusion, I can only applaud CHEA for helping to occasion and to shape this dialogue through meetings like this one.

 


 
