ects in order to break loose any findings that might have greater potential for useful application than they had thus far received. Several good ideas were unearthed by this review. The provision of comprehensive health, education, and counseling services for unwed school-age pregnant girls was identified as an effective way to reduce the likelihood that these girls would cut short their education, have additional unwanted pregnancies, or become dependent on welfare. Promising programs to teach reading, to train teachers, and to train physicians' assistants were also uncovered, and the agencies are now stepping up their promotion of these ideas.

In carrying out this review of completed R. & D. projects, however, we learned that there really aren't a great many good ideas sitting on the shelves in the agencies waiting to be promoted. The prospect of any substantial payoffs, therefore, must depend on managing our R. & D. programs so that more useful and convincing results are produced. We must make sure that our limited R. & D. resources are focused on important questions, that projects fit together, and that they do not wastefully overlap.

To these ends, the operating agencies have now established R. & D. planning procedures and are working on the improvement of project design. We no longer tolerate the failure to submit R. & D. project reports. Each report is scrutinized to determine what, if anything, has been learned, whether the project should be replicated on a more sophisticated or larger scale, whether the results justify limited application and limited dissemination, or whether they should be widely promoted. In one major research area, enactment of the President's proposal to establish a National Institute of Education will greatly strengthen our capacity to make this kind of determination.

The process by which R. & D. results are disseminated is also being improved. For some users, it is enough to provide easy access to project reports through information retrieval systems like those operated by the National Library of Medicine and the National Clearinghouse for Drug Abuse Information. For others, the R. & D. offices which produce the results will need to make increasing use of demonstrations, conferences, and direct personal contact. Training programs and State and local planning projects can also serve as conduits for ideas.

EVALUATION

EVALUATION is the way we try to find out how well we are doing. Given the squeeze between uncontrollable costs and rising expectations, our society can no longer afford to indulge the "don't just stand there, do something" syndrome that has so often characterized reactions to current appeals. It is not that "doing something" is necessarily wrong. At a time of disillusionment with the integrity of government, however, ineffective responses to needs we do not really know how to meet can only compound distrust and reinforce alienation. In George Washington's day, it may not have done much harm to indulge the belief that leeches could cure a fever; in our own, the failure to acknowledge the limitations of our capacity to treat heroin addiction would be inexcusable.

It is thus more urgent than ever before to be able to apply objective measures to the performance of our programs. Despite this urgency, our present capacity to do this is sharply limited: We need better methods of measuring performance; and we also need to make evaluation a regular part of program administration. The first step toward overcoming these deficiencies must be a clear understanding of just what it is that needs to be evaluated. We too often speak of evaluating a "program" without knowing what we mean when we use the word. A "program" is the sum of the activities related to specific, usually categorical, legislation, such as the Comprehensive Alcohol Abuse Act. A "program" is also the sum of the activities directed toward a particular goal: for example, the "Right to Read" program or the prevention and control of cancer. Or the word may be used to refer to the sum of the activities of an ongoing administrative function: for example, the "Upward Mobility" program.

The proper object of evaluation will from time to time be a "program" in any one of these senses of the word. Ordinarily, however, the more appropriate object of evaluation is a way of doing something: a method of teaching reading or an approach to the treatment of alcoholism. We want to know what works. We want to know what works best. We want to know what it costs to get some improvement. In order to put our resources in the most effective places, we want to be able to measure the tradeoffs between competing alternatives.

We are working hard to improve our ability to answer this kind of question. We have established evaluation offices at high levels in the agencies and have recruited good people to staff them. We are also beginning to develop training in evaluation techniques; the Health Services and Mental Health Administration, for example, has contracted with several universities to design a training program. [Several lines illegible in the source.]

In several programs, such as the Emergency School Assistance program, we have designed systems which provide information useful both for operations and for evaluation. For all programs, the requirement of annual evaluation plans forces the evaluators regularly to identify the most important questions. All evaluation projects, moreover, will hereafter have to specify at the outset how their results are to be implemented. Evaluation offices will be required to identify regularly those implications of their studies that are important enough to be passed on to possible users.

But none of this will be fully effective until and unless we get in the habit of demanding evidence for decisions, and, as our evaluation efforts begin to produce results, discipline ourselves to take advantage of them. We cannot achieve the twin goals of responsibility and responsiveness until we not only know what our activities produce at what cost and with what impact, but also make use of that knowledge. Ultimately, we should like to know what the results would be from one small addition of resources to any activity.

COSTS AND BENEFITS

WE NEED, in short, the capacity to relate costs to benefits and to compare the benefits of one program with the benefits of others. Our society's total resources are limited by population, technology, and natural endowment. To solve the resource allocation problem effectively within the public sector, we must perform four difficult tasks much better than we ever have: (1) clarifying the goals we seek; (2) ranking these goals in importance; (3) considering different ways of approaching each goal; and (4) calculating the costs of each alternative. We have made some progress on each task, but much remains to be done.
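The four tasks above can be sketched as a tiny decision routine. This is a purely illustrative sketch, not anything proposed in the text: every goal name, weight, cost, and effectiveness figure below is hypothetical, and "effect" stands in for whatever outcome measure a real evaluation would supply.

```python
# The four decision tasks as a minimal routine (all names and numbers
# hypothetical): (1) goals are named, (2) ranked by a weight,
# (3) each has competing approaches, (4) each approach carries a cost.

goals = {
    "reading_improvement": {
        "weight": 3,  # task 2: relative importance (higher = more important)
        "approaches": [  # task 3: different ways of pursuing the goal
            {"name": "teacher_training", "cost": 5.0, "effect": 40},
            {"name": "tutoring_program", "cost": 8.0, "effect": 55},
        ],
    },
    "alcoholism_treatment": {
        "weight": 2,
        "approaches": [
            {"name": "outpatient_clinics", "cost": 6.0, "effect": 30},
            {"name": "counseling_network", "cost": 4.0, "effect": 25},
        ],
    },
}

def best_approach(goal):
    # task 4: compare alternatives on effect achieved per dollar of cost
    return max(goal["approaches"], key=lambda a: a["effect"] / a["cost"])

# Visit goals in order of importance, picking the best buy for each.
for name, goal in sorted(goals.items(), key=lambda kv: -kv[1]["weight"]):
    print(name, "->", best_approach(goal)["name"])
```

The point of the sketch is only that the comparison becomes mechanical once goals, rankings, alternatives, and costs are made explicit; in practice each of those inputs is exactly what the essay says is hard to obtain.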

I am repeatedly reminded of how little agreement there is on the objectives of some of our public programs. We want to improve the delivery of health services, or we want to make higher education more accessible to the disadvantaged. But what does "improve" mean? How can we measure "accessibility"? How much improvement or accessibility do we want? This, I think, is one of the great values of the evaluation process: when it is well done, it forces us to be specific about our goals. We cannot measure results until we sharpen our perception of what it is we want to do.

Once we have a clear goal in mind, we face what may be the most difficult job of all: We must compare the benefits of one program with the benefits of others, and decide which ones should be expanded, and which ones not.

How does one compare the benefits of one program with the benefits of another? Rarely can we reduce these benefits to dollar figures without the result being so artificial that we lose confidence in it. Yet such a comparison is essential to every budgetary choice we make, because the budget forces us to balance a dollar spent on one program against a dollar spent on others.

Let me give an example. We often speak of a single human life as infinitely precious, and so it is. But when it comes to the allocation of time, money, and energy, it is obvious we do not literally mean it. Each year our society tolerates thousands of deaths which might be prevented. Last year 114,000 Americans were killed in some form of accident. Of these, about 23 percent were in the home, about 48 percent were on the highway. Some 95,000 Americans died from preventable illnesses, nearly 60,000 of these from lung cancer caused by smoking cigarettes. But building safer cars, highways, and homes, and reducing cigarette smoking involve "costs." By not pushing these things as far as we could, we implicitly put a value on the lives they might save. Unfortunately, such implicit valuations do not give us an explicit measure of the value of human life that could be compared with the value of a year of education or a reduction in water pollution.
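The arithmetic behind an "implicit valuation" is simple: declining to spend a given sum to prevent a given number of deaths places an upper bound on the value assigned to each life. The figures below are hypothetical, chosen only to show the calculation; they are not drawn from the text.

```python
# Hypothetical example: if a safety measure costing $900 million would
# prevent 3,000 deaths and we decline to fund it, we have implicitly
# valued each of those lives at no more than cost / deaths_preventable.

def implied_value_per_life(forgone_cost_dollars, deaths_preventable):
    """Upper bound on the value implicitly placed on one life when a
    preventive measure is not undertaken."""
    return forgone_cost_dollars / deaths_preventable

ceiling = implied_value_per_life(900_000_000, 3_000)
print(f"Implied value per life: no more than ${ceiling:,.0f}")
```

As the essay notes, this yields only an implicit ceiling, not an explicit figure that could be traded off against a year of education or a reduction in water pollution.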

I have suggested from time to time that we should develop a benefit unit called the "HEW" for use in comparing cost-benefit ratios among our activities. Such a unit would force us to look at just how much importance we really place on our efforts to deal with any single problem. And it would permit us to compare our real effort in one area, and the returns we got for it, with our real effort and returns in another. If a child-year of preschool education is worth one "HEW," how many "HEW's" is it worth to avoid one traffic death? To rehabilitate one disabled worker? To cure one drug addict?
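The mechanics of such a unit can be sketched in a few lines. The conversion rates and program figures below are entirely hypothetical (the text proposes the unit but supplies no exchange rates); the sketch only shows how a common denominator would let us compare dollars-per-HEW across dissimilar programs.

```python
# A sketch of the proposed "HEW" benefit unit (all rates hypothetical):
# express each outcome in a common unit, then compare cost per HEW.

# Assumed conversion: HEWs credited per unit of outcome. The text's
# reference unit is one child-year of preschool education = 1 HEW.
HEW_PER_OUTCOME = {
    "child_year_preschool": 1.0,
    "traffic_death_avoided": 25.0,
    "worker_rehabilitated": 10.0,
    "addict_cured": 15.0,
}

def cost_per_hew(program_cost, outcome, units):
    """Dollars spent to produce one HEW of benefit."""
    return program_cost / (HEW_PER_OUTCOME[outcome] * units)

# Two hypothetical programs competing for the same $2 million:
preschool = cost_per_hew(2_000_000, "child_year_preschool", 1_000)
highway = cost_per_hew(2_000_000, "traffic_death_avoided", 120)

# On this measure, the cheaper source of a HEW is the better buy.
better = "preschool" if preschool < highway else "highway safety"
print(f"${preschool:,.0f}/HEW vs ${highway:,.0f}/HEW -> expand {better}")
```

Everything hard about the proposal lives in the `HEW_PER_OUTCOME` table: fixing those rates is precisely the value judgment the essay says cannot be reduced to calculation.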

Using such a benefit constant, we might readily see that the incremental costs of reducing a very small number of deaths by seafood poisoning might be better applied to reducing a larger number of deaths on the highway, or that the additional resources that would allow us to inspect every food-processing establishment twice a year could be more advantageously used to immunize our children. Similarly, it might show that we would achieve higher benefits in lives saved by investing in the safety of products than in special ambulances for victims of heart attacks.

The comparison gets harder to make, of course, when we come to the reduction of injuries, discomforts, and irritations, and when the outcome is uncertain. How much should we be investing, for example, in finding a cure for the common cold? Reducing noise pollution? We make such choices anyway, of course, but the process is seldom both deliberate and explicit.

Hard as it is to define our goals clearly, and then to rank them in importance, we must also deal imaginatively with the third of our four decision tasks: Considering different ways of approaching each goal. Cost-benefit analysis must open the door to fresh and imaginative alternatives. It must enable us, if such be the case, to say that there is a better way of attacking a problem than by an HEW program and that we should shut ours down and support another, somewhere else.

Our fourth job in better decisionmaking is to measure the cost of each alternative. We already do better at this part of the task than at the rest, but still our skills are seriously underdeveloped. We are not always careful to remember, for example, that the true "cost" of a Federal program is not invariably measurable by the number of dollars we allocate to it in the budget. Did we include air pollution costs in our accounting for the Federal highway program? Do we charge ourselves for the loss of recreation, fish, and wildlife when we develop our rivers and harbors? Do we take into account the possibility that Federal dollars collected from some sources may have more adverse effects on economic activity than those collected from other sources?

Further, our ability to predict what the budgetary costs of alternative programs will be is seriously lacking. There are a few shining examples of good cost estimation in the Federal Government. The Actuary's Office in the Social Security Administration is one. But there are many more dismal cases of complete ignorance about what resources will be needed to carry out proposed programs.

Our success at these four tasks of cost-benefit analysis can be improved, though not easily. Yet even if we do them a great deal better, there will remain severe limits on the practical use of evaluation and cost-benefit analysis. Such techniques may help us to choose the best way to use an additional $1 million on homemaker services for the elderly or preschool education for disadvantaged children. They may even offer some basis for comparing the social return on one or another such investment. But a choice between homemaker services and preschool education cannot and should not rest only on this kind of analysis. Even though it could be shown that the investment in preschool education paid larger dividends for a longer future, our feelings toward the generation to which we owe our own existence and education cannot be fed into this kind of calculation. The hard choices, in the end, are bound to depend on some combination of values and instincts—and, indeed, it is precisely because the content of choice cannot be reduced to a mathematical equation that we need the political forum to reach the final, most difficult decisions. To recognize this, however, reinforces the importance of being as honest and explicit as possible in articulating the nonmeasurable considerations that transcend the limits of objective analysis. Only if these considerations are exposed to full view can we bring those whose expectations have to be deferred or overruled to accept the legitimacy of the process by which this was done. Only thus can we hope to reconcile the loser to losing and encourage the impatient to wait.
