
The Wall Street Journal's recent report that a career official in the Office of Management and Budget blocked a proposed regulation to require Reye's syndrome warning labels on aspirin ("Reaganites Find Plans for Deregulation Stall After EPA Revelations," June 6, 1983) is false. The decision not to issue the proposed warning label regulation was made by Richard Schweiker, then Secretary of Health and Human Services, based in large part on the recommendation of the American Academy of Pediatrics that such a labeling requirement would not be in the best interest of children and that further investigation was necessary regarding the association between aspirin use and Reye's syndrome.

We write as a group because the three of us, who were the senior Administration officials involved in the decision, discussed the issue at length both before and after the Academy of Pediatrics issued its recommendation. Our discussions were concerned exclusively with the scientific evidence about aspirin use and Reye's syndrome, and with the possible adverse health consequences of discouraging use of aspirin in medically appropriate circumstances. These public health issues were much in the press at the time, including an editorial-page article in the Journal on October 15, 1982, highly critical of the warning-label proposal. The financial interests of aspirin manufacturers (and the contrary financial interests of acetaminophen manufacturers) were never mentioned and had nothing to do with the decision.

It is dismaying that the Journal should run this story so prominently without bothering to inquire of the Administration officials responsible for the decision. The facts were available not only from us, but from the public record of the Nader group's lawsuit seeking to compel issuance of the proposed regulation. Your article did mention this suit, but neglected to say that the suit was dismissed by the U.S. District Court last March (and is presently on appeal). The court's opinion emphasized both the Academy of Pediatrics' position and the vigorous research effort ordered by Secretary Schweiker into the relationship between aspirin use and Reye's syndrome.

Sincerely,


Richard S. Schweiker
Former Secretary of Health
and Human Services


Christopher DeMuth

Administrator for Information
and Regulatory Affairs


George A. Keyworth
Science Adviser to the President


EXECUTIVE OFFICE OF THE PRESIDENT
OFFICE OF MANAGEMENT AND BUDGET
WASHINGTON, D.C. 20503

November 15, 1989

Ms. Eleanor Chelimsky
Assistant Comptroller General
Program Evaluation and Methodology Division
General Accounting Office
441 G Street, N.W.
Washington, D.C. 20548

Dear Ms. Chelimsky:

We are taking this opportunity to respond to your final report, Paperwork Reduction: Mixed Effects on Agency Decision Process and Data Availability, because we have serious concerns both with the validity of certain of its allegations and with the quality of the methodological analyses employed. To put it succinctly, if the survey and statistical methodology used in this study had been submitted to us by an agency under the Paperwork Reduction Act, we would not have cleared it without first working closely with agency staff to correct the methodological flaws.

As you know, we sent a letter on May 25, 1989, commenting on a draft version of the report. In that letter we stated that we found the recommendations contained in the report to be "both reasonable and consistent with our desire to further improve administration of . . . the Paperwork Reduction Act." We also pointed out that we were already taking steps to implement your recommendations and would continue to do so. However, we also took the opportunity in that letter to respond to certain allegations contained in the draft report. Since your final report did not respond adequately to the points we raised, we are amplifying and elaborating on our concerns in this letter.

There are two types of problems with the final report. First, the conclusions of the report do not follow from the data presented; and second, the report is rife with methodological problems.

The report alleges that the Office of Management and Budget (OMB) has somehow had a "chilling effect" on research. A closer look, even at the data in the report, does not bear out that allegation. The concerns raised by the General Accounting Office (GAO) report appear to be based more on the subjective views of the 17 agency staff interviewed than on the statistical analyses of the 20,497 information requests that OMB processed between 1982 and 1987. For example, the statistical analysis in the GAO report does not show a greater decline in "research-oriented submissions than in statistical or regulatory requests." As Table 5.1 from the GAO report shows, requests from "non-regulatory" agencies (primarily research-oriented agencies) declined only two percent between 1982-84 and 1985-87, and statistical agency requests increased by eight percent, while regulatory agency submissions decreased by 16 percent. Table 5.5 shows that research submissions from all types of agencies actually declined less than non-research submissions (a six percent decline compared to a nine percent decline) over the same period.

Clearly the Office of Information and Regulatory Affairs' paperwork reviews have not had a disproportionate "chilling" effect (GAO's characterization) on research information requests, compared to non-research requests. This conclusion is further strengthened when account is taken of the GAO report's finding (p. 21) that nearly 75 percent of the submissions from regulatory agencies are "recurrent" (that is, already approved once and thus less likely to be disapproved), while only about 60 percent of the submissions from non-regulatory agencies are recurrent.

The misconception about the "chilling" effect on research activity apparently stems in part from a selective focus on a small part of the picture rather than the total picture. Table 5.6 shows that there was a relatively large 46 percent decline in research requests from regulatory agencies that had low approval rates, and the report highlights this finding. This 46 percent decline, although large in percentage terms, is small in absolute terms (34 submissions). It is more than offset by the large numbers of research requests that were submitted from non-regulatory and statistical agencies and from regulatory agencies with high approval rates, with the net result, mentioned above, that non-research information requests declined 50 percent more than research requests.


GAO also apparently believes that there is something wrong with its finding that agencies with low prior approval rates submit fewer information requests in later years than agencies with high approval rates. As mentioned, the report labels this differential decline the "chilling" effect of the OMB review process. One might ask, however, what GAO would label the effect if it had found that agencies with low approval rates tended to submit more requests than agencies with high approval rates. We believe that such a result would indicate that the review process was having perverse incentive effects. It would be analogous to a finding that drivers who were given speeding tickets drove faster after the ticket than before. The OMB review process envisioned in the Paperwork Reduction Act should have a "chilling" effect on poor design, poor planning, duplication, excessive burden, and unnecessary collections, although the term "chilling" is pejorative and we prefer the terminology used in the Act.


Agencies that have lower approval rates in an earlier period clearly should be submitting fewer paperwork requests in the future, other things being equal. GAO defined a low approval rate as below 90 percent and a high rate as above 90 percent. If the disapproved requests were not submitted again in the later period while the approved requests were, that in itself could easily explain the difference of nine percentage points between the requests filed by the high-approval agencies and the low-approval agencies: an agency whose requests were approved at, say, an 85 percent rate would lose 15 percent of its submissions to disapproval alone, while an agency approved at a 95 percent rate would lose only five percent. In addition, we hope that the agencies with low approval rates have learned from the review process to perform more careful internal review of their information requests for compliance with the criteria specified in the Paperwork Reduction Act.

We would like to address several examples of methodological problems of a more technical nature: (1) invalid comparisons; (2) arbitrary definitions and concepts; (3) the panel of "experts" used to simulate our technical review; and (4) the use of non-probability samples.

Invalid Comparisons

We found comparisons employed in your "illustrations" that took no account of whether the processes observed were, in fact, comparable over time. One example is the comparison of recurrent and new collections in two consecutive three-year periods. Arguably, recurrent surveys are somewhat similar in consecutive periods, but to presume this characteristic for "new" surveys is not appropriate.

Most new surveys are either "first-time," "one-time," or "experimental" surveys. First-time surveys in one period move to the recurrent category in the next period or are eliminated if they perform poorly. True one-time surveys occur in only one time period, and are usually associated with changes in policy (e.g., the early years of a new administration). Experimental work is often characterized by a series of two or more related collections in succession which culminate in a large-scale or recurrent data collection or a decision not to proceed; these again display the pattern of many early transactions followed by few or no actions in later periods.

There is nothing mysterious or sinister in these patterns; they follow the life cycle of programs or program changes. A decline in new data collections between the early and later years of an administration is to be expected. Yet your analysis tacitly assumes that no change should be expected over time and proceeds to infer "chilling" effects from this flawed assumption. If this no-change assumption were true, one of the consequences
