
A Message to Students

The question is frequently asked: What can I do about raising my SAT scores or about making them better than they would be otherwise? The answer is: quickly and immediately, probably not much; over longer periods, it depends upon how much time, effort, and concentration goes into the preparation.

The Scholastic Aptitude Test measures the extent to which your reasoning ability and skills with words and mathematical concepts have been developed up to the time you take the test. These are abilities that are related to academic success in college and that grow over a lifetime through learning experiences such as those in the family, in school, with your friends and associates, and in reading and independent study. The best preparation for the SAT is to have had varied opportunities of this kind and to have made the most of them.

The skills and abilities the SAT tests tend to grow relatively slowly and at different rates for different people. Whether you have more or less of these abilities does not say anything about your worth as an individual. Many other individual qualities not measured by the SAT, such as motivation, creativity, and artistic skills, have much to do with your sense of satisfaction and your success in life.

If you or your parents have been thinking about special preparation for the SAT outside your regular classroom activities, these six points are worth remembering:

1. The SAT measures developed verbal and mathematical reasoning abilities that are involved in successful academic work in college; it is not a test of some inborn and unchanging capacity.

2. Scores on the SAT are subject to improvement as educational experience, both in and out of school, causes these verbal and mathematical abilities to develop.

3. Development of these abilities is related to the time and effort spent; short-term drill and cramming are likely to have little effect; longer-term preparation that develops skills and abilities can have greater effect.

4. While drill and practice on sample test questions generally result in little effect on test scores, preparation of this kind can familiarize you with different question types and may help to reduce anxiety about what to expect. You can help yourself to become familiar with the test by using the explanations and full sample test in this booklet.

5. Whether longer preparation, apart from that available to you within your regular high school courses, is worth the time, effort, and money is a decision you and your parents must make for yourselves; results seem to vary considerably from program to program, and for each person within any one program. Studies of special preparation programs carried on in many high schools show various results averaging about 10 points for the verbal section and 15 points for the mathematical over and above the average increases that would otherwise be expected from intellectual growth and practice. In other programs results have ranged from virtually no improvement in scores to average gains as high as 25-30 points for particular groups of students or particular programs. Recent studies of commercial coaching have shown a similar range of results. You should satisfy yourself that the results of a special program or course are likely to make a difference in relation to your college admissions plans.

6. Generally, the soundest preparation for the SAT is to study widely with emphasis on academic courses and extensive outside reading. SAT score increases of 20-30 points correspond to about three additional questions answered correctly. Such a result might be obtained by independent study in addition to regular academic course work.


Attachment B

The American School Board Journal


Reprinted with permission from The American School Board Journal, July 1981. Copyright 1981, the National School Boards Association. All rights reserved.

As E.T.S. sees it

Standardized testing has become education's latest scapegoat

By Scarvia B. Anderson

We Americans have a long history of explaining away any dissatisfaction with our education system by blaming successively vulnerable parties: First we blamed the students, then we blamed their parents (indicting Dr. Spock along the way), then we blamed the teachers, and of course administrators always are blamed for everything. But the late 1970s brought more bad news. Student scores on the Scholastic Aptitude Test (S.A.T.) continued to decline; many volunteers in the armed forces were labeled incompetent in basic skills; colleges began to offer more remedial courses; the crime rate went up-all of this in spite of unprecedented societal attention paid to education legislation and innovation.

Clearly, the time was right to nominate a new villain for the "what's-wrong-with-our-schools?" drama. No group was more eager to serve as the casting agent than those who most recently had been featured players-teachers, or more specifically, one of the two major teacher unions. Without so much as an audition, the National Education Association (N.E.A.) quickly picked standardized testing as the villain. The choice was praised by the National Association for the Advancement of Colored People, Ralph Nader, several state legislators, and two social scientists at the Harvard Medical School.

Their arguments against testing were patterned after the logic of Lewis Carroll, author of Alice's Adventures in Wonderland: If minorities scored lower on tests, if test scores were going down, if students' scores on tests could be improved, then the trouble lay with the tests, not with the schools. "If only the tests were cleared away," they said, "it would be grand."

Scarvia B. Anderson is senior vice-president of the Educational Testing Service.

But tests, of course, have not been "cleared away"-nor are they likely to be. Educators would be wise to recognize this fact and to redirect their energies from damning tests to learning as much as they can about them (see accompanying article on page 28). Only then can progress be made toward improving test scores-and determining the proper role of testing in education.

Consider for a moment the factors that contribute to a student's performance on any kind of test, whether it is teacher-made or standardized. If the test is a good one-that is, both valid (measures what it's supposed to measure) and reliable (yields consistent results)-then students' scores are determined first and foremost by knowledge and skill: Do they know the answers to the questions on the test? Can they do the tasks that are required? We must remember this at all times: There is no magic that will enable students to succeed on a good test if they don't know the subject matter or possess the skills that the test measures (short of cheating, that is).

Let me tell you a story about the director of research in a large public school system: He was dismayed because student scores on a national standardized reading test had dropped from the year before, and he refused to release the scores-to the superintendent, to the press, to anybody. First he asked the school system's computer center to rescore the tests; there must be an error, he thought. There wasn't. Then he asked the computer people to check all the statistical computations. Still no error. After several weeks and clamors on all sides to see the scores, the head of the computer center met the research director in the parking lot. "Hey, buddy," said the computer specialist, "I think I've figured out how to get those test scores up."

"How? How?" asked the relieved research director.

"Teach those kids how to read."

Given that test performance is determined largely by what students know and can do, here are some other possible, if lesser, influences on test scores:

1. Test directions. If the directions say to select the least probable explanation for an event and a student selects the most probable explanation, then he or she has missed the question. If one of the choices for a set of mathematics questions is, "The answer cannot be determined from the information given," and if the student ignores that choice, the student's score might suffer. If students are told to answer four out of five essay questions, and they answer all five, then they have wasted time that could have been devoted to more complete answers to four questions.

2. Test pacing. When there is limited time to take a test, students should pace their work so they have sufficient time to get through questions they know; they shouldn't get hung up on questions they're unsure of. They should save the difficult questions to ponder at the end of the test.


3. Neatness. A brilliant essay can be dulled by a sloppy or illegible paper, so neatness counts. So does attention to detail. Matching question numbers to spaces on the answer sheet and completely erasing answers when students change their minds can be of major importance on a machine-scored test.

4. Physical conditions. There is, of course, a rather broad range of tolerance before performance will be adversely affected by physical conditions at the test site. Relatively poor lighting, or a testing room that is a little too warm, might make the test taker uncomfortable, but neither is likely to jeopardize test performance if the student is capable of doing the tasks on the test.

5. Emotional or physical health. A slight cold probably would have little influence on a student's test performance, but a bad case of the flu with an accompanying fever would seriously reduce the student's ability to perform test tasks. (One of my professors described studies he did during World War II that showed aptitude test scores were affected only slightly when recruits took tests right after an all-night march.)

More troublesome are emotional disturbances, the most relevant of which is called "test anxiety." We know that some people get more anxious about a test than others; we also know that anxiety varies with how well prepared students think they are and with what they think is riding on the outcome. For example, we would expect students to be more anxious about a test that was to be used in determining their eligibility for a significant award than about a test that was only one of several in a school course. We shouldn't forget, however, that there are a few among us who actually like taking tests and are inclined to give our all in a testing situation regardless of the purposes of the test.

The measurement and study of test anxiety are complex and have challenged a number of psychologists through the years. In spite of all the work, we still cannot predict the number of people in a given population who will have paralyzing test anxiety; the number probably is small. On the other hand, we know that a little anxiety probably is facilitating rather than debilitating. (Think about your own performance when you know you are being judged by important audiences.) Even though I don't believe test anxiety is a serious determinant of performance for many people, I would not dismiss its value as an ego-saving device for thousands of students and their parents. If the term "test anxiety" did not exist, we probably would have to invent it to preserve the self-concepts of those who otherwise would have to blame their own ignorance or laziness for poor test performance.

Now that we've examined some of the major influences on test scores, we must consider what we as educators and teachers can do to reduce the negative influences.

• We certainly can design testing conditions in which directions are as clear as possible, test administrators are friendly and helpful, the physical environment is comfortable, and students who are ill are offered make-up exams.

• We also can teach students test-taking skills-pacing, handling machine-scored answer sheets, understanding different types of questions, and so on. Familiarity with objective tests is not the problem it once was. In the first testing study I ever did (in 1951, at George Peabody College for Teachers), we found that many of the freshmen had no previous experience with objective tests and machine-scored answer sheets. So, we instituted practice tests to try to even out any differences associated with the mechanics of test taking. Today, almost all children are exposed to a variety of standardized tests as they go through school.

If we can identify the most extremely anxious students, we might be able to recommend personal therapy for them.

What can we do, though, about the biggest factor that affects student scores: knowledge and skills? The best answer to that is to teach reading, writing, and whatever skills tests demand. Unfortunately, if the knowledge or skills are significant, this takes a rather long time. Everyone wants to know if there are any shortcuts.

Enter coaching, one of the most confusing elements of the debate on standardized testing. The N.E.A. and others say that tests are no good if they are coachable. But this denies the link between education and the testing process. I think there is little place in education for tests on which performance cannot be influenced by instruction. But to discuss coaching intelligently, we first need to consider the nature of the coaching, the length of the coaching, and the kind of test that is being coached for.

Coaching usually can be distinguished from regular instruction by its duration, its purposes, and its techniques:

• Instruction generally extends over a long period; coaching generally involves a few hours, days, or weeks.

• The purpose of instruction is to develop skills, knowledge, and understanding to an appropriate level of mastery-for their own sake. If students do better on tests as a result, fine. The purpose of coaching, however, is to improve performance on a specific examination. If students improve their skills, knowledge, and understanding as a result, fine.

• A good instructional program generally includes alternative explanations of phenomena and solutions to problems, allows for practice with variations, and provides a certain amount of enriching detail—all of which will help the student to understand as well as to know. Coaching, on the other hand, concentrates only on the kinds of questions that are likely to be asked on the test, on quick solutions to typical problems, and on associations of test language with certain types of responses. Drill is a significant part of most coaching.

I was coached in the evenings for three weeks for my Ph.D. examination in German. Because the examination only required me to translate passages, and because we were allowed the use of a dictionary, I did not spend much time learning long lists of vocabulary words. I did memorize the articles and common prepositions, conjunctions, adjectives, and adverbs so I wouldn't have to spend time looking them up. My tutor taught me how to look at a German sentence, quickly pick out the subject and predicate, and determine which words went with each. And because we knew the exam would be in my field (psychology), we practiced on abstracts of articles from German psychological journals, not on travel books or short stories. If my object had been to learn to read scientific German fluently so I could keep up with current literature, my tutor obviously would have taken a very different approach; and if I really had wanted to learn German, even more time would have been required.

I passed the exam handily, but today I know practically no German. Unfortunately, retention of facts learned during coaching frequently is poor. Coaching seems to work best when it is of the "refresher" variety: If a student has been out of school for a few years or hasn't studied a certain subject recently, coaching can clear away some of the cobwebs from skills the student once had.

The amount of time students spend being coached seems to make a difference in test performance, but only up to a point. Studies of the S.A.T. show a positive relationship between the amount of time spent in coaching and score increases. The law of diminishing returns, however, soon begins to operate. For example, an increase of 10 points on the S.A.T.'s 200-to-800-point verbal scale might be associated with 12 hours of coaching; to increase scores by 20 points, however, might require 57 hours; 30 points might require 260 hours. Extrapolating beyond existing data, it appears that for a 40-point gain on the verbal portion of the S.A.T., something close to 1,200 hours of coaching time would be required. That's full-time schooling.
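A rough gloss on those figures (mine, not the author's calculation): each additional 10-point gain multiplies the required coaching time by roughly 4.6, so the relationship is approximately exponential, and one more step beyond 260 hours lands near 1,200.

% Ratios implied by the quoted figures of 12, 57, and 260 hours;
% the constant growth factor of about 4.6 is an assumption read off these data.
\[
\frac{57}{12} \approx 4.75, \qquad \frac{260}{57} \approx 4.56
\]
% A rough exponential fit, with H(g) the hours needed for a verbal gain of g points:
\[
H(g) \approx 12 \times 4.6^{(g-10)/10}, \qquad H(40) \approx 260 \times 4.6 \approx 1200 \text{ hours.}
\]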

Another problem: The broader and more general a test is, the more difficult is the task of preparing for it. That's what makes coaching for aptitude tests so difficult. It would be impossible in a couple of weeks to teach all the words that might be included on the vocabulary section of the S.A.T. But if the test were on American literature, you certainly could hit some of the high spots in two weeks. And if the test were even more sharply focused (testing knowledge of the symbols for chemical elements, perhaps), students probably could learn everything they needed to know in less than two weeks.

Here's how I would summarize my position on coaching:

• Those who would condemn tests because they are coachable ignore a fundamental relationship between testing and teaching.

• The success of coaching depends on many factors, including student motivation, the nature and duration of coaching, and the type of test. In general, aptitude tests are harder to coach for than are achievement tests.

• Whether or not coaching actually improves test scores, many students believe coaching is helpful (often, they say, because it reduces anxiety). Therefore, schools would be well advised to provide interested students with appropriate opportunities to prepare for tests-especially in test-taking skills. Just two or three sessions should do; this will help avoid charges of inequity between students who can afford expensive commercial coaching courses and those who can't.

Tests are the latest scapegoat for failures attributed to education, but the label is not likely to stick. As American Federation of Teachers President Albert Shanker put it, "You can't blame the thermometer for the patient's fever." Considering the growing public and legislative demand for educational accountability and productivity, tests probably will occupy a more, rather than a less, important place in the education scene. Consequently, it behooves educators to learn as much about measurement as they can in order to monitor testing practices, to resist inappropriate uses of tests in the school systems, to counter the misuse of test results in unwarranted attacks on education, and, most important, to harness important testing tools for their own decision-making needs.

Take this

Tests are an easy mark for education's critics, partly because tests are not well understood-by educators or the public. Here's a crash course on the various types of tests and what they are designed to accomplish: You can increase your understanding of almost all types of education tests by determining (1) whether they measure aptitude or achievement, (2) what kind of response the student makes, (3) how the results are interpreted, and (4) who makes them up.

1. Aptitude tests versus achievement tests. There is a difference between aptitude and achievement tests, but it is not always immediately apparent. Performance on both types of tests depends on what students have learned, not just what they came into this world with. (A wolf child would not do well on either kind of test.) For example, Sample 1 contains two questions: Question A is from a well-known measure of scholastic aptitude; Question B is from a basic skills achievement test. In general, so-called aptitude tests measure skills that have developed over a long period of time; they also measure applications of skills in new situations. It would be unusual to look at an aptitude test question and be able to say precisely, "I learned that on Tuesday" or "Mrs. Parker taught me that."

Aptitude tests are valued primarily for their predictive power. We say that scores on aptitude tests provide some indication of how well people will do in college, at the police academy, or in advanced math. Because the skills measured on aptitude tests develop over a long period of time and from a variety of influences, it is not fair to blame the schools entirely if students score poorly on aptitude tests. Unfortunately, schools were blamed in many quarters when S.A.T. score declines became big news. (Of course, schools shoulder part of the blame if, after 11 or 12 years of education, students cannot handle words and number concepts as well as students of a few years before.)

Achievement tests, on the other hand, generally are more easily understood and appreciated by the public than are aptitude tests. That's because the questions on achievement tests can be related more readily to school curriculum. Scores on science, history, literature, and mathematics tests should be influenced by the quality and recency of rele-
