Testing Is Only Part Of The Evaluation Of Learning
Every time you ask a question in class, monitor a student discussion, or read a term paper, you are evaluating learning. Moreover, the evaluation process (whether it involves examinations or not) is a valuable part of the teaching process. The primary purpose of evaluation is to provide corrective feedback to the student; the secondary purpose is to satisfy the administrative requirement of ranking students on a grading scale. Owing to limitations of space, we cannot provide an exhaustive explanation of the types of tests and the rules for writing them, but we will offer a few guidelines for each type and focus primarily on the two most widely used types of exams: multiple-choice and essay.
The selection of material to be tested should be based on the learning objectives for the course, but the complexity of the material associated with those objectives (and the limited time available for exams) means that you can only sample the material in any given unit or course. All tests should have complete, clearly written instructions, time limits for each section, and point values assigned to different questions or groups of questions. The question sheets should be clearly typed and duplicated so that students have no difficulty reading them.
When grading exams, strive for fairness and impartiality by keeping the identity of each student hidden from yourself until you have finished the entire set of tests. Some additional issues arise in testing in mathematics and the natural sciences, since students are required to work problems on their exams. Answers may be correct yet differ in accuracy and completeness, so the type of answer and the degree of precision you expect must be clearly specified. You must also decide how much work the student will be required to show and how partial credit will be allocated for incomplete answers. Keep in mind that the basic purpose of a test is to measure student performance, and the best teachers constantly work to refine their testing techniques and procedures. Poor techniques may result in tests that measure only the ability to take a test – test-wise students will perform well whether or not they know the material. Writing good exam questions requires ample time for composition, review, and revision. It is also beneficial to ask a colleague to review the questions before you give the exam – another teacher may identify potential problems of interpretation or spot confusing language.
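To make the partial-credit decision concrete, here is a minimal sketch (in Python, with an invented rubric, point values, and student work; none of it comes from the article) of one way a grader might record which solution steps a student completed and total the credit.

```python
# A minimal sketch of allocating partial credit on a worked problem,
# assuming the grader records which rubric steps each student completed.
# The rubric, point values, and student work are invented for illustration.

RUBRIC = {
    "sets up the correct equation": 3,
    "substitutes the given values correctly": 2,
    "carries the algebra and arithmetic through": 3,
    "states the answer with the expected units and precision": 2,
}

def partial_credit(completed_steps):
    """Sum the points for the rubric steps the student actually completed."""
    return sum(RUBRIC[step] for step in completed_steps if step in RUBRIC)

# A student who set up and substituted correctly but made an arithmetic slip.
student_steps = [
    "sets up the correct equation",
    "substitutes the given values correctly",
]
print(f"{partial_credit(student_steps)} / {sum(RUBRIC.values())} points")
```

Announcing such a rubric (or at least its point breakdown) alongside the exam also answers, in advance, the question of how much work students must show.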
The major weakness of multiple-choice tests is that teachers may develop questions that require only recognition or recall of information. Multiple-choice questions in the teachers’ manuals that accompany textbooks often test only recognition and recall. Strive instead for questions that require application of knowledge; for example, interpretation of data presented in charts, graphs, maps, or other formats can form the basis for higher-level multiple-choice questions. Multiple-choice questions normally have four or five options so that it is difficult for students to guess the correct answer. Only one option should be unequivocally correct; the “distractors” should be unequivocally wrong. After a test has been given, it is important to perform a test-item analysis to improve its validity and reliability.
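As a rough illustration of what a test-item analysis involves, the sketch below computes two standard statistics for a single multiple-choice item: its difficulty (the proportion of students who answered it correctly) and a discrimination index (how well the item separates high-scoring from low-scoring students). The student data and the 27% grouping fraction are assumptions made for the example, not anything taken from the article.

```python
# A minimal sketch of a post-exam item analysis, assuming each student's
# response to the item has already been scored 1 (correct) or 0 (incorrect).
# All numbers below are invented for illustration.

def item_difficulty(item_scores):
    """Proportion of students answering the item correctly (the p-value)."""
    return sum(item_scores) / len(item_scores)

def discrimination_index(item_scores, total_scores, fraction=0.27):
    """Item p-value in the top-scoring group minus the p-value in the
    bottom-scoring group, where groups are formed from total exam scores."""
    n = max(1, round(len(total_scores) * fraction))
    ranked = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    low, high = ranked[:n], ranked[-n:]
    p_high = sum(item_scores[i] for i in high) / n
    p_low = sum(item_scores[i] for i in low) / n
    return p_high - p_low

# One item's scores and the total exam scores for ten students.
item = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
totals = [38, 22, 41, 35, 19, 44, 30, 25, 40, 36]

print(f"difficulty: {item_difficulty(item):.2f}")                    # 0.70
print(f"discrimination: {discrimination_index(item, totals):.2f}")   # 1.00
```

An item that nearly everyone answers correctly (or incorrectly), or one whose discrimination index is near zero or negative, is a natural candidate for revision or replacement on the next version of the exam.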
In matching items, the student is presented with two related lists of words or phrases and must match those in one column with those in a longer column of alternative responses. Use only homogeneous words and phrases within a given set of items to reduce the possibility of guessing the correct answers through elimination; for example, a list that includes names, dates, and terms is obviously easier to match than one containing only names. Arrange the lists in alphabetical, chronological, or some other logical order. Keep the lists short (ten to twelve items) and type them on the same page of the exam.
Completion questions, short-answer questions, and essays form a continuum of questions that require students to supply the correct answers. Completion questions are an alternative to selection items for testing recall, but they cannot test higher-order learning. In writing completion items, give the student sufficient information to answer the question but not enough to give the answer away. Questions that require students to generate their own responses need clear, unambiguous directions for the expected answer.
Students cannot answer an essay question by simply recognizing the correct answer, nor can they study for an essay exam by memorizing factual material. Essay questions can test complex thought processes, critical thinking, and problem-solving skills, and essays require students to use the English language to communicate in sentences and paragraphs – a skill that undergraduates need to exercise more frequently. But essay questions which require no more than a regurgitation of facts do not measure higher-order learning.
Although these guidelines are written from the perspective of the social sciences and humanities, most of these rules also apply to devising long problems in science courses. Since one of the advantages of essay questions is their ability to test elements of higher-order learning, your first task is to define the type of learning you expect to measure. If you wish to test problem-solving skills, the format and method for solving the problems must be clearly communicated to students. Presenting problems with no clues about how to proceed may cause students to adopt a plausible but incorrect approach, even if they know how to solve the problem correctly.
It is helpful to distinguish between essay questions that require objectively verifiable answers and those that ask students to express their opinions, attitudes, or creativity. The latter are more difficult to construct and evaluate because it is more difficult to specify grading criteria (they therefore tend to be less valid measures of performance).
The reliability of essay questions can be increased by paying close attention to the criteria for answers. Many teachers don’t realize that it is necessary not only to compose a model answer but also to provide students with instructions that will elicit the desired answer. First, write an outline of your best approximation of the correct answer, with all of its sections in place. When you have read over your answer several times and are satisfied that it will measure the appropriate course objective, write the instructions students will need to answer the question with the scope and direction you intend. Describe the expected length of the answer, its form and structure, and any special elements that should be present. Good grading practices also increase the reliability of essay tests. Research has shown that the scoring of essays is usually unreliable; scores not only vary across different graders, they also vary with the same grader at different times. If the grader knows the identity of the student, his or her overall impressions of that student’s work will inevitably influence the scoring of the test.
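Since the reliability concern here is consistency of scores, one simple check is to have two graders (or the same grader on two occasions) score the same set of essays against the model answer and compare the results. The sketch below, using invented scores, computes a Pearson correlation between the two sets of marks as a crude measure of agreement; more formal inter-rater statistics exist, but this conveys the idea.

```python
# A minimal sketch of checking essay-scoring consistency between two graders
# (or the same grader on two passes). All scores are invented for illustration.

from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two lists of scores for the same essays."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical scores (out of 20) given to the same eight essays.
grader_a = [18, 12, 15, 9, 17, 11, 14, 16]
grader_b = [16, 10, 17, 11, 18, 9, 12, 15]

print(f"agreement (Pearson r): {pearson_r(grader_a, grader_b):.2f}")
```

A low correlation suggests the model answer or the scoring criteria need tightening before the scores can be trusted, and grading the papers without knowing whose they are removes one further source of inconsistency.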
If, through some quirk in wording, students misinterpret your intent, or if your standards are unrealistically high or low, you can alter the key in light of this information. If these problems are not in evidence, and you have carefully constructed the model answer, students should not be able to surprise you with better answers than yours. However, you should be open to legitimate interpretations of the questions different from your own. Finally, unless you intend to grade grammar, syntax, spelling, and punctuation as part of the examination, try to overlook flaws in composition and focus instead on the accuracy and completeness of the answers.
It is important to write comments on test papers as you grade them, but comments do not have to be extensive to be effective. Brief notes can indicate, for example, the penalties assessed for incorrect statements, omission of relevant material, inclusion of irrelevant material, and errors in logic that lead to unsound conclusions. Distributing your model answers with the corrected essays can alleviate some of the burden of writing comments on exams.
Jeff C. Palmer is a teacher, success coach, trainer, Certified Master of Web Copywriting and founder of https://Ebookschoice.com. Jeff is a prolific writer, Senior Research Associate and Infopreneur having written many eBooks, articles and special reports.
Source: https://master331.medium.com/testing-is-only-part-of-the-evaluation-of-learning-9885ca00a535