A highway map and a mileage table help tourists measure their progress. In T&D, the tools of measurement are called "instruments." Instruments for measuring learning usually involve some form of test, either paper-and-pencil or performance.
One obvious way to find out where an individual "is coming from" is to give a test on what that individual already knows. This is commonly called a pretest. A popular way of doing this is to administer the criterion (or final) test, or its equivalent, at the first session. If this test is explained properly, given in a goal-setting and diagnostic atmosphere, it need not be threatening. Actually, it can do more than measure; it can set expectations. Such pretesting lets learners and instructors form a clear picture of what they don't need to stress, because people already know it. It tells what they do need to emphasize, because so few knew the correct answer. But above all, it gives individual diagnostic data. Learners can pinpoint the areas they must concentrate on; instructors know how each trainee performed and where extra individual counseling will be needed. The pretest is, in a sense, a practical application of andragogy: the experience (inventory) of learners is examined and becomes an early ingredient in the learning process. If there are feelings about the content, or if the objectives are in the affective domain, the instrument must uncover the affective inventory; for this, several instruments are useful.
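The diagnostic use of a pretest can be sketched in code. This is a hypothetical illustration: the item names, results, and thresholds below are invented, not drawn from the text; the point is simply that per-item percent-correct figures tell the instructor what to de-emphasize and what to stress.

```python
# Hypothetical sketch: summarizing pretest results item by item so the
# instructor can see what the group already knows (de-emphasize) and
# what few knew (stress). Items and thresholds are illustrative assumptions.

def item_analysis(responses, skip_threshold=0.9, stress_threshold=0.4):
    """responses: list of dicts mapping item name -> True (correct) / False."""
    report = {}
    for item in responses[0]:
        pct = sum(r[item] for r in responses) / len(responses)
        if pct >= skip_threshold:
            status = "de-emphasize"
        elif pct <= stress_threshold:
            status = "stress"
        else:
            status = "cover normally"
        report[item] = (round(pct, 2), status)
    return report

pretest = [
    {"safety rules": True, "lockout procedure": False, "reporting": True},
    {"safety rules": True, "lockout procedure": False, "reporting": False},
    {"safety rules": True, "lockout procedure": True,  "reporting": True},
]
print(item_analysis(pretest))
```

The same per-learner breakdown, kept individually, supplies the diagnostic data for one-on-one counseling mentioned above.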
Agree/disagree tests make data gathering easy, and they can focus the learner on key concepts. Such a test presents crucial statements to students, who then indicate how they feel about them. Because the intervals between positions are unequal, the scale is ordinal: responses can be ranked but not averaged. For example, a workshop for new instructors might use the instrument in the figure below.
Items in agree/disagree instruments should be controversial enough to elicit a range of opinion. Eliminating the "No Opinion" option gives an even number of columns and forces respondents to "fish or cut bait," while the graded columns spare them a stark Agree/Disagree decision.
The use of several columns permits respondents to indicate the strength of their opinions, not merely their direction.
Sample ADA test.
Another instrument for probing feelings is a variation of the agree/disagree test. Two very different opinions are presented, and students mark their feelings on a scale shown between them. The figure below presents an example. Another way to get at opinions is a "pro-rata" scale, in which students assign portions of an established number to reflect relative preferences. For example, suppose you want to determine how managers feel about Occupational Safety and Health laws.
Variation of ADA test.
Another version of the pro-rata instrument might look like this:
Using a Pro-Rata Scale for Performance Appraisal
Assume that you are using a numerical scale to appraise your subordinates. You want to indicate the relative importance of performance elements. You have exactly 100 points to assign to the elements listed below, and to two other factors if you wish to add elements that are missing. Remember, the numbers you assign must total 100.
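A pro-rata response is only usable if the allocation obeys the rule stated above: the points must total exactly 100. A minimal sketch of that check follows; the performance-element names are invented for illustration, not taken from any actual appraisal form.

```python
# Hypothetical sketch: a pro-rata allocation is valid only if the
# assigned points total exactly the established number (100 here).
# Element names are invented examples.

def validate_pro_rata(allocation, total=100):
    assigned = sum(allocation.values())
    if assigned != total:
        raise ValueError(f"Points total {assigned}, not {total}")
    return allocation

response = {
    "quality of work": 35,
    "quantity of work": 25,
    "cooperation": 20,
    "initiative": 20,
}
validate_pro_rata(response)  # passes: the points total exactly 100
```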
A pro-rata form like this yields ordinal data, which can legitimately be used to calculate modes, medians, and totals; it reveals nothing about averages. A Thematic Apperception Test (TAT) is really a psychological measurement tool, but a modified form can measure learners' affective inventory. In a TAT, students write a story about a picture; their stories reveal their feelings. The TAT has some interesting applications in T&D measurement. For example, learners might look at a picture of people entering a room, with a sign by the door carrying the name of the training program. By telling a story about the situation or the people, or by capturing the conversation of the people in the picture, trainees reveal their own feelings.
By classifying the trainees' statements, instructors can learn a great deal about how participants feel. Do they mention course content? Or do they show social concerns, mentioning the other students and what instructors will be like? Do they reveal organizational anxieties, with comments such as "Why am I here?" or "My boss is the one who needs this!" or "Who ever heard of these instructors?" Instructors can also classify the data as "Approach" (comments showing inquiry, eagerness, anticipation, affirmation) or "Aversive" (reluctance, dread, fear, resignation). By repeating the same picture later, instructors can change the description to elicit interim or end-of-training data.
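Once each comment has been classified by the instructor, the Approach/Aversive balance can be expressed as a single figure and tracked across the repeated administrations. A hypothetical sketch, with invented labels:

```python
# Hypothetical sketch: after the instructor labels each TAT comment
# "approach" or "aversive", the group's leaning reduces to one ratio
# that can be compared at the start, middle, and end of training.

def approach_ratio(labels):
    """Fraction of comments classified as 'approach' (0.0 to 1.0)."""
    return sum(1 for label in labels if label == "approach") / len(labels)

start_of_course = ["aversive", "aversive", "approach", "aversive", "approach"]
print(approach_ratio(start_of_course))  # 0.4
```

A ratio that rises on later administrations of the same picture suggests the affective climate is improving.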
Many instructors like a "Team Effectiveness" instrument. Such devices come in several forms. The TAT version can ask trainees to discuss the way the people work together, and thus can reveal feelings that don't come out in open discussion.
Another very simple method is to ask all participants to write down three adjectives describing their feelings about the group, each word on its own small sheet of paper. The papers are shuffled and read aloud. During this reading, other members and instructors can comment or explain how they feel about the words: their appropriateness, their causes, and the implications. It's often useful to tally these words so that trends can be identified and acted upon.
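The tally itself is trivial to automate. In this sketch the adjectives are invented examples; the only real point is that normalizing case before counting keeps "Curious" and "curious" from splitting the trend.

```python
from collections import Counter

# Hypothetical sketch: tallying the adjectives trainees wrote about the
# group so that trends stand out. The words are invented examples.

words = ["tense", "Curious", "tense", "hopeful", "curious", "tense"]

# Normalize case before counting so "Curious" and "curious" merge.
tally = Counter(w.lower() for w in words)
print(tally.most_common())  # [('tense', 3), ('curious', 2), ('hopeful', 1)]
```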
More formal instruments can deal with specific dimensions of the group dynamics and learning activity. At periodic intervals, members respond anonymously and then collectively discuss issues like those shown on the form in the figure below. (Incidentally, if the need for anonymity persists, there's evidence the group isn't maturing.) Note that there are seven positions. Generally, instruments that use a numeric scale should offer at least five positions and no more than seven. The odd number permits the participant to take any position on a continuum, including the neutral (center) position. Research suggests that scales with more than seven points exceed most people's ability to discriminate between the response points.
Form for analyzing team effectiveness.
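Because a five- or seven-point scale of this kind is ordinal, the defensible summaries are the mode and the median, as noted for the pro-rata form earlier. A small sketch, using invented responses to a single team-effectiveness item:

```python
import statistics

# Hypothetical sketch: summarizing anonymous seven-point responses to one
# team-effectiveness item. On an ordinal scale, the mode and the median
# are the defensible summary statistics. Responses are invented.

responses = [2, 3, 3, 4, 5, 5, 5, 6, 7]

print(statistics.mode(responses))    # 5  (the most frequent position)
print(statistics.median(responses))  # 5  (the middle of the nine responses)
```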
When trainers want a definite commitment, or pro/con data, they should use an even number of choices. That is, the trainee who selects "1" or "2" instead of "3" or "4" is making a statement of dissatisfaction: on a four-point scale, there is no middle number on which participants can express neutrality.
Instruments such as the TAT or a Team Effectiveness scale sometimes reveal feelings that don't come out in open discussions seeking the same data. This doesn't mean that Process Analysis sessions are not effective. They are, and they should be used as an ongoing, non-instrumented measurement activity.
They help measure participants' feelings about their own participation, about their own progress, and about the program and the instructors. There is just one problem with instrument-less sessions: Trainers have neither an instrument nor assurance that they've created an atmosphere in which participants freely express themselves.
Such feedback during the learning offers a most dynamic form of measurement. It provides data for decisions in dealing with people, with the group, and with course content. Because it involves the learners in that process, it motivates them to make a conscious investment of their energy in constructive ways.
Instruments or activities such as Process Analysis or TAT would obviously be no help in measuring progress toward psychomotor skills or cognitive acquisitions—but they might help explain and correct sluggish group dynamics in a program designed to reach such objectives.
The point is this: The instrument may fit either the objective of the program or the objective of the measurement. To measure learning, the instrument should be appropriate to the domain of the learning objective. There are no precise rules, but the guidelines shown in the figure below are useful.
Matching measurement to learning domains.
In the cognitive domain, paper-and-pencil instruments seem to prevail, and they deserve a few comments. Adults are people who went to school, and as students in academic systems they took a lot of tests. They didn't like them very much, but they're accustomed to them. They are especially familiar with quick-scoring formats such as True/False and Multiple Choice. Now, if feedback is motivating, yet people don't like these tests, there's an interesting conflict of conclusions! It can be explained by the way paper-and-pencil tests were conceived and administered. Schoolteachers too often devised "trick" questions, or used so-called objective tests to enforce subjective opinions. The entire testing process became a destructive game, not a legitimate vehicle for receiving honest feedback. Example: "There is no Fourth of July in Great Britain." True or False? Of course there is; they just don't have a celebration. Another example: "The instructor is the most important person in the classroom." True or False? Well, that depends, doesn't it? Not only were the tests tricky and arbitrary, they were often fed back in highly competitive forms: you got an A, or you got an F, or you were "in the lower quartile on a bell-shaped curve"!
In organizational settings, we are rarely interested in "normal distributions" or the competitive positioning of students. Nor are we primarily interested in the retention of information. We want to know that knowledge is there only to be sure that learners possess a proper inventory before they begin to apply the knowledge. In test-writing language, the tests in training should be "criterion-referenced" instead of "norm-referenced."
Criterion-referenced means that the test is constructed around the job-related knowledge required. It is the trainer's job to try to get EVERYONE to learn EVERYTHING. Instead of seeking a normal distribution of test scores, the goal is for every trainee to score 100. To accomplish that, trainers should test what is needed on the job, and then teach what is needed on the job. In effect, the trainer teaches to the test! In education that is a "no-no," but in training it is precisely what happens in well-designed programs.
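The contrast with norm-referenced scoring can be made concrete. In this hypothetical sketch (names and scores invented, with 100 assumed as the mastery criterion), each trainee is measured against the criterion rather than ranked against classmates:

```python
# Hypothetical sketch contrasting criterion-referenced with norm-referenced
# scoring: each learner is judged against the job-derived criterion
# (an assumed 100% mastery goal), not against other trainees.

def mastery_report(scores, criterion=100):
    """Return each trainee's gap from the criterion, not a ranking."""
    return {name: criterion - score for name, score in scores.items()}

scores = {"Lee": 100, "Pat": 85, "Kim": 100}
gaps = mastery_report(scores)
print(gaps)  # {'Lee': 0, 'Pat': 15, 'Kim': 0}
# Anyone with a nonzero gap gets further instruction until the gap closes.
```

Note that a classroom where everyone eventually shows a zero gap is a failure by norm-referenced logic (no spread of scores) and a success by criterion-referenced logic.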
Thus "factual recall" formats such as True/False, Multiple Choice, and Matching must be carefully designed for T&D measurement. You want to avoid the game in which testers try to outwit students and learners try to outguess the testers! Generally, True/False tests should be avoided because they encourage guessing (a 50 percent chance of being right). Multiple Choice questions should focus on application of knowledge, not just rote memorization. Factual recall can also be tested through short-answer questions.
It is beyond the scope of this book to teach test writing, but suffice it to say that every trainer should receive training in how to write criterion-referenced tests. Much of the angst among adult learners about tests is caused by years of taking poorly written ones. Good criterion-referenced tests appear fair to learners, help them learn, and meet few objections because they are so clearly linked to the job. Two excellent resources are Norman Gronlund, Assessment of Student Achievement, 7th ed. (Boston: Allyn & Bacon, 2002), and S. Shrock, W. Coscarelli, and P. Eyres, Criterion-Referenced Test Development, 2nd ed. (Alexandria, Va.: International Society for Performance Improvement, 2000).