Perception Results - Training and Development

Perception results are defined in two parts. Participant perceptions: perceptions of people with first-hand experience with systems, processes, goods, and/or services. Stakeholder perceptions: perceptions of leaders of systems and/or people with a vested interest in the desired results and the means of achieving them.
Of the three domains, perceptions have the lowest cost and the lowest return.
They are lowest in cost in the sense that simple, short, and standardized perception rating forms can be produced for participants and stakeholders. They are lowest in return because they provide the least valid information about performance outcomes. Research consistently shows that there is little correlation between perceptions and learning or performance, despite the popular myth that they are related.

Perception results are perceptual states held by various people in the organization, such as trainees and their managers. Measures of perceptions systematically access this information from selected groups of people. The mantra for the perception results domain should be:

  1. acquire the data;
  2. do not spend a disproportionate amount of resources to acquire it; and
  3. do not overinterpret it.

We recommend that you collect this data only as long as it is not used as a substitute for measuring learning results and/or performance results.

For example, people self-reporting that they have learned something is not a measure of what they have learned. We also recommend that you collect perception results data from both the participants and the stakeholders. Thus, from a general planning perspective, commit to both participant and stakeholder perception results in the Results Assessment Plan and check off both boxes. When you've asked someone for perceptions, you can do several things to evaluate the data. First, you can count the positive and the negative comments.
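A first pass at counting positive and negative comments can be sketched in a few lines. The keyword lists below are illustrative assumptions, not a validated instrument; a real instrument would need a far richer classification scheme:

```python
# Minimal sketch: tally positive vs. negative free-text comments.
# The keyword sets are hypothetical examples, not a validated taxonomy.
POSITIVE = {"helpful", "clear", "useful", "excellent", "relevant"}
NEGATIVE = {"boring", "confusing", "irrelevant", "slow", "worst"}

def tally_comments(comments):
    """Count how many comments contain positive vs. negative keywords.

    A single comment can register in both tallies if it mixes
    praise and complaint (e.g., "slow but useful").
    """
    pos = neg = 0
    for comment in comments:
        words = set(comment.lower().split())
        if words & POSITIVE:
            pos += 1
        if words & NEGATIVE:
            neg += 1
    return pos, neg

comments = [
    "Very helpful and clear examples",
    "Totally irrelevant to this organization",
    "Pacing felt slow but content was useful",
]
print(tally_comments(comments))  # (2, 2): the third comment hits both lists
```

Note that the raw counts are only a starting point; as the text goes on to argue, the tallies must then be classified and tempered before any action is taken.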

Next, you can classify the comments into inherent categories: the content, the instruction, the facilities, the appropriateness of the objectives. Minor trends should not be used as the basis for action; nor should hotly worded comments. T&D officers would be wise to temper such overstated comments as "The worst program I've ever been sent to!" or "Totally irrelevant to this organization," or "The instructor is an egotist—out to get the students." Such reactions need to be treated the way some judgments are at international skating and diving competitions: the highest and the lowest ratings are thrown out in the final computations.
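The judging analogy above amounts to a trimmed average. A minimal sketch, assuming ratings on a simple 1-to-5 scale (the scale itself is an assumption for illustration):

```python
def trimmed_mean(ratings):
    """Average the ratings after dropping the single highest and
    single lowest score, as judged-sport panels do with outliers."""
    if len(ratings) <= 2:
        # Too few scores to trim; fall back to a plain average.
        return sum(ratings) / len(ratings)
    trimmed = sorted(ratings)[1:-1]
    return sum(trimmed) / len(trimmed)

# One "worst program ever" (1) and one enthusiast (5) are discarded;
# the remaining scores [3, 4, 4] drive the evaluation.
print(round(trimmed_mean([1, 4, 4, 3, 5]), 2))  # 3.67
```

The point of the trim is exactly the one the text makes: a single overheated comment, in either direction, should not swing the evaluation of a program.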

Whenever one seeks perceptual data, there is an eventual "balancing phenomenon" in which comments contradict each other in almost equal numbers.

For instance, seventeen people will say the program moved too slowly; eighteen will say it was too rapid.

What does this really tell an evaluator? Probably not that the program was either too slow or too fast, but that the design needs to provide more time for individual activity and for one-on-one counseling. That might allow all thirty-five of the commentators to feel comfortably in control of their own scheduling.

It might also tell the instructor that there is too little ongoing process feedback during the class sessions. When as many as 10 percent of any student body mention pacing problems, the instructor is probably not getting feedback soon enough.

This raises an important point: Professional instructors are collecting perceptual data throughout the learning. They establish an atmosphere in which it is more than possible; it is inevitable! Instructors are clearly evaluating on perceptual bases whenever they adjust their instruction as a result of such perceptions.

Another issue is the nature of the question. If the instructor is to be evaluated on delivery, the use of visual aids, personal appearance, and the handling of students' questions, the evaluation should come from a professional. Surgeons don't ask their patients for comments on their scalpel techniques; wide receivers don't ask spectators to evaluate the way they caught that pass. Why, then, do T&D specialists ask learners to evaluate instructional technology? The proper and relevant questions concern learnings, and the learners' perceptions of those learnings.

Then there is the issue of timing. Perceptual data should be gathered at all phases of the learning—not at the end of the program when the real pressures are to go home, go back to the office, or go back to the shop.

The most useful perceptions come during the learning and when the learnings are being applied on the job. At the end of the learning, the "I can/I cannot meet the objectives" inquiry is especially useful. Coupled with the actual terminal test data, it gives T&D management cross-validated data on which to base the evaluation. On-the-job perceptions should focus on application of the new learnings.

A useful follow-up instrument asks learners to tally or estimate how often they have used the new skill on the job. Since this is a perceptual approach, they may simply choose between alternatives such as "always," "often," "now and then," "seldom," or "never." When the cumulative totals are compiled, modes and medians can be located as the basis for evaluation.
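Locating the mode and median of such ordinal responses can be sketched as follows. The scale labels mirror the alternatives above; mapping labels to integer ranks, and truncating a fractional median to the lower label, are assumptions of this sketch:

```python
from statistics import median, mode

# Ordinal scale from the follow-up instrument, lowest to highest frequency.
SCALE = ["never", "seldom", "now and then", "often", "always"]

def summarize(responses):
    """Return the modal and median labels for a list of scale responses.

    Labels are mapped to integer ranks so mode and median can be
    computed; a fractional median (even response count) is truncated
    to the lower label, which is a design choice of this sketch.
    """
    ranks = [SCALE.index(r) for r in responses]
    return SCALE[mode(ranks)], SCALE[int(median(ranks))]

responses = ["often", "always", "often", "now and then",
             "seldom", "often", "always"]
print(summarize(responses))  # ('often', 'often')
```

With the mode and median in hand, the evaluator can report a central tendency for skill use without over-interpreting any single respondent's estimate.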

Let's look at such an approach as it is applied to that program for writing collection letters. You have surveyed the perceptions of the persistence of the acquired skills. The compiled data look like this:

CHART 16.2. Sample Data Analysis Matrix

Such a record would produce a favorable evaluation of the program and indicate the general validity of the learning goals and their usefulness—as the graduates perceive them. Such perceptual data, coupled with the hard data about operating results (whether the letters are indeed collecting money!) can give a very rich amount of data on which to base the evaluation of T&D programs.

When post-training perceptions of applications reveal nonuse of the new skills, the T&D consulting follow-up may reveal causes. As we've noted, those causes include such things as unreasonableness of the objective, irrelevance of the objective, and, frequently, failure of the immediate boss to reinforce the new behavior. It is by such evaluation of an existing program that more than one T&D officer learns that the program started one level too low! The indicated action for that evaluative discovery is a training program for the bosses!

Post-training perceptual instruments can also make effective use of open questions. To arrive at perceptions, T&D evaluators like questions such as these:

  • If you were attending the same training today, what would you do differently?
  • What objectives do you feel should be expanded?
  • What objectives would you condense?
  • What objectives would you drop?
  • What objectives would you add?
  • What course activities would you expand?
  • What course activities would you eliminate?
  • What course activities would you condense?
  • What would you like to tell us about this course and the way it has influenced you or the way you do your work?
