Learning Results - Training and Development

Learning Results are defined as:
Knowledge: Mental achievement acquired through study and experience.
Expertise: Human behaviors, having effective results and optimal efficiency, acquired through study and experience within a specialized domain.

Knowledge, an intellectual or cognitive result of learning experiences, is the basic learning result. It is lodged in a person's mind. Measures of knowledge confirm the level of knowledge held by individuals within a particular subject area. For effectiveness and efficiency, paper-and-pencil tests are the primary means of measuring knowledge.

Human expertise is the second category of learning—the more complex learning result category. People with expertise have knowledge and are able to act upon that knowledge. The effective and efficient ability to act upon knowledge generally comes from experience beyond core knowledge. Measuring human expertise requires that an individual demonstrate behavior in a real or simulated setting. When assessing learning results, we generally recommend that both knowledge results and expertise results be measured. And it is logical that knowledge can be measured some time before expertise, in that the learner needs time to gain experience. The span of time will vary depending on the complexity of the area of expertise being developed.

What's a reasonable level of accomplishment? Conscientious learners want to demonstrate 100 percent achievement. Problem learners would settle for near zero. Professional instructors will tell you that they "win a few and lose a few," but that they like to shoot for 100 percent. The fact is, you can't change all the people all the time. "You can't make a silk purse out of a sow's ear," as the adage goes. For in-house programs, where learning goals are totally consistent with standards, 100 percent is a reasonable target, but the percentage will vary considerably with different programs in different parts of the organization. It is considerably lower in public seminars, where the goals may not be specific or reinforced by the management of the participants.

This type of evaluation requires that each learner be tested on each learning objective listed for the program. In very thorough T&D systems, it involves post-training measurement of actual on-the-job use of the new behaviors. Such double-checking shows not just that learners "can" but that they "actually are." To count the actual learning accomplishments is one step; to match them against predetermined targets is the second. Arranging the data in a visual display helps both the instructor and the ultimate evaluator. It's possible to build a simple matrix. Let's do that for the program on writing collection letters that we gave those clerks:

[Display: a matrix of the eight trainees against the program's learning objectives, with a "Yes" or "No" entry for each]

Just a quick glance at such a display tells any analyst that something went wrong on the second objective: Six of eight trainees cannot meet it. It also shows that Learner 3 has troubles: Look at all the "No" entries. The chart should also demonstrate the value of periodic feedback and testing during the learning. Those trends could have been detected and corrected—in ways that would have benefited both the instructors and the learners.
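Such a matrix is straightforward to build and scan programmatically. The sketch below uses hypothetical "Yes/No" data chosen only to match the discussion above (six of eight trainees miss objective 2, and Learner 3 misses several objectives); the learner names and objective labels are illustrative, not from any real program.

```python
# Illustrative learner-by-objective achievement matrix.
# True = "Yes" (objective met), False = "No" (objective missed).
# Data is hypothetical, shaped to match the narrative:
# six of eight trainees fail objective 2; Learner 3 has many "No" entries.

OBJECTIVES = ["Obj 1", "Obj 2", "Obj 3", "Obj 4"]

results = {
    "Learner 1": [True,  False, True,  True],
    "Learner 2": [True,  False, True,  True],
    "Learner 3": [False, False, False, True],
    "Learner 4": [True,  False, True,  True],
    "Learner 5": [True,  True,  True,  True],
    "Learner 6": [True,  False, True,  True],
    "Learner 7": [True,  True,  True,  True],
    "Learner 8": [True,  False, True,  False],
}

def failures_by_objective(results):
    """Count how many learners missed each objective (column totals)."""
    return [sum(1 for row in results.values() if not row[i])
            for i in range(len(OBJECTIVES))]

def failures_by_learner(results):
    """Count how many objectives each learner missed (row totals)."""
    return {name: row.count(False) for name, row in results.items()}

print(failures_by_objective(results))  # objective 2's count stands out
print(failures_by_learner(results))    # Learner 3's count stands out
```

Scanning the column totals flags a design or teaching problem (many learners missing one objective); scanning the row totals flags an individual-learner problem.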

Failures such as those shown in the display bring up another evaluation decision: What should we do about the "No" conditions? Retrain? Reappraise the objectives? Redesign the program? Take a look at the selection process? The answer could conceivably be a yes to every one of those options. There are circumstances in which some or all of those might be reasonable decisions for T&D evaluators. Let's look at the options.

Retraining makes some sense. Failure the first time isn't an unusual result for human endeavor. To reapply the original stimulus may produce different results. Maybe the mere repetition will cause different responses. Maybe there is some unidentified variable in the learner's life that will produce the desired learning the second time around. Instructors must be prepared to cope with individual differences. This may even mean reporting to management back on the job that certain "graduates" need special post-training follow-up. For example, a trainee may meet the qualitative criteria, but not the quantitative.

Take apprentice machine operators: At the end of training, they may be doing everything properly—but just not doing it fast enough.

Sometimes organizations are too zealous. They expect too much. "Benchmarks" may be necessary, with certain criteria set for the end of training and tougher criteria established for later dates. Here's an example: Learners might be expected to complete five units per hour at the end of the training, eight units per hour two weeks later, and twelve units per hour after a month on the job.

If experience proves that nobody can meet these goals, then consider the second option: that the objectives need to be reappraised. Some might be evaluated as "just plain unreasonable." If most trainees, but not all of them, fail to meet the desired goals, perhaps individual tolerances could be established. If so, the training department must be sure to follow up with the immediate supervisors of all graduates who cannot perform to the expected standard.

When significant numbers fail to achieve a given goal (or set of goals), then a redesign of the program should be considered. When the "outputs" are missing, it's only reasonable to reevaluate the "inputs." Perhaps new methods, more drill, or different visual aids will produce the desired learning.

Finally, people may fail to meet learning objectives because they lack the needed personal or experiential inventory. Personnel departments and managers have been known to put people into jobs for which they were misfit. It follows, too, that people may be assigned to training programs that involve objectives they cannot master. Even capable people may bring "negative affective inventories." These bad attitudes can inhibit or prevent the acquisition of the new behaviors. Good instructors can overcome some apathy and some negativism, but there is considerable question about how much of this responsibility ought to rest on the instructor's shoulders. Learners and their bosses have a responsibility for motivation, too!
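The graduated benchmark idea above can be sketched as a simple lookup of checkpoint quotas. The checkpoint names and the five/eight/twelve units-per-hour figures come from the example in the text; everything else is an illustrative assumption.

```python
# Minimal sketch of graduated "benchmark" criteria: a looser quota at the
# end of training, tougher quotas at later on-the-job checkpoints.
# Figures (5, 8, 12 units per hour) are taken from the text's example.

BENCHMARKS = {
    "end of training": 5,
    "two weeks later": 8,
    "one month on the job": 12,
}

def meets_benchmark(checkpoint, units_per_hour):
    """Return True if the measured output rate satisfies the checkpoint's quota."""
    return units_per_hour >= BENCHMARKS[checkpoint]

# A graduate producing 10 units per hour passes the early checkpoints
# but falls short of the one-month standard.
print(meets_benchmark("end of training", 10))
print(meets_benchmark("one month on the job", 10))
```

Separating the quotas from the pass/fail check makes it easy to reappraise the objectives later: if experience shows nobody meets a quota, only the table changes.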

Which brings up a brand new option: sometimes accepting defeat. This means that sometimes the best thing to do is to "give up" on certain individuals. It isn't that these people never could reach the goals; it's just that to produce the learning may be more costly than it is worth. That cost can involve energy and psychic pain as well as money.

When T&D specialists use such a display of achievement of learning goals, they may reasonably evaluate

  1. the reasonableness of the goals,
  2. the effectiveness of the training design,
  3. the effectiveness of the teaching, and
  4. the trainees' suitability to the learning assignment. (Did they belong in this program?)

A simple formula permits a quantitative analysis:

  1. Compute the potential: Number of students multiplied by number of goals.
  2. Test individual achievements: Test each student on each objective.
  3. Compute gross achievements: Add all the "Yes" achievements.
  4. Compute achievement quota: Divide step 3 by step 1.
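The four steps above reduce to one short computation. This sketch assumes the "Yes" achievements (step 2) have already been counted; the example figures of 8 trainees and 4 objectives echo the earlier matrix discussion and are illustrative.

```python
# Minimal sketch of the four-step achievement-quota formula.

def achievement_quota(num_students, num_goals, yes_count):
    """Quota = gross achievements / potential achievements."""
    potential = num_students * num_goals   # step 1: the potential
    # step 2 (testing each student on each objective) happens outside
    # this function and yields yes_count, the step-3 gross achievements.
    return yes_count / potential           # step 4: the quota

# Example: 8 trainees, 4 objectives, and 27 "Yes" entries in the matrix.
quota = achievement_quota(8, 4, 27)
print(quota)  # 0.84375 -- short of a typical 90 percent quota
```

Comparing the quota against a preset target (100 percent for critical skills, 90 percent for typical programs) turns the matrix into a single pass/fail judgment about the program.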

Some T&D officers set achievement quotas for every program. In highly critical skills, 100 percent achievement is mandatory; 90 percent is more typical for most organizational training. (The remaining 10 percent can accrue on the job with proper supervisory follow-up.) Public seminars seldom establish achievement quotas; indeed, they rarely use performance testing at all.

Achievement quotas, or performance testing, show what learning was accomplished. The same approach can reveal on-the-job utilization.

