Prompting, feedback and error correction in the design of a scenario machine
Abstract
A scenario machine restricts the user to a single action path through a system's functions and procedures. Four scenario machines were designed to embody different combinations of prompting, feedback, and automatic error correction in a “learning-by-doing” training simulator for a commercial, menu-based word processor. Compared with users trained directly on the commercial system, scenario machine users showed an overall advantage in the “getting started” stage of learning. Initial training on a “prompting + automatic correction” system was particularly efficient, encouraging a DWIM (“do what I mean”) approach to training system design. Curiously, training on a “prompting + feedback” system led to relatively impaired performance on a set of transfer-of-learning tasks. It was suggested that too much explicit informational support during training may obscure the task coherence of the action scenario itself, relative to a design that provides less explicit direction.
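The paper reports an empirical study rather than an implementation, but the control structure a scenario machine implies can be sketched. The minimal Python loop below (all names hypothetical, not the original training simulator) shows how the study's three design variables combine: prompting announces the next required step, feedback flags off-path actions, and automatic correction substitutes the required action DWIM-style.

```python
import itertools

# The single permitted action path through the (hypothetical) system.
SCENARIO = ["open_menu", "choose_edit", "type_text", "save_file"]

def run_scenario(get_action, prompt=True, feedback=True, auto_correct=True):
    """Step a learner through a fixed action path.

    prompt       -- announce the required action before the learner acts
    feedback     -- report when an off-path action is attempted
    auto_correct -- DWIM-style: treat any attempt as the required action
    """
    for step, required in enumerate(SCENARIO, 1):
        if prompt:
            print(f"Step {step}: please do {required!r}")
        action = get_action()
        while action != required:
            if auto_correct:
                # "Do what I mean": silently substitute the required action.
                action = required
                break
            if feedback:
                print(f"{action!r} is unavailable here; the scenario requires {required!r}")
            action = get_action()  # learner tries again
        print(f"Performed: {action}")

if __name__ == "__main__":
    # Scripted learner that always tries a wrong action first; with
    # prompting + automatic correction it still completes the scenario.
    attempts = itertools.cycle(["wrong", "open_menu", "wrong", "choose_edit",
                                "wrong", "type_text", "wrong", "save_file"])
    run_scenario(lambda: next(attempts),
                 prompt=True, feedback=False, auto_correct=True)
```

Toggling the three keyword arguments reproduces the design space the study compares, e.g. `prompt=True, feedback=True, auto_correct=False` for the “prompting + feedback” condition.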