What are some key mechanisms of change in the information processing approach?

Before proceeding to our methodological proposals, it seems appropriate to summarize the current state of research on class inclusion as we see it. On the one hand, we have Inhelder and Piaget's theoretical account and, on the other, the complex set of results obtained from the experimental studies. A gap exists between the hypothetical structures and processes which form the basis of the theory and the level of performance represented by the experimental data. The gap arises because the theoretical account is presented at a level of generality which leaves it uncertain whether it is sufficient to account for the complex and varied behavior it purports to explain. Indeed, there is no way of determining what its consequences would be at the level of performance. A much more detailed account of the functioning of specific processes is necessary before these uncertainties can be dispelled.

The existence of a gap between the levels of theory and performance is not confined to the work of Piaget and his collaborators. The confines of the present paper will not permit an extended discussion of this point, but the same uncertainty regarding consequences in performance terms and sufficiency to account for behavior surrounds theories emanating from neobehaviorist sources such as Berlyne's (1965) account of directed thinking.

It is our contention that the information processing approach which follows provides a methodology which bridges the gap between theory and performance. During the last decade, information processing analysis has gained wide currency as an approach to the study of cognition. This approach assumes that, for a broad range of cognitive activity, humans are representable as information processing systems. A few gross characteristics of the system (e.g., information transfer rates, size of immediate memory, seriality) are sufficient to cause problem solving to take place in what Newell and Simon (1971) call a problem space. The problem space is a collection of symbolic representations and operations determined by the task environment, and the problem space in turn determines the programs that can actually be used.
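As an illustration of this formulation, the following is a minimal Python sketch of a problem space reduced to an initial state, a goal test, and a set of operators, with a simple search routine standing in for the programs that the space permits. The toy task and every name in the sketch are our own illustrative assumptions, not Newell and Simon's notation.

    # Hedged sketch: a "problem space" as states plus operators. The toy task
    # (reach 3 from 0 by adding 1 or 2) and all names are illustrative only.
    problem_space = {
        "initial_state": 0,
        "goal_test": lambda state: state == 3,
        "operators": [lambda s: s + 1, lambda s: s + 2],
    }

    def search(space, depth_limit=5):
        """Breadth-first search through the space the operators define."""
        frontier = [(space["initial_state"], [])]
        for _ in range(depth_limit):
            next_frontier = []
            for state, path in frontier:
                if space["goal_test"](state):
                    return path
                for op in space["operators"]:
                    next_frontier.append((op(state), path + [op]))
            frontier = next_frontier
        return None

    print(len(search(problem_space)))   # shortest operator sequence found has length 2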

The most specific theory of human problem solving (Newell & Simon, 1972) deals entirely with adult subjects. Although the relevance of information processing models to theory construction in the developmental area has begun to be recognized (Lunzer, 1969; Biggs, 1969; Flavell & Wohlwill, 1969), most of these uses of the information processing approach in cognitive development have been at the metaphorical level. For example, Flavell & Wohlwill (1969) make the general statement that “intellectual development is essentially a matter of ontogenetic change in the content and organization of highly intricate ‘programs’ …” When employed in this fashion, information processing analysis constitutes simply a different, rather than an improved, approach to the study of cognitive development. Theoretical statements employing only the metaphorical level of information processing analysis suffer from the same deficiencies as those already imputed to the theories of Piaget and Berlyne. It has been clearly demonstrated by Newell and Simon (1972), however, that the information processing approach can go far beyond the metaphorical level. When information processing analysis is combined with computer simulation, the result is a theorizing medium which provides both ease of detection of mutual contradictions and ambiguity, and an explicit method for examining the exact behavioral consequences of theoretical statements. These are precisely the attributes which the major theories of cognitive development lack.

Application of the information processing approach to the problems posed by cognitive development was advocated by Simon (1962), but up to the present only a few studies of this type have been carried out (Gascon, 1969; Klahr & Wallace, 1970a, 1970b; Young, 1971). In an earlier paper (Klahr & Wallace, 1970b), we attempted to demonstrate that a set of tasks typically used to assess the stage of concrete operations calls upon a collection of fundamental processes that, when appropriately organized for each task, are sufficient to solve the problem posed. Our initial view of the information processing model of the child's performance on a typical Piagetian task was as follows:

We believe that the major task facing the child who has just been presented with an experimental task is to assemble, from his repertoire of fundamental information handling processes, a routine that is sufficient to pass the task at hand. We view the information processing demands of the tasks as being analogous to the compilation and execution of a computer program. [See Figure 8.6]. Incoming visual and verbal stimuli are first encoded into internal representations. Then the assembly system attempts to construct, from its repertoire of fundamental processes, a task-specific routine that is sufficient to meet the demands of the verbal instructions. Having assembled such a routine, the system then executes it.
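Read as a program, the compile-and-execute view quoted above can be rendered in a few lines of Python. The names below (encode, FUNDAMENTAL_PROCESSES, assemble_routine) and the toy task are our own illustrative assumptions, not Klahr and Wallace's actual processes.

    # Hedged sketch of the "assemble, then execute" view of task performance.
    # All names and the toy task are illustrative assumptions, not the model itself.

    def encode(stimuli):
        """Encode incoming visual/verbal stimuli into internal representations."""
        return [("symbol", s) for s in stimuli]

    FUNDAMENTAL_PROCESSES = {                    # repertoire of fundamental processes
        "quantify": lambda items: len(items),
        "exceeds_two": lambda n: n > 2,
    }

    def assemble_routine(instructions):
        """Assemble a task-specific routine sufficient for the verbal instructions."""
        steps = [FUNDAMENTAL_PROCESSES[name] for name in instructions]
        def routine(representation):
            result = representation
            for step in steps:
                result = step(result)            # execute the assembled routine
            return result
        return routine

    representation = encode(["duck", "duck", "goose"])
    routine = assemble_routine(["quantify", "exceeds_two"])
    print(routine(representation))               # -> True (more than two objects)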

Detailed descriptions of three parts of the model were presented: the internal representation of objects, a collection of fundamental processes and a set of task-specific routines. We will briefly describe these elements below.


Fig. 8.6. Information processing model of task performance.


URL: https://www.sciencedirect.com/science/article/pii/B9780122495502500171

Information Processing

Christopher D. Wickens, John M. Flach, in Human Factors in Aviation, 1988

Publisher Summary

This chapter reviews representative research generated by the information processing approach to human cognition and presents some implications of this research for the design of safe, comfortable, and efficient aviation systems. The information processing approach has not been without its critics. It has been accused of playing 20 questions with nature and losing. The essence of this criticism is that the science has become too segmented along the lines of specific experimental tasks: one group studies the infinite permutations on choice reaction time tasks, another group studies memory search tasks, and another focuses only on tracking tasks. Thus, the science has produced an enormous catalog of information about a few rather esoteric laboratory tasks, yet it has contributed very little to the understanding of how humans function outside the laboratory in real-world settings. The future success of the information processing approach will likely rest on its ability to deal with this criticism. There is a need for human factors specialists to widen their perspective beyond the relatively simple tasks that currently dominate research, to increase the complexity of experimental tasks, and to incorporate more ecologically valid sources of information. The information processing paradigm has contributed both knowledge and tools relevant for understanding human performance in aviation systems. The study of human performance in aviation systems provides an excellent opportunity to better understand general issues related to human cognition in complex environments.


URL: https://www.sciencedirect.com/science/article/pii/B9780080570907500114

Testing Tapping Time-Sharing: Attention Demands of Movement Amplitude and Target Width

Barry H. Kantowitz, James L. Knight, Jr., in Information Processing in Motor Control and Learning, 1978

Publisher Summary

This chapter discusses the attentional demands of a simple voluntary positioning movement and shows how an understanding of motor control and performance can be aided by an information-processing approach. The study of voluntary positioning movements is a venerable topic in experimental psychology, with antecedents as early as the nineteenth century. A simple positioning movement requires an individual, sometimes called a subject (psychology jargon) or an operator (engineering jargon), to relocate a pointer such as a stylus or a finger by executing a spatial movement to reach a well-defined target position. The experimenter who induces such a voluntary movement is interested primarily in the speed and accuracy with which the pointer gets from here to there. Although terminology has changed, current views about simple positioning movements bear a strong resemblance to the position stated by Woodworth (1899): a ballistic or open-loop initial movement phase is followed by a closed-loop phase in which the processing of feedback information controls pointer position until the target is reached. The open-loop portion of the movement is often said to be under the control of a motor program. Automatization releases attention or capacity so that it is available for performing a secondary task. Although time-sharing paradigms are in some ways methodologically simpler than probe paradigms, they cannot sweep out time patterns of attention throughout a movement.
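The two-phase account attributed to Woodworth in the summary can be made concrete with a short sketch. The ballistic fraction, gain, and tolerance below are arbitrary illustrative choices, not values from the chapter.

    # Hedged sketch of a positioning movement: an open-loop ballistic phase,
    # then a closed-loop phase in which feedback closes the remaining error.
    # All numerical values are arbitrary illustrative assumptions.

    def positioning_movement(start, target, ballistic_fraction=0.9, gain=0.5, tol=0.01):
        position = start + ballistic_fraction * (target - start)   # open-loop phase
        corrections = 0
        while abs(target - position) > tol:                        # closed-loop phase
            position += gain * (target - position)                 # feedback correction
            corrections += 1
        return position, corrections

    print(positioning_movement(0.0, 10.0))   # -> (9.9921875, 7): one ballistic move plus a few corrections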


URL: https://www.sciencedirect.com/science/article/pii/B9780126659603500153

Foreword

Edwin A. Fleishman, in Transfer of Learning: Contemporary Research and Applications, 1987

The book is timely for a number of reasons. First, a new upsurge in interest in transfer of learning has occurred among researchers with a cognitive or information-processing approach to human learning. For many years, cognitive psychologists seemed to reject the topic, and it was difficult to locate references to transfer in the indexes of books on cognitive psychology. The interests of these psychologists seemed not to be in transfer of learning but in the structures and processes involved in the encoding and retrieval of information during initial task acquisition and retention. This is a far cry from the centrality accorded the topic in earlier days. Thus, McGeoch and Irion's classic 1952 book on human learning states that transfer of learning “is one of the most general phenomena of learning and, by means of its influence, almost all learned behavior is interrelated in various complex ways.” And Battig, in a 1966 review of the topic, concluded that “the magnitude and generality of the effects produced by previous learning upon performance in new learning tasks require that transfer phenomena be placed at or near the head of the list insofar as overall importance to psychology is concerned.” Despite the benign neglect by subsequent cognitive psychologists, it is reasonable to assume that the large volume of recent data from information-processing research has relevance to our understanding of transfer of learning, even if this has not been its primary focus. And, as this book demonstrates, interest among cognitive psychologists in issues of transfer has been increasing with the significant enrichment in our conceptualization and applications of research in this area.


URL: https://www.sciencedirect.com/science/article/pii/B9780121889500500059

QUANTIFICATION PROCESSES

David Klahr, in Visual Information Processing, 1973

Publisher Summary

This chapter discusses an information processing model, cast in the form of a production system, that could be used to explain different patterns of success and failure and the effects of training. At the heart of the production system were constructs called quantification operators. These quantification operators were apt examples of what Walter Reitman meant when he once characterized the information processing approach as a way of inventing what needs to be known. They were essential to the logic of the model discussed in the chapter, and so their existence was postulated. The overall research strategy is to formulate models of the developing organism's performance at two different points in time, and then to formulate a model of the transition or developmental mechanisms. If an extreme engineering approach to the information processing system is adopted, changes can be viewed in terms of four major classes of variables: (1) programs, (2) data structures, (3) capacities, and (4) rates. Changes in the first two result from software variation; changes in the last two result from hardware variation.
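A minimal sketch of what a production system built around a quantification operator might look like is given below. The particular productions, working-memory symbols, and toy task are invented for illustration; they are not Klahr's actual productions.

    # Hedged sketch: a toy production system whose first step is a quantification
    # operator. Productions, symbols, and the task are illustrative assumptions.

    def quantify(collection):
        """Quantification operator: deposit a quantitative symbol in working memory."""
        return ("QS", len(collection))

    # Each production is a (condition, action) pair over working memory (a set of symbols).
    PRODUCTIONS = [
        (lambda wm: ("GOAL", "count") in wm and not any(s[0] == "QS" for s in wm),
         lambda wm, world: wm | {quantify(world["objects"])}),
        (lambda wm: any(s[0] == "QS" and s[1] > 0 for s in wm),
         lambda wm, world: wm | {("SAY", "some")}),
    ]

    def run(wm, world, cycles=10):
        for _ in range(cycles):
            fired = False
            for condition, action in PRODUCTIONS:      # first matching production fires
                if condition(wm):
                    new_wm = action(wm, world)
                    if new_wm != wm:
                        wm, fired = new_wm, True
                        break
            if not fired:
                break
        return wm

    world = {"objects": ["dog", "dog", "cat"]}
    print(run({("GOAL", "count")}, world))   # working memory now holds ("QS", 3) and ("SAY", "some")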


URL: https://www.sciencedirect.com/science/article/pii/B978012170150550007X

Dimensions of Motor Task Complexity

Keith C. Hayes, Ronald G. Marteniuk, in Motor Control, 1976

Publisher Summary

This chapter discusses many questions that arise when trying to understand the control processes involved in different forms of complex motor behavior. One convenient way to grasp a fairly wide perspective on task complexity is to view it from two of the principal approaches to understanding motor control. These two approaches, the “preprogramming model” and the “information processing” approach, may be thought of as different dimensions, each capable of providing insight into the nature of task complexity. In essence, the notion of motor programming embodies the view that stored sets of motor commands, both innate and learned, are available within the central nervous system to be called upon at will and synthesized into a desired movement. The coordinative structures, the body's reflexes, are most familiar to clinicians, who frequently see them manifested in the normal course of development of the neonate or in the motor expression of the brain-damaged and mentally retarded. Spinal reflexes such as the stretch reflex are readily identified, but higher-level reflexes, for example the labyrinthine and righting reflexes, are less well known.


URL: https://www.sciencedirect.com/science/article/pii/B9780126659504500146

Philosophy of Computing and Information Technology

Philip Brey, Johnny Hartz Søraker, in Philosophy of Technology and Engineering Sciences, 2009

3.6 Human-Computer interaction

Human-Computer Interaction (HCI) is a subfield within computer science concerned with the study of the interaction between people (users) and computers and the design, evaluation and implementation of user interfaces for computer systems that are receptive to the user's needs and habits. It is a multidisciplinary field, which incorporates computer science, behavioral sciences, and design. A central objective of HCI is to make computer systems more user-friendly and more usable. Users interact with computer systems through a user interface, which consists of hard- and software that provides means of input, allowing users to manipulate the system, and output, allowing the system to provide information to the user. The design, implementation and evaluation of interfaces is therefore a central focus of HCI.

It is recognized in HCI that good interface design presupposes a good theory or model of human-computer interaction, and that such a theory should be based in large part on a theory of human cognition to model the cognitive processes of users interacting with computer systems [Peschl and Stary, 1998]. Such theories of human cognition are usually derived from cognitive psychology or the multi-disciplinary field of cognitive science. Whereas philosophers have rarely studied human-computer interaction specifically, they have contributed significantly to theorizing about cognition, including the relation between cognition and the external environment, and this is where philosophy relates to HCI.

Research in HCI initially relied extensively on classical conceptions of cognition as developed in cognitive psychology and cognitive science. Classical conceptions, alternatively called cognitivism or the information-processing approach, hold that cognition is an internal mental process that can be analyzed largely independently of the body and the environment, and which involves the manipulation of discrete internal states (representations or symbols) according to rules or algorithms [Haugeland, 1978]. These internal representations are intended to correspond to structures in the external world, which is conceived of as an objective reality fully independent of the mind. Cognitivism has been influenced by the rationalist tradition in philosophy, from Descartes to Jerry Fodor, which construes the mind as an entity separate from both the body and the world, and cognition as an abstract, rational process. Critics have assailed cognitivism for these assumptions and have argued that it cannot explain cognition as it actually takes place in real-life settings. In its place, they have developed embodied and situated approaches to cognition that conceive of cognition as a process that cannot be understood without intimate reference to the human body and to the interactions of humans with their physical and social environment [Anderson, 2003]. Many approaches in HCI now embrace an embodied and/or situated perspective on cognition.

Embodied and situated approaches share many assumptions, and often no distinction is made between them. Embodied cognition approaches hold that cognition is a process that cannot be understood without reference to the perceptual and motor capacities of the body and the body's internal milieu, and that many cognitive processes arise out of real-time goal-directed interactions of our bodies with the environment. Situated cognition approaches hold that cognitive processes are co-determined by the local situations in which agents find themselves. Knowledge is constructed out of direct interaction with the environment rather than derived from prior rules and representations in the mind. Cognition and knowledge are therefore radically context-dependent and can only be understood by considering the environment in which cognition takes place and the agent's interactions with this environment.

Embodied and situated approaches have been strongly influenced by phenomenology, especially Heidegger, Merleau-Ponty and the contemporary work of Hubert Dreyfus (e.g., [Winograd and Flores, 1987; Dourish, 2001; Suchman, 1987]). Philosophers Andy Clark and David Chalmers have developed an influential embodied/situated theory of cognition, active externalism, according to which cognition is not a property of individual agents but of agent-environment pairings. They argue that external objects play a significant role in aiding cognitive processes, and that therefore cognitive processes extend to both mind and environment. This implies, they argue, that mind and environment together constitute a cognitive system, and the mind can be conceived of as extending beyond the skull [Clark and Chalmers, 1998; Clark, 1997]. Clark uses the terms “wideware” and “cognitive technology” to denote structures in the environment that are used to extend cognitive processes, and he argues that because we have always extended our minds using cognitive technologies, we have always been cyborgs [Clark, 2003]. Active externalism has been inspired by, and inspires, distributed cognition approaches to cognition [Hutchins, 1995], according to which cognitive processes may be distributed over agents and external environmental structures, as well as over the members of social groups. Distributed cognition approaches have been applied to HCI [Hollan, Hutchins and Kirsh, 2000], and have been especially influential in the area of Computer Supported Cooperative Work (CSCW).

Brey [2005] has invoked cognitive externalist and distributed cognition approaches to analyze how computer systems extend human cognition in human-computer interaction. He claims that humans have always used dedicated artifacts to support cognition, artifacts like calendars and calculators, which HCI researcher Donald Norman [1993] has called cognitive artifacts. Computer systems are extremely versatile and powerful cognitive artifacts that can support almost any cognitive task. They are capable of engaging in a unique symbiotic relationship with humans to create hybrid cognitive systems in which a human and an artificial processor process information in tandem. However, Brey argues, not all uses of computer systems are cognitive. With the emergence of graphical user interfaces, multimedia and virtual environments, the computer is now often used to simulate environments to support communication, play, creative expression, and social interaction. Brey argues that while such activities may involve distributed cognition, they are not primarily cognitive themselves. Interface design has to take into account whether the primary aim of applications is cognitive or simulational, and different design criteria exist for each.


URL: https://www.sciencedirect.com/science/article/pii/B9780444516671500513

Human-Computer Interaction in Architectural Design

Pierre Goumain, Joseph Sharit, in Handbook of Human-Computer Interaction, 1988

Architectural Design Modelling and Graphic Representations

Throughout the early stages of the design process, drawings are the main vehicle used by architects to externalize their current internal representations of the problem structure and of potential design solutions. Early design drawings often look like a private notation system which can only be communicated to others when supplemented by verbal commentary, what Schon (1983, 1985) has called the “language of designing.” As the design develops, and as tentative ideas are merged into a solution to which the designer is increasingly committed (through “shifts in stances,” in Schon's terminology), drawings become more explicit and capable of being understood by others on their own. The skilled designer will constantly monitor his or her own performance and select those design strategies that are most likely to lead to the desired solution (the “language about designing,” in Schon's terminology). Thus Schon emphasizes the designer's self-reflectivity while designing, what he calls “reflection-in-action,” and presents designing as a conversation with the materials of a “situation” (by venturing a design conjecture retrieved from memory). By remaining attentive to feedback, positive and negative, the designer is “reframing” problem situations, while also adapting situations to frames by exploiting new design opportunities hinted at by the current design. This procedure takes place through a sequence of “design experiments” which, in contrast to scientific experiments, are all at once “exploratory, move-testing” and “hypothesis-testing.” During the process, representations such as drawings play the role of “virtual worlds” which enable the “accurate rehearsal” of intended actions. The designer's changing grasp of the problem and his or her production of a solution are thereby interlocked throughout the process.

This analysis of the design/drawing process can be paralleled to the classificatory framework proposed by Goumain (1973) for studying the architectural design process. In this framework, an information-processing approach is taken to analyze design activity both at the level of the individual designer and at the level of the design organization. The sources of information used to carry out these activities can be traced to five levels of organizational complexity: (1) the individual designer's memory; (2) solution evaluations, drawings, and written lists produced by earlier design activities; (3) the design office library, technical literature, design manuals, handbooks of office procedures, previously completed jobs, standard details, etc., as well as advice from professional colleagues within the office; (4) the design coalition, which includes all “agents” who make a contribution to the design process (consultants, etc.); and (5) the external environment where the design will eventually be used. This last level includes the public at large, its democratically elected representatives, and officers such as the town planner and the building inspector.

When confronted with any new design task, the designer brings to bear the totality of his or her prior experience and knowledge to perceive and understand the nature of the task and to preselect tentative solutions. These observations are consistent with Lebahar's (1983) thesis that the architectural design process is one of uncertainty reduction through the use of graphic simulation and sketches. The graphic simulation model provides a temporary representation of the problem that allows hypotheses concerning the “object model” to be expressed, tested, and modified prior to reaching conclusions concerning these hypotheses.

In summary, it is through the constant exercise of modelling and simulating by drawing that the architectural designer develops the ability to think in three dimensions and to manipulate the symbolic codes and conventions of architectural representation by drawing. This spatial ability is also developed through the making, manipulating, and visualizing of physical three-dimensional models. Through increasing experience the designer can call upon a widening arsenal of design strategies. Having assimilated these in relation to analogous design problems, he or she can successfully accommodate them to reduce the uncertainty presented by new problems.

If CAAD systems are to become “intelligent,” they must incorporate an explicit knowledge of the semantics of architectural symbols and drawings. In doing so, they should not bypass the designer's learning and growth process. Clearly, it is the task of design research to provide such knowledge.

One approach to the study of the semantics of architectural representations is to focus on their memorization in relation to design experience. Mallen and Goumain (1973), together with Wood (1973), suggested that the experienced designer organizes the information content of an architectural drawing differently than the layman, much as chess grandmasters appear to structure information about positions on a chess board in different and larger chunks than ordinary players (Chase and Simon, 1973). In a laboratory experiment, subjects were asked to copy an original drawing by examining it for as long as they wanted (exposure time), then turning around on a swivel chair to transfer the information onto a drawing board (drawing time), and then repeating the process as many times as needed (Figures 8 and 9). Videotaped records and interviews permitted the identification of the chunks of information memorized in every exposure/drawing cycle in the sequence. For the copying of an architectural plan, chunking was shown to be strikingly different in experienced and inexperienced subjects. For the former, both exposure and drawing times were very lengthy for the initial cycles, while for the latter these times were evenly distributed across the drawing cycles. Experienced architects clearly brought their experience to bear. For example, one of them was convinced that he was copying the original drawing when in fact he was using his knowledge of plumbing to redesign the layout of a bathroom. A control experiment involved the memorization of a non-architectural original (a Mondrian painting) and showed almost identical patterns between experienced and inexperienced subjects. Akin (1986) reports a set of three similar experiments in which subjects were asked to interpret, trace, or copy the plan of a church (pp. 119-130). The findings broadly concur with the above. In particular, they appear to confirm the hierarchically organized and nested structure of memory chunks in architectural design.


Figure 8. Perception and copying experiment: experienced subject.

(after Mallen and Goumain, 1973)


Figure 9. Perception and copying experiment: inexperienced subject.

(after Mallen and Goumain, 1973)


URL: https://www.sciencedirect.com/science/article/pii/B9780444705365500373

ON THE DEVELOPMENT OF THE PROCESSOR*

H.A. Simon, in Information Processing in Children, 1972

Long-Term Memory

Consistent with recent emphasis in cognitive psychology, the papers at this symposium mention long-term memory (LTM) explicitly less often than they do short-term memory (STM). Nevertheless, they suggest a role for LTM in development that is different in one fundamental respect from the role accorded to it classically.

A vulgar view of LTM might picture it as a large bin or filing cabinet in which the child, in the course of his development, accumulates new facts and knowledge. I suppose no one would quarrel with the proposition that this is part of what happens. To acquire a logograph-reading skill, the child must learn the meanings of a suitable vocabulary of logographs. This is a matter of storing away in the filing cabinet properly indexed pairs of associates. However, we have seen that it is equally essential that the child acquire appropriate programs for processing the logographs in accordance with the “reading” instructions.

In almost all references to LTM in this symposium, the speakers talk about the storage of “programs” or “strategies” or “rules, rituals, and tricks of the trade”—that is, of processes rather than information. This, of course, is the touchstone of the information processing point of view in psychology: “Knowing” is largely “knowing how”—that is, skill. It is the viewpoint that led Bartlett, in Thinking (1958), to take motor skill as his metaphor for thinking ability.

The Genevans in this symposium, Inhelder and Cellérier, have some interesting points to make about the relation between a structuralist view, which describes the concepts that the child acquires as abstract structures, and an information-processing approach, which describes them as programs—or, in the terminology of Inhelder and Cellérier, as schemata (see Part III).

I am reminded by the structure-schema distinction of the analogous distinction that linguists make between language competence and performance. In the view of some, language competence formulated, say, as a transformational grammar, provides an abstract description of what the native speaker “knows,” but does not describe the form in which that knowledge is held in memory or used to process language. I hasten to add that I am not at all sure that Inhelder and Cellérier would accept this analogy with the structure-schema dichotomy.

In her descriptions of some experiments on length conservation, Inhelder offers an interesting hypothesis about what needs to be stored in LTM before the child can perform such tasks. The tasks involve constructing a line of some sort that matches in length a line presented by the experimenter. The lines, or “roads,” are constructed from matchsticks, the experimenter using sticks of a different length than those used by the child. Hence, as in all the classic conservation experiments, the situation confronts the child with conflicting cues: He can count matchsticks, or he can estimate lengths. The difficulty in the task, argues Inhelder, lies in reconciling the judgments of equivalence arrived at by these different routes, and choosing a criterion that is consistent with the requirements of the task. She offers a similar analysis of the standard matching test that involves comparing the sizes of sets and subsets.

Now this interpretation is clearly not intended by Inhelder to supersede the usual Geneva analysis, since she speaks explicitly of “resolution through reciprocal assimilation of two different subsystems that do not necessarily belong to the same developmental level.” Thus, underlying the learning phenomena are structures stored in LTM that are acquired at different stages of development. If that is so, then we must suppose that each of these structures is associated with (1) processes of attention and perceptual encoding for acquiring information relating to the structure (e.g., counting operations and length-estimating operations for visual stimuli); and (2) an internal representation for encoding and storing in LTM information characterizing the structure.

The work by Klahr and Wallace (Part IV) can be interpreted as an endeavor to make entirely explicit the information processing associated with these kinds of cognitive structures. These authors agree with the other symposiasts in filling LTM mainly with programs; but they detail not only the programs, but also the encoding of information in LTM—the nature of the internal representation. They postulate that such information is stored in the form of lists and description lists—the latter being better known to contemporary psychologists as “feature lists.” A description, or feature, is simply a two-termed relation between an object and one of its properties: e.g., the color (relation) of the apple (object) is red (property or value).

An interesting characteristic of this representation is that it makes the contents of LTM rather homogeneous in organization, independently of the sensory channel through which the information was acquired. Thus, a mental picture is made of the same stuff (list structures of features) as a mental symphony. Of course, it is only the form of organization they share in common; the specific relations encoded depend on sensory mode—the feature “red” must have been acquired through the eyes, and “interval of a fifth” through the ears.
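A minimal sketch of this description-list encoding, assuming a plain (object, relation, value) triple for each feature; the helper names and the particular features are illustrative, not Klahr and Wallace's notation.

    # Hedged sketch: descriptions ("features") as (object, relation, value) triples,
    # stored in one homogeneous format regardless of the sensory channel of origin.

    ltm = []   # long-term memory as a flat store of descriptions

    def store(obj, relation, value):
        ltm.append((obj, relation, value))

    def describe(obj):
        """Return the description list (feature list) of an object."""
        return [(rel, val) for (o, rel, val) in ltm if o == obj]

    store("apple", "color", "red")                # acquired through the eyes
    store("apple", "shape", "round")
    store("symphony-theme", "interval", "fifth")  # acquired through the ears, same format

    print(describe("apple"))            # [('color', 'red'), ('shape', 'round')]
    print(describe("symphony-theme"))   # [('interval', 'fifth')]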

Postulating this common organization for encoding stimulus information illuminates one of the central issues discussed (Part III) by Jacqueline Goodnow in her paper: the issue of intersensory correspondences of stimuli. Suppose that a sequence of sounds is encoded as a list:

Tap−pause−tap−tap−tap

Suppose, further, that the child has a list of pairs (associations) in LTM:

tap → circle;  pause → space;  tap₂ after tap₁ → circle₂ to the right of circle₁

Then, a relatively simple program will allow him to translate the aural stimulus into a visual one, which he can undertake to draw:

circle−space−circle−circle−circle
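The “relatively simple program” mentioned above might look like the following sketch, which relies only on the association pairs just listed; the element-by-element mapping, with left-to-right order carried by list position, is our assumption about how the translation is carried out.

    # Hedged sketch of the intermodal translation program. The association table
    # mirrors the pairs in the text; spatial order ("to the right of") is carried
    # implicitly by list position.

    associations = {"tap": "circle", "pause": "space"}

    def translate(aural_sequence):
        """Map the aural stimulus, element by element, onto a visual one."""
        return [associations[sound] for sound in aural_sequence]

    aural = ["tap", "pause", "tap", "tap", "tap"]
    print("-".join(translate(aural)))   # circle-space-circle-circle-circle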

What the homogeneous coding explains is the possibility of anyone even finding meaningful the task of intermodal correspondence. It does not explain why the task may be difficult for children. In her paper, Goodnow shows us what assumptions are involved in supposing that the child “should know” what intermodal associations the adult has in mind (why not: tap, pause → large circle; tap → small circle? Why not, indeed?). She demonstrates that the child must acquire, and store in LTM, a whole host of conventions, many of them culture-specific, about the correspondences that are “appropriate.”


URL: https://www.sciencedirect.com/science/article/pii/B9780122495502500092

The Structural Processes Underlying Transfer of Training

STEPHEN M. CORMIER, in Transfer of Learning: Contemporary Research and Applications, 1987

B CUING PROPERTIES OF SIMULATORS

1 Fidelity

The use of simulators to teach trainees how to operate aircraft and other equipment has been an area of research based on the identical elements approach. For example, the airplane simulator is supposed to provide the kind of environment that would be experienced by a pilot in an actual airplane. To the extent that the simulator has a high correspondence (more identical elements) with the actual equipment, it can be said to possess high physical fidelity.

The transfer effectiveness of simulators is well established (e.g., Lintern, 1980), and as Gerathewohl (1969) has noted, high-fidelity simulators specifically have demonstrated their value. Unfortunately, high-physical-fidelity simulators are expensive to construct, and the cost is usually directly proportional to the degree of fidelity. As a result, much effort has gone into determining how much fidelity is needed; in other words, how far a simulator can deviate from the actual equipment and still produce high positive transfer.

A consideration of cuing relationships between tasks can help to clarify some of the inconsistent research findings on the degree of fidelity required, findings which have proved refractory to analysis in the identical elements approach. Motion has been a cuing dimension found to exert inconsistent effects on performance (e.g., Caro, 1979; Jacobs & Roscoe, 1975). One reason for this inconsistency is that different kinds of motion (e.g., cockpit motion, rough-air simulation, etc.) have different effects (Ince, Williges, & Roscoe, 1975). National Aeronautics and Space Administration researchers (Rathert, Creer, & Sadoff, 1961) found a significant correlation between increased motion and pilot performance with an unstable or sluggishly responding aircraft. Ruocco, Vitale, and Benfari (1965) showed that cockpit motion on a simulated carrier-landing task did improve task performance as measured by successful landings, altitude error, and time outside the flight path. Jacobs and Roscoe (1975) found that motion cues are not useful in transfer to aircraft that are easy to fly, however (cf. Nautaupsky, Waag, Meyer, McFadden, & McDowell, 1979).

Gundry (1977) notes that aircraft motion cues can occur either because of pilot control (e.g., changes in direction or altitude) or because of external forces (e.g., turbulence). He has hypothesized that motion cues may be redundant in the case of pilot-initiated changes not only because the pilot is already alerted to the change but also because aircraft are designed to be as stable and easy to control as possible in normal use. In such a case, other stimulus information is enough to cue the appropriate response. Disturbance-induced motion cues, on the other hand, may be more essential to pilot response when other cues (e.g., visual) are inadequate (Perry & Naish, 1964). For example, Ricard and Parrish (1984) showed that cab motion was useful for helicopter pilots on a simulated hover task on disturbance maneuvers but not for pilot-initiated maneuvers. Martin and Waag (1978) found that pilot-maneuver motion did not enhance transfer using a flight simulator.

The motion studies mentioned above support two basic conclusions relevant to the current information-processing approach. First, positive transfer was not a rigid function of the degree of identical elements in Tasks 1 and 2 (simulator and flight). Similar levels of positive transfer were found despite variations in the level of correspondence between Tasks 1 and 2. Second, some stimulus attributes of the training environment were more important to the retrieval of to-be-remembered (TBR) material than were other attributes. The degree to which a particular stimulus attribute functioned as a retrieval cue for current responding seemed to depend on the nature of the TBR material and the extent to which other retrieval information was available.

The examination of these cuing relationships permits the predictive analysis of transfer effects prior to actual Task 2 training (cf. Kruk, Regan, Beverly, & Longridge, 1983). For example, if the sequence of flight tasks necessary to perform restricted-visibility landings has to be recalled by the pilot (in Task 2), then Task 1 training must ensure that the sequence can be performed under recall conditions. Task 1 training that provided only recognition practice on the task sequence should result in less positive transfer than recall training.

In other words, knowledge about the Task 2 cues should permit at least some prediction of transfer effects, given training on some Task 1, since the cuing correspondence is then analyzable in principle. Effective training tasks may have surface characteristics quite different from the target task as long as the essential cue-response relationships are preserved. In this view, it is not physical fidelity per se that contributes to high positive transfer; rather, it is the presence of retrieval information in Task 2 which has a high cuing and redintegrative capacity for the essential Task 1 material. Low-fidelity devices should be effective in producing transfer as long as they provide the trainee with the essential cuing relationships between the stimulus attributes of the task environment and the appropriate responses.
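One way to make the "analyzable in principle" claim concrete is to score the overlap between the essential cue-response pairings present in training and those required by the target task. The scoring rule and the example pairings below are invented illustrations, not a validated transfer model.

    # Hedged sketch: predicted transfer as the fraction of the target task's
    # essential cue-response pairings that were also present during training.
    # The rule and the example pairings are illustrative assumptions only.

    def predicted_transfer(training_pairs, target_pairs):
        shared = training_pairs & target_pairs
        return len(shared) / len(target_pairs)

    target_task = {("low-visibility", "recall-landing-sequence"),
                   ("disturbance-motion", "corrective-input")}
    recall_trainer = {("low-visibility", "recall-landing-sequence"),
                      ("disturbance-motion", "corrective-input"),
                      ("cockpit-layout", "locate-switch")}
    recognition_trainer = {("low-visibility", "recognize-landing-sequence")}

    print(predicted_transfer(recall_trainer, target_task))        # 1.0
    print(predicted_transfer(recognition_trainer, target_task))   # 0.0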

Decreases in simulator fidelity seem most easily achieved for tasks that require fixed procedures (e.g., Bernstein & Gonzalez, 1968). For example, Prophet and Boyd (1970) found that a cockpit mock-up made of plywood and photographs was about as effective as instruction in the aircraft itself on tasks such as aircraft pre-start-up, start run-up, and shutdown procedures.

Tasks in which it is difficult to identify the specific cues which control responding may require more physical fidelity in the training situation. Salvendy and Pilitsis (1980) developed training simulators to teach suturing techniques to medical students. Three training methods were used: electromechanical, perceptual, and a combination of both. A standard instruction (lecture) group was used as a control. The electromechanical method taught students how to puncture simulated tissue with the aid of a mechanical device which provided auditory and visual information on the correctness of the technique performed. The perceptual method involved watching filmed performance of both expert surgeons and inexperienced medical students; the trainee was instructed to analyze the student's performance by comparing it to that of the surgeon. The third experimental method was simply a combination of both procedures.

The results showed that the electromechanical and combined electromechanical-perceptual groups had the highest transfer performance levels and were essentially equivalent. The perceptual-only group's performance was not significantly different from that of the control group in the number of good sutures, although instructors did rate its performance as somewhat higher. These results suggest that essential cuing information is provided by the actual performance of the suturing technique, which is difficult to impart through alternative (lower fidelity) means.

2 Augmented Feedback

Up to now we have considered the effects of cuing relationships on positive transfer; however, it is possible for inappropriate cuing relationships to exist between Tasks 1 and 2 which could lead to zero or negative transfer. One such example is when relevant Task 1 information has been encoded and retrieved using attributes which are not present in Task 2, for example, augmented feedback. Augmented feedback, or the use of special cues which provide supplementary or augmented information concerning responding, often facilitates Task 1 performance (e.g., Briggs, 1969). However, its effect on Task 2 performance is much more variable and can produce zero or negative transfer (e.g., Bilodeau & Bilodeau, 1961). As Welford (1968) notes, augmented feedback cannot be expected to increase transfer when the subject comes to rely on it for performing the correct response, instead of being helped by it to observe and make better use of inherent task information that will also be available in Task 2.

Eberts and Schneider (1985) studied the effects of different kinds of augmented cues on performance of a second-order tracking task. (In a first-order system, the pointer moves in direct relationship to movements of the joystick, while in a second-order system, movements of the joystick produce changes in the acceleration of the pointer.) While a variety of augmented cues enhanced performance while present, only one such cue, presenting the expected parabolic path of the pointer produced by a given joystick movement, increased transfer to a task without augmented feedback. The parabolic cue not only guided behavior, as the other cues did, but also clarified and increased the salience of the important cue relationships between joystick movement and pointer movement. In other words, the trainee's mental model of the system more closely corresponded to its actual mode of operation. These and the other findings discussed previously highlight the importance of examining and specifying the precise relationship between the retrieval information and the encoded materials present on Tasks 1 and 2.
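The parenthetical distinction between first- and second-order systems can be illustrated with a small simulation; the gains, time step, and constant stick input below are arbitrary illustrative choices.

    # Hedged sketch of the control-order distinction described above: the
    # "first-order" pointer follows the stick directly, while the "second-order"
    # stick input commands pointer acceleration. All numbers are illustrative.

    def simulate(order, stick=1.0, dt=0.1, steps=5, gain=1.0):
        position, velocity = 0.0, 0.0
        trace = []
        for _ in range(steps):
            if order == 1:
                position = gain * stick            # pointer tracks the stick directly
            else:
                velocity += gain * stick * dt      # stick changes acceleration,
                position += velocity * dt          # which only gradually moves the pointer
            trace.append(round(position, 3))
        return trace

    print(simulate(order=1))   # [1.0, 1.0, 1.0, 1.0, 1.0]  responds immediately
    print(simulate(order=2))   # [0.01, 0.03, 0.06, 0.1, 0.15]  lags and builds up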

Although the importance of cuing relationships in determining transfer has been shown through consideration of such phenomena as encoding specificity, we have not specifically discussed ways of manipulating the relationship between cues and TBR material which increase the likelihood of positive transfer. Therefore, we will next consider one line of research which sheds some light on this question.

What are the five information processing mechanisms?

Elements of information processing theory include cognitive processes: the various processes that transfer information among the different memory stores, such as perception, coding, recording, chunking, and retrieval.

What are the main processes in the information processing approach?

The most important theory in information processing is the stage theory originated by Atkinson and Shiffrin, which specifies a sequence of three stages that information goes through to become encoded into long-term memory: sensory memory, short-term or working memory, and long-term memory.

What are the 3 stages of information processing approach?

Encoding involves the input of information into the memory system. Storage is the retention of the encoded information. Retrieval, or getting the information out of memory and back into awareness, is the third function.
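A minimal sketch combining the three-stage flow and the encode/store/retrieve functions described in the two answers above; the capacity limits and the dictionary-based long-term store are illustrative assumptions only.

    # Hedged sketch: sensory -> short-term/working -> long-term memory, with
    # encoding, storage, and retrieval. Capacities and structures are illustrative.

    from collections import deque

    sensory_buffer = deque(maxlen=20)   # large-capacity sensory register (decay not modeled)
    short_term = deque(maxlen=7)        # limited-capacity working memory (illustrative limit)
    long_term = {}                      # effectively unlimited store

    def encode(stimulus):
        """Input: bring a stimulus into sensory and then working memory."""
        sensory_buffer.append(stimulus)
        short_term.append(stimulus)

    def store(key, item):
        """Storage: retain rehearsed information in long-term memory."""
        long_term[key] = item

    def retrieve(key):
        """Retrieval: bring stored information back into awareness."""
        item = long_term.get(key)
        if item is not None:
            short_term.append(item)
        return item

    for word in ["cat", "dog", "bird"]:
        encode(word)
    store("pets", list(short_term))
    print(retrieve("pets"))   # ['cat', 'dog', 'bird']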

Which are key elements of the information processing model?

An abstract model of an information system features four basic elements: processor, memory, receptor, and effector (Figure 1).