
Enactive interfaces

Enactive human-machine interface translating the aspects of a knowledge base into modalities of perception for a human operator. The system's auditory, visual, and tactile presentations respond to tactile input from the operator, and that input in turn depends upon the auditory, visual, and tactile feedback from the system.[1][2]

Enactive interfaces are interactive systems that allow organization and transmission of knowledge obtained through action. Examples are interfaces that couple a human with a machine to do things usually done unaided, such as shaping a three-dimensional object using multiple modality interactions with a database,[2] or using interactive video to allow a student to visually engage with mathematical concepts.[3] Enactive interface design can be approached through the idea of raising awareness of affordances, that is, optimization of the awareness of possible actions available to someone using the enactive interface.[4] This optimization involves visibility, affordance, and feedback.[5][6]

The enactive interface in the figure interprets manual input and responds in perceptual terms, in the form of images, sounds, and haptic (tactile) feedback. The system is called enactive because of the feedback loop in which the system's response is determined by the user's input, and the user's input is driven by the perceived system responses.[1]

Enactive interfaces are a new type of human-computer interface that expresses and transmits enactive knowledge by integrating different sensory aspects. Their driving concept is the fundamental role of motor action in storing and acquiring knowledge (action-driven interfaces). Enactive interfaces are thus able to convey and understand the user's gestures and to provide an adequate response in perceptual terms. They can be considered a new step in the development of human-computer interaction because they are characterized by a closed loop between the user's natural gestures (the efferent component of the system) and the perceptual modalities activated (the afferent component). Enactive interfaces can be designed to exploit this direct loop and the ability to recognize complex gestures.
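
The closed efferent/afferent loop described above can be illustrated with a short sketch. The following Python fragment is purely illustrative: the KnowledgeBase class, the simulated gesture stream, and the feedback mappings are hypothetical stand-ins, not the API of any system cited in this article. It only shows how the user's gesture (efferent component) updates a shared model whose state is then rendered back as visual, auditory, and haptic feedback (afferent component), which in turn shapes the next gesture.

import math

class KnowledgeBase:
    """Toy stand-in for the model being shaped by the user:
    a single one-dimensional 'shape' parameter."""
    def __init__(self):
        self.shape = 0.0

    def apply(self, gesture_force):
        # Efferent side: the user's gesture deforms the model.
        self.shape += 0.1 * gesture_force

def render_feedback(shape):
    # Afferent side: in a real interface this would drive graphics,
    # sound synthesis, and force feedback; here the mappings are
    # illustrative placeholders.
    visual = f"shape={shape:.2f}"
    audio_pitch = 220.0 * (1.0 + abs(shape))  # pitch tracks the deformation
    haptic_force = -0.5 * shape               # resistance grows with deformation
    return visual, audio_pitch, haptic_force

def simulated_gesture(step, perceived_force):
    # The next gesture depends on what was just perceived,
    # which is what closes the enactive loop.
    return math.sin(step / 5.0) + perceived_force

kb = KnowledgeBase()
force_felt = 0.0
for step in range(20):
    gesture = simulated_gesture(step, force_felt)           # efferent component
    kb.apply(gesture)                                       # model update
    visual, pitch, force_felt = render_feedback(kb.shape)   # afferent component
    print(step, visual, f"pitch={pitch:.0f} Hz", f"force={force_felt:.2f}")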

The development of such interfaces requires the creation of a common vision between different research areas such as computer vision, haptics, and sound processing, giving more attention to the motor-action aspect of interaction. Prototypical systems that introduce enactive interfaces include reactive robots: robots that remain in contact with the human hand (like current game console controllers such as the Wii Remote) and are capable of interpreting the human's movements and guiding the human through the completion of a manipulation task, as sketched below.
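
One common way to realize such guidance is a virtual spring-damper coupling that gently pulls the user's hand toward the next waypoint of the task. The short Python sketch below illustrates that idea only; the gains, waypoints, and function names are illustrative assumptions and do not describe the control law of any specific device mentioned above.

def guidance_force(hand_pos, hand_vel, target, k=50.0, b=5.0):
    """Virtual spring-damper: force (N) rendered to the user's hand,
    pulling it toward the target while damping fast motion."""
    return tuple(k * (t - p) - b * v
                 for p, v, t in zip(hand_pos, hand_vel, target))

# One control tick of a simulated guidance loop (all values illustrative).
waypoints = [(0.00, 0.00, 0.10), (0.05, 0.02, 0.10)]   # task path, metres
hand_pos = (0.02, -0.01, 0.08)                          # current hand position
hand_vel = (0.00, 0.00, 0.00)                           # current hand velocity

fx, fy, fz = guidance_force(hand_pos, hand_vel, waypoints[0])
print(f"guiding force: ({fx:.2f}, {fy:.2f}, {fz:.2f}) N")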

Enactive knowledge


Enactive knowledge is information gained through perception–action interaction with the environment. In many respects enactive knowledge is more natural than the other forms, both in terms of the learning process and in the way it is applied in the world. Such knowledge is inherently multimodal because it requires the coordination of the various senses. Two key characteristics of enactive knowledge are that it is experiential: it relates to doing and depends on the user's experience; and that it is cultural: the way of doing is itself dependent upon social aspects, attitudes, values, practices, and legacy.[1]

Enactive interfaces are related to a fundamental interaction concept that is often not exploited by existing human-computer interface technologies. As the cognitive psychologist Jerome Bruner observed, traditional interaction with information mediated by a computer is based mostly on symbolic or iconic knowledge, not on enactive knowledge.[7] While in the symbolic way of learning, knowledge is stored as words, mathematical symbols, or other symbol systems, in the iconic stage knowledge is stored in the form of visual images, such as diagrams and illustrations that can accompany verbal information. Enactive knowledge, on the other hand, is a form of knowledge based on active participation: knowing by doing and by living rather than by thinking.[8]

"Any domain of knowledge (or any problem within that domain of knowledge) can be represented in three ways: by a set of actions appropriate for achieving a certain result (enactive representation); by a set of summary images or graphics that stand for a concept without defining it fully (iconic representation); and by a set of symbolic or logical propositions drawn from a symbolic system that is governed by rules or laws for forming and transforming propositions (symbolic representation)"[9]

A particular form of knowledge is a skill, juggling being a simple example, and the acquisition of a skill is one area where enactive knowledge is evident. The sensorimotor and cognitive activities involved in acquiring skills are tabulated by the European FP6 SKILLS project.[10]

Multimodal interfaces


Multimodal interfaces are a good candidate for the creation of enactive interfaces because of their coordinated use of haptics, sound, and vision. Such research is the main objective of the ENACTIVE Network of Excellence, a European consortium of more than 20 research laboratories that have joined their research efforts for the definition, development, and exploitation of enactive interfaces.

ENACTIVE Network of Excellence


Research on enactive knowledge and enactive interfaces is the objective of the ENACTIVE Network of Excellence. A Network of Excellence is a European Community research instrument that provides funding for integrating the research activities of different laboratories and institutions. The ENACTIVE NoE started in 2004 with more than 20 partners, with the objective of creating a multidisciplinary research community to structure research on a new generation of human-computer interfaces called enactive interfaces. The aim of the NoE is not only research on enactive interfaces itself, but also the integration of the partners through a Virtual Laboratory and the dissemination of the Network's expertise and knowledge.

Since 2004, the partners, coordinated by the PERCRO laboratory, have advanced both the theoretical aspects of enaction, through seminars and the creation of a lexicon, and the technological aspects necessary for the creation of enactive interfaces. Every year the status of the ENACTIVE NoE is presented at an international conference.[11]


References

  1. ^ a b c Monica Bordegoni (2010). "§4.4.2: PDP [Product Development Process] scenario based on user-centered design". In Shuichi Fukuda (ed.). Emotional Engineering: Service Development. Springer. p. 76. ISBN 9781849964234.
  2. ^ a b Monica Bordegoni (2010). "§4.5.2 Design tools based upon enactive interfaces". In Shuichi Fukuda (ed.). Emotional Engineering: Service Development. Springer. pp. 78 ff. ISBN 9781849964234.
  3. ^ D Tall; D Smith; C Piez (2008). "Enactive control". In Mary Kathleen Heid; Glendon W Blume (eds.). Research on Technology and the Teaching and Learning of Mathematics. Information Age Publishing Inc. pp. 213 ff. ISBN 9781931576192.
  4. ^ TA Stoffregen; BG Bardy; B Mantel (2006). "Affordances in the design of enactive systems" (PDF). Virtual Reality. 10 (1): 4–10. doi:10.1007/s10055-006-0025-7. S2CID 8334591.
  5. ^ Debbie Stone; Caroline Jarrett; Mark Woodroffe; Shailey Minocha (2005). "Chapter 5; §3: Three principles from experience: visibility, affordance, and feedback". User Interface Design and Evaluation. Morgan Kaufmann. pp. 97 ff. ISBN 9780080520322.
  6. ^ Elena Zudilova-Seinstra; Tony Adriaansen; Robert van Liere (2008). "Perceptual and design principles for effective interactive visualizations". Trends in Interactive Visualization: State-of-the-Art Survey. Springer. pp. 166 ff. ISBN 9781848002692.
  7. ^ Bruner's list of six characteristics of iconic knowledge is found in Phillip T. Slee; Marilyn Campbell; Barbara Spears (2012). "Iconic representation". Child, Adolescent and Family Development. Cambridge University Press. p. 176. ISBN 9781107402164.
  8. ^ Phillip T. Slee; Marilyn Campbell; Barbara Spears (2012). "Enactive representation". Child, Adolescent and Family Development. Cambridge University Press. p. 176. ISBN 9781107402164.
  9. ^ Jerome Seymour Bruner (1966). Toward a Theory of Instruction (PDF). Harvard University Press. p. 44. ISBN 9780674897014. Archived from the original (PDF) on 2014-05-02. Retrieved 2014-05-16. Quoted in J Bruner (2004). "Chapter 10: Sustaining mathematical activity". In John Mason; Sue Johnston-Wilder (eds.). Fundamental Constructs in Mathematics Education (Paperback ed.). Taylor & Francis. p. 260. ISBN 978-0415326988.
  10. ^ B Bardy; D Delignières; J Lagarde; D Mottet; G Zelic (July 2010). "An enactive approach to perception-action and skill acquisition in virtual reality environments" (PDF). Third International Conference on Applied Human Factors and Ergonomics.
  11. ^ "Research on haptic interfaces and virtual environments". PERCRO Perceptual Robotics Laboratory. Retrieved April 30, 2014.
External links

  • Vimeo, video of a three-dimensional dynamic interactive graphical display allowing a human operator to visualize and manipulate data.
