Keehner, M., & Khooshabeh, P. (2005). "Computerized representations of 3D structure: How spatial comprehension and patterns of interactivity differ among learners"


Keehner, M., & Khooshabeh, P. (2005). Computerized representations of 3D structure: How spatial comprehension and patterns of interactivity differ among learners. Proceedings of the AAAI Spring Symposium Series, Reasoning with Mental and External Diagrams, Stanford University, March 21-23, 2005; AAAI Press.

Available online: http://www.psych.ucsb.edu/~hegarty/Keehn&Khoosh%20AAAI%20final%201-31-05.pdf


Introduction

In this follow-up study, which largely parallels an experiment by Keehner et al. (2004) previously presented here, Keehner & Khooshabeh continue to explore the nature of learning experiences with 3D technologies. Two factors are examined: individual spatial ability and the interactivity of the computer software.

The authors assert that while there is much optimism surrounding the use of 3D visualization software, “our understanding of how learners interact with these representations is relatively limited” (p.1). Nonetheless, some previous research has suggested, for example, that 3D visualization software can affect medical anatomy learning. The nature of this effect, however, may be mediated by a number of factors, including a student’s prior spatial abilities (it has been suggested that medical students with low spatial ability may actually be at a disadvantage when using such software) and the interactivity of the software itself. Thus, the authors carried out two experiments to explore these relationships further.


Experiment 1

Please see Keehner M, Montello D.R., Hegarty M. & Cohen C. (2004). "Effects of interactivity and spatial ability on the comprehension of spatial relations in a 3D computer visualization", for a description of the original experiment.


Experiment 2

After considering the findings (and potential confounding factors) of the first experiment, the authors explain their reasoning for pursuing a replication study:

A possible explanation for why we found no advantage of active control may also lie in the nature of the interface. The key-press control system used in Experiment 1 was not intuitive, and as such it is possible that merely operating it produced a significant additional cognitive demand on active participants, counteracting any potential benefits from active control. If this is the case, then an interface that produces a smaller cognitive load might allow the real advantage from active control to emerge. The purpose of Experiment 2 was to test this hypothesis.


A different interface

To determine whether the results of the first experiment were influenced by the key-press interface, the researchers conducted a study identical to the first, except for the rotation interface that participants used.

The interface was a more intuitive hand-held device, comprising a 3-degrees-of-freedom motion sensor (the InterSense InertiaCube2) mounted inside an egg-shaped casing. The device translated participants’ rotational movements to the on-screen object in real time.

The authors provide little information on this device, so here is some more information for those who are interested:

InterSense (InertiaCube2 developer) Website (http://www.intersense.com/)

InterSense InertiaCube2 Specifications (http://www.intersense.com/uploadedFiles/Products/InertiaCube2(1).pdf)
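
The paper does not describe the software side of this setup, so the following is purely an illustrative sketch of how orientation readings from a 3-DOF sensor could be mapped onto an on-screen model each frame. The read_orientation and draw functions are hypothetical stand-ins for the sensor driver and rendering calls, and the quaternion convention is an assumption rather than anything stated by the authors.

    import numpy as np

    def quaternion_to_matrix(q):
        """Convert a unit quaternion (w, x, y, z) into a 3x3 rotation matrix."""
        w, x, y, z = q
        return np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])

    def rotation_loop(read_orientation, draw, vertices, n_frames=1000):
        """Each frame, read the device's absolute orientation and apply it to
        the model, so the on-screen object mirrors the hand-held sensor in
        real time. read_orientation() and draw() are placeholders for the
        tracker driver and the display code, neither of which the paper
        specifies."""
        for _ in range(n_frames):
            q = read_orientation()                        # unit quaternion from the sensor
            rotated = vertices @ quaternion_to_matrix(q).T
            draw(rotated)                                 # hand the rotated geometry to the renderer

A passive (yoked) condition could reuse the same loop, replaying a recorded sequence of orientations instead of reading a live sensor, which is consistent with the paper’s point that both groups saw the same views.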


Results

  • As in the first experiment, there was no significant difference between the active and passive control conditions. So active control of the visualization was not found to benefit participants’ overall performance.
  • When compared to the first experiment, however, “the effect of spatial ability on performance was substantially attenuated” (p.5). Thus, in this replication study, spatial ability was less of a predictor of task performance than in the original investigation.
  • In the first experiment, there was little consistency in how participants interacted with the software; each participant seemed to interact with it in their own way. Here, however, this was not the case:
The patterns of interactivity showed much greater consistency across participants than in Experiment 1. This finding provides indirect evidence for our assertion that the interface was more intuitive or naturalistic, as all participants used it in similar ways, even though they received minimal training in its operation.

The authors explain that nearly all participants favoured certain views of the 3D object over others, with little influence of prior spatial ability. They go a step further and assert that the views chosen by most participants correspond to the “optimal key views for solving each trial” (p.5).

Discussion: Drawing Conclusions?

Since the two experiments differed only in the input (rotation) interface used by participants, and since some related research had previously been conducted in this area, the researchers suggest the following points for discussion:

  • Previous research suggests that active control over a virtual environment positively affects performance; however, neither of these experiments supports this claim. The authors suggest that the explanation may lie in the design of the study, in which both groups (active and passive) experienced the same visual information:
The fact that they did not differ in performance suggests that it is access to informative views of the structure, rather than interactivity per se, that is critical for performance on this task. Thus, when both participants see the same information, they do equally well, even if they do not directly control the movements of the object.

The researchers feel that the benefit of employing 3D visualization software in learning may stem from the (primarily visual) experience of “informative views” of 3D objects, and they admit that more research is needed to explore this claim.

  • In line with previous findings, spatial ability was a predictor of performance and a mediating factor in participants’ experience of the 3D visualization software. The authors highlight, however, that the correlation between spatial ability and performance was far less evident in the second experiment, and they speculate that this difference was due to the change in user interface. They also assert that the study does not provide enough data to suggest a mechanism for the effect of the user interface on learning. (A rough, made-up illustration of such a correlation appears in the sketch after this list.)
  • The authors identify the patterns of interactivity observed in the study as the most interesting result of the investigation. The fact that the “more intuitive” user interface generated a consistent (and efficient!) pattern of interaction with the software is notable in light of the reduced performance differences between the high- and low-spatial groups. As one would probably expect, “it appears that the nature of the interface caused participants to interact with the stimulus in qualitatively different ways” (p.6).
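
To make the idea of an “attenuated” correlation concrete, here is a small sketch using entirely made-up numbers (not the study’s data) of how the relationship between spatial-ability scores and task performance could be quantified and compared across the two experiments; the function name and values are illustrative only.

    import numpy as np

    def ability_performance_r(ability, performance):
        """Pearson correlation between spatial-ability test scores and
        cross-section task performance for one experiment's participants."""
        return np.corrcoef(ability, performance)[0, 1]

    # Illustrative numbers only -- NOT data from the study.
    ability   = np.array([12, 15, 18, 22, 25, 28, 31, 34])
    perf_exp1 = np.array([ 4,  5,  6,  7,  8,  9, 10, 11])   # ability strongly predicts performance
    perf_exp2 = np.array([ 8,  7,  9,  8, 10,  9, 10,  9])   # relationship much weaker ("attenuated")

    print(ability_performance_r(ability, perf_exp1))   # close to 1.0
    print(ability_performance_r(ability, perf_exp2))   # noticeably smaller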


The authors offer the following questions as their guide to future investigations:

1. What patterns of interactivity characterize learners with good and poor spatial visualization abilities?
2. How do different patterns of interactivity relate to performance on the cross-section task?
3. What is the nature of this relationship? i.e., is task performance influenced by spatial ability alone, by interactivity alone, or by a combination of both factors, with or without one mediating the other?
4. What effect does the nature of the interface have on the interactions observed?
5. What are the implications of these conclusions for the design and implementation of these types of representations in educational contexts?

