Student perceptions of the assessment utility of immersive virtual assessments.

Journal/Book Title/Conference

American Educational Research Association Annual Meeting


Advances in information technology enable innovative ways of using performance-based assessments to measure learning (Pellegrino et al., 2001). One such technology is the Immersive Virtual Environment (IVE): a three-dimensional (3-D) simulated setting that provides a rich, authentic context in which participants interact with digital objects and tools, such as virtual microscopes. The goal of our Virtual Performance Assessment (VPA) research project is to develop and study the feasibility of using IVEs as a platform for assessing middle school students’ science inquiry skills, including systematicity, in ways not possible with item-based tests (http://virtualassessment.org). Systematicity is expressed as a student’s ability to scan an open problem space and narrow focus to make sense of complexity (Clarke, 2009). IVEs allow for performances and observations of these skills that are not possible via traditional testing formats (Clarke, 2009). The purpose of this study is to examine student perceptions of the assessment utility of IVEs for measuring their science inquiry knowledge, and whether the IVE enables their expression of systematicity.

Theoretical Framework

The assessment utility of any IVE is guided by design assumptions about how the functions of the IVE facilitate the demonstration of students’ knowledge and skills (see Clarke-Midura et al., 2010a, 2010b, for a detailed discussion of the VPA design). To examine our constructivist, situated design assumptions about how the VPA facilitates the immersive assessment of science inquiry, we conducted an empirical investigation of student perceptions of the assessment utility of the VPA for this purpose. Evaluating the assessment utility in this context focuses the research on the kind of learning the VPA enables students to demonstrate, providing evidence that the VPA is a valid assessment of science inquiry skills, including systematicity.

Methods & Data Sources

We adapted 20 items from the Pedagogically Meaningful Learning Questionnaire (PMLQ; Nokelainen, 2006) and administered them to a convenience sample of middle school science students (N = 260; 125 female). The items addressed student perceptions of the following components of the VPA: learner control, learner activity, added value, valuation of previous knowledge, flexibility, and feedback. Following a 90-minute exposure to the VPA, students rated their agreement with each item on a 5-point Likert scale (1 = strongly disagree to 5 = strongly agree). A second study, replicating these findings and validating our adapted version of the PMLQ for IVEs (PMLQ-IVE), is planned for Fall 2010.

Results & Significance

We are in the process of analyzing our data; however, preliminary results suggest that students agreed or strongly agreed that the VPA enables learner control (50.7%), learner activity (54.9%), added value and valuation of previous knowledge (40.1%), flexibility (56.6%), and feedback (47.5%). Internal consistency for this scale was α = .82 (95% CI [.78, .85]). At AERA, we will present the final results of this research along with our second study, a complete analysis of the assessment utility of our VPA, the validated PMLQ-IVE instrument, and recommendations for IVE assessment design and implementation.
