Developing a Review Process for Online Resources

Document Type

Article

Journal/Book Title/Conference

Joint Conference on Digital Libraries

Publisher

ACM

Publication Date

2008

First Page

457

Last Page

457

Abstract

The democratization of content creation via ubiquitous Internet tools and infrastructure [1] has fueled an explosion of user-generated content in the commercial and educational markets. Indeed, funding agencies such as the National Science Foundation (NSF) are actively seeking ways to integrate teachers and learners into the education cyber-infrastructure, whereby they become co-creators of educational content [2].

The ease with which this content, often in the form of online learning resources of varying levels of granularity, can be created and disseminated places it outside the usual peer review processes employed by publishers and professional societies. To date, digital library (DL) developers, teachers, and school administrators concerned about whether teachers are using peer-reviewed online learning resources have depended on one or a combination of the following proxies to establish an imprimatur of quality: the reputation and oversight of a funding organization (e.g., NSF's peer review process), the credentials of the content creator (e.g., the National Science Teachers Association), or the collection development policies of specific DLs (e.g., DLESE).

Now more than ever, though, sites such as YouTube, Flickr and ccMixter, together with the evolving education cyber-infrastructure, have created an environment where user-generated content is beyond the reach of even these proxy review processes. In the omnipresent climate of accountability within K-12 education at the U.S. federal, state and local levels, however, education DLs are being challenged to identify the value both of the resources they hold and the services they provide to users, and of what their users create with those resources. For all of these reasons, it is useful, and necessary, to develop a standardized rubric and process for reviewing online education resources. In particular, this work should leverage social and technical networks to enrich, facilitate, and automate the review process.

The Digital Libraries go to School project was funded by NSF in 2006 to develop a professional development workshop curriculum that enables teachers to use the Instructional Architect (IA; http://ia.usu.edu) to design their own learning activities for classrooms using online STEM resources from the National Science Digital Library (NSDL.org) and the wider Web. One component of the project is to examine the criteria and approaches for reviewing the quality of teacher-created online learning resources in order to develop a rubric and workflow process.

Work to date includes conducting focus groups and surveys with teachers and a five-person Expert Review Committee, complemented by a literature review to identify elements for a review rubric that incorporates the work of other education DLs (e.g., DLESE, MERLOT, and NEEDS, among others). Findings are being synthesized and, based on this analysis, a draft list of elements has been identified for further testing in Spring 2008. At the same time, a workflow process for reviewing teacher-created resources will be piloted. It will combine human-generated reviews with machine-generated information about online resources (e.g., image and word count; alignment with educational standards; currency of updates; provenance) [3]. Further work will identify areas for improving the review rubric and for scaling and standardizing the workflow process for Fall 2008. We will also evaluate the usefulness of the reviews to teachers, and to stakeholders such as the IA, NSDL, NSF and other DLs, in providing access to high-quality online content.
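The abstract mentions combining human reviews with machine-generated information such as image and word count. As an illustration only (this is not the project's actual tooling, and the metric definitions here are assumptions), a minimal Python sketch of extracting such metrics from a resource's HTML might look like:

```python
from html.parser import HTMLParser


class ResourceMetrics(HTMLParser):
    """Collects simple machine-generated metrics for an online resource:
    visible word count and number of <img> tags."""

    def __init__(self):
        super().__init__()
        self.word_count = 0
        self.image_count = 0
        self._skip_depth = 0  # inside <script>/<style>, text is not visible

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.image_count += 1
        elif tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth:
            self.word_count += len(data.split())


def resource_metrics(html: str) -> dict:
    """Return word and image counts for an HTML page."""
    parser = ResourceMetrics()
    parser.feed(html)
    return {"words": parser.word_count, "images": parser.image_count}


page = """
<html><head><style>p { color: red; }</style></head>
<body><p>Photosynthesis converts light energy into chemical energy.</p>
<img src="leaf.png" alt="leaf"></body></html>
"""
print(resource_metrics(page))  # {'words': 7, 'images': 1}
```

In a real workflow, metrics like these would be one input alongside human reviews; standards alignment and provenance would require additional metadata sources.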
