How to Evaluate Digital Scholarship

The purpose of this document is to provide a set of guidelines for the evaluation of digital scholarship in the Humanities, Social Sciences, Arts, and related disciplines. The document is aimed foremost at Academic Review Committees, Chairs, Deans, and Provosts who want to know how to assess digital scholarship in the hiring, tenure, and promotion process. Secondarily, the document is intended to inform the development of university-wide policies for supporting and evaluating such scholarship.

1. Fundamentals for Initial Review: The work must be evaluated in the medium in which it was produced and published. If it’s a website, that means viewing it in a browser with the plug-ins necessary for the site to work. If it’s a virtual simulation model, that may mean going to a laboratory outfitted with the software and projection systems necessary to view the model. Time-based work, such as video, will often be represented by stills, but reviewers also need to devote attention to clips in order to evaluate the work fully. The same can be said for interface development, since still images cannot demonstrate the interactive nature of interface research. Authors of digital works should provide a list of system requirements (both hardware and software, including compatible browsers, versions, and plug-ins) for viewing the work. It is incumbent upon academic personnel offices to verify that the appropriate technologies are available and installed on the systems that will be used by the reviewers before they evaluate the digital work.
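A purely illustrative sketch of such a system-requirements statement follows, written here as a small Python script so that it is both human- and machine-readable. Every browser version, plug-in, and setting below is invented for the example, not drawn from any actual project:

```python
# Hypothetical, illustrative only: a statement of what a reviewer needs
# in order to view a web-based digital project. All version numbers and
# plug-ins below are invented for this sketch.
SYSTEM_REQUIREMENTS = {
    "browsers": ["Firefox 6 or later", "Chrome 13 or later", "Safari 5 or later"],
    "plugins": ["Adobe Flash Player 10 or later"],  # assumed dependency
    "display": "1280x800 or higher resolution",
    "network": "broadband connection (the project streams video clips)",
}

def print_requirements(requirements: dict) -> None:
    """Print each requirement category so a personnel office can check it off."""
    for category, needs in requirements.items():
        needs_text = ", ".join(needs) if isinstance(needs, list) else needs
        print(f"{category}: {needs_text}")

if __name__ == "__main__":
    print_requirements(SYSTEM_REQUIREMENTS)
```

Even a short plain-language list serves the same purpose; what matters is that reviewers can reconstruct the viewing environment before the evaluation begins.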

2. Crediting: Digital projects are often collaborative in nature, involving teams of scholars who work together in different venues over various periods of time. Authors of digital works should provide a clear articulation of the role or roles that they have played in the genesis, development, and execution of the digital project. It is impractical, if not impossible, to separate out every micro-contribution made by team members, since digital projects are often synergistic, iterative, experimental, and even dynamically generated through ongoing collaborations. Nevertheless, authors should indicate the roles they played, and the time they committed, at each phase of the project’s development. Who conceptualized the project and designed the initial specifications (functional and technical)? Who created the mock-ups? Who wrote the grants or secured the funding that supported the project? What role did each contributor play in the development and execution of the project? Who authored the content? Who decided how that content would be accessed, displayed, and stored? What is the “public face” of the project, who represents it, and how?

3. Intellectual Rigor: Digital projects vary tremendously and may not “look” like traditional academic scholarship; at the same time, scholarly rigor must be assessed by examining how the work contributes to and advances the state of knowledge of a given field or fields. What is the nature of the new knowledge created? What is the methodology used to create this knowledge? It is important for review committees to recognize that new knowledge is not just new content but also new ways of organizing, classifying, and interacting with content. This means that part of the intellectual contribution of a digital project is the design of the interface, the database, and the code, all of which govern the form of the content. Digital scholars are thus not only doing original research but also inventing new scholarly platforms for it. Because 500+ years of print have so fully naturalized the “look” of knowledge, it may be difficult for reviewers to recognize these new forms of documentation and the intellectual effort that goes into developing them. This is the dual burden, and the dual opportunity, for creativity in the digital domain.

4. Crossing Research, Teaching, and Service: Digital projects almost always have multiple applications and uses that enhance research, teaching, and service at the same time. Digital research projects can make transformative contributions in the classroom and sometimes even have an impact on the public at large. This ripple effect should not be discounted. Review committees need to be attentive to colleagues who dismiss the research contributions of digital work by cavalierly characterizing it as a mere “tool” for teaching or service. Tools shape knowledge, and knowledge shapes tools. But it is also important that review committees focus on the research contributions of the digital work by asking questions such as the following: How is the work engaged with a problem specific to a scholarly discipline or group of disciplines? How does the work reframe that problem or contribute a new way of understanding it? How does the work advance an argument through both the content and the way the content is presented? How is the design of the platform itself an argument? To answer this last question, review committees might ask for documentation describing the development process and design of the platform or software, such as database schemas, interface designs, modules of code (and explanations of what they do), and sample data types. If the project is, in fact, primarily for teaching, how has it transformed the learning environment? What contributions has it made to learning, and how have these contributions been assessed?
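To make the request for such documentation concrete, here is a minimal, hypothetical sketch of what a “sample data type” from an imagined historical-mapping project might look like; the field names and the example entry are invented for illustration and are not drawn from any real platform:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sample data type for an imagined historical-mapping project.
# The schema itself carries an argument: modeling a "place" with a time span
# as well as coordinates asserts that places have histories, and requiring a
# "sources" field asserts that every entry must be cited.
@dataclass
class PlaceRecord:
    name: str                        # place name as attested in the sources
    latitude: float                  # decimal degrees
    longitude: float                 # decimal degrees
    date_start: int                  # earliest year the name is attested
    date_end: Optional[int] = None   # None means the name is still in use
    sources: List[str] = field(default_factory=list)  # citations for the entry

# One illustrative entry; the citation is invented for this sketch.
example = PlaceRecord(
    name="Weimar",
    latitude=50.98,
    longitude=11.33,
    date_start=899,
    sources=["Hypothetical gazetteer, entry 42"],
)
```

Even a fragment like this lets a committee see how design decisions (what is recorded, what is required, what is optional) carry part of the project’s argument.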

5. Peer Review: Digital projects should be peer reviewed by scholars in the field or fields best able to assess the project’s contribution to knowledge and situate it within the relevant intellectual landscape. Peer review can happen formally through solicited letters, but it can also be gauged through online forums, citations and discussions in scholarly venues, grants received from foundations and other funding sources, and public presentations of the project at conferences and symposia. Has the project given rise to publications in peer-reviewed journals or won prizes from professional associations? How does it measure up to comparable projects in the field that use or develop similar technologies or similar kinds of data? Finally, grants received are often significant indicators of peer review. It is important that reviewers familiarize themselves with grant organizations across schools and disciplines, including the Humanities, the Social Sciences, the Arts, Information Studies and Library Sciences, and the Natural Sciences, since awards from these bodies are indicators of prestige and impact.

6. Impact: Digital projects can have an impact on numerous fields in the academy as well as across institutions and even on the general public. They often cross the divide between research, teaching, and service in innovative ways that should be noted. Impact can be measured in many ways, including the following: support by granting agencies or foundations; the number of viewers or contributors to a site and what they contribute; citations in both traditional literature and online venues (blogs, social media, links, and trackbacks); use or adoption of the project by other scholars and institutions; conferences and symposia featuring the project; and resonance in public and community outreach (such as museum exhibitions, impact on public policy, adoption in curricula, and so forth).

7. Approximating Equivalencies: Is a digital research project “equivalent” to a book published by a university press, an edited volume, a research article, or something else? These sorts of questions are often misguided, since they are predicated on comparing fundamentally different knowledge artifacts and, perhaps more problematically, treat print publications as the norm and benchmark against which all other work is measured. Reviewers should instead assess the significance of the digital work based on a number of factors: the quality and quantity of the research that contributed to the project; the length of time spent and the kind of intellectual investment of the creators and contributors; the range, depth, and forms of the content types and the ways in which this content is presented; and the nature of the authorship and publication process. Large-scale projects with major funding, multiple collaborators, and a wide range of scholarly outputs may justifiably be given more weight in the review and promotion process than smaller-scale or short-term projects.

8. Development Cycles, Sustainability, and Ethics: It is important that review committees recognize the iterative nature of digital projects, which may entail multiple evaluations over several review cycles as projects grow, change, and mature. Given that academic review cycles are generally several years apart (while digital advances occur more rapidly), reviewers should consider individual projects in their specific contexts. At what “stage” is the project in its current form? Is it considered “complete” by its creators, or will it continue in new iterations, perhaps through spin-off projects and further development? Has the project followed best practices, as established in the field, for data collection and content production, the use of standards, and appropriate documentation? How will the project “live” and remain accessible in the future, and what sort of infrastructure will be necessary to support it? Here, project-specific needs and institutional obligations come together at the highest levels and should be discussed openly with Deans and Provosts, Library and IT staff, and project leaders. Finally, digital projects may raise critical ethical issues about the nature and value of cultural preservation, public history, participatory culture and accessibility, digital diversity, and collection curation, all of which should be thoughtfully considered by project leaders and review committees.

9. Experimentation and Risk-Taking: Digital projects in the Humanities, Social Sciences, and Arts share with experimental practices in the Sciences a willingness to be open about iteration and negative results. Experimentation and trial and error are inherent parts of digital research and must be recognized as carrying risk. The processes of experimentation can be documented and may prove essential to the long-term development of an idea or project. White papers, sets of best practices, new design environments, and publications can result from such projects, and these should be considered in the review process. Experimentation and risk-taking in scholarship represent the best of what the university, in all its many disciplines, has to offer society. To treat scholarship that takes on risk and the challenge of experimentation as an activity of secondary (or no) value for promotion and advancement can only serve to reduce innovation, reward mediocrity, and retard the development of research.


Originally published by Todd Presner in September 2011.


This document was authored by Todd Presner, with contributions, feedback, and language provided by John Dagenais, Johanna Drucker, Diane Favro, Peter Lunenfeld, and Willeke Wendrich. At this point, it has not been “approved” or “adopted” by any institutional body and does not reflect university policies; instead, it is meant to be a discussion document for establishing best practices in the changing academic review process. The authors named above are all faculty affiliated with UCLA’s Digital Humanities program. http://www.digitalhumanities.ucla.edu

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. Please feel free to copy and share this document in accordance with the Creative Commons license above. Among other places, a version is available in the collaborative and open access book, Digital_Humanities (Cambridge: MIT Press, 2012), co-authored by Anne Burdick, Johanna Drucker, Peter Lunenfeld, Todd Presner, and Jeffrey Schnapp. “How to Evaluate Digital Scholarship” is reproduced on pages 128-29.

About Todd Presner

Todd Presner is Professor of Germanic Languages, Comparative Literature, and Jewish Studies at the University of California, Los Angeles. He is the Sady and Ludwig Kahn Director of the UCLA Center for Jewish Studies and the Chair of the Digital Humanities Program (undergraduate minor and graduate certificate) (http://www.digitalhumanities.ucla.edu). Presner is also the founder, director, and editor-in-chief of HyperCities, a collaborative digital mapping platform that explores the layered histories of city spaces. His research focuses on European intellectual history, the history of media, visual culture, digital humanities, and cultural geography. His most recent book, co-authored with Anne Burdick, Johanna Drucker, Peter Lunenfeld, and Jeffrey Schnapp, is Digital_Humanities (MIT Press, 2012), a critical-theoretical exploration of this complex, emerging field. A fourth book is under contract: HyperCities: Thick Mapping in the Digital Humanities (Harvard UP, 2013).