Short Guide To Evaluation Of Digital Work
Geoffrey Rockwell
This short guide gathers in one place a collection of questions evaluators can ask about a project, a checklist of what to look for in a project, and some ideas about how to find experts. It assumes that evaluators who are assessing digital work for promotion and tenure:
- are new to the review of digital scholarly work and therefore could use a framework of questions to start with;
- are prepared to review the materials submitted by a candidate in the form in which they were meant to be accessed, but need ideas of what to look for; and
- will also ask for expert reviews from others and therefore need suggestions on where to look for relevant expertise.
This is an annotated expansion of Evaluating Digital Work (PDF), which was prepared as a one-page checklist.
Questions
Some questions to ask about a digital work that is being evaluated:
- Is it accessible to the community of study?
- Did the creator get competitive funding? If not, have they tried to apply?
- Have there been any expert consultations? Has this been shown to others for expert opinion?
- Has the work been reviewed? Can it be submitted for peer review?
- Has the work been presented at conferences?
The best way to tell whether a candidate has been submitting their work for regular review is their record of peer-reviewed conference presentations and invited presentations. Candidates should be encouraged to present their work locally (at departmental or university symposia), nationally (at national society meetings), and internationally (at conferences organized by international bodies outside the country). This is how experts typically share innovative work in a timely fashion, and most conferences will review and accept papers about work in progress where there are interesting research results. Local symposia (what university doesn't have some sort of local series?) are also a good way for evaluators to see how a candidate presents their work to their peers.
- Have papers or reports about the project been published?
- Do others link to it? Does it link out well?
- If it is an instructional project, has it been assessed appropriately?
A scholarly pedagogical project is one that claims to have advanced our knowledge of how to teach or learn. Such claims can be tested, and there is a wealth of evaluation techniques, including dialogical ones that are recognizable as being in the traditions of humanities interpretation. Further, most universities have teaching and learning units that can be asked to advise on (or even run) assessments of pedagogical innovations, from student surveys to focus groups. While these assessments are typically formative (designed to help improve rather than critically review), the simple existence of an assessment plan is a sign that the candidate is serious about asking whether their digital pedagogical innovation really adds to our knowledge. Where assessments haven't taken place, evaluators can, in consultation with the candidate, develop an assessment plan that will return useful evidence for the stakeholders. Evaluators should not insist on enthusiastic and positive results; even negative results (as in "this doesn't help students learn X") are an advance in knowledge. A well-designed assessment plan that produces new knowledge that is accessible and genuinely helps others is scholarship, whether or not the pedagogical innovation is demonstrated to have the intended effect.
- Is there a deposit plan? Will it be accessible over the longer term? Will the library take it?
Best Practices in Digital Work (Check List)
Here is a short list of what to check for in digital work:
- Appropriate content (What was digitized?)
- Digitization to archival standards (Are images saved to museum or archival standards?)
Once choices are made about the content, a digital scholar has to make choices about how the materials are digitized and to what digital format. There are guidelines, best practices, and standards for the digitization of materials to ensure their long-term access, like the Text Encoding Initiative guidelines or the Getty Data Standards and Guidelines. These are rarely easy to apply to particular evidence, so evaluators should look for a discussion of which guidelines were adopted, how they were adapted, and why they were chosen. The absence of such a discussion can be a sign that the candidate does not know the practices of the field and therefore has not made scholarly choices.
- Encoding (Does it use appropriate markup like XML or follow TEI guidelines?)
As mentioned in the previous point, there are guidelines for encoding scholarly electronic texts from drama to prose. The TEI is a consortium that maintains and updates extensive encoding guidelines, which are really documentation of the collective wisdom of panels of experts in both computing and the target genre. For this reason, candidates encoding electronic texts should know about these guidelines and have reasons for not following them if they choose others. The point is that evaluators should check that candidates know the literature about the scholarly decisions they are making, especially the decisions about how to encode their digital representations. These decisions are a form of editorial interpretation that we can expect to be informed, though we should not enforce blind adherence to standards. What matters is that the candidate can provide a scholarly explanation for their decisions that is informed by the traditions of digital scholarship in which the work participates. The short sketch below illustrates the kind of reuse that consistent encoding makes possible.
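To make the point concrete, here is a minimal sketch (not part of the original checklist) of why consistent TEI encoding matters for reuse. It assumes a hypothetical drama fragment marked up with the common TEI P5 elements sp, speaker, and l, and uses only the Python standard library to count verse lines per speaker; the fragment and the analysis are illustrative assumptions, not requirements of the guidelines.

```python
# Minimal sketch: count verse lines per speaker in a hypothetical
# TEI P5 drama fragment. The element choices (sp, speaker, l) follow
# common TEI drama conventions; the fragment itself is invented.
import xml.etree.ElementTree as ET
from collections import Counter

TEI_NS = "{http://www.tei-c.org/ns/1.0}"  # TEI P5 namespace

sample = """<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <text><body><div type="scene">
    <sp><speaker>Hamlet</speaker>
      <l>To be, or not to be, that is the question:</l>
      <l>Whether 'tis nobler in the mind to suffer</l>
    </sp>
    <sp><speaker>Ophelia</speaker>
      <l>Good my lord, how does your honour for this many a day?</l>
    </sp>
  </div></body></text>
</TEI>"""

root = ET.fromstring(sample)
lines_per_speaker = Counter()
for sp in root.iter(f"{TEI_NS}sp"):
    speaker = sp.find(f"{TEI_NS}speaker")
    name = speaker.text.strip() if speaker is not None and speaker.text else "(unattributed)"
    lines_per_speaker[name] += len(sp.findall(f"{TEI_NS}l"))

for name, count in lines_per_speaker.items():
    print(f"{name}: {count} line(s)")
```

The same markup could just as easily drive an interface, a concordance, or a statistical analysis, which is one reason encoding decisions deserve a scholarly explanation.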
- Enrichment (Has the data been annotated, linked, and structured appropriately?)
One of the promises of digital work is that it can provide rich supplements of commentary, multimedia enhancement, and annotation that give readers appropriate historical, literary, and philosophical context. An electronic edition can include high-resolution manuscript pages or video of associated performances. A digital work can have multiple interfaces for different audiences, from students to researchers. Evaluators should ask how the potential of the medium has been exploited. Has the work taken advantage of the multimedia possibilities? If an evaluator can imagine a useful enrichment, they should ask the candidate whether they considered adding such materials.
Enrichment can take many forms and can raise interesting copyright problems. Often video of dramatic performances is not available because of copyright considerations. Museums and archives can ask for prohibitive license fees for reproduction rights, which is why evaluators shouldn't expect it to be easy to enrich a project with resources; but again, a scholarly project can be expected to have made informed decisions as to what resources it can include. Where projects have negotiated rights, evaluators should recognize the decisions and the work such negotiations involve.
- Technical Design (Is the delivery system robust, appropriate, and documented?)
In addition to evaluating the decisions made about the representation, encoding, and enrichment of evidence, evaluators can ask about the technical design of digital projects. There are better and worse ways to implement a project so that it can be maintained over time by different programmers. A scholarly resource should be designed and documented in a way that allows it to be maintained easily over the life of the project. While a professional programmer with experience of digital humanities projects can advise evaluators about technical design, there are some simple questions any evaluator can ask, like: "How can new materials be added?"; "Is there documentation for the technical set-up that would let another programmer fix a bug?"; and "Were open source tools used that are common for such projects?" A sketch of the kind of small maintenance check a well-documented project might include follows below.
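As an illustration only (the guide does not prescribe any particular set-up), here is a minimal sketch of a maintenance check a project might keep alongside its documentation. It assumes a hypothetical layout in which the project's texts live as XML files in a texts/ directory, and simply reports whether each file is well formed, so that a new programmer can add materials and verify them before they are published.

```python
# Minimal sketch: verify that every XML file in a hypothetical texts/
# directory is well formed, so that new materials can be added with
# confidence. The directory layout is an assumption for illustration.
import sys
import xml.etree.ElementTree as ET
from pathlib import Path

def check_texts(folder: str = "texts") -> int:
    """Return the number of files that fail to parse."""
    problems = 0
    for path in sorted(Path(folder).glob("*.xml")):
        try:
            ET.parse(path)  # raises ParseError if the file is not well formed
            print(f"ok      {path.name}")
        except ET.ParseError as err:
            problems += 1
            print(f"BROKEN  {path.name}: {err}")
    return problems

if __name__ == "__main__":
    sys.exit(1 if check_texts() else 0)
```

An evaluator need not run such a script; its presence, together with instructions for using it, is evidence that the project was designed to be maintained by someone other than its original developer.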
- Interface Design and Usability (Is it designed to take advantage of the medium? Has the interface been assessed? Has it been tested? Is it accessible to its intended audience?)
The first generations of digital scholarly works were typically developed by teams of content experts and programmers (often students). These projects rarely considered interface design until the evidence was assembled, digitized, encoded, and mounted for access. The interface was treated as window dressing for serious projects that might be judged successful even if the only users were the content experts themselves. Now best practices in web development suggest that needs analysis, user modeling, interface design, and usability testing should be woven into large-scale development projects. Evaluators should therefore ask about anticipated users and how the developers imagined their work being used. Did the development team conduct design experiments? Do they know who their users are, and how do they know how their work will be used? Were usability experts brought in to consult, or did the team think about interface design systematically? The advantage to a candidate of engaging in design early on is that it can produce publishable results documenting the thinking behind a project even where it may be years before all the materials are gathered.
It should be noted that interface design is difficult when developing innovative works for which there isn't an existing, self-identified, expert audience. Scholarly projects are often digitizing evidence for unanticipated research uses and should, for that reason, try to keep the data in formats that can be reused whatever the initial interface. There is a tension in scholarly digital work between a) building things to survive and be used (even if only with expertise) by future researchers, and b) developing works that are immediately accessible to scholars without computing skills. It is rare that a project has the funding both to digitize to scholarly standards and to develop engaging interfaces that novices find easy. Evaluators should therefore look for plans for long-term testing and iterative improvement, facilitated by a flexible information architecture that can be adapted over time. A project presented by someone coming up for tenure might have either a well-documented and encoded digital collection of texts or a well-documented interface design process, but probably not both. Evaluators should encourage digital work whose trajectory includes both scholarly digital content and interface design, but not expect such a trajectory to be complete if the scope is ambitious. Evaluation is, after all, often a matter of assessing scholarly promise, so evaluators should ask about the promise of ambitious projects and look for signs that there are real opportunities for further development.
- Online Publishing (Is it published by a reliable provider? Is it published under a digital imprint?)
- Demonstration (Has it been shown to others?)
- Linking (Does it connect well with other projects?)
- Learning (Is it used in a course? Does it support pedagogical objectives? Has it been assessed?)
How to Find an Expert
Places to start to find an expert who can help with the evaluation:
- Ask the candidate. A candidate should know about the work of others in their field and should be able to point you to experts who can understand the significance of their work. If they can't, then they aren't engaged in scholarship.
- Find a Computing and <your field> centre, association, or conference and scan their web site. If you want names of people able to review a case, there are centres for just about every intersection of computing and the humanities (like the Roy Rosenzweig Center for History and New Media); there are national and international organizations (like the Society for Digital Humanities / Société pour l’étude des médias interactifs in Canada and the international Association for Computers and the Humanities); and there are conferences (like the Digital Humanities 2009 joint conference).
- Check the Alliance of Digital Humanities Organizations and contact association officers. On their home page they list past joint Digital Humanities conferences.
- Join the Text Encoding Initiative Consortium or their discussion list TEI-L and ask for help with technical review.
- Ask the MLA Committee on Information Technology.
- Search the HUMANIST archives. Humanist is a moderated discussion list that has been running since 1987; searching the list should provide ideas for experts.
- For expertise in pedagogical innovation you can ask your local teaching and learning unit for advice or names of people who have developed similar learning interventions.
Originally published by Geoffrey Rockwell in July 2009.