Vol. 1 No. 4 Fall 2012
ISSN 2165-6673
CC BY 3.0
With this fourth issue we wrap up the first year of the Journal of Digital Humanities, and with it, our first twelve months of attempting to find and promote digital scholarship from the open web using a system of layered review. The importance of assessment and the scholarly vetting process around digital scholarship has been foremost in our minds, as it has in the minds of many others this year. As digital humanities continues to grow and as more scholars and disciplines become invested in its methods and results, institutions and scholars increasingly have been debating how to maintain academic rigor while accepting new genres and the openness that the web promotes.
Some scholarly societies, universities and colleges, and departments have called for a redefinition — or at least an expansion — of what is considered creditable scholarship. There have been scattered initial attempts to understand how digital scholarship might be better assessed, but the editors of JDH felt, and many of our readers agreed, that there was not a single place to go for a comprehensive overview of proposals, guidelines, and experiences. We attempt to provide a single location here, with an issue and living bibliography that will grow as additional examples are published across the web.
We begin with an identification of the scope of the problem, some reasons for the difficulty assessing digital scholarship, and a call for action. First, Sheila Cavanagh explains how the expectations of traditional scholarship and the breadth of support required for successful and creative scholarly and pedagogical projects restrict younger scholars. Bethany Nowviskie suggests that modifying outdated modes of peer review to recognize and credit the intellectual and technical labor of the many participants who produce ambitious and collaborative projects will positively influence the evolution of scholarship writ large. The collaboratively-written “Call to Redefine Historical Scholarship in the Digital Turn,” led by Alex Galarza, Jason Heppler, and Douglas Seefeldt, was submitted as a formal request for the American Historical Association to recognize and address these particular issues.
In the next section, practitioners from across the academy and the world offer their perspectives on assessment and evaluation. Todd Presner, Geoffrey Rockwell, and Laura Mandell propose evaluation criteria specifically for tenure and promotion. James Smithies details a typology of digital humanities projects to ensure proper evaluation. Shannon Christine Mattern advises that the same detailed criteria used to evaluate multimodal work in her classroom can serve the larger academy. Zach Coble offers the view from the library, which is the home of many collaborators and creators of digital humanities projects. Finally, Sheila Brennan suggests that we further highlight the intellectual goals and achievements of digital humanities projects declared, but perhaps buried, in administrative documents and reports.
Several practitioners then offer their personal experiences with evaluation and assessment to help others in this uncharted territory. Mark Sample explains the approach to digital scholarship he used in his tenure and promotion case, while Katherine D. Harris offers her tenure and promotion statements as a resource for others. Finally, students at the new digital humanities program at University College Cork remind us that evaluation ultimately is meant to encourage conversation, so practitioners need to be involved directly in the definition of any standards.
Already some organizations and scholars have produced good beginning guidelines for assessment. The Modern Language Association in particular has solicited in-depth discussions among its membership and outside scholars who have long worked in new media on how to assess new forms of scholarship involving digital media and technology. Other institutions, such as the Organization of American Historians and the National Council on Public History, have taken initial steps toward broadening the definition of scholarly communication, steps that will require fleshing out in the years to come. We have reproduced some of that content at the end of this issue. We end the issue with a bibliography of additional suggested readings on the evaluation and assessment of digital humanities work.
For the broadest possible understanding of the assessment of digital scholarship, we asked the community to help us find good case studies, personal accounts, and departmental and institutional efforts. This issue brings the best of these into one place that we can continue to update as other guidelines and experiences are shared. We hope that scholars in digital humanities and related fields will be able to point to this volume of the Journal of Digital Humanities as a resource for digital assessment and a starting place for further conversations.
It’s no secret that times are tough for scholars in the humanities. Jobs are scarce, resources are stretched, and institutions of tertiary education are facing untold challenges. Those of us fortunate enough to hold tenured positions at financially stable colleges and universities may be the last faculty to enjoy such comparative privilege. The future shape of the academy is hard to predict, except to acknowledge that it is unlikely to remain static. Our profession is being rapidly reconfigured, but many changes are not happening quickly enough. In the realm of the digital, for example, entrenched traditional standards of assessment, support, and recognition still fail to encourage the kind of exciting new research that keeps our disciplines vibrant.
While some organizations, such as the Modern Language Association (MLA) and the National Endowment for the Humanities (NEH), have made significant efforts to address the need for national dialogues about germane topics, numerous faculty members, department chairs, deans, and others involved in the faculty reward system continue not to understand the shifting parameters of research, teaching, and service that have been instigated by the digital revolution. Many of these individuals, in fact, remain unaware of their ignorance. Those who do not work in digital realms themselves often unwittingly contribute to an environment that impedes intellectual innovation. Despite the pressing need for reconfigured standards of evaluation and new approaches to mentoring, many of those holding the power to address this situation do not recognize the issues at stake.
Failure to redress current circumstances would have serious consequences for the humanities. Fields such as those promoted by this journal are especially vulnerable, since they often do not attract the widespread attention needed to survive in difficult times. It is important, therefore, for administrators and faculty at all levels to respond to the particular ways that conventional academic evaluative and mentoring models often inadvertently impede important new work.
In a letter to the MLA, past President Sidonie Smith notes: “Experimenting with new media stimulates new habits of mind and enhanced cultures of collegiality. Future faculty members in the modern languages and literatures will require flexible and improvisational habits and collaborative skills to bring their scholarship to fruition” (2). Smith’s remarks reflect the evolving reality of today’s academy. As we struggle with shrinking resources and other changes to our academic environment, her words demand careful consideration.
As director of the Emory Women Writers Resource Project (EWWRP) since 1995, editor of the Spenser Review (now online rather than produced in print) and co-director, with Dr. Kevin Quarmby, of the World Shakespeare Project (WSP), I have a personal investment in the success of the digital humanities. As a tenured, full professor, however, my career is not unduly influenced by the status of my digital work. During previous promotion deliberations, my digital contributions — predominantly focused on the study of early modern women — were ignored. At this point, I enjoy the opportunity to pursue such avenues without worrying about employment security. While my professional reputation and compensation are still influenced by my scholarly productivity, whether digital or in print, such pressures are obviously less critical than those facing graduate students and junior scholars.
As collaborator and mentor to several such members of the academic community, I would like to draw from my experience with their projects to illustrate some of the ways that scholarship is changing and to suggest the kinds of concurrent alterations needed in our assessment and mentoring practices. As my title suggests, I believe that our traditional conceptualization of peer review, the humanities’ continuing hesitance to support collaborative ventures, and our common inability to mentor junior colleagues appropriately remain primary obstacles to the kind of digital humanities work that can help our disciplines flourish even during difficult times. While “open access” in today’s academic discourse generally signifies freely available digital materials, I would like to expand that term in order to examine the obstacles impeding junior scholars seeking open access to digital creation.
One of the changes I want to highlight is the way that “peer review” has evolved fairly quietly during the expansion of digital scholarship and pedagogy. Even though some scholars, such as Kathleen Fitzpatrick, are addressing the need for new models of peer review, recognition of the ways that this process has already been transformed in the digital realm remains limited. The 2010 report from the Center for Studies in Higher Education (hereafter cited as the Berkeley Report) comments astutely on the conventional role of peer review in the academy:
Among the reasons peer review persists to such a degree in the academy is that, when tied to the venue of a publication, it is an efficient indicator of the quality, relevance, and likely impact of a piece of scholarship. Peer review strongly influences reputation and opportunities. (Harley et al. 21)
These observations, like many of those presented in this document, contain considerable wisdom. Nevertheless, our understanding of peer review could use some reconsideration in light of the distinctive qualities and conditions associated with digital humanities. The status of peer review has shifted, but there have not been sufficient conversations about the implications of those changes. While there is some understanding that digital work demands new configurations of review, there is still insufficient awareness that these processes have already been changed in substantial ways. Nevertheless, some scholars, such as Steve Anderson and Tara McPherson, emphasize the dangers accompanying such shortsightedness:
Yet we resist such change at our peril. In a moment when universities and governments in the United States and abroad seem intent on shrinking the humanities and on interrogating their value, digital media offer an avenue to reinvigorate our scholarship and to communicate it in compelling new ways. This capacity of the digital to present work to a broader audience means that our work can circulate in many forms, in different affective registers, and in richer dialogues. (149)
The work of many scholars would benefit from such changes. As market forces and other non-intellectual considerations reduce opportunities for scholarly exchange in smaller humanistic fields, such as women’s writing, electronic media offers great promise that should be supported, rather than constrained.
As an example of important alterations already silently occurring in the peer review process, I would like to draw attention to the work of Dr. Melanie Doherty, a junior humanist at Wesleyan College in Macon, Georgia, a college serving a socioeconomically diverse population of women. A few months ago, Dr. Doherty sent me (as Director of the EWWRP) an email, asking if I would be interested in a digital archive project that she was creating with Sybil McNeil in the Wesleyan Library. Her message offered an overview of this endeavor:
As Wesleyan College celebrates its 175th anniversary this year as the first college in the world chartered to grant degrees to women, our library has begun to digitally archive student writings and artifacts from the mid-19th through 20th century. Wesleyan holds a wealth of unique materials discussing women’s history in the South. These include student writings, speeches from visiting dignitaries, and letters from notable 19th- and 20th-century feminists, as well as photos, clothing and artworks that span the school’s rich past from the 1840s to the present day. These artifacts detail invaluable information about the lives of women in the antebellum South through the world wars, women’s suffrage, and Civil Rights movements, all documenting the achievements of women with fascinating insight into their daily lives.
Your collection has already featured work from some notable Wesleyan alums, including Loula Kendall Rogers, and we have much more material that would be relevant to the Emory archive. As sister college to Emory, and as your institution also celebrates its 175th anniversary this year, we thought this might present a great opportunity to collaborate.
Intrigued by the project, I met with Dr. Doherty several times in person and over Skype. I also gathered a group of relevant local library and technological personnel, so that we could all discuss whether and how Wesleyan’s archival efforts could be supported by Emory. As these conversations evolved, several key issues emerged regarding the atypical nature of peer review and collaboration in digital humanities. The academic review aspect of this undertaking illustrates a noteworthy, but under-recognized shift in the professional trajectories of junior scholars involved in digital humanities. Dr. Doherty approached me, in part, because Wesleyan does not have sufficient server capacity to house any archives that she and McNeil are able to produce. In addition, Emory (and Georgia Tech, another potential partner) possesses a range of technological equipment and expertise that Wesleyan cannot replicate. Facing such obstacles is a standard feature in modern digital scholarship, as the Berkeley Report makes clear:
humanists are seldom able to pay for extensive support out of personal research funds and many voiced the need for “in-house” (i.e., institutional) technical support for individual research projects. Libraries are often on the front lines of supporting these faculty with their research and publication needs. For example, the library is assumed, in many cases, to be the locus of support for archiving, curation, and dissemination of scholarly output. (27)
Accordingly, Doherty proposed that the EWWRP might house the Wesleyan archive as a distinctive collection among the others currently comprising this digital enterprise. This prospect made immediate sense to me. The Wesleyan archive appears to be of significant academic interest, and I believe strongly in supporting the efforts of talented junior scholars, particularly when they are working on projects involving Women’s Studies and digital humanities.
[Image: Emory’s Center for Interactive Teaching]
Over the years, I have been able to offer tangible, moral, and advocacy support to a number of less-established scholars, both male and female, who have grown interested in these fields. In this instance, however, while I could fulfill the crucial role of facilitator, I could not provide the level of authorization that Dr. Doherty would need in order to submit the strong grant proposal she was trying to create. Although I direct the EWWRP, I do not “own” the digital space it inhabits. I work closely with colleagues in Academic Technology and the Woodruff Library, but I make no decisions regarding the allocation of their time, expertise, or priorities. At the same time, these colleagues typically have some ability to determine where to devote their attention, but generally lack the authority to decide independently what kinds of projects they will support in their capacity as Emory employees. As I note elsewhere (Cavanagh 5), this situation contrasts dramatically with my experience of starting the EWWRP. In the mid-nineties (a lifetime ago in digital chronology), faculty and librarians at Emory faced comparatively few similar constraints. It was an era of fledgling digital exploration. Those of us experimenting in these realms formed partnerships with limited official interference. We were not required to justify our efforts very often, in part because relatively few people were paying much attention. Dr. Chuck Spornick and Dr. Alice Hickcox in the Lewis H. Beck Center for Electronic Collections and Services were charged with supporting faculty with digital endeavors. Fortunately for me, they were eager to become engaged with the EWWRP and have remained valuable collaborators ever since.
Today, however, there are a number of competing needs and priorities that potential Emory partners, such as Dr. Hickcox in the Beck Center and Dr. Stewart Varner of Emory’s Digital Scholarly Commons, need to address before they can offer ongoing participation in any project. Like other units of the university, Woodruff Library has its own Strategic Plan detailing its official ambitions, goals, and priorities. Within the Library and in various divisions of Information Technology, numerous business plans and other germane documents identify the kinds of endeavors that will further these aims. As readers of this journal probably know all too well, women writers and women’s history are not likely to figure prominently in typical university technological vision statements. There may or may not be active opposition to this kind of academic focus, but faculty in these fields cannot presume that everyone will recognize the value of such projects. The individuals making decisions about technological resources are often not scholars themselves, while even those who offer both scholarly and technical expertise are likely to come from disparate fields. Accordingly, while “review” remains, traditional conceptualizations of “peer” recede.
This common situation leads to the largely unseen shift in the kind of review current digital scholars encounter. In traditional print scholarship, faculty face peer review much later in the trajectory of their research. They might, at some point, apply for a grant, but many humanistic scholars complete their projects successfully with appropriate access to relevant library collections and sufficient time to devote to their research. Faculty at more affluent institutions often “access” more financial resources and more amenable teaching loads than their colleagues with less comfortable circumstances, but everyone is eligible to apply for grants and fellowships from organizations like the NEH or the ACLS. According to conventional wisdom, moreover, scholars are often best situated to receive such grant support if they apply after their work is largely completed. Applications written when the relevant research has already been done are said to provide more compelling accounts indicating the worthiness of the project. I have never seen non-anecdotal evidence confirming this common belief, but the premise carries considerable logical merit.
Digital work, such as Dr. Doherty’s, cannot be created under comparable circumstances, however. As detailed above, the successful implementation of her plans for a digital archive requires a significantly different review process. She cannot present a finished or nearly completed project for evaluation; she needs approval from varied sources before she can even proceed past the conceptual stage of her endeavor. Numerous people from several institutions need to agree that her idea holds merit and fits within existing, non-scholarly priorities, before she can move forward with it. This situation reflects today’s norm. As the MLA Guidelines for Evaluating Work with Digital Media in the Modern Languages suggest, digital scholars invariably work with a range of project collaborators:
Humanists are not only adopting new technologies but are also actively collaborating with technical experts in fields such as image processing, document encoding, and information science. Academic work in digital media should be evaluated in the light of these rapidly changing institutional and professional contexts, and departments should recognize that some traditional notions of scholarship, teaching, and service are being redefined.
Notably, however, this now common reconfiguration of faculty work makes it difficult to characterize the procedures Doherty followed as involving traditional “peer review.” Unlike the “blind” evaluative procedures followed in conventional promotion, tenure, and grant reviews, Dr. Doherty needed to approach people openly and directly. She also required assistance in determining who to contact at potential partner institutions, since such information can be impossible to discern from university websites. In addition, the typical conceptualization of what constitutes a “peer” becomes complicated in these instances.
Since a digital project demands support outside the faculty of a given institution, the work regularly requires authorization from those who do not typically engage in faculty peer review. The necessary evaluation, moreover, often includes serious consideration of factors that have nothing to do with scholarly quality. Like the many university presses that have eliminated monograph series or gone “trade” for financial rather than intellectual reasons, those able to authorize digital projects make decisions based on a broad range of considerations that are distinct from elements key to promotion and tenure discussions.
At a large university, for example, projects in the humanities may be competing for funding and attention with proposals from diverse professional schools. Resources might be allocated by individuals without a particular commitment to the humanities or by those holding any number of competing interests. Unlike a journal article, book proposal, or grant application that is sent to an “expert” in a relevant field, a digital decision can be made by people from a range of positions, both academic and not, within a college or university. A successful application may indicate scholarly value, but not necessarily, just as a failed proposal may stop a scholar in his or her tracks, but may not suggest that the idea was flawed.
Obviously, traditional scholarship also confronts the influence of chance, mistake, or other arbitrary roadblocks, but the distinctive situation facing scholars in digital humanities is not widely acknowledged. While a scholar applying for a research grant from the Folger Shakespeare Library does not generally face an applicant pool containing faculty from Engineering, Business, or Law, faculty pursuing digital support often do. The concept of “open access,” therefore, which many academics currently perceive as a primary value in digital production, exists in an environment that is far less open than many scholars recognize. Successful projects may be disseminated through the process termed “open access,” but that does not mean that there is “open access” to developmental resources. In reality, “open” access to the range of personnel and equipment needed to bring a digital humanities project to fruition is rarely available.
For the purposes of this essay, I am not proposing a specific, “one size fits all” response to these circumstances; rather, I am encouraging faculty who hire, tenure, and mentor junior scholars to acknowledge the complicated factors in the world of digital scholarship needing attention. In addition to the under-recognized importance of “non-peer review” in digital undertakings, for example, faculty often have difficulty identifying appropriate experts to participate in more traditional peer review processes. Could an Aphra Behn scholar with no background in electronic media, for instance, provide appropriate evaluation of a digital Behn resource? Would a digital humanist with no familiarity with Behn be a more or less qualified assessor? At what stage is peer review needed? As the MLA Guidelines indicate, “Faculty members who work with digital media should have their work evaluated by persons knowledgeable about the use of these media in the candidate’s field.” An appropriate level of familiarity with digital work is particularly important for outside reviewers, since many faculty members have not been part of informed discussions about how to evaluate digital scholarship. In a hiring discussion at Emory recently, for example, a normally astute faculty member with little digital background remarked that since anyone can post anything on the web, departments should only evaluate items published electronically after standard peer review processes.
While this perspective is understandable, it demonstrates a common inability to consider the need for revised evaluative guidelines if we are going to encourage innovative new scholarship. “Self-publishing” on the web, for instance, does not correspond to traditional print “self-publishing” as closely as many non-digitally savvy faculty members believe. The web certainly can serve as an electronic vanity press, but it can also facilitate rapid and revisable dissemination of important scholarly material. Not recognizing the differences between appropriate traditional and digital review is likely to hurt scholarship, as Kathleen Fitzpatrick notes:
Imposing traditional methods of peer review on digital publishing might help a transition to digital publishing in the short term, enabling more traditionally-minded scholars to see electronic and print scholarship as equivalent in value, but it will hobble us in the long term, as we employ outdated methods in a public space that operates under radically different systems of authorization. (9)
As Fitzpatrick suggests, a reimagining of peer review will provide a crucial step toward needed academic progress. Traditional peer review often does not meet the needs of electronic production. In an article on a related topic (Cavanagh 10) I recently described the significant scholarly achievement demonstrated by my colleague Harry Rusche’s “Shakespeare’s World” websites, even though Professor Rusche’s work did not undergo standard peer review. Since Professor Rusche began his impressive archive long after he received tenure, he was not impeded by the paucity of evaluative bodies available to offer peer review for projects such as his that are created without grant funding.
Only a few groups, such as NINES (Networked Infrastructure for Nineteenth-Century Electronic Scholarship) and its “sister” group 18thConnect, provide this type of external review for digital work within their subject areas. In addition, NINES is partnering with the NEH to formulate detailed review guidelines for projects emerging across the digital humanities horizon (Wheeles). Nevertheless, the “field” of evaluation for digital scholarship is still largely under development. In the meantime, both junior and senior faculty members continue to expand their digital projects.
While Rusche — a full professor — can devote considerable attention to his acclaimed collection of Shakespearean postcards, however, an untenured scholar would be taking a significant risk by following this example. Although the quality of such work can be assessed through appropriate criteria, many institutions have not addressed what standards might be applicable for their hiring or promotion and tenure processes. “Access” to the opportunity of creating digital work is currently denied to many untenured scholars, therefore. Written guidelines for digital assessment rarely exist and many tenured faculty members remain unable, unwilling, or blind to the need to adapt current promotion criteria to digital scholarship.
Not surprisingly, the “privilege” of undertaking digital scholarship thus often falls to those who have already received tenure through traditional channels. Mentoring practices tend to reinforce this pattern. According to the Berkeley Report, for example:
The advice given to pre-tenure scholars was quite consistent across all fields: focus on publishing in the right venues and avoid spending too much time on public engagement, committee work, writing op-ed pieces, developing websites, blogging, and other non-traditional forms of electronic dissemination . . . (10)
Scholars on the tenure track accordingly often resist such risky avenues, given the considerable pressures associated with the pre-tenure probationary period. Academics with even less employment stability, such as graduate students and other non-tenure track scholars, face additional challenges that also need more serious attention than they currently receive. In the next section of this discussion, I would like to highlight the work of three such scholars, graduate students Amy Elkins and Catherine Doubler at Emory, and my collaborator, Dr. Kevin Quarmby, a recent Ph.D. who teaches in London. None of these promising scholars currently hold tenure track positions. They are all involved in exciting digital projects, however, that demonstrate the short-sightedness of pushing scholars to postpone such endeavors until after tenure, while underscoring the significant scholarly benefits possible if faculty and administrators more actively encouraged electronic scholarship of many kinds.
Amy Elkins won the 2011 South Atlantic Modern Language Association Graduate Student Essay Prize for her essay, “Cross-Cultural Kodak: Snapshot Aesthetics in the Fiction of Virginia Woolf.” This print essay is forthcoming in South Atlantic Review. As this accolade suggests, Ms. Elkins is a talented literary scholar, whose graduate career shows great potential. Fortunately, she is not restricting herself to the print domain. One of her scholarly projects involves the creation of an intriguing digital archive that draws from several institutional collections. She describes the project in a recent email:
For some time I’ve been working on creating a digital archive of the Potter’s Wheel, a manuscript magazine created by Sara Teasdale and a group of women artists and writers (they called themselves The Potters) in St. Louis from 1904-1907. I’ve located all of the extant manuscripts in special collections libraries, and I’ve been working to get those libraries to digitize their holdings so that I can get the page images on an Omeka site. I envision a scholarly resource, as well as a teaching resource for a range of scholars across the disciplines.
Elkins details the trajectory of her digital creation in terms that resonate with many who enter this field:
Working on a DH [digital humanities] project has put me in touch with a whole range of amazing scholars. I’ve opened up lines of communication with professors who have an interest in DH such as yourself, a wonderful Teasdale scholar who is totally behind the archive, other graduate students working on the intersection of visual art, book history, and the digital, and a network of DH enthusiasts on Twitter. . . Also, the staff at Yale’s Beinecke Library has been tremendously helpful. . . The hindrances: Not all libraries are equipped to do high quality digitization or they don’t want to use their manpower helping someone from another institution. DH is truly collaborative, which means that you have to rely on other people to get the balls rolling.
As Elkins is discovering, the collaborative efforts involved in digital work can be both exhilarating and frustrating. They also require a different skill set than was needed by many of the tenured faculty who are mentoring upcoming generations of students. Traditional print scholarship often leads to intellectual exchanges at conferences and elsewhere, but it does not demand cooperation as frequently as digital humanities does. While there may be a range of personality types represented among humanities academics, the conventional image of a scholar working in comparative isolation corresponds to the largely solitary process that has led to many scholarly articles and monographs in print. One might eagerly share ideas with colleagues over coffee at the Newberry Library, for instance, but the rest of an archival scholar’s day is likely to be spent predominantly with the library’s holdings. Conversations with knowledgeable colleagues may be valuable in this model, but they are generally not imperative for the mere existence of a project. In digital humanities, however, it is a rare scholar who is able to actualize an entire project without substantial contributions by a host of technologists, librarians, and others whose knowledge complements that provided by the scholar(s) envisioning an electronic product.
These necessary partnerships offer further complications to issues involving access. Clearly, collaborative work has a different history in the humanities than in the sciences, and conventional reward structures in humanistic disciplines do not always easily accommodate mutual efforts. Although a few humanists, such as Lisa Ede and Andrea Lunsford, address the challenges and benefits of collaborative work, humanistic fields have generally not caught up with such practices. Procedures for determining how to assess individual contributions to joint endeavors can be developed, but most humanities departments have yet to initiate such discussions in any serious or systematic way.
Given the widely recognized transformation within traditional print publication outlets, humanities scholars cannot afford to postpone such vital discussions any longer. Newer scholars need to produce work within current practical constraints. Senior faculty who assess this scholarship and who hire and mentor this cohort are irresponsible if they do not acquire the knowledge they need in order to bring promotion and tenure criteria into alignment with technological, material, and philosophical changes in the intellectual marketplace. Standards do not need to be lowered, just shifted. Senior faculty must recognize, for instance, that many common contemporary scholarly practices, such as collaboration, can no longer be perceived as aberrant or unworthy of “credit.” In addition, as the MLA guidelines for evaluating electronic scholarship suggest, “credit” may need to be allocated unconventionally:
Institutions should also take care to grant appropriate credit to faculty members for technology projects in teaching, research, and service, while recognizing that because many projects cross the boundaries between these traditional areas, faculty members should receive proportionate credit in more than one relevant area for their intellectual work. (“Guidelines”)
Digital scholarship is transforming our professional lives and none of us will benefit by ignoring or resisting the challenges introduced by these new formats and modes of thinking. Noting the importance of such academic reconfigurations, the Berkeley Report suggests: “As faculty continue to innovate and pursue new avenues in their research, both the technical and human infrastructure will have to evolve with the ever-shifting needs of scholars” (iii).
Concurrently, the professoriate will also need to expand the range of topics and media that are welcomed into scholarly conversations. As a graduate student in the 1980s, I was warned not to undertake scholarship on women writers until I had tenure. Similar cautions were offered to many contemporaries with scholarly interests in other fields deemed professionally “risky.” Over the years, the kinds of scholarship prompting such suspicion may change, but a pattern of resistance to certain topics of inquiry recurs. As digital options broaden the types of presentation models available to scholars, multimedia presentations also arouse both caution and suspicion. Senior “gatekeepers” thereby stand in the way of vibrant modes of innovation that may keep the humanities alive.
Catherine Doubler’s work demonstrates how limiting such intellectual restrictions can be. Her self-designated “second book project” concerns the work of the controversial “anti-Stratfordian” Delia Bacon. Understandably viewed skeptically by the Shakespearean establishment, long wearied of spurious claims against the “Bard of Avon,” Bacon is the kind of figure junior scholars have traditionally been warned against investigating. Doubler, however, is expanding her expertise in Bacon’s fascinating intellectual legacy with its surprising connections to today’s digital world, while she completes her dissertation on Renaissance drama and becomes adept with electronic media. As a result, Doubler is creating a tangible scholarly product while exploring intriguing questions about the relationship between theoretical issues emerging through modern media and those raised by earlier intellectuals such as Bacon. At the moment, Doubler is working on digital editions of Bacon’s three novellas, The Tales of the Puritans. As she describes this undertaking, Doubler highlights the unexpected theoretical issues emerging through this digitization effort: “I thought that representing Bacon’s life and work in digital venues could fittingly highlight her own interest in literature and technology.” As part of this electronic process, Doubler has been learning TEI (Text Encoding Initiative) mark-up, which she finds intersects significantly with Bacon’s work:
I have had to make use of two systems of codes when looking at The Tales of the Puritans: the first concerns itself specifically with literary meaning while the second takes a less logocentric view in order to make the novel legible in an online format. As such, I would like to use my experiences of putting Bacon into code to reflect on Bacon’s own obsessions over the concepts of ciphers and secret Languages.
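To make the distinction Doubler describes more concrete, the lines below offer a minimal and purely hypothetical sketch of what such TEI encoding might look like; the passage, names, and identifiers are invented for illustration and are not drawn from her edition, though the elements themselves (div, p, pb, persName, term, note) are standard TEI:

<!-- Hypothetical TEI fragment. Structural markup (div, p, pb) makes the text
     legible online; interpretive markup (persName, term, note) records
     literary judgments about it. -->
<div type="chapter" n="1">
  <pb n="12"/>
  <p>
    <persName ref="#character-1">Martha</persName> kept her counsel in a
    <term>cipher</term> of her own devising.
    <note resp="#editor" type="interpretation">The cipher anticipates
      Bacon’s preoccupation with secret languages.</note>
  </p>
</div>

Markup of the first kind simply allows the text to be rendered; markup of the second kind commits the encoder to an interpretive claim, which is the sense in which Doubler speaks of working with two systems of codes at once.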
While Doubler’s investigation of Bacon’s life and works and her translation of these novellas into digital format are still in embryonic form, the questions emerging make it clear that the theories and practices accompanying modern technology can illuminate such earlier texts in fruitful ways. Whether or not Delia Bacon proves to be a more promising figure of study than previous Shakespeareans have thought, the connection between nineteenth- and twenty-first-century technological codes opens exciting new realms of study. Working digitally in this way can make such work available, bypassing non-qualitative concerns that often stall print publication. This kind of intellectual risk-taking leads to lively and productive humanistic research. In contrast, keeping certain modes and topics “off limits” to junior scholars impedes critical progress, just as demanding scholarly isolation inhibits exploration of the intriguing questions new technologies foster. Broadening the concept of “open access,” on the other hand, to make a wider range of scholarly topics and practices “open,” can invigorate the humanities during these times of debilitating constraints. Expanding the communal impulse behind the now commonly conceived understanding of open access could transform humanistic research.
The kind of energizing intellectual and practical collaboration that Amy Elkins has encountered in her Teasdale work and Melanie Doherty developed for her Wesleyan proposal, moreover, illustrates the importance of expanding and endorsing inter-institutional ventures as well as intellectual partnership between individuals. Emory’s strong support of my collaboration with Dr. Kevin Quarmby in London models the brand of forward thinking that can facilitate an array of future scholarly initiatives, but it also demonstrates the value of shared innovation and the benefit of deliberate cooperation between diverse practical and intellectual goals. As our World Shakespeare Project and related endeavors have evolved, we have received practical support from many faculty, staff, and administrators working far outside the realm of early modern drama. Their engagement remains vital to our success, which is largely created through our distinctive, though complementary skill sets. Dr. Quarmby acted professionally in London’s West End for many years before completing his Ph.D. at King’s College, London in 2008. He currently teaches for a number of academic programs in London, and is actively seeking a permanent, full-time, institutional affiliation. Although still living and working in the United Kingdom, he has been named Distinguished Visiting Scholar at Emory’s Halle Institute for Global Learning and Shakespeare Performance Specialist in Virtual Residence at Emory’s Center for Interactive Teaching. He has also received support from Emory’s Center for Faculty Development and Excellence. Clearly, numerous individuals at Emory see advantages to the university’s educational mission through the implementation of this transatlantic research and pedagogic partnership.
The many Emory educational and technological leaders who are contributing to the work that Dr. Quarmby and I are jointly involved in are not demonstrating blind altruism, however. They are not offering technical support and other assistance simply from generosity. Rather, they see our projects as mechanisms for testing new technological and international opportunities that will benefit the University. They also recognize the value to Emory of Dr. Quarmby’s wide-ranging skills as an academic and theatrical practitioner. Our first electronic collaboration, which is ongoing, involves Dr. Quarmby leading acting workshops with students enrolled in an upper-division Shakespeare class. Uniformly praised by undergraduate participants, these sessions enable us to explore the technological and pedagogical opportunities of co-teaching simultaneously from two different countries while offering students the unique perspective provided by a Shakespearean scholar, who has also performed professionally at some of Britain’s most renowned venues, such as the Old Vic and the National Theatre. Alan Cattier, Director of Academic Technology Services at Emory and an impressive team at Emory’s Center for Interactive Teaching, including Wayne Morse, Chris Fearrington, and a cadre of dedicated graduate students, recognize this electronic teaching project as a way to experiment with videoconferencing in a setting where the students are clearly well served. Rather than simply bringing in a guest lecturer for a single class, this technological alliance makes it possible for Dr. Quarmby to work individually with students and to partner with me in planning and assessing assignments. We endeavor to create a sustainable and “scalable” model of electronic collaboration that takes advantage of technological advances responsibly. Emory’s continuing dedication to this project helps us accomplish those goals.
The World Shakespeare Project (WSP) has related, but not identical, aims. In addition to the technological partners mentioned above, the WSP benefits from the enthusiastic support of Vice Provost Holli Semetko, Director of the Halle Institute for Global Learning, and of Professor Steve Walton and his students in the Goizueta Business School. The WSP links electronically with international Shakespearean faculty and students in order to create and sustain Shakespearean education and dialogue opportunities with populations that could not have participated in such projects before modern technology.
Once again, Emory’s significant assistance results from the innovative vision of leaders such as Dr. Semetko and Dr. Walton, whose own areas of professional expertise do not include Shakespeare. Nevertheless, they appreciate the broader pedagogical and technological implications of projects such as ours. Dr. Walton’s students, for example, are gaining relevant business experience by helping us craft a business plan, while faculty across the campus benefit from our success with communicating internationally despite disparities between time zones, cultural and educational differences, and widely variant technological infrastructures. Dr. Quarmby and I have a host of intellectual and pedagogical goals to pursue through the WSP, but we can simultaneously fulfill broader institutional needs without compromising our own plans. This kind of mutual benefit does not occur spontaneously, but can result from open discussions and alertness to the needs of our domestic and international partners.
While Shakespearean drama falls outside the central academic scope of this journal, the WSP draws attention to a number of issues pertinent to the intertwined topics of peer review, collaboration, and access that affect scholars in all fields. As a long-time faculty member at a major research university, I am fortunate enough to have an academic base willing and able to support my own work and that of talented colleagues, such as Melanie Doherty and Kevin Quarmby. As noted, Wesleyan College does not possess the computer resources needed to create and maintain its own digital archive, while Dr. Quarmby does not currently have direct access in London to the range of technological expertise available through Emory. While both of these scholars are pursuing worthy academic projects, their institutional affiliations do not provide the resources they need in order to complete their work. Collaboration with a university like Emory is critical, therefore, since this electronic work could not exist otherwise. With the library and archival resources openly available in London, Dr. Quarmby could produce his recent book, The Disguised Ruler in Shakespeare and His Contemporaries (Ashgate, 2012), without this kind of institutional backing. Serious digital work, in contrast, remains significantly less possible for scholars working outside robust research institutions. Such projects can be of substantial benefit to individual scholars and to collaborating institutions, however, suggesting that there would be great merit in wider support of such cooperative efforts.
Such inter-institutional cooperation and other collaborative models can lead to projects that benefit all participants. Concurrently, however, they highlight important changes in the shape of faculty work that require more widespread attention. Senior humanists need to recognize, for example, the vital role of evaluation outside traditional “peer review” in the creation and sustenance of the kinds of digital products discussed here, if they are going to mentor their graduate students and junior colleagues appropriately. In each instance outlined above, most of the key personnel who determined whether or not these projects could continue were not faculty experts in the relevant field. Although some of these individuals hold doctorates, they do not generally fit the disciplinary profile typical departments would use when choosing outside evaluators for these junior scholars’ tenure reviews. Instead, they have been trained in a range of subjects, often widely variant from the content specialty of the graduate students and junior scholars approaching them for assistance.
While peer review remains important in the academy, senior faculty would do well to mentor junior colleagues about the importance of developing connections outside traditional disciplinary and faculty/staff boundaries. Institutions could profitably offer training to graduate students in the emerging entrepreneurial aspects of their professional lives. Knowing whom to contact in an institution for what kind of support is a skill that not all humanists possess instinctively. Many current senior faculty never needed to develop this ability during their own careers. Increasingly, however, access to vital scholarly resources is likely to depend upon developing expanded sets of skills, including many that are not specifically intellectual. The partnership that Dr. Quarmby and I are forging with the Goizueta Business School, for example, and the many links we have created with international institutions, do not result from anything we learned in graduate school, but still illustrate the range of practical skills that are becoming necessary for humanists to create successful careers in their disciplines. While content knowledge will undoubtedly remain central, it is unlikely to be sufficient for a scholar to thrive in a digital environment.
My goal in this essay is to encourage conversations about significant aspects of digital scholarship and pedagogy that have not yet surfaced in the awareness of many key players in the intertwined processes of mentoring, hiring, tenure, and promotion. Those who do not work in electronic realms themselves need to acquire a clearer understanding of the particular requirements of this rapidly expanding scholarly domain. “Access” to the ability to create substantive digital work emanates from markedly different sources than comparable access to traditional scholarship and pedagogy. Once completed, the resulting projects often do not easily fit conventional evaluative mechanisms. Electronic media have become pervasive in all of our lives, just as many institutions are facing severe financial constraints. These concurrent realities bring an urgency to the issues addressed here that contrasts with the slow pace that often characterizes significant change in higher education.
Originally published by Sheila Cavanagh at Interactive Journal for Women in the Arts, March 2012.
Anderson, Steve, and Tara McPherson. “Engaging Digital Scholarship: Thoughts on Evaluating Multimedia Scholarship.” Profession 2011: 149. Modern Language Association. Web. 23 Feb. 2012.
Cavanagh, Sheila T. Emory Women Writers Resource Project. Emory University. 2006. Web. 23 Feb. 2012. <http://womenwriters.library.emory.edu/>.
Cavanagh, Sheila T. “How Does Your Archive Grow?: Academic Politics and Economics in the Digital Age.” Appositions: Studies in Renaissance/Early Modern Literature and Culture 4 (2011). Web. 23 Feb. 2012.
Cavanagh, Sheila T., and Kevin A. Quarmby. World Shakespeare Project: A Model for Live Shakespeare Interaction in the New Media World. N.p. n.d. Web. 23 Feb. 2012. <www.worldshakespeareproject.org>.
Doherty, Melanie. Message to the Author. 9 Aug. 2011. E-mail.
Doubler, Catherine. Message to the Author. 24 Oct. 2011. E-mail.
Elkins, Amy. Message to the Author. 19 Oct. 2011. E-mail.
Fitzpatrick, Kathleen. Planned Obsolescence: Publishing, Technology, and the Future of the Academy. New York: NYU Press, 2011: 9. Web. 23 Feb. 2012. <http://mediacommons.futureofthebook.org/mcpress/plannedobsolescence/>.
“Guidelines for Evaluating Work with Digital Media in the Modern Languages.” Modern Language Association. 2012. Web. 23 Feb. 2012. <http://www.mla.org/guidelines_evaluation_digital>.
Harley, Diane, et al. Final Report: Assessing the Future Landscape of Scholarly Communication: An Exploration of Faculty Values and Needs in Seven Disciplines. CSHE 1.10 (Jan. 2010): 21. Web. 23 Feb. 2012.
Rusche, Harry. Shakespeare Illustrated. Emory College English Department. n.d. Web. 23 Feb. 2012. <shakespeare.emory.edu/illustrated_index.cfm>.
Smith, Sidonie. “Beyond the Dissertation Monograph.” Modern Language Association Newsletter 42.1 (Spring 2010): 2. Web. 23 Feb. 2012. <http://www.mla.org/nl_archive>.
Wheeles, Dana. “NEH Summer Institute: Evaluating Digital Scholarship.” NINES: Nineteenth-Century Scholarship Online. 19 Oct. 2010. Web. 23 Feb. 2012. <http://www.nines.org/news/?m=201010>.
This is the lightly edited text of a talk given at the 2011 NINES Summer Institute, a National Endowment for the Humanities-funded workshop on evaluating digital scholarship for purposes of tenure and promotion, hosted by the Networked Infrastructure for Nineteenth-Century Electronic Scholarship. It builds on a more formal essay written for an open-access cluster of articles on the topic in Profession, the journal of the Modern Language Association (MLA). A pre-print of that essay was provided to NINES attendees in advance of the Institute.
As you’ll divine from the image above, I’ll spend my time today addressing human factors: framing collaboration within our overall picture for the evaluation of digital scholarship. I’ll pull several of the examples I’ll share with you from my contribution to the Profession cluster that our workshop organizers made available, and my argument will be familiar to you from that piece as well. But I thought it might be useful to lay these problems out in a plain way, in person, near the beginning of our week together. Collaborative work is a major hallmark of digital humanities practice, and yet it seems to be glossed over, often enough, in conversations about tenure and promotion.
We can trace a good deal of that silence to a collective discomfort, which much of my recent (“service”) work has been designed to expose — discomfort with the way that our institutional policies, like those that govern ownership over intellectual property, codify status-based divisions among knowledge workers of different stripes in our colleges and universities. These issues divide digital humanities collaborators in even the healthiest of projects, and we’ll have time afterwards, I hope, to talk about them.
But I want to offer a different observation now, more specific to the process that scholars on tenure and promotion committees go through in assessing readiness for advancement among their acknowledged peers. My observation is that the tenure and promotion (T&P) process is a poor fit to good assessment (or even, really, to recognition) of collaborative work, because it has evolved to focus too much on a particular fiction. That fiction is one of “final outputs” in digital scholarship.
In 2006, the MLA’s task force on evaluating scholarship issued an important report. It asserts the value of collaboration even in an institutional situation where “solitary scholarship, the paradigm of one-author–one-work, is deeply embedded in the practices of humanities scholarship, including the processes of evaluation for tenure and promotion.”
That sets a kind of charge for us, and I’ll read the words of the task force to you:
Opportunities to collaborate should be welcomed rather than treated with suspicion because of traditional prejudices or the difficulty of assigning credit. After all, academic disciplines in the sciences and social sciences have worked out rigorous systems for evaluating articles with multiple authors and research projects with multiple collaborators. We need to devise a system of evaluation for collaborative work that is appropriate to research in the humanities and that resolves questions of credit in our discipline as in others. The guiding rule, once again, should be to evaluate the quality of the results. (“Report” 56–57)
I see this as a clear and unequivocal endorsement of the work for which the set of preconditions I’ll offer you in a little bit intends to clear ground. But I want to pick at that last sentence a little, and encourage some wariness about the teleological thrust of the phrase, “quality of results.”
The danger here (which many of you have confirmed you see happening) is that T&P committees faced with the work of a digital humanities scholar will instigate a search for print equivalencies — aiming to map every project presented to them onto some other completed, unary, and generally privately created object (like an article, an edition, or a monograph). That mapping would be hard enough in cases where it is actually appropriate — and this week we’ll be exploring ways to identify those and make it easier to draw parallels. But I am certain that if you look only for finished products and independent lines of responsibility, you will meet with frustration in examining the more interesting sorts of digital constructions. In examining, in other words, precisely the sort of innovative work you want to be presented with. To attempt a print-equivalency match-up across the board, in every case, is to avoid a much harder activity, the activity I want to argue is actually the new responsibility of tenure and promotion committees. This is your responsibility to assess quality in digital humanities work — not in terms of product or output — but as embodied in an evolving and continuous series of transformative processes.
Many years ago, when we were devising an encoding scheme for a project familiar to NINES attendees, the Rossetti Archive, two of our primary sites for inquiry and knowledge representation were the production history and the reception history of the Victorian texts and images we were collecting and encoding. I find (as perhaps many of you do) that I still locate scholarly and artistic work along these two axes. In conversations about assessment, however, we are far too apt to lose that particular plot. This is because production and reception have been in some ways made new in new media (or at least a bit unfamiliar), and also because they’ve never been adequately embedded — again, as activities, not outcomes — in our institutional methods for quality control.
We have to start taking seriously the systems of production and of reception in which digital scholarly objects and networks are continuously made and remade. If we fail to do this, we’ll shortchange the work of faculty who experiment consciously with such fluidity — but worse: we will find ourselves in the dubious moral position of overlooking other people, including many non-tenure-track scholars, who make up those two systems.
Some Scholars’ Lab non-tenure-track faculty and staff
Digital scholarship happens within complex networks of human production. In some cases, these networks are simply heightened versions of the relationships and codependencies which characterized the book-and-journal trade; and in some cases they are truly incommensurate with what came before. However you want to look at them, it’s plain that systems of digital production require close and meaningful human partnerships. These are partnerships that individual scholars forge with programmers, sysadmins, students and postdocs, creators and owners of content, designers, publishers, archivists, digital preservationists, and other cultural heritage professionals. In many cases, the institutional players have been there for a long time, but collaboration, now, has been made personal again (by virtue of the diversifying of skillsets) and is amplified in degree through the experimental nature of much digital humanities work. (This is an interesting observation to make, perhaps, about our scholarly machine in the digital age. Despite all the focus on cyberinfrastructure and scholarly workflows, we’re fashioning ever closer, more intimate and personalized systems of production.)
To offer just one small example: compare the amount of conversation about layout, typography, and jacket design a scholar typically has with the publisher of a printed book — to the level of collaborative work and intellectual partnership between a faculty member and a Web design professional who (if they’re both doing their jobs well) work together to embed and embody acts of scholarly interpretation in closely-crafted, pitch-perfect, and utterly unique online user experiences.
But it’s not just that we (we evaluators, we tenure committees) fail to appreciate collaboration on the production side. We neglect, too, to consider the systems of reception in which digital archives and interpretive works are situated. In many cases, the “products” of digital scholarship are continually re-factored, remade, and extended by what we call expert communities (sometimes reaching far beyond the academy) which help to generate them and take them up. Audiences become meaningful co-creators. And more: an understanding of reception now has to include the manner in which digital work can be placed simultaneously in multiple overlapping development and publication contexts. Sometimes, “perpetual beta” is the point! Digital scholarship is rarely if ever “singular” or “done,” and that complicates immensely our notions of responsibility and authorship and readiness for assessment.
So my contention is that the multivalent conditions in which we encounter and create digital work demonstrate just how much we are impoverishing our tenure and promotion conversations when we center them on objects that have been falsely divorced from their networks of cooperative production and reception. Now, okay: certainly, committees can and do confront situations in which individual scholars have created digital works without explicit assistance or with minimal collaborative action. But those have long been the edge cases of the digital humanities — so why should our evaluative practices assume that they’re the rule and not the exception?
There’s something deeper to this, though, and it has to do with the academy’s taking, collectively, what is in effect a closed-down and defensive stance toward the notion of authorship. As an impulse, it certainly stems from the larger feeling of embattlement in our corner of the academy. But we must ask ourselves: do we really want to assert the value and uniqueness of a scholar’s output by protecting an outmoded and often patently incorrect vision of the solitary author? Is that the best way to build and protect what we do, together? What kind of favor do we think we’re doing the humanities, when we stylize ourselves into insignificance in this particular way?
To get back to people, here’s my fear: that we’re driving junior scholars, who lack good models and are made conservative by complex anxieties, toward two poor options. These are 1) dishonesty to self, and 2) dishonesty toward others. In the first case, we are putting them in a position where they may choose to de-emphasize their own innovative but collaborative work because they fear it will not fit the preconceived notion of valid or significant scholarly contribution by a sole academic. That’s dishonesty to self. The even nastier flip side is the second case: causing them to elide, in the project descriptions they place in their portfolios, the instrumental role played by others — by technical partners and so-called “non-academic” co-creators.
Early-Career Scholars at the Scholars’ Lab
Now, you might expect me to go straight for a mushy and obvious first step — to argue today that we should work to increase our appreciation for collaborative development practices in the digital humanities. It makes sense that fostering an appreciation — that clarifying what collaboration means in digital humanities — could lead to a formal recognition of the collective modes of authorship that digital work very often implies. Unfortunately, we have to roll things back a bit — and this is why I used the word “Preconditions” in the title of my Profession essay.
In too many cases (this is disheartening, but true) scholars and scholarly teams need reminders that they must negotiate the expression of shared credit at all — much less credit that is articulated in legible and regularized forms. By that I mean forms acceptable within the differing professions and communities of practice from which close collaborators on a digital humanities project may be drawn.
Up by their bootstraps.
We evaluate digital scholarship through a bootstrapped chain of responsibilities. Professional societies and scholarly organizations set a tone. Institutional policy-making groups define the local rules of engagement. Tenure committees are plainly responsible for educating themselves (they often forget this) about the nature of collaborative work in the digital humanities, so that they may adequately counsel candidates and fairly assess them. Scholars who offer their work for evaluation are, in turn, responsible for making an honest presentation of their unique contributions and of the relationship they bear to the intellectual labor of others.
And digital humanities practitioners working outside the ranks of the tenured and tenure-track faculty have a role to play in these conversations as well. We’re talking here about people like me and many of my colleagues in the digital humanities world, like the people I imagine partner with you at your home institutions, and like some of the folks who built NINES and 18th-Connect. We are hybrid scholarly and technical professionals subject to alternate, but equally consequential (though often less protected) mechanisms of assessment. We need you, the tenured and tenure-track faculty, to support us when we assert that credit must be given where it is due. I’ll talk in a little bit about an event — also organized with National Endowment for the Humanities (NEH) support — that took on exactly this issue, and how making such assertions might hasten the regularization of fair and productive evaluative practice among tenure-track and non-tenure-track digital humanities practitioners alike.
Nobody Loves Me, Everybody Hates Me, I’m Gonna Eat Some Worms
But I have to stop to acknowledge that people on my side of that fence (that is, humanities PhDs working as “alternative academics” off the straight and narrow path to tenure) can sometimes be seen rolling their eyes and wondering aloud why you guys remain so hung up on defining individual (rather than your collective) self-worth. I have observed a sotto voce countdown that often happens among experienced digital humanists at panels on digital work at more traditional humanities conferences: “Can we go ten whole minutes into the Q&A without eating these particular worms?” My suspicion is that many folks on the “alt-ac track” are where they are, not only because of a congenital lack of patience, but because they are temperamentally inclined to reject some concepts that other humanities scholars remain tangled up in. And one of the most invidious of these is a tacit notion of scholarly credit as a zero-sum game, which functions as an underlying inhibitor to generous sharing.
A zero-sum game?
But let’s talk about this week. Wouldn’t it be brilliant if this group, with all the energy of NINES and the authority with which it has come to speak, and under the auspices of a prestigious NEH Institute — what if this group could offer, loudly, a primary motivator or two to counter the inhibiting notion that there’s only so much credit to go around? I’ll give you one.
Please consider that the report that comes from this NINES workshop should assert very clearly that healthier scholarship will result from generous and full acknowledgment of the contributions of collaborators — that this kind of acknowledgment must be made and respected in tenure and promotion cases — and that we should begin considering seriously (as the MLA’s task force suggested years ago) the highly legible and articulated modes of acknowledgment that are common in laboratory partnerships within the sciences.
Why do I say “healthier” scholarship will result? Take it from somebody who trained as a humanities scholar but has worked as a peer, for her entire career, with librarians, software developers and designers, professional society representatives, and digital publishers of various sorts. I am convinced that the mere listing of multiple collaborators contributes to what I’ll call the Three Essential P’s. Giving fair and even generous credit to your digital humanities collaborators from all quarters of the academy will make:
the production of digital humanities work a shared and personal enterprise. It’ll make your scholarly work an enterprise in which, in the most granular sense, named librarians, technologists, administrators, and researchers will feel a private as well as professional stake. You just do a better job, now and far into the future, with things that have your name on them.
Maybe part of the reason it is so hard to latch onto the issue of proper credit for diverse collaborators is that those collaborators are represented by so many different professional societies and advocacy groups. Let’s check in with just a few. I’ve found the most instructive examples in the field of public (which is often to say digital) history. My favorite is a statement issued by a “Working Group on Evaluating Public History Scholarship,” commissioned jointly by the American Historical Association (AHA), the National Council on Public History, and the Organization of American Historians (OAH). In 2010, they put out something called “Tenure, Promotion, and the Publicly Engaged Academic Historian” (PDF). This piece starts in the same key I did today, on the matter of process. It strongly endorses the AHA’s Statement on Standards of Professional Conduct, which defines scholarship as “a process, not a product, an understanding [they say] now common in the profession.” And it goes on:
The scholarly work of public historians involves the advancement, integration, application, and transformation of knowledge. It differs from “traditional” historical research not in method or in rigor but in the venues in which it is presented and in the collaborative nature of its creation. Public history scholarship, like all good historical scholarship, is peer reviewed, but that review includes a broader and more diverse group of peers, many from outside traditional academic departments, working in museums, historic sites, and other sites of mediation between scholars and the public. (Working Group 2)
Similarly, here’s something from the MLA’s 1996 report, “Making Faculty Work Visible”:
As institutions develop their own means of assessment, they should consider the wide range of activities that require faculty members’ professional expertise. These would include, in addition to activities more traditionally recognized, inter- and cross-disciplinary projects, teaching that occurs outside the traditional classroom, acquisition of the knowledge and skills required by new information technologies, practical action as a context for analyzing and evaluating intellectual work, and activities that require collective and collaborative knowledge and the dissemination of learning to communities not only inside but also outside the academy. (Making 54; my emphasis).
Perpetual Peer Review
I want you to see where I think both of these statements are trending. It’s an important new notion. As we expand our understanding of the kinds of work open to assessment, we also need to recognize that digital scholarly collaboration bespeaks a different brand of peer review. It’s a good start, don’t you think? — to assert the validity of “collective and collaborative” knowledge production and to acknowledge that review is beginning to include “a broader and more diverse group of peers.” But let’s go a little further.
(And this, I think, you won’t find in any formal statements by a professional society; it might be new to this conversation.) Digital humanities practitioners don’t often say so, but we all know that collaborative work involves a kind of perpetual peer review. What I mean by that is the manner in which continual assessment — often of the most pragmatic kind, and stemming from diverse quarters — becomes a part of day-to-day scholarly practice in the digital humanities. You don’t get this quite so clearly and regularly, in my experience, in any other kind of scholarly work. And it boils down to something simple. Every collaborative action in the development of a digital project asks one big question: Does it work?
Does it work? That is, can this certain theory or intellectual stance, combined with these particular modes of gathering, interpreting, and designing information, result in ongoing production of a reasonably functional and effective digital instantiation, or user experience, or implementation of a collection or a tool? In other words, peer review, in the digital humanities, is not a post-mortem. Instead, evolving intellectual models and digital content undergo constant review by collaborators who are trying to make everything work together. This is less a review of product than of process itself. By implementing aligned systems or project components that make special demands of those models and resources, collaborators are constantly assisting in their refinement. If, in a collaborative project, your code runs and is reasonably usable, and (more importantly) it makes sense in terms of the scholarly argument you and your collaborators are jointly building — then it has gone through some highly significant layers of systematic quality control already. You just can’t say the same of a single-author scholarly essay, even if you discussed a draft with students or peers. So that’s the pragmatic side of things.
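To make that pragmatic check concrete, here is one small, hypothetical illustration in Python (every name in it is invented for the example, and only the standard-library unittest module is assumed): an automated test that encodes a shared scholarly assumption, so that a collaborator’s change which breaks the assumption is flagged in the course of ordinary work rather than at a post-mortem.

import unittest

def normalize_date(raw):
    """Collapse the project's agreed-upon date forms ('1848', 'c. 1848') to an integer year."""
    return int(raw.lower().lstrip("c.").strip())

class TestSharedDateModel(unittest.TestCase):
    def test_circa_dates_are_accepted(self):
        # If a collaborator alters the data model so that circa dates no longer
        # parse, this check fails immediately: a review of process, not a
        # post-mortem of product.
        self.assertEqual(normalize_date("c. 1848"), 1848)

if __name__ == "__main__":
    unittest.main()

A passing suite is never the whole of the review, of course, but it is one of the continuous, pragmatic layers of quality control described above.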
Let’s return to the ethical. This is a dimension that also takes on special significance in the digital humanities. One option always before us, in thinking about collaborative relationships, is to default to a familiar binary: the division between authors and their publication service providers, including book designers and copyeditors, on the model of the university or commercial press. Here, we sometimes (slightly obnoxiously) congratulate ourselves on the way that hands-on work in digital scholarship helps us arrive at a deeper appreciation of technologies of text and media production. As Purdy and Walker note in their article in last year’s Profession:
Though authorial choices [in design modalities, technologies, and conventions] have traditionally been more limited in print, recognizing how collaboration allows for more informed decisions and production competencies can make us appreciate more its value in print as well as digital forms. (Purdy and Walker 186; my emphasis)
Fair enough. But I want to point out that there’s a weird and unsavory assumption, embedded in this passage, of the single scholar as authorial decision-maker. The digital humanities resist that. And I want to remind you, workshop participants, that you should, as you’re writing recommendations this week, take pains to avoid implying that collaboration in digital humanities is merely a means of enhancing a privileged faculty member’s ability to make informed decisions or more sophisticated authorial and directorial choices. (Oh, as the flowchart reads, snap.) There will always be a temptation to trend that way in tenure and promotion conversations, because the stakes are so high and (as Joseph Harris gets at in this passage from his rhet-comp article, “Meet the New Boss, Same as the Old Boss”) every structure we have reifies the notion of the solitary academic’s agency and individual achievement.
Almost all the routine forms of marking an academic career — CVs, annual faculty activity reports, tenure and promotion reviews — militate against [collaboration] by singling out for merit only… moments of individual ‘productivity.’ . . . The structures of academic professionalism, that is, encourage us not to identify with our coworkers but to strive to distinguish ourselves from one another — and, in doing so, to short-circuit attempts to form a sense of our collective interests and identity. (Harris 51–52)
All this is why (although as an organization, it may have a way to go) I like the way the AHA puts things. In its primary document on standards of conduct for historians, it encourages AHA constituents to be “explicit, thorough, and generous in acknowledging… intellectual debts” and promotes what it calls “vigilant self-criticism,” reminding them that “throughout our lives none of us can cease to question the claims to originality that our work makes and the sort of credit it grants to others.” I went looking, by the way, for something similar on ethics from MLA and could only find a narrower and more operational view: “a scholar who borrows from the works and ideas of others, including those of students, should acknowledge the debt, whether or not the sources are published. Unpublished scholarly material — which may be encountered when it is read aloud, circulated in manuscript, or discussed — is especially vulnerable to unacknowledged appropriation, since the lack of a printed text makes originality hard to establish.” (Statement of Professional Ethics)
Now, this is a statement deeply embedded not only in print culture but in a view of scholarship as the product of solitary, reflective action — something generated by an author, perhaps after discussion. And, you know, it’s not untrue of most of the scholarly work the MLA must address. But the AHA’s encouraging of ceaseless self-questioning and “explicit, thorough, and generous” acknowledgment seems better designed to promote the healthy collaborative relationships that digital scholarship demands. Anyway, it quickens the heart a little more.
Lest I give the impression that I’ve been cracking on the MLA too hard, allow me to scold the professional society nearest to my heart, and for which I take responsibility as an elected officer. The Association for Computers and the Humanities (ACH) is the professional organization perhaps best positioned to understand and articulate issues of collaboration and collaborative credit in digital humanities, and we have been conspicuously and entirely silent. This is beginning to change, but we’re not the only quiet ones. Professional societies across the disciplines have failed, far and wide, to advise scholars and tenure committees to value a risky and potentially transformative action. That action, I see now, is one of clarifying the difference — rather than the scholarly sameness — of public and digital humanities. (Timidity among digital humanities associations stems from decades of disenfranchisement, of making the argument that we are scholarly, too. If we take advantage of our newfound centrality in only one way, perhaps this should be it.) Perhaps we could all begin to do this by emphasizing, rather than eliding, the degree to which digital scholars function within heterogenous collaborative networks — new networks (and I’m back to this again) of production and reception.
But we also need to make some concrete and pragmatic recommendations.
The MLA advocates one very specific model in its “Advice for Authors, Reviewers, Publishers, and Editors of Literary Scholarship.” Let’s take a moment to look at it.
Only persons who have made significant contributions and who share responsibility and accountability should be listed as coauthors of a publication. Other contributors should be acknowledged in a footnote or mentioned in an acknowledgments section. The author submitting the manuscript for publication should seek from each coauthor approval of the final draft. The following standards are usually applied to coauthored works: when names of coauthors are listed alphabetically, they are considered to be equal contributors; if out of alphabetical order, then the first person listed is considered the lead author. Coauthors should explain their role or describe their contribution in the publication itself or when they submit the publication for evaluation.
Can the expression of shared credit be so stark, easy, and uniformly applied as this recommendation suggests? I have questions and concerns. How might “responsibility and accountability” be apportioned in contexts where some collaborators provide content, others a digital and intellectual infrastructure for analysis or for publication, and still others are providing design expertise for digital presentation? All of these are part and parcel of a scholarly argument embodied in a digital project. All of these require thought, expertise, and conversation as part of a team. So maybe we should be looking for models in places where teamwork is more a norm. What about scientific publishing? Scholarly editing? Or maybe the most promising: R&D collectives in architecture and the arts?
Apportionment and expression of credit will never be simple or formulaic in digital humanities scholarship, because of the multiple communities and community norms which must be respected and engaged in any collaborative project. The best example I know in the digital humanities is INKE — the huge, multi-national, and interdisciplinary project on Implementing New Knowledge Environments in the context of the digital transformations of the book. I spend some time describing INKE and its governing documents in the Profession piece, so I won’t do that very closely now, but I want to encourage you to take a look at it. This group is notable in the digital humanities community for being self-reflective and regularly conducting analyses of its own processes of collaboration and project management. I think of INKE as a laboratory for measuring the effectiveness of mechanisms like project charters in large and heterogenous groups. Our Praxis Program in the Scholars’ Lab has taken a page from INKE in teaching the drafting of charters for collaborative work.
The basic idea of the INKE charter was to negotiate thorny issues of credit, authorship, and intellectual property in advance — and to have a way to bring new partners into an ongoing project in a way that gave them a sense of the group’s culture and ethos. The decisions about authorship and collective credit that INKE lighted on clearly have much in common with the lab model of the sciences.
According to the charter, collaborators
receive named co-authorship credit on presentations and publications that make direct use of research in which they took an active, as opposed to passive, role (i.e. research to which the individual made a unique and discernible contribution with a substantial effect on the knowledge generated); otherwise, [they] receive indirect credit via the INKE corporate authorship convention. (15–16)
This “corporate authorship convention” is a neat thing. Beyond the noticeable fact that INKE papers often have more listed authors than is common to see in the humanities, you’ll often observe “and INKE Research Group” as a formal listing in the byline of articles and conference presentations. Basically, when the INKE project itself is the topic of a presentation the charter specifies that “all team members should be co-authors.” Here are some more specifics:
We will adopt the convention of listing the team itself, so that typically the third or fourth author will be listed as INKE Research Group, while the actual named authors will be those most responsible for the paper. The individual names of members of the INKE Research Group should be listed in a footnote, or where that isn’t possible, through a link to a web page. Any member can elect at any time not to be listed, but may not veto publication. For presentations or papers that spin off from this work, only those members directly involved need to be listed as co-authors. The others should be mentioned if possible in the acknowledgments, credits, or article citations. (15–16)
The INKE group is quick to assert that the symbolic dimension of its crediting guidelines and charter is key to the success of the project, that it “signals the nature of [the INKE] working relationship.” They call it “a visible manifestation” of agreed-upon relationships, writing that “any published work and data represent the collaboration of the whole team, past and present, not the work of any sole researcher” (6–7). Clearly, they haven’t solved the problem of shared credit in digital humanities, but what’s important is that they have offered a documented and specific model which, over time, could be assessed for its effectiveness and for its impact both on the work that’s being done and on the careers of the people working — many of whom include postdoctoral researchers.
Of course, you don’t write a project charter or a statement of professional ethics unless you’re worried about something. Strong tensions underlie all of these things I’ve highlighted. Many seem to stem not from uncertainty about digital humanists’ ability to negotiate interpersonal relationships, but from a recognition that our institutional policies (listen up, attending deans and provosts!) codify inequities among collaborators of differing employment status. These are university policies that govern position descriptions, the awarding of research time or sabbaticals, standards for annual review, the definition of intellectual labor vs. mere “work for hire,” and (crucially) the ability of staff to assert ownership over their own intellectual property, including for purposes of releasing it as open access content or open source code.
These were the concerns driving an NEH-funded workshop called “Off the Tracks: Laying New Lines for Digital Humanities Scholars,” which was held earlier this year. The workshop focused on administrative issues relating to equitable treatment and professionalization of “scholar-programmers” and “alternate academics” — those employees most likely to claim shared credit alongside faculty partners in digital research.
I was on a working group asked to look at issues of scholarly collaboration — together with Matt Kirschenbaum, Doug Reside, and Tom Scheinfeldt, and we drew on our experience administering MITH, the Scholars’ Lab, and the Roy Rosenzweig Center for History and New Media — three centers that are sites for a great deal of collaboration among people who may have similar backgrounds as scholars and technologists, but whose formal institutional status may vary a great deal. We drafted something we called a “Collaborators’ Bill of Rights,” which was later endorsed by the full workshop assembly and posted for public comment.
#trx4hx Recommendations
Basically, it’s an appeal for fair, honest, legible, portable (this is important!), and prominently-displayed crediting mechanisms. It also offers a dense expression of underlying requirements for healthy collaboration and adequate assessment from the point of view of practicing digital humanists, with special attention to the vulnerabilities of early-career scholars and staff or non-tenure-track faculty. I think things like this, and the INKE charter, are good demonstrations that the digital humanities community is increasingly prepared to address fundamental matters of collaborative credit leading to fair and accurate assessment of digital scholarship. This is going to happen at the grassroots level, and in ways that make sense to practicing digital humanists.
But your task is otherwise. Your audience is different.
What is going to resonate in our academic departments and among our disciplinary professional societies? What might we think of as the chief preconditions for the evaluation of collaborative digital humanities scholarship? I’ll give you six — maybe something to critique, or something to get you started:
So here are six possible preconditions. But really, underlying them all, and maybe the most important thing you could clarify coming out of the NINES Institute, is that faculty under evaluation for promotion or tenure on the basis of collaborative digital projects must never be penalized for offering a full and fair catalog of contributions made by others — that it’s not a zero-sum game.
If the recommendations of this Institute can promote that understanding, and get picked up in the drafting of local, institutional policies, you’ll not only be enabling acts of intellectual generosity. I think you’re going to do something truly strategically productive for our disciplines. Formal and regular acknowledgment of collaboration as part of the ritual of assessment and faculty self-governance will have an educative function in the humanities, and it’ll be deeply consequential for policy and praxis within allied information and knowledge professions, like cultural heritage, IT, and libraries. I think we could expect it to lead to strengthened research-and-development partnerships in the digital humanities — and you’ve already heard me say that I think (back to our 3 P’s) that promoting a sense of shared ownership of knowledge production will result in better design decisions and more enthusiastic preservation of our cultural and scholarly record.
We’ve also got to keep fluid production, publication, and reception venues in the digital humanities in mind, and understand that new media offer important opportunities for scholars to engage not only new audiences but new peers, who will help to make and remake our digital scholarship in the years to come. By accepting any set of “preconditions,” we’re acknowledging that a great deal of work remains to be done, both by our professional societies in making recommendations and setting standards, and on the local scene in which individual scholars and committees of faculty peers continually enact our shared values.
There’s no reason to be afraid of a bit of work. And I think the loveliest thing about this Institute, in terms of the problem of evaluating collaborative digital scholarship, is that you’ve signed on to address the issue not just intensively, over the next few days, but collaboratively. I’ll be watching to see how you’re all credited on the final report!
Originally published by Bethany Nowviskie on May 31, 2011.
Advice for Authors, Reviewers, Publishers, and Editors of Literary Scholarship. Modern Language Association. MLA, 2007–08. Web. 6 July 2011.
Bhopal, Raj, et al. “The Vexed Question of Authorship: Views of Researchers in a British Medical Faculty.” British Medical Journal 314 (1997): 1009–12. Print.
“Collaborators’ Bill of Rights.” Off the Tracks: Laying New Lines for Digital Humanities Scholars. Maryland Inst. for Technology in the Humanities, U of Maryland. 21 Jan. 2011. Web. 5 July 2011.
Ede, Lisa, and Andrea A. Lunsford. “Collaboration and Concepts of Authorship.” PMLA 116.2 (2001): 354–69. Print.
Guidelines for Evaluating Work with Digital Media in the Modern Languages. Modern Language Association. MLA, 2000. Web. 19 Feb. 2011.
Harris, Joseph. “Meet the New Boss, Same as the Old Boss: Class Consciousness in Composition.” College Composition and Communication 52 (2000): 43–68. Print.
Hill, Timothy. “Modes of Collaboration in the (Digital) Humanities.” Digital Humanities—Works in Progress. Centre for Computing in the Humanities, King’s Coll. London, 28 Dec. 2010. Web. 19 Feb. 2011. Blog.
Kent, Phillip G., and Jenny Ellis. The Emerging Role of Scholar-Practitioner: Response to the Draft “Work Focus Categorisation Policy.” U of Melbourne Lib., 28 Jan. 2011. Web. 19 Feb. 2011.
Klenk, Nicole L., et al. “Evaluating the Social Capital Accrued in Large Research Networks: The Case of the Sustainable Forest Management Network, 1995–2009.” Social Studies of Science 40.6 (2010): 931–60. Print.
Liao, Chien Hsiang. “How to Improve Research Quality? Examining the Impacts of Collaboration Intensity and Member Diversity in Collaboration Networks.” Scientometrics 86.3 (2011): 1–15. Print.
Making Faculty Work Visible: Reinterpreting Professional Service, Teaching, and Research in the Fields of Language and Literature: Report of the MLA Commission on Professional Service. Modern Language Association. MLA, 12 Nov. 2003. Web. 5 July 2011.
McGann, Jerome. Imagining What You Don’t Know: The Theoretical Goals of the Rossetti Archive. 1997. Inst. for Advanced Technology in the Humanities, U of Virginia, 14 July 2010. Web. 19 Feb. 2011.
Nowviskie, Bethany. “Monopolies of Invention: Collaboration across Class Lines in the Digital Humanities.” 6 Dec. 2010. VeRSI. VeRSI, n.d. Web. 5 July 2011. Video of conf. address.
Purdy, James P., and Joyce R. Walker. “Valuing Digital Scholarship: Exploring the Changing Realities of Intellectual Work.” Profession (2010): 177–95. Print.
“Report of the MLA Task Force on Evaluating Scholarship for Tenure and Promotion.” Profession (2007): 9–71. Print.
Rosenblum, Brian, et al. “Readings on or Models of Collaboration in DH Projects?” Digital Humanities Questions and Answers. Assn. for Computers and the Humanities, Dec. 2010. Web. 19 Feb. 2011. Forum.
Ruecker, Stan, and Milena Radzikowska. “The Iterative Design of a Project Charter for Interdisciplinary Research.” Proceedings of the Seventh ACM Conference on Designing Interactive Systems DIS 08 (2008): 288–94. Web. 19 Feb. 2011.
Scheinfeldt, Tom. “Why Digital Humanities Is ‘Nice.’” Found History. Scheinfeldt, 26 May 2010. Web. 19 Feb. 2011. Blog.
Sheikh, Aziz. “Publication Ethics and the Research Assessment Exercise: Reflections on the Troubled Question of Authorship.” Journal of Medical Ethics 26 (2000): 422–26. Print.
Siemens, Lynne, and INKE Research Group. “From Writing the Grant to Working the Grant: An Exploration of Processes and Procedures in Transition.” New Knowledge Environments 1 (2009): 1–33. Web. 19 Feb. 2011.
Spiro, Lisa. “Collaborative Authorship in the Humanities.” Digital Scholarship in the Humanities. Spiro, 21 Apr. 2009. Web. 19 Feb. 2011. Blog.
———. “Examples of Collaborative Digital Humanities Projects.” Digital Scholarship in the Humanities. Spiro, 1 June 2009. Web. 19 Feb. 2011. Blog.
Statement of Professional Ethics. Modern Language Association. MLA, 16 Aug. 2005. Web. 5 July 2011.
Statement on Standards of Professional Conduct. Amer. Historical Assn., 8 June 2011. Web. 5 July 2011.
Woodward, Kathleen. “The Future of the Humanities—in the Present and in Public.” Dædalus 138.1 (2009): 110–23. Print.
Working Group on Evaluating Public History Scholarship. Tenure, Promotion, and the Publicly Engaged Academic Historian. Natl. Council on Public History, 5 June 2010.
This is a collaboratively-written call for the American Historical Association to appoint a task force to survey the profession as to the place of digital historical scholarship in promotion and tenure and graduate student training and to recommend standards and guidelines for the profession to follow. This document is a product of many of the exciting changes discussed below. It began at a session at THATCamp AHA 2012 that included graduate students, tenured and non-tenured faculty, and librarians. These participants and others continued their conversations at the physical conference and afterwards on the web. Additional signatures and edits in the Google Doc were solicited via Twitter, and through posts on Jason’s blog and by Alex on GradHacker. The letter was then submitted to the American Historical Association’s Research Division on January 26, 2012. On June 2, 2012 the AHA announced the establishment of a Task Force on Digital Scholarship.
The addition of the term “digital” to the humanities signals an exciting turn spurred by both technological change and an expanded understanding of scholarship. The unprecedented number of sessions focusing on digital scholarship at the 126th annual meeting of the American Historical Association in Chicago indicates that historians are active participants in a digital revolution promoting interdisciplinary, open, and collaborative scholarship. Practitioners of digital history are producing excellent models of research, pedagogy, and public engagement. Some models unsettle our understanding of units of scholarship, such as the monograph, while others fall into the recognizable forms of journal publications and edited volumes. The encouragement and recognition of this work by peers has been important to fostering more innovation that will continue to change the field.
Digital tools are transforming the practice of history, yet junior scholars and graduate students are facing obstacles and risks to their professional advancement in using methods unrecognized as rigorous scholarly work. Their peers and evaluators are often unable or unwilling to address the scholarship on its merits. Opportunities to publish digital work, or even to have it reviewed, are limited. Finally, promotion and tenure processes are largely built around 19th-century notions of historical scholarship that do not recognize or appropriately value much of this work. The disconnect between traditional evaluation and training, on the one hand, and new digital methods, on the other, means young scholars take on greater risks when devoting their limited time and attention to new methods that may never face scholarly evaluation on par with traditional scholarly production.
Six years ago, the American Council of Learned Societies (ACLS) reflected: “We might expect younger colleagues to use new technologies with greater fluency and ease, but with more at stake, they will also be more risk-averse. . . . Senior scholars now have both the opportunity and responsibility to take certain risks, first among which is to condone risk taking in their junior colleagues and their graduate students, making sure that such endeavors are appropriately rewarded.”[1] Historians have responded to these difficulties by challenging promotion and tenure processes within their own institutions, by developing graduate programs that train scholars in digital practices, and by experimenting with new models of peer review in publishing.
These early adopters face difficulties in having their digital scholarship properly assessed and valued for promotion and tenure. The faculty of UCLA’s Digital Humanities program have noted difficulties stemming from the fact that digital projects may not look like traditional academic scholarship. They stress that “new knowledge is not just new content but also new ways of organizing, classifying, and interacting with content. This means that a major part of the intellectual contribution of a digital project is the design of the interface, the database, and the code, all of which govern the form of the content.”[2] Therein lies the conundrum: the “digital turn” in the humanities is opening up exciting opportunities for complex digital scholarship, graduate programs are beginning to instruct students in the theories and methods of digital history, and institutions are hiring tenure-line faculty to pursue this new genre of scholarly communication, but a concomitant evolution of the customs and standards of valuing and assessing this new model of scholarship has not developed apace. Or, as the UCLA digital humanities scholars contend, “digital scholars are not only in the position of doing original research but also of inventing new scholarly platforms after 500+ years of print so fully naturalized the ‘look’ of knowledge that it may be difficult for reviewers to understand these new forms of documentation and intellectual effort that goes into developing them.” “This,” they say, “is the dual burden—and the dual opportunity—for creativity in the digital domain.”[3]
Nearly two decades ago, an AHA ad hoc committee on redefining historical scholarship noted: “The AHA defines the history profession in broad, encompassing terms, but is that definition meaningful as long as only certain kinds of work are valued and deemed scholarly within our discipline?”[4] We are asking the American Historical Association to again take up this question, with the ACLS’s observation in mind, and begin paving the way for evaluating digital methods and training. It is essential that the AHA demonstrate leadership to encourage these solutions and to provide guidelines for a widespread institutional definition of what counts as scholarly work in the profession. An ad hoc committee would be instrumental to help achieve the following:
The merits of digital scholarship in the historical profession demand that we again ask what counts.
Originally drafted and signed by,
Alex Galarza, Michigan State University
Jason Heppler, University of Nebraska – Lincoln
Douglas Seefeldt, University of Nebraska – Lincoln*
Further edited/signed by,
Brian Sarnacki, University of Nebraska – Lincoln
Robert Voss, University of Nebraska – Lincoln
Michael J. Kramer, Northwestern University
Brandon Locke, University of Nebraska – Lincoln
Peter Alegi, Michigan State University
Chad Black, University of Tennessee, Knoxville
Heather Munro Prescott, Central Connecticut State University
Brian Rutledge, Cornell University
Miriam Posner, University of California, Los Angeles
Larry Cebula, Eastern Washington University, Cheney
Leslie C. Working, University of Nebraska – Lincoln
Gretchen A. Adams, Texas Tech University
Amanda H. Forson, Loyola University Chicago and Dominican University
Gary J. Kornblith, Oberlin College
Naoko Shibusawa, Brown University
Melissa Bruninga-Matteau, Yavapai College
Brenda Elsey, Hofstra University
Sharon M. Leon, Rosenzweig Center for History and New Media, George Mason University
W. Caleb McDaniel, Rice University
Kristen D. Nawrotzki, Pädagogische Hochschule Heidelberg, Germany
Frederic L. Propas, Instructor, San José State University
Allen Dieterich-Ward, Shippensburg University
Angel David Nieves, Hamilton College
*Douglas Seefeldt is now at Ball State University
Originally published on January 26, 2012.
The American Association for History & Computing, “Guidelines for Evaluating Digital Media Activities in Tenure, Review, and Promotion.” (2006)
American Historical Association, “Report of the American Historical Association Ad Hoc Committee on Redefining Scholarly Work.” (1993)
Modern Language Association, Report of the Task Force on Evaluating Scholarship for Tenure and Promotion. (2006)
National Council on Public History, “Tenure, Promotion, and the Publicly Engaged Academic Historian” (PDF) Whitepaper. (2010)
University of Nebraska Center for Digital Research in the Humanities (CDRH) “Guidelines on Evaluating Digital Scholarship.”
Todd Presner, “How to Evaluate Digital Scholarship.” (September 2011)
The purpose of this document is to provide a set of guidelines for the evaluation of digital scholarship in the Humanities, Social Sciences, Arts, and related disciplines. The document is aimed, foremost, at Academic Review Committees, Chairs, Deans, and Provosts who want to know how to assess and evaluate digital scholarship in the hiring, tenure, and promotion process. Secondarily, the document is intended to inform the development of university-wide policies for supporting and evaluating such scholarship.
1. Fundamentals for Initial Review: The work must be evaluated in the medium in which it was produced and published. If it’s a website, that means viewing it in a browser with the appropriate plug-ins necessary for the site to work. If it’s a virtual simulation model, that may mean going to a laboratory outfitted with the necessary software and projection systems to view the model. Work that is time based — like videos — will often be represented by stills, but reviewers also need to devote attention to clips in order to fully evaluate the work. The same can be said for interface development, since still images cannot fully demonstrate the interactive nature of interface research. Authors of digital works should provide a list of system requirements (both hardware and software, including compatible browsers, versions, and plug-ins) for viewing the work. It is incumbent upon academic personnel offices to verify that the appropriate technologies are available and installed on the systems that will be used by the reviewers before they evaluate the digital work.
2. Crediting: Digital projects are often collaborative in nature, involving teams of scholars who work together in different venues over various periods of time. Authors of digital works should provide a clear articulation of the role or roles that they have played in the genesis, development, and execution of the digital project. It is impractical — if not impossible — to separate out every micro-contribution made by team members since digital projects are often synergistic, iterative, experimental, and even dynamically generated through ongoing collaborations. Nevertheless, authors should indicate the roles that they played (and time commitments) at each phase of the project development. Who conceptualized the project and designed the initial specifications (functional and technical)? Who created the mock-ups? Who wrote the grants or secured the funding that supported the project? What role did each contributor play in the development and execution of the project? Who authored the content? Who decided how that content would be accessed, displayed, and stored? What is the “public face” of the project and who represents it and how?
3. Intellectual Rigor: Digital projects vary tremendously and may not “look” like traditional academic scholarship; at the same time, scholarly rigor must be assessed by examining how the work contributes to and advances the state of knowledge of a given field or fields. What is the nature of the new knowledge created? What is the methodology used to create this knowledge? It is important for review committees to recognize that new knowledge is not just new content but also new ways of organizing, classifying, and interacting with content. This means that part of the intellectual contribution of a digital project is the design of the interface, the database, and the code, all of which govern the form of the content. Digital scholars are not only in the position of doing original research but also of inventing new scholarly platforms after 500+ years of print so fully naturalized the “look” of knowledge that it may be difficult for reviewers to understand these new forms of documentation and the intellectual effort that goes into developing them. This is the dual burden — and the dual opportunity — for creativity in the digital domain.
4. Crossing Research, Teaching, and Service: Digital projects almost always have multiple applications and uses that enhance—at the same time—research, teaching, and service. Digital research projects can make transformative contributions in the classroom and sometimes even have an impact on the public-at-large. This ripple effect should not be diminished. Review committees need to be attentive to colleagues who dismiss the research contributions of digital work by cavalierly characterizing it as a mere “tool” for teaching or service. Tools shape knowledge, and knowledge shapes tools. But it is also important that review committees focus on the research contributions of the digital work by asking questions such as the following: How is the work engaged with a problem specific to a scholarly discipline or group of disciplines? How does the work reframe that problem or contribute a new way of understanding the problem? How does the work advance an argument through both the content and the way the content is presented? How is the design of the platform an argument? To answer this last question, review committees might ask for documentation describing the development process and design of the platform or software, such as database schema, interface designs, modules of code (and explanations of what they do), as well as sample data types. If the project is, in fact, primarily for teaching, how has it transformed the learning environment? What contributions has it made to learning and how have these contributions been assessed?
5. Peer Review: Digital projects should be peer reviewed by scholars in relevant fields who are able to assess the project’s contribution to knowledge and situate it within the relevant intellectual landscape. Peer review can happen formally through letters of solicitation, but it can also be assessed through online forums, citations and discussions in scholarly venues, grants received from foundations and other sources of funding, and public presentations of the project at conferences and symposia. Has the project given rise to publications in peer-reviewed journals or won prizes from professional associations? How does it measure up to comparable projects in the field that use or develop similar technologies or similar kinds of data? Finally, grants received are often significant indicators of peer review. It is important that reviewers familiarize themselves with grant organizations across schools and disciplines, including the Humanities, the Social Sciences, the Arts, Information Studies and Library Sciences, and the Natural Sciences, since these are indicators of prestige and impact.
6. Impact: Digital projects can have an impact on numerous fields in the academy as well as across institutions and even the general public. They often cross the divide between research, teaching, and service in innovative ways that should be remarked upon. Impact can be measured in many ways, including the following: support by granting agencies or foundations, number of viewers or contributors to a site and what they contribute, citations in both traditional literature and online (blogs, social media, links, and trackbacks), use or adoption of the project by other scholars and institutions, conferences and symposia featuring the project, and resonance in public and community outreach (such as museum exhibitions, impact on public policy, adoption in curricula, and so forth).
7. Approximating Equivalencies: Is a digital research project “equivalent” to a book published by a university press, an edited volume, a research article, or something else? These sorts of questions are often misguided since they are predicated on comparing fundamentally different knowledge artifacts and, perhaps more problematically, consider print publications as the norm and benchmark from which to measure all other work. Reviewers should be able to assess the significance of the digital work based on a number of factors: the quality and quantity of the research that contributed to the project; the length of time spent and the kind of intellectual investment of the creators and contributors; the range, depth, and forms of the content types and the ways in which this content is presented; and the nature of the authorship and publication process. Large-scale projects with major funding, multiple collaborators, and a wide range of scholarly outputs may justifiably be given more weight in the review and promotion process than smaller-scale or short-term projects.
8. Development Cycles, Sustainability, and Ethics: It is important that review committees recognize the iterative nature of digital projects, which may entail multiple reviews over several review cycles, as projects grow, change, and mature. Given that academic review cycles are generally several years apart (while digital advances occur more rapidly), reviewers should consider individual projects in their specific contexts. At what “stage” is the project in its current form? Is it considered “complete” by the creators, or will it continue in new iterations, perhaps through spin-off projects and further development? Has the project followed the best practices, as they have been established in the field, in terms of data collection and content production, the use of standards, and appropriate documentation? How will the project “live” and be accessible in the future, and what sort of infrastructure will be necessary to support it? Here, project specific needs and institutional obligations come together at the highest levels and should be discussed openly with Deans and Provosts, Library and IT staff, and project leaders. Finally, digital projects may raise critical ethical issues about the nature and value of cultural preservation, public history, participatory culture and accessibility, digital diversity, and collection curation, which should be thoughtfully considered by project leaders and review committees.
9. Experimentation and Risk-Taking: Digital projects in the Humanities, Social Sciences, and Arts share with experimental practices in the Sciences a willingness to be open about iteration and negative results. As such, experimentation and trial-and-error are inherent parts of digital research and must be recognized to carry risk. The processes of experimentation can be documented and prove to be essential in the long-term development process of an idea or project. White papers, sets of best practices, new design environments, and publications can result from such projects, and these should be considered in the review process. Experimentation and risk-taking in scholarship represent the best of what the university, in all its many disciplines, has to offer society. To treat scholarship that takes on risk and the challenge of experimentation as an activity of secondary (or no) value for promotion and advancement can only serve to reduce innovation, reward mediocrity, and retard the development of research.
Originally published by Todd Presner in September 2011.
This document was authored by Todd Presner, with contributions, feedback, and language provided by John Dagenais, Johanna Drucker, Diane Favro, Peter Lunenfeld, and Willeke Wendrich. At this point, it has not been “approved” or “adopted” by any institutional body and does not reflect university policies; instead, it is meant to be a discussion document for establishing best practices in the changing academic review process. The authors named above are all affiliated faculty with UCLA’s Digital Humanities program. http://www.digitalhumanities.ucla.edu
This short guide gathers in one place a collection of questions evaluators can ask about a project, a checklist of what to look for in a project, and some ideas about how to find experts. It assumes that evaluators who are assessing digital work for promotion and tenure are:
This is an annotated expansion of a document (PDF) that was prepared as a one-page checklist.
Some questions to ask about a digital work that is being evaluated:
The best way to tell whether a candidate has been submitting their work for regular review is their record of peer-reviewed conference presentations and invited presentations. Candidates should be encouraged to present their work locally (at departmental or university symposia), nationally (at national society meetings), and internationally (at conferences outside the country organized by international bodies). This is how experts typically share innovative work in a timely fashion, and most conferences will review and accept papers about work in progress where there are interesting research results. Local symposia (what university doesn’t have some sort of local series?) are also a good way for evaluators to see how the candidate presents her work to her peers.
A scholarly pedagogical project is one that claims to have advanced our knowledge of how to teach or learn. Such claims can be tested, and there is a wealth of evaluation techniques, including dialogical ones that are recognizable as being in the traditions of humanities interpretation. Further, most universities have teaching and learning units that can be asked to help advise on (or even run) assessments of pedagogical innovations, from student surveys to focus groups. While these assessments are typically formative (designed to help improve rather than critically review), the simple existence of an assessment plan is a sign that the candidate is serious about asking whether their digital pedagogical innovation really adds to our knowledge. Where assessments haven’t taken place, evaluators can, in consultation with the candidate, develop an assessment plan that will return useful evidence for the stakeholders. Evaluators should not look only for enthusiastic and positive results — even negative results (as in, this doesn’t help students learn X) are an advance in knowledge. A well-designed assessment plan that results in new knowledge that is accessible and really helps others is scholarship, whether or not the pedagogical innovation is demonstrated to have the intended effect.
Here is a short list of what to check for in digital work:
Once choices are made about the content, a digital scholar has to make choices about how the materials are digitized and in what digital format. There are guidelines, best practices, and standards for the digitization of materials to ensure their long-term access. These are rarely easy to apply to particular evidence, so evaluators should look for a discussion of what guidelines were adapted, how they were adapted, and why they were chosen. Absence of such a discussion can be a sign that the candidate does not know the practices in the field and therefore has not made scholarly choices.
As mentioned in the previous point, there are guidelines for encoding scholarly electronic texts from drama to prose. The TEI is a consortium that maintains and updates extensive encoding guidelines, which are really documentation of the collective wisdom of expert panels in computing and the target genre. For this reason candidates encoding electronic texts should know about these guidelines and have reasons for not following them if they choose others. The point is that evaluators should check that candidates know the literature about the scholarly decisions they are making, especially the decisions about how to encode their digital representations. These decisions are a form of editorial interpretation that we can expect to be informed, though we should not enforce blind adherence to standards. What matters is that the candidate can provide a scholarly explanation for their decisions that is informed by the traditions of digital scholarship in which the work participates.
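To make concrete the kind of encoding decision at issue, here is a minimal sketch, in Python with the lxml library, of an invented TEI-style fragment and one quick way an evaluator or candidate might inventory the elements an encoding actually uses. The fragment, its text, and its editorial choices are hypothetical and shown only for illustration; they are not drawn from any particular project or from the TEI guidelines themselves.

```python
from lxml import etree  # assumes the lxml library is installed

# A hypothetical, minimal TEI-style fragment. The element names follow TEI
# conventions, but the text and the editorial choice shown here
# (regularizing a spelling with <choice>) are invented for illustration.
fragment = b"""
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <text>
    <body>
      <div type="poem">
        <head>Song</head>
        <lg type="stanza">
          <l n="1">Tell me not, in mournful numbers,</l>
          <l n="2">Life is but an <choice><orig>emptie</orig><reg>empty</reg></choice> dream!</l>
        </lg>
      </div>
    </body>
  </text>
</TEI>
"""

root = etree.fromstring(fragment)

# Inventory the distinct elements used: a rough, machine-checkable proxy
# for the encoding decisions the editor has made.
used = sorted({etree.QName(el).localname for el in root.iter()})
print("Elements used:", ", ".join(used))
# Prints: Elements used: TEI, body, choice, div, head, l, lg, orig, reg, text
```

Even a rough inventory like this gives an evaluator something to ask about: why, for instance, were regularized spellings encoded alongside the originals rather than silently corrected, and which parts of the guidelines were adapted or set aside?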
One of the promises of digital work is that it can provide rich supplements of commentary, multimedia enhancement, and annotations to provide readers with appropriate historical, literary, and philosophical context. An electronic edition can have high resolution manuscript pages or video of associated performances. A digital work can have multiple interfaces for different audiences from students to researchers. Evaluators should ask about how the potential of the medium has been exploited. Has the work taken advantage of the multimedia possibilities? If an evaluator can imagine a useful enrichment they should ask the candidate whether they considered adding such materials.
Enrichment can take many forms and can raise interesting copyright problems. Often video of dramatic performances is not available because of copyright considerations. Museums and archives can ask for prohibitive license fees for reproduction rights, which is why evaluators shouldn’t expect it to be easy to enrich a project with resources; but again, a scholarly project can be expected to have made informed decisions about what resources it can include. Where projects have negotiated rights, evaluators should recognize the decisions and the work of such negotiations.
In addition to evaluating the decisions made about the representation, encoding and enrichment of evidence, evaluators can ask about the technical design of digital projects. There are better and worse ways to implement a project so that it can be maintained over time by different programmers. A scholarly resource should be designed and documented in a way that allows it to be maintained easily over the life of the project. While a professional programmer with experience with digital humanities projects can advise evaluators about technical design there are some simple questions any evaluator can ask like, “How can new materials be added?”; “Is there documentation for the technical set up that would let another programmer fix a bug?”; and “Were open source tools used that are common for such projects?”
The first generations of digital scholarly works were typically developed by teams of content experts and programmers (often students). These projects rarely considered interface design until the evidence was assembled, digitized, encoded, and mounted for access. Interface was considered window dressing for serious projects, which might be considered successful even if the only users were the content experts themselves. Now best practices in web development suggest that needs analysis, user modeling, interface design, and usability testing should be woven into large-scale development projects. Evaluators should therefore ask about anticipated users and how the developers imagined their work being used. Did the development team conduct design experiments? Do they know who their users are, and how do they know how their work will be used? Were usability experts brought in to consult, or did the team think about interface design systematically? The advantage to a candidate of engaging in design early on is that it can result in publishable results that document the thinking behind a project even where it may be years before all the materials are gathered.
It should be noted that interface design is difficult to do when developing innovative works for which there isn’t an existing self-identified and expert audience. Scholarly projects are often digitizing evidence for unanticipated research uses and should, for that reason, try to keep the data in formats that can be reused whatever the initial interface. There is a tension in scholarly digital work between a) building things to survive and be used (even if only with expertise) by future researchers and b) developing works that can be immediately accessible to scholars without computing skills. It is rare that a project has the funding both to digitize to scholarly standards and to develop engaging interfaces that novices find easy. Evaluators should therefore look for plans for long-term testing and iterative improvement, facilitated by a flexible information architecture that can be adapted over time. A project presented by someone coming up for tenure might have either a well documented and encoded digital collection of texts or a well documented interface design process, but probably not both. Evaluators should encourage digital work that has a trajectory that includes both scholarly digital content and interface design, but not expect such a trajectory to be complete if the scope is ambitious. Evaluation is, after all, often a matter of assessing scholarly promise, so evaluators should ask about the promise of ambitious projects and look for signs that there are real opportunities for further development.
Places to start to find an expert who can help with the evaluation:
Originally published by Geoffrey Rockwell in July 2009.
An Open Letter to the Promotion and Tenure Committee at Texas A&M University, Department of English, upon their request for information about how to evaluate digital work for promotion and tenure.
The first thing to do in evaluating digital scholarship is to ask the scholar who has produced it to submit it, if at all possible, for peer review. There are several avenues for doing so. First, any electronic scholarly edition can be submitted to the MLA Committee on Scholarly Editions (CSE) for peer review, and junior faculty should be encouraged to do so. The kinds of editions that will pass peer review by the CSE could be very print-like, so the fact that a digital edition did not receive the CSE seal is not completely indicative of its value as research, about which I’ll say more below. Another venue for peer review is NINES (the Networked Infrastructure for Nineteenth-Century Electronic Scholarship), for nineteenth-century electronic scholarship. That NINES model is being expanded: my own 18thConnect peer-reviews eighteenth-century digital projects, and three other peer-reviewing organizations are coming into existence: MESA for medieval, REKn for Renaissance / Early Modern, and ModNets for Modernists.
There are also digital journals. In its “Statement on Publication in Scholarly Journals,” the MLA writes:
The electronic journal is a viable and credible mode of scholarly publication. When departments evaluate scholarly publications for purposes of hiring, reappointment, tenure, and promotion, the standing of an electronic journal should be judged according to the same criteria used for a print journal.
If a digital journal has a peer-reviewing system and an illustrious editorial board of premier scholars, articles published in that digital journal should be valued as highly as those published in print journals, and language to that effect should be incorporated into departmental promotion and tenure guidelines.
Practically speaking, we access ALL journals digitally, via JSTOR and Project Muse among other databases, and there is no difference between the value of printed and digital journals due to medium alone. Levels of prestige are no longer measurable by print and digital forms of publication, if they ever were. Thus the faculty who publish in Praxis (an online, peer-reviewed journal hosted by Romantic Circles) are from Berkeley, Princeton, Duke, etc. — institutions that we aspire to emulate. There are differences in prestige among digital journals just as there are among journals in print. External reviewers and period specialists should be asked to rank the journals according to all the ordinary ways of doing so — rejection statistics, contributors’ profiles, editorial board composition, and circulation statistics or other measures of disciplinary centrality — but in thinking about prestige, mode of access should be ignored.
Similarly, materials published digitally that have been peer-reviewed by NINES or 18thConnect pass through editorial boards as illustrious as those of any major press. Not only that, but technological review committees for these peer-reviewing organizations ensure that the resources which pass peer review meet the highest standards for digital materials: these are of library and archival quality, not websites of the sort that anyone could mount. Letters from the directors of these organizations can offer promotion and tenure (P&T) committees “equivalents”: a database may in fact be more like an article than a book in terms of work and impact, it may resemble an edition more than an argument, or in both cases vice versa.
Finally, prizes and awards can indicate the value of a resource. They are not exactly a substitute for peer review, but they do locate the resource within the field. The Blake Archive won the “Distinguished Scholarly Edition” award for 2003 from MLA — not best DIGITAL edition, but best edition per se.
I would now like to offer some ideas about how to judge digital scholarship in the absence of these more obvious signs by defining it.
In effective digital research, digital media are not incidental but integral to the scholarly work. Digital scholarship is not, in other words, simply scholarship that takes place in digital media: all the digitized journal articles in JSTOR and Project Muse do that, and in fact all publications either already have or will shortly have some kind of digital manifestation, even books. Most e-books might as well be books. In fact, it would be a lot more convenient if they were: the printed codex never needs to be recharged. If publishing a work in paper involves no loss of functionality, then the candidate should have published it in paper, with some exceptions discussed below. The implications of this principle are twofold. First, this knocks out of the running any digital project in which a scholar acts as a “content provider” and drops his or her work off at the door of IT Services. Second, it means that digital scholarship by its very nature requires collaboration, and so we must have peer-reviewing mechanisms that take that into account.
Let me just emphasize the potential catch-22 here: if someone publishes something online that is really, in its core idea, a print artifact, members of P&T committees might be justified in thinking, “This candidate only made a digital edition because no one would publish this work.” But conversely, if a candidate pursues digital scholarship for the sake of finding out what can be done in new media, his or her research requires collaborating with designers, computer programmers — real collaboration, of the sort sponsored each year through summer fellowships funded by the National Endowment for the Humanities (NEH) and sponsored by the online journal Vectors. In that case, P&T committees threaten to say “collaboration doesn’t count.” It is because new media require collaboration that the provosts and deans at the NINES Summer Institute composed a document about authorship: please go to “Whitepapers and Documents” at http://institutes.nines.org.
To get back to the first half of this catch-22, however, it is indeed sometimes the case that “no one would publish” scholarship that deserves to be published. I am technical editor of Lynda Pratt’s amazing e-collection of Robert Southey’s Letters coded and published by Romantic Circles, and I have a great story about why those letters were published digitally. Lynda was being interviewed on the BBC about her work. Linda Bree, acquisitions editor for Cambridge Univ. Press in the field of Romantic Studies, was listening to the interview, and began to walk to the phone to call Lynda with an offer for publication. The interviewer asked Lynda the extent of the collection. It is huge: we have 877 letters TEI-encoded and up in Romantic Circles, and we have only published parts I and II of the eight-part edition. Upon hearing Lynda Pratt describe the scope of her edition, Linda Bree of Cambridge UP hung up the phone before calling her. I have had trouble publishing editions of poetess poetry and criticism: these are lesser-known writers and poets, whom no press can risk publishing financially. In these cases, the digital edition may in fact closely resemble a print edition, but the editing must be as rigorous as with any print edition.
Editorial rigor involves different things in the digital world than it does in the world of print, though of course the two are connected. Electronic editions will ideally fulfill about 70% of the guidelines for vetting electronic editions offered by the MLA Committee on Scholarly Editions.
For something larger than an edition, a digital archive for instance, one needs to ask, does the digital archive make available what one would expect such a resource to provide? (This is comparable to asking, “Why didn’t a book on this topic discuss X?”)
More generally, in assessing digital scholarship, it may help to think of digital research as “curation,” a term that has been much discussed in the digital humanities and library communities recently. Typically, scholars in literary history, for instance, go into the archive and emerge with an argument backed up by particular texts and images that have been winnowed out of a mass of data that the scholar examined. If one thinks about a monograph in a particular subfield of a discipline as a lens for bringing the past into focus by bringing this particular text to the fore and relegating another to background information — a kind of organizing that even occurs in New Literary History, for all its radical leveling of genres and canons — then what a scholar does online in creating a thematic research collection is not so distant from monograph writing after all. Based on a particular reasoned theory, that person selects some materials and deselects others. Whereas in the case of the monograph, this “filtering” is done for the sake of making one particular argument, curating textual data in online research environments makes possible a number of arguments, all of them nonetheless theoretically inflected by what has been brought into the limelight and relegated to obscurity.
Digital archives are close enough to monographs and editions that judging their value as research can be fairly straightforward.
Here follow two examples of some items that might not look like research or scholarship that in fact ARE such in the field of digital humanities, accompanied by arguments as to why these particular works ought to be valued highly by P&T committees.
Screenshot of HyperCities Berlin
HyperCities director Dr. Presner’s original mapping project involved using Google maps and overlaying historical maps in order to present Berlin, both what one would find there now and the monuments of its past. But the project evolved into a platform that anyone could use to launch and record mappable histories.
Screenshot of HyperCities Egypt
Here (above) you can see one of the maps comprising one instance of HyperCities called “HyperCities Egypt.” Here, someone has hooked a map of Cairo up to a twitter stream that was recorded during the demonstrations against Mubarak. A marker on the map shows where a person was who “tweeted” or “re-tweeted” something about events as they transpired, while they were on the streets of Cairo using their cell phones. This twitter stream runs in movie-like fashion (you can see the “slow” and “fast” commands available at the upper-right, top). This particular use of HyperCities provides an amazing resource for historians of current events — including the Tsunami in Japan, for instance, as well as cities swept up by the Arab Spring.
Todd Presner’s work on the HyperCities mapping project has, ever since its first appearance in Vectors, taken the digital humanities world by storm. Vectors is not just a journal that publishes what we call “digitally born” projects, those for which digital media are intrinsic rather than extrinsic. Vectors directs an NEH Fellowship program, bringing scholars for six weeks of the summer to the University of Southern California’s Institute for Multimedia Literacy where they collaborate with computer scientists and graphic designers to create digital resources. As editor Tara McPherson points out, these projects are often later funded by the NEH with other grants, and of course receiving grant funding is one important indicator that digital scholarship constitutes valuable research. Another indicator is the number of speaking engagements to which a scholar is invited in order to present their project: Dr. Presner has been invited to speak about HyperCities worldwide.
As a digital project, HyperCities does precisely what is held up as most valuable about digital technologies in the book Digital Humanities from MIT Press, which he co-authored with Jeffrey Schnapp, Johanna Drucker, Peter Lunenfeld, and Anne Burdick: it expands the public sphere and allows humanists to participate in it along with others whose concerns, needs, and capacities for selecting and shaping data are considered equal to, if not more important than, the concerns of experts. It transforms humanities expertise into a platform for enabling discussion, contestation, and what the education manuals have infelicitously called “life-long learning.” By promoting data curation — which is to say, allowing groups of people to use the HyperCities platform in order to create HyperCities Now, HyperCities LA, and HyperCities Iran — this platform, which originally presented the history of Berlin, gives people a structure for organizing huge amounts of data: twitter, photo, and YouTube streams as they respond to crises of historical moment or document the day-to-day.
It is tempting to see Dr. Presner’s development of this research platform as service rather than research, as merely enabling others to investigate rather than itself being new scholarship. Presner defines the methodological affordances offered by HyperCities, the kind of research that it enables, as “thick mapping,” obviously playing upon Clifford Geertz’s ethnographical notion that was taken up by New Historicists, “thick description.” In the platform’s interactivity with social media, HyperCities promotes interactions within a genuinely global public sphere. This means that software used by and for people all over the world is itself causing people to learn and information to embody a methodological principle coming from the humanities.
By counting a professor’s development of a platform as research, we legitimize as scholarship the building of software that promotes the activities of citizen scholars in the ways that humanists see as valuable. I would like to suggest that any de-legitimization of such work, of interventions in the public sphere by humanities scholars in the academy, is profoundly suspicious on an ideological level, insofar as such denigrations contribute to marginalizing the humanities and eroding our impact on the world at large.
But the thing to know when such projects emerge is that those software programs and platforms which are capable of harnessing, fostering, and designing massive amounts of non-scholarly, extramural cultural production, using principles that humanists have developed, that get others involved in critical thinking of the sort we perform and teach — doing that takes a huge amount of serious, intellectual work, well beyond the purview of simple technological development. If one defines research in the digital humanities as discovering and creating resources that empower people, direct tasks, and structure information according to articulated and articulable humanities principles, then HyperCities is research in the field of digital humanities. It needs to be recognized as such by those doing research in humanities disciplines with which it overlaps but to which it is not equivalent. Any department wishing to participate in supporting the digital humanities needs to be prepared to value HyperCities along with a monograph published by Duke University Press. In fact, it is getting more and more common to see a digital resource such as the Trans-Atlantic Slave Trade Database spawning or accompanied by a book from a major university press, as well as to see presses undertake to publish digital resources.
Screenshot of Voyant
I wish to give just one more example of an out-of-the-ordinary research project in digital humanities for which someone should be tenured and promoted: software. Geoffrey Rockwell, a philosopher, and Stéfan Sinclair, a literature scholar, developed a series of tools that linguists might use to analyze texts and then visualize the results. These tools are part of TAPoR, a portal to which scholars can go. No one came. Next, after many usability studies and false starts, they developed what they were calling the Voyeur window. (Stéfan is French Canadian and so didn’t know the connotations in English until someone pointed it out to him.) Voyant is a window where you can load up texts and then see them analyzed, immediately, in a number of tools. The most amazing thing about this new software program is that it allows you to embed a window in a digital article, and this window provides a place where live textual analysis is possible. This is one of the first minor ways of changing what an article can do digitally from what it can do in print, but it is a huge step, in my view. Throughout their careers, Rockwell and Sinclair have consistently argued that literature professors can use tools developed by computational linguists for qualitative literary analysis, for close reading. The Voyant window enacts this argument:
If you go to http://hermeneuti.ca/voyeur, you will come to a wiki providing two major texts. In one, the tool is explained via an instance of its use in argumentation, “The Rhetoric of Text Analysis”; the other is an instruction manual. Both of these — constituting the equivalent of a book — are major publications in the field of digital humanities. One can see precisely who wrote and revised what on the wiki’s history pages, and therefore one can see how intensive and fruitful co-authoring can be.
How could faculty not in the digital humanities judge the importance of The Rhetoric of Text Analysis and the Voyant Manual to the field of digital humanities as a whole? That two workshops on it were held at DH2010 and DH2011, the Digital Humanities Conferences with a 30 to 40% acceptance rate held at King’s College London and Stanford University, respectively, is a clue, but understanding Voyant’s impact would be easier with the help of an expert in the field of digital humanities. One request that comes up continuously in discussions of rewards for digital scholarship is that P&T committees need access to the names and addresses of experts in the field who could consult with them as well as write external evaluation letters. I’m part of a group called “dhcommons,” and we are working on developing a database of faculty experts in the field.
In closing, I offer the following resources:
Thank you.
Sincerely,
Laura Mandell
Originally published by Laura Mandell in 2012.
Developing standards to evaluate scholarly digital output is one of the most significant problems our generation of digital humanists can work on. The improvement of tools and methods, the elaboration of theoretical perspectives, and (above all else) the development of digital outputs, will always be of primary importance. The elaboration of evaluation standards, however, has a broader reach: it is part of our responsibility as good scholarly citizens. Regardless of how digital humanities develops in the next few years humanists are going to need quality standards that help us distinguish ‘good’ from ‘bad’ work. In our universities this is primarily connected with the administrative requirements of performance and tenure review, but it goes much further than that.
The humanities have always valued quality scholarship; our tools for evaluating it have evolved over hundreds of years. But in many ways our evaluation and review mechanisms are broken. This isn’t only in relation to publishing models and peer review systems that the Journal of Digital Humanities and other initiatives aim to augment. An arguably more fundamental problem is the evaluation of born-digital products (tools, websites, ontologies, data models and so on) that are fundamentally different to anything produced by humanists before. While computer science standards must obviously be considered, the aims of digital humanists (and their technical ability) will often be at odds with them. While traditional humanities standards need to be part of the mix, the domain is too different for them to be applied without considerable adaptation.
Just as in the analog humanities, it is unlikely that there will ever be one humanities standard for evaluating digital output. Different approaches will be needed for different contexts (universities, libraries, museums, etc.) and different digital humanities sub-disciplines (history, classics, literary studies, etc.). At a bird’s-eye level, though, it might be possible to come up with broad frameworks that can guide more detailed evaluation. By defining them we will be able to communicate to our peers the standards we’ve chosen to work to.
My feeling is that, in simple terms, there are five levels of standards met by most digital humanities projects, and a sixth that doesn’t really make the grade at all. This isn’t a hierarchical scale so much as a classification framework describing types of projects seen ‘in the wild’. Not all digital humanities outputs are intended to be Category 1, for instance. Some, like blog posts, serve a quite different function. Other projects are produced by people just starting out with a new technology, so there is little chance the product will reach the standard required for tenure or review. They might be experienced digital humanists trying out a new method or experimenting with something likely to fail, or they might be beginners learning the ropes.
In short, these are ‘layers’ that all contribute in important ways to the digital humanities ecosystem. Each layer has a function, and is in many ways inter-dependent with the others. To denigrate any layer is to undermine our goal of building a broad, inclusive and open ecosystem welcoming of a variety of approaches.
This is only a very broad-brush framework. As in any other field, the important thing with digital humanities outputs is that their producer understands where the output fits within the broader intellectual context. While this won’t always be the case — we always hope that something will come from left-field — it indicates both an understanding of the field and respect for it. In general, though, I expect that builders of digital humanities outputs have consciously designed and positioned their product within the broader landscape of digital humanities, and understand that there is a broader matrix of standards and expectations alive in the community. Although as the field grows only Categories 1, 2 and 5 tend to get much airtime, it really doesn’t matter which category the final product falls into… unless it’s Category 6, and even then people don’t tend to get too bothered: it is what it is.
Originally published by James Smithies on September 20, 2012. Revised for the Journal of Digital Humanities December 2012.
Two years ago I was preparing for a semester in which all of my classes involved “multimodal” student work — that is, theoretically-informed, research-based work that resulted in something other than a traditional paper. For years I’d been giving students in my classes the option of submitting, for at least one of their semester assignments, a media production or creative project (accompanied by a support paper in which they addressed how their work functioned as “scholarship”) — but given that this cross-platform work would now become the norm, I thought I should take some time to think about how to fairly and helpfully evaluate these projects. How do we know what’s good?
This revision of that piece adds some insights I’ve gleaned from other sources since then, including the collection of essays on “Evaluating Digital Scholarship” that came out in the MLA’s Profession late last year. In recent years the MLA and other professional organizations have made statements and produced guides regarding how “digital scholarship” should be assessed in faculty (re)appointment and review — and these statements are indeed valuable resources — but I’m more interested here in how to evaluate student work.[1]
* * * * *
A different take on multimedia evaluation. Television-set testing at Underwriters Labs.
In most of my classes we spend a good deal of time examining projects similar to those we’re creating — other online exhibitions, data visualizations, mapping projects, etc., both those created by fellow students and “aspirational” professional projects that we could never hope to achieve over the course of a semester — and assessing their strengths and weaknesses. Exposing students to a variety of “multimedia genres” helps them to see that virtually any mode of production can be scholarly if produced via a scholarly process (we could certainly debate what that means), and can be subjected to critical evaluation.
Steve Anderson’s “Regeneration: Multimedia Genres and Emerging Scholarship” acknowledges the various genres — and “voices” and “registers” and “modes” of presentation — that can be made into multimedia scholarship. Particularly helpful, I think, is his acknowledgment that narrative — and, I would add, personal expression — can have a place in scholarship. Some students, I imagine, might have a hard time seeing how the same technologies they use to watch entertainment media, the same crowd-sourced maps they use to rate their favorite vegan bakeries or upload hazy Instagrams from their urban dérives — the same platforms they’re frequently told to use to “express themselves” — can be used as platforms for research and theorization. Personal expression and storytelling can still play a role in these multimodal research projects, but one in service of a larger goal; as Anderson says, “narrative may productively serve as an element of a scholarly multimedia project but should not serve as an end in itself.”
The class as a whole, with the instructor’s guidance, can evaluate a selection of existing multimodal scholarly projects and generate a list of critical criteria before students attempt their own critiques — perhaps first in small groups, then individually. Asking the students to write and/or present formal “reader’s reports” — or, in my classes, exhibition or map critiques — and equipping them with a vocabulary tends to push their evaluation beyond the “I like it” / “I don’t like it” / “There’s too much going on” / “I didn’t get it” territory. The fact that users’ evaluations frequently reside within this superficial “I (don’t) like it” domain is not necessarily due to any lack of serious engagement or interest on their part, but may be attributable to the fact that they (faculty included!) don’t always know what criteria should be informing their judgment, or what language is typically used in or is appropriate for such a review.
Once students have applied a set of evaluative criteria to a wide selection of existing projects, they can eventually apply those same criteria to their own work, and to their peers’. (Cheryl Ball has designed a great “peer review” exercise for her undergraduate “Multimodal Composition” class.)
After reviewing a great deal of existing literature and assessment models — all of which, despite significant overlap, have their own distinctive vocabularies — I thought it best to consolidate all those models and test them against our on-the-ground experience in the classroom over the past several years, to develop a single, (relatively) manageable list of evaluative criteria.
Steve Anderson and Tara McPherson remind us of the importance of exercising flexibility in applying these criteria in our evaluation of “multimedia scholarship.” What follows should not be regarded as a checklist. Not all these criteria are appropriate for all projects, and there are good reasons some projects might choose to go against the grain. Referring to the MLA’s suggestion that projects be judged based on how they “link to other projects,” for instance, Anderson and McPherson note that linking may be a central goal for some projects, but, “linking itself should not be an inflexible standard for how multimedia scholarship gets evaluated.” Nor should the use of “open standards,” like open-source platforms — which, while generally desirable, isn’t always possible.[2]
The following is a mash-up of these sources, with some of my own insight mixed in: Steve Anderson & Tara McPherson, “Engaging Digital Scholarship: Thoughts on Evaluating Multimedia Scholarship,” Profession (2011): 136-151; Fred Gibbs, “Critical Discourse in the Digital Humanities,” FredGibbs.net (4 November 2011); Institute for Multimedia Literacy, “IML Project Parameters,” USC School of Cinematic Arts: IML (29 June 2009); Virginia Kuhn, “The Components of Scholarly Multimedia,” Kairos 12:3 (Summer 2008); MLA, “Short Guide to Evaluation of Digital Work,” wiki.mla.org (last updated 6 July 2010).
Perhaps in some utopian future, when cognitive science is integrated into *all* disciplines, we can use brain scans as a form of assessment. Just kidding!
Originally published by Shannon Christine Mattern on August 28, 2012.
Steve Anderson & Tara McPherson, “Engaging Digital Scholarship: Thoughts on Evaluating Multimedia Scholarship,” Profession (2011): 136-151.
Cheryl Ball, “Assessing Scholarly Multimedia: A Rhetorical Genre Studies Approach,” Technical Communication Quarterly 21:1 (2012): 1-17; and “Adapting Editorial Peer Review for Classroom Use,” Writing & Pedagogy (Forthcoming 2013).
Fred Gibbs, “Critical Discourse in the Digital Humanities,” FredGibbs.net (4 November 2011).
Institute for Multimedia Literacy, “IML Project Parameters,” USC School of Cinematic Arts: IML (29 June 2009).
Virginia Kuhn, “The Components of Scholarly Multimedia,” Kairos 12:3 (Summer 2008).
Shannon Christine Mattern, “Evaluating Multimodal Student Work,” (11 August 2010).
Shannon Christine Mattern, “Evaluation & Critique of DH Projects,” (16 October 2012).
MLA, “Short Guide to Evaluation of Digital Work,” wiki.mla.org (last updated 6 July 2010).
In this piece, Zach Coble explores the benefits of creating guidelines for the evaluation of librarians’ digital humanities work for the purposes of hiring, appointment, tenure, and promotion, and offers a basic framework for what those guidelines might look like.
Digital humanities, as well as related fields such as digital media studies and digital libraries, have presented many opportunities for libraries. These include the establishment of digital humanities centers, the development of new data standards, new forms of scholarly communication, the creation of new resources (and novel ways of asking questions of those resources), and the development of new tools for scholarship and accessing collections.[1] However, traditional modes of evaluation do not address many of the key aspects of digital humanities work.
As librarians become more involved in digital humanities and begin to take on the title of “Digital Humanities Librarian,” how can we ensure that their work will be appropriately reviewed? While some librarians work individually on personal digital humanities projects or scholarship, most collaborate with faculty, fellow librarians, and information technologists across campus and across institutions. The collaborative nature of digital humanities work often blurs the lines when it comes to defining individuals’ responsibilities and contributions. Similarly, new forms of scholarly output, such as a website rather than a paper or presentation, present additional challenges for those tasked with evaluating digital humanities work.
Written guidelines for evaluation ensure that projects are reviewed fairly and provide a clear path for job hiring and advancement. Libraries clearly understand the importance of assessment and evaluation. The Association of College and Research Libraries (ACRL) has guidelines for the evaluation of tenure track librarians and for those without faculty status. In 2010, Megan Oakleaf made waves with her Value of Academic Libraries report, which utilized existing assessment measures, such as college students’ information literacy skills, to demonstrate the positive impact of libraries. As the field of digital humanities continues to grow, libraries will increasingly be called upon to dedicate time and resources to supporting this work. In order to encourage more libraries to support digital humanities, to provide a framework that will encourage individual librarians to participate in digital humanities, and to acknowledge and reward excellent work, libraries should develop guidelines for evaluation of librarians engaging in digital humanities work.
Although librarians are often cited as important collaborators in digital humanities projects, librarianship as a profession lacks a coordinated approach to digital humanities. There are many reasons for this, such as the broad interdisciplinarity and rapidly evolving nature of digital humanities, which makes it difficult to articulate a large-scale response. Yet it also stems from the fact that library involvement in digital humanities varies across institutions: some libraries at large research-intensive universities host active digital humanities centers while many small schools (as well as public libraries, special libraries, and so forth) are only vaguely aware of digital humanities, if at all.
In a recent survey by the Association of College and Research Libraries Digital Humanities Discussion Group, most of the librarians who responded did not have digital humanities in their job title or description. Equally diverse are the types of work that librarians contribute to digital humanities projects. A 2011 report on digital humanities in libraries by the Association of Research Libraries (ARL) noted that digital humanities projects often call upon librarians for consultation and project management, technical and metadata support, instructional services, and resource identification.
A framework for evaluating digital humanities work performed by librarians would ideally be one piece of a program to address digital humanities from libraries. Such a program, possibly from the Association of College and Research Libraries, might also include criteria for undertaking digital projects and best practices for doing digital humanities work. As the 2011 ARL report notes, “The general lack of policies, protocols, and procedures has resulted in a slow and, at times, frustrating experience for both library staff and scholars. This points toward the need for libraries to coordinate their efforts as demand for such collaborative projects increases.” Without an organized response, librarians lack the incentives, resource support, institutional backing, and network of colleagues necessary to be successful.[2] On the other hand, a coordinated approach could encourage more librarians to get involved in digital humanities, motivate individual libraries to adopt related policies specific to their local needs, foster greater participation among libraries in the digital humanities community, and create the demand for increased training opportunities — both as continuing education for professionals and in library schools.
Other organizations, such as the Modern Language Association, NINES, and 18thConnect, have recognized the distinct nature of digital humanities work and adopted separate guidelines for the evaluation of digital projects.[3] Libraries would benefit from having a similar set of guidelines. Of course, every institution is different and no one set of guidelines will work for everyone. Also, the context and scope of a librarian’s contribution should be taken into account — a librarian asked to consult on metadata standards should not be faulted if the project fails to follow web design best practices. While acknowledging such nuances, there are certain baseline ideas that should be addressed. The following list draws upon existing guidelines for the evaluation of digital humanities work mentioned above and incorporates additional elements specific to libraries. It is intended to help generate conversation and is not meant to be comprehensive.
Originally published by Zach Coble on December 3, 2012.
On December 3, 2012 I spent a day talking with a community arts organization, the New Urban Arts Center, that takes a different approach to their arts education and humanities-driven mission than most arts and humanities funders are accustomed to supporting. We talked about the ways that the organization constantly needs to explain its process-driven approach to art to those not intimately involved at the Center. In doing so, they are educating supporters and funders to their humanities-driven educational community praxis.
This type of educating and guiding also is required of digital humanists when demonstrating the value of their scholarship and scholarly contributions to digital processes, code, sites, tools, et al. The approaches are new and not fully accepted and integrated into academic departments, or into most cultural heritage institutions, which are used to assessing value and impact in different ways. Non-digital humanists are capable of assessing scholarship in digital formats, but we still need to guide them toward understanding the type of work we do and the meaning that it holds.
We cannot assume that our work stands alone, particularly when we are implementing new methods and types of scholarship. We have to talk constantly about our work to different audiences so as to guide colleagues, a committee, or a department in how to read and understand the digital work before them. Writing in a plain style and illustrating with plain design should articulate, for a review committee, the complexity of thought the work required, while also demonstrating that the digital work we do is grounded in our humanities training. That style is most often incorporated into grant proposals and products.
One way to present digital humanities work could be to let grant proposals and related reports or white papers do some of the talking for us, because those forms of writing already provide intellectual rationales behind digital projects and illustrate the theory in practice.
At the Roy Rosenzweig Center for History and New Media, when we introduce new staff to our projects we generally ask them to read through a funded grant proposal and, if applicable, reports, and products (most often published as a website). Why not try this approach with a review committee?
Funded proposals, after all, are peer-reviewed publications and peer-accepted rationales for pursuing research work. Grant proposals, particularly ones that receive federal monies, are more heavily scrutinized by a larger number of experts than would ever peer review a prospectus or a draft manuscript for a publisher. Receipt of funding equals a nod of approval from leaders in the field that the rationale proposed is grounded, and that the project will have some real impact on the field or fields in which it is nestled.
This style and tone of writing is different from what one uses for a journal article, but a proposal similarly requires that the author, or authors, persuasively construct and support an argument to fund a new digital project or pursue research. The authors must illuminate how the digital humanities project is unique among the sea of other digital humanities projects and how it is meaningful for the targeted audience by demonstrating knowledge of a field through literature reviews and environmental scans.
If you’re writing for the National Endowment for the Humanities, for example, you must position your work, be it a content project, an application of digital methodologies, or a tool-building effort, deeply within humanities scholarship. To do so, those conceptualizing a project must also provide a rationale that explains methodological choices and generates scholarly use cases for how the specific project will be or might be used by others pursuing their own humanities work.
Proposals also must be written in a style that is free of jargon, while still explicitly describing the methodologies and technologies incorporated into creating the project. By writing in a plain style, proposal authors open a door for non-specialists to understand what the project does and how it can “count” as research and scholarship.
Progress from proposal to finished product can be traced in interim and final reports. Reports detail the work done during a specific time period and, importantly, reports are often the place where a project manager or director discusses diversions and revisions of the work detailed in a proposal. In some cases, final reports are useful places for project teams to reflect on how well the project achieved its goals and explain where the team may have diverted from the proposal in intellectual and methodological approaches and outcomes. Again, the style of a report is such that non-specialists should be able to understand what is happening during the life of a grant.
If you are not doing grant-funded work, could it make sense to follow the guidelines for an NEH Office of Digital Humanities grant, or to model a final grant report, as ways to describe your work in a portfolio?
Grants also produce specific deliverables, and it seems logical to present those pieces of scholarly digital work that are designed for a specific medium to a committee for review in that digital environment. Again, this may require additional work on the part of the scholar to open up the medium, whether through some documentation of method or by creating a digital entry point for reviewers (a sandbox, perhaps) to examine where the work happens. Additionally, a conference presentation or process paper that is published to one’s or a project’s blog might explain and provide that visual guide through the method and medium in which the work was produced.
In some ways, this is my case for creating better documentation for using the digital projects we build. We need to do a better job (generally, because there are some good exceptions) of talking about digital methodologies and projects to non-specialist audiences. This helps to encourage those eager to test out our methods but who aren’t quite sure how to start. And it opens up this seemingly-difficult-to-decipher work to our colleagues and those assessing our work.
Originally published by Sheila Brennan on December 4, 2012.
I want to offer some context about my particular experience with tenure and promotion, because George Mason University (GMU) has a new tenure policy that allows candidates to go up for tenure either on the basis of “genuine excellence in research” or “genuine excellence in teaching.” In either situation, the other criteria are also held to a high standard (for example, “genuine excellence in teaching” also demands “highly competent research”).
I went up for tenure on the basis of my teaching — really, the scholarship of teaching and learning, as I have long treated my teaching as an object of study and scholarship, which I should share publicly and which others can contest, build upon, or simply learn from. Among Mason’s other criteria is the question of impact. Specifically, the criterion is worded this way: “Evidence of teaching and learning impact beyond the classroom.” This statement is followed by a number of possible examples. But what I want to emphasize is that the form (or platform) by which the impact is made is left intentionally open-ended. Books, articles, blogs, talks, digital projects, teaching portfolios — all of these could count as evidence. The criterion is indeed platform agnostic.
I don’t mean to say that my tenure case was straightforward because GMU had this policy. In fact, I believe I was one of the first professors to approach tenure through this route at George Mason, and certainly within my department. I was a test case, a guinea pig. Therefore, as strong a candidate as I might have been for “genuine excellence in teaching,” I wanted to make sure all the other aspects of my tenure case were unassailable.
This is where my case gets especially interesting, as much of my research with literature, new media and videogames has taken unconventional forms. To name one example, 10 PRINT CHR$(205.5+RND(1)); : GOTO 10 is a book from a university press (MIT Press), which I’m sure made all the people on my committee go Yay! But it’s also collaboratively written with nine other people — and not as individual chapters each written by a separate author as in an edited collection, but as a kind of wikified hive mind in which it’s nearly impossible to say who wrote what, a fact which I’m sure made my campus RPT committee go Wha? Furthermore, its methodological premise rests upon a close reading (Yay!) of a single line of computer code (Wha?). I’m not sure how to generalize from this example in a way that’s useful to others, other than to say if you do do unconventional work, do it with verve and confidence, and work with a good team.
As for the digital work in my research portfolio, it ranged from pieces in electronic journals, to reviews of other people’s scholarly works, to publications that I argued (following Kathleen Fitzpatrick’s work) were subject to post-publication peer review. In these examples and the others I could share, the key principle is, again, impact. And what’s important for any candidate is to demonstrate that impact, with evidence.
What counts as evidence of impact deserves a post of its own. For now, I’ll say that everything worked out for me and even turned out better than I had hoped for. I am fortunate to be at an institution that pays more than lipservice to innovation. For example, in my dean’s recommendation for tenure he explicitly mentioned the impact of my blogging, and he noted:
…because Dr. Sample openly engages readers in comments, this constitutes an effective and new form of public intellectual work. For these new types of publications, whose spontaneity is their hallmark, prior review must give way to subsequent analysis, and in this Dr. Sample has excelled.
Even better was my provost’s recommendation for my tenure and promotion. While I had gone up for tenure on the basis of genuine excellence in teaching, the provost recommended (and the president approved) my tenure for both genuine excellence in teaching and genuine excellence in research — a welcome recognition of the digital scholarly work I have done and will continue to do.
As I said, I’m fortunate to be at George Mason University. It’s an impressive research institution that is open to new forms of scholarly communication and places a premium on teaching where it counts. That said, I wouldn’t recommend my own particular tenure path to most people yet, unless they like risk. I took a gamble. I pursued what I wanted to pursue, and in a way that made the most sense to me. But it was a gamble. As I wrote in my tenure portfolio, “I have staked much of my scholarly worth in new modes of digital writing, collaboration, and publishing.” It paid off for me, and I hope that by writing publicly here — and elsewhere, in future blog posts — I can help to lower the stakes for the generation of faculty members behind me.
Originally published by Mark Sample on September 29, 2012.
In preparing my tenure and promotion dossier I was advised that I needed to explain my fields and contextualize my work in a more accessible way. Without many models for doing this, I made up my own rules, then tore apart my dossier, then re-assembled it, then tore it apart again (this happened 3 more times), then revised my narratives (this happened 6 more times). I received some well-meaning but conflicting advice and ultimately had to make up my own mind about how best to sell Digital Humanities, scholarly editing, and Digital Pedagogy to my colleagues. I received much help from the twitter-verse, but I really wish that a storehouse or consulting arena existed for this kind of professional documentation. As a Digital Humanist, I do so much that’s ephemeral but integral to my work. This is true with Digital Pedagogy and scholarly editing, too.
With that being said, I offer up my statement as a faculty member at a teaching-intensive, Master’s-granting, large public/state university (30,000 students). I teach a 4-4 with 4 preps, often 1 new each semester. I also submitted a scholarship activity report and revised one category to fit the “public scholarship” and “community outreach” sections of my CV. Appended to that section is a description of my digital archive, now a legacy project because we were never able to migrate it to TEI but wanted to maintain it as a scholarly edition. Money was a huge factor with that decision. But, the archive is used constantly in classrooms and cited in traditional work. That’s the marker of success in Digital Humanities (to me).
What follows is the primary document explaining my role as a Digital Humanist and scholarly editor.[Part 1] I’ve also appended my research statement, but it’s very, very long.[Part 2] I include my work on Twitter and my blog. That language might be helpful for some. In the dossier, I also include an explanation of my digital archive, complete with citations from the last 7 years. It’s too long to include here, but I was very careful to explain the value of a digital archive that doesn’t meet current standards for technology. I’m calling it a legacy project.
Since September 2005, San Jose State University has provided me with a foundation to explore both traditional and non-traditional venues for service, teaching and scholarship. Because we are situated in Silicon Valley, we have the unique opportunity to form industry partnerships with Google, Adobe, Microsoft, Hewlett Packard and others. For a literary scholar, this is perhaps a more difficult task than it is for science or business faculty. Because I received support from former Dean Karl Toepfer (Section 6: I.B.6), Academic Technology (see Section 5: I.A.6), the English Department (Section 6: I.B.2), and the scholarly community at large (see External Reviews Section 7), I have been able to accomplish much as a literary scholar and a Digital Humanist, a field that relies on collaboration and inter-disciplinarity:
The digital humanities is an area of research, teaching, and creation concerned with the intersection of computing and the disciplines of the humanities. Developing from an earlier field called humanities computing, today digital humanities embrace a variety of topics ranging from curating online collections to data mining large cultural data sets. Digital Humanities currently incorporates both digitized and born-digital materials and combines the methodologies from the traditional humanities disciplines (such as history, philosophy, linguistics, literature, art, archaeology, music, and cultural studies) with tools provided by computing (such as data visualisation, information retrieval, data mining, statistics, computational analysis) and digital publishing.[1]
More specifically, my role in Digital Humanities is as a scholarly editor engaged in recovering unknown literary works by women; in other words, I use technology to create and disseminate open-access digital archives of otherwise inaccessible print materials with my primary work being The Forget Me Not Archive. The Modern Language Association defines scholarly digital editions as follows:
One of the most useful contributions of digital humanists has been to create online scholarly electronic editions of resources of interest from historical documents to literary works. While there are many electronic versions of classic literary texts, often put up in a bout of enthusiasm by students, scholarly electronic editions represent significant careful and informed work that can be accessed widely. The work of the electronic editor is not trivial — he or she has to make a series of decisions informed by knowledge of the context and original about what to show and hide, how to enrich the material, and how to represent it online. The opportunities and fluidity of the electronic form mean the editor must master two fields, the intellectual context of the work and current practices in digital representation.[2]
Though Digital Humanities has been established as a field for more than four decades, evaluating the work produced by a Digital Humanist can sometimes be daunting. To aid in that endeavor, the Modern Language Association, the governing body for English Departments, recently crafted guidelines for evaluating digital work, along with other supporting resources. One of the most important features is the impact a particular digital work has on the scholarly community. Not only has the data from my archive been migrated to an updated and technologically-standardized sister archive, but faculty, students, and scholars have also continued to cite and use The Forget Me Not Archive, as is evidenced by the materials included in this dossier.
More recently, I have pushed Digital Humanists to incorporate students into their research and have been part of the growing number of Digital Humanists who also use Digital Pedagogy in the college classroom. With an underlying commitment to integrating, exploring and intellectualizing technology and its tools, my scholarship, teaching, and service have allowed me to become part of a cutting-edge movement that is re-shaping the Humanities. The 4Humanities movement, spearheaded by senior scholar and UC Santa Barbara literature professor Alan Liu, invited me to participate in a campaign about the value of the Humanities and Digital Humanities. As the Digital Pedagogy representative in the video, I am pleased to be grouped with Johanna Drucker and Alan Liu, two eminent scholars in both the literary and Digital Humanities fields.
As a tenured Assistant Professor of English Literature, I teach not only literature, but also all types of cultural texts that will prepare our students for their professional lives. Keeping this in mind as well as the goals and missions of San José State University, I always look for methods to better my teaching, including improving lectures, incorporating interesting assignments, providing historical and cultural background, inviting other faculty to give guest lectures, proposing new courses, or implementing new and varied types of technology. I consistently teach in Smart rooms using websites, digital tools, movies, and more, to bring literature to life. I have paid attention to peer reviewers’ comments, students’ informal and formal evaluations, and colleagues’ suggestions — the end result is that my courses have improved both for my students and myself.
Though I employ traditional lecture, writing, discussion and student-centered classroom activities, I also believe in integrating our students’ quotidian knowledge to unpack texts. To this end, I speak to them through technology. Each course is accompanied by: an online course website which I design, code, and update daily; and a commitment to introducing relevant technology tools. There is a certain art to using technology in the classroom, and at times, it can overwhelm the content. At other times, it can empower students to the point where I can become a mediator of their discussions. (For an assessment of my experiments with technology in the classroom, see a letter from the Incubator Classroom’s Instructional Designer Menko Johnson, Section 5: I.A.6 “Other Evaluations”.) Students struggle with and appreciate the use of technology in the classroom; see two students’ unsolicited letters (Section 5: I.A.6 “Other Evaluations”). I continue to develop a relationship with Silicon Valley industry by using their tools in my classrooms; with these continued relationships, it is my hope that these industry partners will fiscally contribute and support our university.
In order to stay current with quickly-evolving pedagogical and scholarly issues and to encourage the discussion around pedagogy and technology, I maintain a research blog, http://triproftri.wordpress.com, where my conference papers (some with video), recent scholarly adventures, and new ideas live for the scholarly community to review and comment upon. Among others, my post, “Silence in the Archives?,” was recognized by DHNow as their Editor’s Choice.[3] At the 2013 Modern Language Association Convention, this topic will be more fully discussed during my talk at the “Digital Archives and their Margins” panel with Dr. Alan Galey. This topic was also the inspiration for my guest lecture at Scripps College in Fall 2013 as well as a special cluster of articles for Digital Humanities Quarterly in 2014. I also contribute to the Romantic Circles Pedagogies blog and am one of the key bloggers for FairMatters, a Norton Publishers blog about literature, teaching, and publishing (see contract with Norton Publishers). You may also find me conversing with students and colleagues over Twitter as @triproftri. Both my blogging and tweeting have led to numerous invitations to speak about my work on literary annuals, Gothic short stories, Digital Humanities, scholarly editing, and most frequently, Digital Pedagogy. See a list of those recent talks (most of which occurred during my Spring 2012 sabbatical) on my blog.
There is now quite a bit of documentation regarding my tenure case. I think that a careful reading of it will demonstrate the ways in which I have tried to respond to all legitimate criticism of my work and my teaching in good faith and with concrete actions.
These are only highlights of what I have been able to achieve here at San Jose State University. At the outset of each dossier section, I have included a detailed statement of activities. I revel in my mission as a teacher-scholar and would not be able to produce anything of relevance without the enthusiasm and dedication of our students. As a graduate of California State University, Los Angeles, I understand how much they have sacrificed to be here. I look forward to continuing my relationship with them and to connecting them with the world at-large.
As with most Digital Humanists, my work straddles the traditional and non-traditional worlds of scholarship. In addition to embracing social networking in order to advance scholarly conversations, I have also been working on two traditional projects that focus on nineteenth-century print culture.
With the open access movement and the rapid pace of scholarly conversations, I have become one of the many voices in a vibrant online community of Digital Humanities and literary scholars. Foremost among my social networking conversations is my participation on Twitter as @triproftri. These conversations often lead to blog posts that become conference presentations that also become articles and larger projects to be disseminated in open access journals. In this section, I highlight my pursuit of open conversation with colleagues across international boundaries; these social networking conversations have made me a leader in Digital Pedagogy and Digital Humanities.
In the interest of being a public intellectual, this blog hosts my conference papers, slideshows, grant proposals, book projects, reviewers’ comments, calls for papers, position papers, and article drafts on a variety of topics. triproftri blog posts have been cited in Debates in the Digital Humanities (see Brier & Waltzer articles) as well as ongoing online conversations in The Chronicle of Higher Education, Digital Humanities Now, and colleagues’ blog posts on literary research, pedagogy, and digital archives. See the pingbacks at the conclusion of each post.
Assessing the impact of blogs in tenure and promotion cases continues to be difficult. “Hits” or visits to a blog post can be interpreted as readership. Though there are several reasons why a particular blog post might obtain a high number of visitors, evidence of engagement with a larger scholarly community can be signaled by citation in other blogs and articles. For instance, my blog received 13,099 visitors from March 2010 to September 2012; at the time that I submitted this dossier, I had authored a total of 45 posts with 221 comments directly submitted to the triproftri blog site. URLs of my blog posts have been tweeted 698 times. See attached stats for blog post hits monthly and daily. See the Top Posts summary.
The most viewed post with 1098 hits, “Acknowledgements on Syllabi,” was posted in March 2012 and was then cited in numerous other posts (including a Chronicle of Higher Education article) and received 48 distinct tweets that forwarded the URL.
The conference poster that I presented at the Digital Humanities Conference in 2011, the Digital Humanities community’s premier conference, has become a reference point for other sources, including the University of Kansas Library. Other posts have been mentioned in literary organizations’ blogs, including on the North American Society for the Study of Romanticism blog.
My post about “Silence in the Archives” was part of the DHNow Editor’s Choice and was cited in other conversations around the scholarly blogosphere. My post about NITLE is listed as a resource and referenced by other Digital Humanists.
My blogging increases my scholarly profile and accessibility, acts as outreach to other communities, positions me at the forefront of emerging trends, and ultimately spurs better-quality work.
Because of my numerous conference presentations and triproftri blogging, Norton Publishers contracted me for a year to write blog posts for their Fairmatter.com blog — along with two other literary scholars. See contract with Norton Publishers.
2009: http://ra.tapor.ualberta.ca/~dayofdh/KatherineHarris/
2010: http://ra.tapor.ualberta.ca/~dayofdh2010/katherineharris/
2011: http://ra.tapor.ualberta.ca/~dayofdh2011/katherineharris/
2012: http://dayofdh2012.artsrn.ualberta.ca/members/triproftri/
In 2009, 2010, 2011, and 2012, I was selected to participate in the project A Day in the Life of the Digital Humanities, along with approximately 75 other scholars, students, and technologists. On a single day in March each year, all participants blogged about their tasks for that day. As it happens, those days fell on teaching days for me. Though not peer reviewed, the project is considered a large-scale collaborative research publication — my blog in particular chronicles the mission of a teaching-focused university and demonstrates the innovation and versatility of our students, an important voice that is often lost in Digital Humanities. See letter from project coordinator, Dr. Geoffrey Rockwell, a senior colleague in Digital Humanities.
This project, another edition, offers more than 95 heretofore unstudied short stories from nineteenth-century literary annuals. Using specific definitions to identify these Gothic short stories from over 300 volumes of literary annuals, I created a collection that includes engravings and exact transcriptions using the protocols required by the Modern Language Association, the governing body for all language and literature scholarly projects. This project required knowledge of nineteenth-century literary and publishing contexts and expertise in scholarly editing. Scholars have only recently begun rifling through the literary remains of the Gothic short story published in the 1820s — primarily because collections of literary annuals, like Gothic chapbooks, are scarce. The collection, critical introduction, and appendices, published in December 2012 and the focus of my keynote talk for the Studies in Gothic Fiction Conference last Spring, provide evidence that nineteenth-century Gothic literature evolved from taboo novels filled with tales of foreign adventure into short stories about the English countryside — still outfitted with ghosts, moral imperatives and a hero but acceptable because they were published in literary annuals. See contract with Zittaw Press.
Because collections of British literary annuals are difficult to find in any library, I created a digital archive from my private collection of The Forget Me Not annuals: “Forget Me Not Hypertextual Archive,” an open-access digital collection of the first literary annual. The Archive, conforming to standards for early digital scholarly editions, is now considered a legacy project because of the rapid shifts in technological standards for digital archives. Its metadata was migrated to Text Encoding Initiative (a mark-up language that allows for searching) and incorporated into The Poetess Archive Database, a project that has been peer reviewed by the governing body for nineteenth-century studies. Because the metadata and images live within the larger database with many other materials by nineteenth-century authors, the original Forget Me Not Archive will remain in its current instantiation as a scholarly edition. As one of the original digital projects to be included in the MLA International Bibliography and the focus of my participation in the first annual Nebraska Digital Workshop in 2006, The Forget Me Not Archive still provides valuable access to this hidden genre — most recently cited in the below materials:
- cited in Blackwell’s: Thomas, Sophie. “Literary Annual, Poetry.” The Encyclopedia of Romantic Literature. Ed. Frederick Burwick. Blackwell Publishing, 2012. Blackwell Reference Online. 18 January 2012
- cited in Sydney Owenson, Lady Morgan and the Politics of Style By Julie Donovan (2009) p.87
- cited in “Picturing Scotland through the Waverley novels: Walter Scott and the origins” By Richard J. Hill p.57 (2010)
- archive used extensively for access to materials in dissertation, “Consecrating the romantic pen: Hemans and Abdy in the literary annual” (Virginia Hromulak, 2011)
- archive used in dissertation: “Grace Aguilar’s Historical Romances” (Kathrine Klein, 2009)
- archive cited in Dictionary of Nineteenth-Century Journalism in Great Britain and Ireland By Laurel Brake, Marysa Demoor p.805 (2008)
- archive used/cited in article “Maria Jane Jewsbury to Henry Jephson, M.D.: an undiscovered poetic fragment” by Kathleen Beres Rogers in Victorian Poetry 46:2008
See Statement about External Reviews and External Reviewers’ letters directly following this Statement. This panel of reviewers was assembled to assess my scholarly work for the 4th year dossier. Prof. Stephen Behrendt assessed my work overall in 2010. And, the senior scholars in Digital Humanities added their support in 2010 during my request for tenure. See also General Editor, Dr. Laura Mandell’s letter regarding the migration of The Forget Me Not Archive’s data to The Poetess Archive.
Now, scholarship takes many forms. Though not peer-reviewed, the below online materials represent a substantial amount of work in the process of crafting each area of my literary and Digital Humanities/Pedagogy expertise.
In this book project, the full manuscript, an expanded and significantly revised version of my dissertation, is currently under consideration with Ohio University Press. An abridged chapter is forthcoming in Textual Cultures, Spring 2013. The project had been previously positively reviewed by the editor and readers for Indiana University Press. However, due to internal conflicts, the manuscript was not published with IUP. See letter from Dr. Wayne Storey, series editor with Indiana UP.
[Note: In order to reduce the amount of paper in my dossier and to focus the argument, I excluded all of the grant proposals and award letters.]
National
San Jose State University
[Note: Also to focus the argument in my dossier, I exclude the papers and images presented at conferences and instead supply only a list of 1) invited talks, keynotes & plenaries, 2) workshops that I've given, and 3) conference presentations. The list is very, very long; the ordering of the categories became important to highlight that I've begun to be invited places. I also highlighted the top conferences in my field, but I forgot to mention that some, like the Digital Humanities Conference, are peer-reviewed.]
I continue to present several times each year at both national and international conferences on Romantic-era literature, literary annuals, history of the book, Digital Humanities, and pedagogy. During 2011 and 2012, I received several requests to conduct workshops or give keynote speeches.
The Modern Language Association is the seminal organization for the fields of Literature and Languages; an invitation to this convention not only signals emerging work in the field, but also acceptance by an audience with the largest number of colleagues. To this end, I was invited to participate in a panel at the 2012 MLA Convention and was accepted to run a digital pedagogy roundtable. At the 2013 Convention, I have again been invited to participate on a panel regarding Digital Humanities and women’s authorship in 19th-century England.
During my sabbatical in Spring 2012, I gave workshops on Digital Pedagogy. Dr. Jentery Sayers and I worked with NITLE (National Institute for Technology in Liberal Education) to present a one-hour webinar on Digital Pedagogy. Digital Humanities in the undergraduate classroom differs slightly from Digital Pedagogy. The latter deals with implementing tools in the curriculum to allow students to gain a set of hard skills in technology as well as to open up the possibilities for learning. Digital Humanities in a curriculum, for me, involves asking students to build something. I talked about doing just this and collaborating with our Special Collections in a presentation for the American Library Association Conference 2012 with Dr. Danelle Moon, SJSU’s Special Collections Director.
I continue to be active in the field of Digital Humanities and British Romantic-era Literature. The conferences and roundtables are listed in the following pages, but most important is the fact that I have begun to offer keynotes in my fields and to be recognized by my colleagues as an expert among them.
Originally published by Katherine D. Harris on October 1, 2012.
While we have used digital research in teaching at University College Cork for many years, the central role played by digital artefacts in the new Digital Humanities programmes is a relatively recent addition. This pivotal shift is new for both staff and students who, by the nature of new media technologies, cannot benefit from generations of received wisdom on assessment and evaluation. In this piece, we undertake a frank and personal investigation from both a pedagogical and scholarly perspective.
Image credit: Róisín O’Brien
Our interest in assessing digital artefacts in academe is driven by the problem of grading the digital work being produced in our PhD and MA programmes in Digital Arts and Humanities. When the DAH PhD consortium developed the proposal for a structured PhD, there was agreement that the outputs could include digital artefacts, but no detailed discussion of what those might be and what criteria might be applied.
With that PhD in its second year and our own one-year MA course in UCC in its first, this issue needs to be addressed in a very immediate way. Our students are currently working on digital products which we need to guide, and that guidance needs to be shaped by a clear awareness of what our expectations are and how we will assess their digital work.
Disciplines not only have signature pedagogies, they also have signature assessments, and the skill of grading those is often handed down from generation to generation as an artisan craft. This is understood across the community of the discipline so external examiners have no problem validating the marks assigned in their discipline. Colleagues who have never needed to explicitly consider grade descriptors or grading rubrics find it difficult to conceive of how one might grade an as yet undefined assessed digital object.
Grading is part art, and never wholly science, but in an interdisciplinary field like digital humanities, where we must assess new types of student work, some frameworks are necessary. In the National University of Ireland, we have clear and well established guidelines in the NUI grade descriptors (PDF). The descriptors clearly lay out, in a general, non-discipline specific way, the sort of ‘evidence of a mind at work’ we should expect at various grade brackets.
The NUI descriptors require no real modification for application to essays and other traditional work. Following on from traditional rubrics for “regular” student essays, people have produced countless rubrics for assessing blog posts and posting in discussion forums. These are, after all, written work and different from traditional academic writing mostly in extent, and sometimes in the formality of tone or voice.
When we move into less conventional forms, matters become more complicated. How do you assess a database, a critical edition, a performance, a piece of multimedia or an ‘app’? The optimal manner to build a relational database is, at one level, defined by a set of normalisation rules which are pretty clear and preclude, it seems, developing an argument. At another level, the choice of data to capture, the varieties of datatypes, indices and relations, are all driven by the questions which are informed by the particular inquiry being pursued.
Digital Artefacts — databases, corpora, and other things — designed by different students for different inquiries can and should differ. One would expect to see individual choices about analysis and synthesis of the material to suit particular analytical questions being asked of the original raw, pre-digital material.
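To make that point concrete, here is a minimal, purely hypothetical sketch (not drawn from any student project) of how a normalised schema already embodies analytical choices: the decision to model persons, places, and letters presumes an inquiry into correspondence networks, even though the normalisation rules themselves are generic.

```python
import sqlite3

# A minimal, hypothetical schema for a correspondence-network inquiry.
# The normalisation rules applied here are generic; the choice of entities is not.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (
    person_id INTEGER PRIMARY KEY,
    name      TEXT NOT NULL
);
CREATE TABLE place (
    place_id INTEGER PRIMARY KEY,
    name     TEXT NOT NULL
);
CREATE TABLE letter (
    letter_id    INTEGER PRIMARY KEY,
    sender_id    INTEGER REFERENCES person(person_id),
    recipient_id INTEGER REFERENCES person(person_id),
    place_id     INTEGER REFERENCES place(place_id),
    date_sent    TEXT  -- ISO date; the precision recorded is itself an editorial decision
);
""")
-- comment above is SQL; the Python comment below closes the example
# A different inquiry -- into paper, bindings, or hands, say -- would demand
# different tables entirely, even though the same normalisation rules apply.
```

Two students who both follow the normalisation rules faithfully can therefore still produce, and should be credited for, quite different databases.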
A common criticism of digital work is that “if it were a proper academic essay, it would of course have footnotes and so on,” as if it were not a proper academic essay. But a work like “The Phoenix Tapes” is a very fine collection of six essays on Hitchcock: the essays choose themes, extract examples, arrange those examples in a structure which includes a clear progression from introduction through development, to show the often horrific end result of the obsessions, highlighting along the way the manner in which Hitchcock visually expresses these themes.
If we skate over the detail that the original footage was not digital, the Phoenix tapes are an artefact, an essay of sorts, albeit in a medium which doesn’t easily permit footnotes. Nevertheless, submitted with a copy of the script, including references, they should, under the NUI grade descriptors, merit a clear first class mark.
Part of our problem with assessment of digital artefacts is that many academics have never explicitly considered their instinctive grading rubrics. When challenged in discussion on a particular essay, most academics can explain why they gave it the assigned mark, but do not have a set of grading rubrics to hand, nor do they provide students with copies of grading rubrics at the start of courses.
As leaders in digital humanities, we are asking our students to leave accepted pathways and march into the desert; we have a responsibility to know enough to help them draw a new map.
This piece will focus on the personal experiences of recent entrants into digital humanities scholarship, setting them in a framework of evaluating digital scholarship as learners. It will highlight the challenges faced by digital humanities novices in assessing scholarly literature. It will also refer to the digital tools utilized in scholarly endeavours and publications, while synopsising how these have affected learning to date. Instead of focusing on the technical shell of standardising evaluation and assessment of digital scholarship, this piece will concentrate on the innards of the issue; that is to say, the principle of a free-form approach which guides evaluation rather than regulating it.
This is particularly evident in the language used to discuss and communicate within the current digital humanities, as well as in how the audience reads when engaging not only with the written material but with meaning-making in general.
Current debates in the field of the digital humanities about the divergent practices of ‘close’ and ‘distant’ reading are really a screen for deeper changes called for by the advent of new media. Digital technologies do more than propose new ways of thinking, as did theory; they require new modes of being (Schreibman 126).
The 2010 Digital Humanities Conference saw Kathleen Fitzpatrick define digital humanities as a “nexus of fields within which scholars use computing technologies to investigate the kinds of questions that are traditional to the humanities, or . . . who ask traditional kinds of humanities-oriented questions about computing technologies”. It is this unrestricted classification that makes digital humanities such an attractive field, but also, unsurprisingly, presents problems for students who wonder whether their contribution is a valid one. For instance, words like collaboration, TEI, temporality, remediation, and XML have been passed back and forth in our classes, on the assumption that we were all aware of their meanings.
Depending on who you read, digital humanities is a minefield: it can be riddled with “charlatanism . . . that . . . undersells the market by providing a quick-and-dirty simulacrum of something that, done right, is expensive, time-consuming, and difficult” (Unsworth); consequently, the most earnest of students, worried they may be tarred with such a brush, can experience what Mullen terms “digital-humanities impostor syndrome”. The desire to be certified and qualified in something jars with the field’s characteristic lack of structure, and many students require an awareness of the workings of assessment and evaluation to feel secure in their chosen area.
Arguably, trying to find a sense of identity in the digital world presents one of the main struggles which has led to the emergence of Digital Arts and Humanities. After all, the Humanities need to carve out a digital persona just as much as any large corporation, in order to make their presence felt in an ever-changing digital atmosphere. There is a sense of not wanting to be left behind evident in this move into the realm of IT. Jaron Lanier explores this concern in his book, You Are Not a Gadget. His concluding thoughts express this Humanist need to stay true to oneself while entering into the digital:
The most important thing about postsymbolic communication is that I hope it demonstrates that a humanist softie like me can be as radical and ambitious as any cybernetic totalist in both science and technology, while still believing that people should be considered differently, embodying a special category.
From the beginning of the Masters Degree in Digital Arts and Humanities, an open-minded approach to scholarship was promoted. A first encounter was with digital publications of literature, and the second was in the setting up of academic blogs for the purpose of open, online discourse.
It was at these two points in this year’s scholarly endeavour that the issue of criteria for standards of evaluation was raised, namely:
The opportunity exists to also set a precedent for intervarsity communication. If seized upon, and the result acknowledged by each institution, this chance could provide students with a wider pool from which to form connections, build projects, and review each other’s work on a structured basis (though still less formal than official journals), as the community grows cumulatively larger. In this collaborative sense, digital humanities encompasses the acknowledgement that the physical days of education can no longer stand alone as a means for learning.
Last October’s Digital Archiving in Ireland: National Survey of the Humanities and Social Sciences (PDF), carried out by the Digital Repository of Ireland (DRI), saw one respondent state
[w]hen I see the word[s] Digital Repository Ireland, I would expect to find born-digital records are stored there and preserved there so that they can be migrated forward into new formats and then preserved and made accessible at the right time. And I really think that is where the gap is more than any other gap.
The demand is there for Ireland to stockpile and standardise — not homogenise — digital scholarly work. In relation to library archiving systems, the report notes one institution’s emphasis on being aware of any “broad national perspective on things…so if there are a lot of institutions moving…[in the same direction], we would move in a very coherent way.” As Priego writes, “core critical and practical skills applicable to a wide variety of web tool scenarios would be a great thing to have a structured, recognised framework for.”
It soon became clear that standard practices in carrying out digital scholarship include technical skills and digital tools such as XML, TEI and databases. There is also an array of less daunting tools that are available to postgraduate students for research in any discipline. The problem lies in the fact that the sheer volume of digital tools can, at times, make the digital humanities realm awkward to navigate. To this end, a general, online, instructive directory, with guidelines to popular software or particularly useful blogs, would — though perhaps tedious to maintain and regularly update — be a great help. Self-directed learning, while expected at a postgraduate level, can become problematic if the student feels he or she is left without a map. With the impostor complex facing many at the beginning of their digital humanities explorations, how can we implement a structure that will reassure the budding digital humanist that, by the end of her studies, she will be qualified to actually do anything?
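By way of a small, self-contained illustration (the sample record and element choices below are hypothetical, though the namespace and header structure follow standard TEI), a few lines of Python are enough to pull bibliographic basics out of a TEI-encoded file; this is the kind of modest, transferable skill such a directory might point students toward.

```python
import xml.etree.ElementTree as ET

# A tiny, hypothetical TEI-flavoured record; real TEI documents are far richer.
SAMPLE = """
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <fileDesc>
      <titleStmt>
        <title>A sample poem from a literary annual</title>
        <author>Unknown contributor</author>
      </titleStmt>
    </fileDesc>
  </teiHeader>
</TEI>
"""

NS = {"tei": "http://www.tei-c.org/ns/1.0"}
root = ET.fromstring(SAMPLE)

# Read the bibliographic basics out of the TEI header.
title = root.find(".//tei:titleStmt/tei:title", NS).text
author = root.find(".//tei:titleStmt/tei:author", NS).text
print(title, "by", author)
```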
The solution, as with any set of tools, is to be discerning in one’s choices; rather than indulging in experimentation for its own sake and at the risk of confusion, one should build up to a knowledge of more esoteric tools. The following are some examples which have featured in our academic involvement this year, and which have a strong possibility of becoming standard practice for pedagogy and reflective learning outside of digital humanities, provided their value can be demonstrated to institutions. When evaluating digital humanities, the benefits of such tools cannot be ignored.
Moodle, for example, acts as an online classroom and discussion forum, allowing a significant depth of reflection on course material. Similarly, Blackboard acts as a virtual library for course readings, as well as being another forum for discussion. Tools such as Skype and Google Hangouts enable the perimeters of the classroom to be endlessly extended; similarly, the eReader has now gained widespread availability. Even those who do not shop online are now inundated by its display in their local Tesco.
Clearly, the process of reading and learning has, to paraphrase Yeats, “changed, changed utterly”, but is this change a “terrible beauty” (The Collected Poems, 193) or a welcome evolution? D. Randy Garrison’s E-Learning in the 21st Century summarizes the change in educational focus which, one can argue, digital humanities represents: “To be constrained by the restricted frame of traditional classroom presentational approaches is to ignore the capabilities and potential of e-learning” (54).
As the first academic term nears its end, MA DAH students have already started to shed insecurities about personal judgement in assessing academic literature. Learning that the reader’s response to literature is not a trivial feature in assessing the quality of its contribution to a digital library was a formative experience. One may not feel practised in the art of evaluation; however, it is true to say that there is worth in every reader’s response. No matter what form the scholar’s interpretation takes, the exercise of assessing literature for its scholarly worth is a vital part of the process of handing responsibility back to the learner and relates directly to the vital drive toward experiential learning. The student not only learns how to identify insightful literature, but also takes the early steps in laying a foundation for their own autonomous learning.
What does all this mean for the evaluators? How can any individual possess the abilities to work and evaluate across such a broad spectrum of practice? The issue of interpretation and intended meaning requires further interrogation and greater development through discourse. According to Schreibman, Mandell, and Olsen (2011), humanities scholars are, for the most part, “ill equipped … to recognize the scholarship” or the “intellectual content” of projects in which theoretical and technical choices inform project design. This issue is highlighted in Clement’s “Half-Baked: The State of Evaluation in the Digital Humanities”, in which she asserts that academic works relating to evaluation in the digital humanities have given rise to “a conversation that has very few listeners or readers in the humanities capable of appreciating the scholarship represented in this interdisciplinary work” (2012). This is supported by Browner (2011), who states that “[o]wning a computer and being able to click on a link is only the first and perhaps most easily addressed issue in assuring a real democracy of knowledge. Having intellectual access is much harder”.
On that note, there are three main features of scholarship which encapsulate the characteristics of the process of learning against the backdrop of web 2.0. Each one is spurred by the use of digital tools, such as the aforementioned blogs. These are:
In order to assess the understanding of a scholar, one must also assess their cognitive presence, both in terms of critical thinking and discourse. An important point of reference in evaluating this is the Practical Inquiry Descriptors and Indicators model, as illustrated below and in D. Randy Garrison’s E-Learning in the 21st Century – A Framework for Research and Practice (52).
Practical Inquiry Descriptors and Indicators from “E-Learning in the 21st Century: A Framework for Research and Practice” by D. Randy Garrison (2011).
Garrison suggests that “practical inquiry is the model within which we operationalize and assess cognitive presence” (51). The aim is to offer “a practical means to judge the nature and quality of critical reflection and discourse in a community of inquiry”. The question of standard is, understandably, a hot topic for the budding digital humanist in particular, given that there can be such disparity between articles, studies or blog entries that all file themselves under the same digital humanities umbrella. By using a related, but less fixed, model in evaluating digital scholarship, we can tread the middle ground between a laissez-faire stance and a forced setting in stone of standards. One should not establish a system of rating but instead examine the qualities already expected from scholarship and simply allow these standards to homogenise in a digital setting.
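For readers who prefer a concrete handle on the model, the sketch below is a deliberately simplified, non-authoritative rendering: the four phase names follow Garrison’s practical inquiry cycle, while the indicator wording is paraphrased for illustration rather than quoted from the figure above.

```python
# Hypothetical, simplified rendering of the practical inquiry phases.
# Indicator wording is paraphrased for illustration only; the authoritative
# descriptors are those in Garrison's figure cited above.
PRACTICAL_INQUIRY = {
    "Triggering event": ["sense of puzzlement", "recognising a problem"],
    "Exploration": ["information exchange", "brainstorming and suggestions"],
    "Integration": ["connecting ideas", "tentatively synthesising a solution"],
    "Resolution": ["applying the proposed solution", "defending it critically"],
}

def code_post(post_text, phase):
    """Attach a phase label (and its indicative behaviours) to a forum post."""
    return {"text": post_text, "phase": phase, "indicators": PRACTICAL_INQUIRY[phase]}

example = code_post("What would actually count as evidence here?", "Triggering event")
print(example["phase"], "->", ", ".join(example["indicators"]))
```

Used loosely, such a lookup is no more than a memory aid for coding discussion posts; the judgement about where a given contribution sits remains, as this essay argues, with the reader.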
Accessibility is a characteristic that must be stamped on digital scholarship. This applies to scholarship both in terms of publication of literature, and also the accessibility of data for the purposes of XML analysis. Without accessibility of data, information is as useful as a piece of chalk on an interactive whiteboard. The notion of access must be a guiding light for one’s own academic goals, to make a conscious move to live and breathe accessibility, thus exposing one’s work for the theoretical benefit of the academic community and allowing standards to grow from it.
Not only must the situation of ‘intellectual access’, or as Stefik terms it, “sensemaking” (Liu, 2011), be remedied through education and an increased academic and industrial awareness, but a more urgent predicament must be answered, in fact demands a response: “a reader could easily ask of these books what humanities scholars everywhere consistently ask of digital humanities writ large: So what? Is that it? And what does this have to do with our research?” (Clement, 2012). The origin of this issue of relevancy may stem from a predicament identified by Bartscherer and Coover (2011): “scholars and artists understand little about the technologies that are so radically transforming their fields, while IT specialists have scant or no training in the humanities or traditional arts”. Is it any wonder that a difficulty has arisen with regards to evaluation and perceived value within the digital humanities?
Previous university graduates will have had an understanding of the requirements of a postgraduate degree. However, web 2.0 has modified the reality of higher level academia, and offered an opportunity to reshape the traditional structures in education. In the MA DAH at UCC, the digital evolution has been fully embraced as an appropriate setting for the rounded learning of a 21st century student in line with communities of practice. The campus is now both physical and virtual. Web 2.0, and social media in particular, has changed the reality of the academy into a virtual experience, with room for immediate distribution of relevant, up-to-date knowledge. Such practice is essential to the promotion of accessibility. It is up to scholars to harness the energy of web 2.0. However, the linking of scholarship with the digital realm is an individual choice that each researcher will have to make.
Collaborative-learning environments, and the learner as a sounding board for standards, may be the main catalysts in the development towards an empowered learner and an adequate set of learner-centred standards, as opposed to the decree of an elite crew. In other words, e-learning should be part of an organic process in terms of developing standards for evaluation of scholarship in all its digital manifestations. In order for this to be fully realised, one must begin at the first marker of making accessibility an integral part of the academic world, or, in other words, a widely accepted standard.
An area which stirs up an array of controversy is the use of blogging within digital humanities, and indeed education in general. Issues exist surrounding the question of whether the blog can be considered a legitimate tool for research, or citation in an academic paper. The reality is that an amount of time and effort, equivalent to that which is being put into scholarly publications, is now being directed into the blogosphere. Alan Liu offers an entry point for such examples of social media to become more respected:
In the digital humanities, cultural criticism — in both its interpretive and advocacy modes — has been noticeably absent by comparison with the mainstream humanities. . . . How the digital humanities advance, channel, or resist the great postindustrial, neoliberal, corporatist, and globalist flows of information-cum-capital, for instance, is a question rarely heard in the digital humanities associations, conferences, journals, and projects with which I am familiar (Where is Cultural Criticism in The Digital Humanities, np).
Engagement with more thoughtful scholarship which directs itself towards cultural criticism could strengthen the consideration of blogs and other social media tools for digital humanities scholarship, through the fusion of discussions of data use with cultural commentary. Social media is fast becoming the leading publishing house for new material. Could one go so far as to argue that web 2.0 is the Humanities’ life-support system?
In terms of our experience of evaluation, there are several sides to evaluating literature, many of which we encountered through e-learning and the questions it raised:
In assessing digital scholarship, the following points are also taken into consideration:
Price (2011) takes this a step further, revealing that the issue of evaluation and perceived relevancy carries right up the academic ladder to “tenure and promotion committees [that] have a notoriously difficult time in the humanities with multi-authored projects (characteristic of digital humanities projects)”. Clement (2012) supports this, noting, “And so the game continues: players lay their claims on the table and the winner is the person who makes the claims deemed most insightful by respondents”. Another significant hurdle to progress in adopting a new system of evaluation is indicated by Browner (2011): that “the habits, biases, power centers, and economics that shaped print over the last 500 years are also shaping the digital world”.
There needs to be some synergy between disciplinary standards of governance and the increasing use of more liberal forms of research using technology. Julia Fraser contends that “digital humanities as a whole has revealed precisely how interwoven and mutually consequential ‘technical’ and ‘disciplinary’ standards often are” (Collaborative Research in the Digital Humanities, 68). Digital humanities demands that these sectors strike the right balance when merging in research.
Some of the questions raised in evaluating broad-spectrum scholarship can be applied to any form of learning. However, most disciplines now collide with the challenges and enhancements of digital scholarship. In the MA DAH, feedback from teachers and interactivity through virtual text and verbal discussion allows standards to form in an organic way. This is the radical crux of the argument: that defining digital standards in any form should come from those who both produce scholarly outputs and read them. To do this, one cannot simply hypothesise. Instead, one must tear down speculation and examine the plain evidence. One must then share this knowledge in order to establish a true form of best practice, as derived from pedagogical practices, and, perhaps more importantly, our own innate learning experiences. If one shares these models of learning, instead of theorising ad infinitum, one will be able to demonstrate their actual implementation on a personal level and therefore on realistic terms.
In reality, the basic criteria for assessment and evaluation will reflect the standards which have always existed in any form of scholarship, including a cohesive, well-formed argument, presented in an accessible manner. It is not for scholarship to be clinically assessed in any hypothetical way; rather, feedback can be drawn from an existing set of evaluation principles, and refined to establish a pattern of acceptable forms for the digital version. Creation, data, collaboration, innovation and publication appear in new media forms, but the core elements of best practice remain the same, regardless of the medium. While digital scholarship allows for an enrichment of existing principles, the most important category that we cannot neglect is again the accessibility of work to all interested parties, as this is where standards of evaluation and assessment are born.
Part of the digital humanities utopian view is that of a democratic world of collaborative, open source, non-hierarchical understanding. As Professor William Pannapacker conveyed, “with leaders who have never known a time when scholarship in the humanities wasn’t in crisis, digital humanities is moving us — finally — from endless hand-wringing toward doing something to create positive change throughout academe”. If we smother digital humanities and digital scholarship’s free-form, shape-shifting attributes by attacking scholarship and delving into the task of structuring standards, we are treading the dangerous ground of inhibiting the organic growth thereof, and consequently stifling digital scholarship and goals of accessibility. Although organisation is a necessary feature of scholarship, we first need to start with a hands-off approach and make adjustments along the way where necessary. After all, the internet began as a communication device — what if we had tried to tighten our grip and to slam it down with definitions of its existence? To use an artistic analogy, instead of enforcing a theme of standards, we need to move towards freeform brushwork and bring out the features of optical art, which mutates before our scholarly eyes.
Clement (2012) takes a positive posture: “Interdisciplinary conversations, on the other hand, are much harder: they are fruitful and productive when, in our attempt to understand each other, we produce knowledge”. Spiro and Segal (2011), investigating the field of digital scholarship in American literature, observed that within digital humanities scholarship ‘using’ digital infrastructure provides for more innovative scholarship than ‘making’, while Judith Donath (2011) likened the current changes in scholarship within the digital humanities to a mutation, where “the richness of life comes from a myriad of accidental yet advantageous mutations — at the cost of the many that failed. As we enter the digital era, we are able to program the level of risk we are willing to take with unexpected changes”.
Digital humanities is a field in transit. It is moving from a world of constraint to a world of scholarly freedom. So far, digital humanities appears to the novice to be a culture of collaboration and experimentation. Perhaps it is too early to pinpoint what exactly it is; or, perhaps, what new material regularly unfolds persists in proving the field too rich to be confined by definition. The solution for the novice may be to content herself, for now, with the uncertain process of trial and error. It is up to us all to work out how to navigate this transition in a thoughtful, cautious manner.
Originally published by Mike Cosgrave, Anna Dowling, Lynn Harding, Róisín O’Brien & Olivia Rohan on December 3, 2012.
Bartscherer, T., and R. Coover, eds. Switching Codes: Thinking Through Digital Technology in the Humanities and the Arts. Chicago: University of Chicago Press, 2011.
Browner, S.P. “Digital Humanities and the Study of Race and Ethnicity.” The American Literature Scholar in the Digital Age. Eds. A.E. Earheart and A. Jewell. Ann Arbor: University of Michigan Press and University of Michigan Library, 2011. 209-28.
Cohen, M. “Design and Politics in Electronic American Literary Archives.” The American Literature Scholar in the Digital Age. Eds. A.E. Earheart and A. Jewell. Ann Arbor: University of Michigan Press and University of Michigan Library, 2011. 228-49.
Deegan, Marilyn, and Willard McCarty. Collaborative Research in the Digital Humanities: A Volume in Honour of Harold Short. Surrey: Ashgate Publishing Ltd, 2012.
Earheart, A.E., and A. Jewell, eds. The American Literature Scholar in the Digital Age. Ann Arbor: University of Michigan Press and University of Michigan Library, 2011.
Garrison, D. Randy. E-Learning in the 21st Century: A Framework for Research and Practice. New York: Routledge, 2011.
Gold, Matthew K. Debates in the Digital Humanities. Minneapolis: University of Minnesota Press, 2012.
Lanier, Jaron. You Are Not A Gadget: A Manifesto. New York: Alfred A. Knopf, 2010.
Liu, Alan. “Where is Cultural Criticism in the Digital Humanities.” The History and Future of the Digital Humanities. Modern Language Association Convention. Los Angeles, 7 January, 2011.
—, “We Will Really Know.” Switching Codes: Thinking Through Digital Technology in the Humanities and the Arts. Eds. T. Bartscherer and R. Coover. Chicago: University of Chicago Press, 2011. 89-94.
Price, K. M. “Collaborative Work and the Conditions for American Literary Scholarship in a Digital Age.” The American Literature Scholar in the Digital Age. Eds. A.E. Earheart and A. Jewell. Ann Arbor: University of Michigan Press and University of Michigan Library, 2011. 9-26.
Rockwell, G. “On the Evaluation of Digital Media as Scholarship.” Profession 1 (2011): 152-68. <http://www.mlajournals.org/doi/pdf/10.1632/prof.2011.2011.1.152> (PDF)
Schreibman, S., L. Mandell, S. Olsen. “Evaluating Digital Scholarship: Introduction.” Profession 1 (2011): 123-201. <http://www.mlajournals.org/doi/pdf/10.1632/prof.2011.2011.1.123> (PDF)
Spiro L., J. Segal. “Scholars’ Usage of Digital Archives in American Literature.” The American Literature Scholar in the Digital Age. Eds. A.E. Earheart and A. Jewell. Ann Arbor: University of Michigan Press and University of Michigan Library, 2011. 101-24.
Yeats, W. B. The Collected Poems of W. B. Yeats. Ed. Richard J. Finneran. New York: Simon and Schuster, 1989.
Building on conversations within their respective organizations, in 2007 the American Historical Association, National Council on Public History, and Organization of American Historians organized a working group to evaluate public history scholarship.
Representing the American Historical Association were Kristin Ahlberg, Edward Countryman, and Debbie Ann Doyle; from the National Council on Public History were Bill Bryans, Kathleen Franz, and John R. Dichtl; and from the Organization of American Historians were Constance B. Schulz, Gregory E. Smoak, and Susan Ferentinos.
The Working Group’s White Paper (PDF) from 2010 provides context and background for the formal report.
This white paper will provide useful advice for public historians on the tenure track; history departments and department chairs seeking fair evaluation standards for their colleagues; and deans, provosts, and other administrators at colleges and universities that employ public historians. The working group by no means intends to devalue traditional scholarship; rather, we argue for expanding the definition of scholarship to incorporate the types of work public history faculty are hired to do. Because public history often blurs the lines between the traditional categories of scholarship, teaching, and research, this white paper will address all three aspects of scholarly life.
The Working Group report, “Tenure, Promotion, and the Publicly Engaged Historian,” (PDF) is a formal statement adopted in 2010 by the three major organizations for historians working in the United States.
This report is the product of the Working Group on Evaluating Public History Scholarship convened by the American Historical Association, Organization of American Historians, and National Council on Public History. It is designed to help faculty members, personnel committees, department heads, deans, and other administrators develop a plan for evaluating historians who do public and collaborative scholarship. Drawing on a survey of existing promotion and tenure guidelines and input from public history faculty members, the report offers suggestions for evaluating public history work as community engagement, scholarship, teaching, and service. It defines a number of best practices and describes possible approaches to hiring, review, and promotion.
The following guidelines are designed to help departments and faculty members implement effective evaluation procedures for hiring, reappointment, tenure, and promotion. They apply to scholars working with digital media as their subject matter and to those who use digital methods or whose work takes digital form.
Digital media are transforming literacy, scholarship, teaching, and service, as well as providing new venues for research, communication, and the creation of networked academic communities. Information technology is an integral part of the intellectual environment for all humanities faculty members, but for those working closely in new media it creates special challenges and opportunities. Digital media have expanded the objects and forms of inquiry of modern language departments to include images, sounds, data, kinetic attributes like animation, and new kinds of engagement with textual representation and analysis. These innovations have considerably broadened notions of language, language teaching, text, textual studies, and literary and media objects, the traditional purview of modern language departments.
While the use of computers in the modern languages is not a new phenomenon, the transformative adoption of digital information networks, coupled with the proliferation of advanced multimedia tools, has resulted in new literacies, new literary categories, new approaches to language instruction, and new fields of inquiry. Humanists are adopting new technologies and creating new critical and literary forms and interventions in scholarly communication. They also collaborate with technology experts in fields such as image processing, document encoding, and computer and information science. User-generated content produces a wealth of new critical publications, applied scholarship, pedagogical models, curricular innovations, and redefinitions of author, text, and reader. Academic work in digital media must be evaluated in the light of these rapidly changing technological, institutional, and professional contexts, and departments should recognize that many traditional notions of scholarship, teaching, and service are being redefined.
Institutions and departments should develop written guidelines so that faculty members who create, study, and teach with digital objects; engage in collaborative work; or use technology for pedagogy can be adequately and fairly evaluated and rewarded. The written guidelines should provide clear directions for appointment, reappointment, merit increases, tenure, and promotion and should take into consideration the growing number of resources for evaluating digital scholarship and the creation of born-digital objects. Institutions should also take care to grant appropriate credit to faculty members for technology projects in teaching, research, and service. Because many projects cross the boundaries between these traditional areas, faculty members should receive proportional credit in more than one relevant area for their intellectual work. New guidelines for reappointment, tenure, and promotion appear regularly. The Committee on Information Technology recommends that persons interested in such guidelines search for documents on evaluating work in digital media or digital humanities at institutions comparable to their own.
Documentation of projects might include examples of success at engaging new audiences; securing internal or external funding, awards, or other professional recognition; and fostering adoption, distribution, or publication of digital works, as well as reviews and citations of the work in print or digital journals. In framing their work, faculty members should be careful to clarify the context and venue of publications, exhibitions, or presentations (e.g., conference proceedings are among the most prestigious publications in computer science, whereas they are generally deemed to be a lesser form of publication in the humanities).
The pace of technological change makes it impossible for any one set of guidelines to account completely for the ways digital media and the digital humanities are influencing literacies, literatures, and the teaching of modern languages. A general principle nonetheless holds: institutions that recruit or review scholars working in digital media or digital humanities must give full regard to their work when evaluating them for reappointment, tenure, and promotion.
These guidelines were approved by the MLA Executive Council at its 19–20 May 2000 meeting and were last reviewed by the Committee on Information Technology in January 2012.
Originally published by the Modern Language Association.
This material was developed by an open-content collaborative of individuals and groups working to build a common resource for the profession using an online wiki. The structure of the project allowed anyone with an Internet connection to alter its content; as such, it does not represent the official positions of the Modern Language Association. The wiki is still available for editing.
So you expect to be evaluated on the grounds of your new media work, whether instructional, service, or research. Here is a list of some of the types of materials you may want to keep in order to make your case.
How can you summarize information about a new media work so that your committee can understand the context? Here is a fictional example drawn from a real case. It is followed by a bullet-point summary of the salient items of information included.
My article, titled “Teaching in Second Life: The Garden of Games,” co-authored with Jane Philodorus, was published by the online journal Digital Learning at <>. This article, approximately 23 pages in length, describes the design and pedagogical assessment of an assignment in a games studies course in which students developed 3D models of historic video game characters. We found that the assignment significantly increased engagement with the course and gave students a better understanding of virtual worlds. Also included online is an appendix of approximately 30 web pages of lesson materials, including the guidelines for training students in Second Life modeling. I taught the course, directed the project, and wrote the article in collaboration with Jane Philodorus of the Centre for Teaching and Learning. Philodorus ran the Second Life 3D modeling training for students, designed the assessment, and analyzed the results of the assessment survey. My contribution can therefore be estimated at 75% and that of Philodorus at 25%. The journal Digital Learning, while new (it was started in 1998), has a wide readership and a rigorous peer review process. The review process is documented online, as is the editorial board (which includes leaders in the field like Wilford Wright and Wardrip Aarseth). According to the editor, the journal rejects about 70% of submissions. The journal is widely read, with over 10,000 visitors per day, and a print copy of all the articles is archived at five libraries around the world. The article has already been blogged by academic blogger George Rockwell at almost.theoreti.ca, where he notes that “the authors did their assessment right and at arm’s length rather than just charming the students with a questionnaire.”
This is the December 12, 2012 version of “Documenting a New Media Case.” The wiki is still available for editing.
Blais, Joline, Jon Ippolito, and Owen Smith. New Criteria for New Media. New Media Department, University of Maine, January 2007. http://newmedia.umaine.edu/interarchive/new_criteria_for_new_media.html.
Burgess, Helen J., and Jeanne Hamming. “New Media in the Academy: Labor and the Production of Knowledge in Scholarly Multimedia.” Digital Humanities Quarterly 5, no. 3 (2011). http://www.digitalhumanities.org/dhq/vol/5/3/000102/000102.html.
Cavanagh, Sheila. “Living in a Digital World: Rethinking Peer Review, Collaboration and Open Access.” Journal of Digital Humanities 1, no. 4 (Fall 2012). http://journalofdigitalhumanities.org/1-4/living-in-a-digital-world-by-sheila-cavanagh/.
Fitzpatrick, Kathleen. “Peer Review, Judgment, and Reading.” Profession (2011): 196–201. http://www.mlajournals.org/doi/abs/10.1632/prof.2011.2011.1.196.
Galarza, Alex, Jason Heppler, and Douglas Seefeldt. “A Call to Redefine Historical Scholarship in the Digital Turn.” Journal of Digital Humanities 1, no. 4 (Fall 2012). http://journalofdigitalhumanities.org/1-4/a-call-to-redefine-historical-scholarship-in-the-digital-turn/.
Gibbs, Fred. “Critical Discourse in the Digital Humanities.” Journal of Digital Humanities 1, no. 1 (Winter 2012). http://journalofdigitalhumanities.org/1-1/critical-discourse-in-digital-humanities-by-fred-gibbs/.
Kelly, T. Mills. “Making Digital Scholarship Count (Part I of III).” Edwired, June 13, 2008. http://edwired.org/2008/06/13/making-digital-scholarship-count/.
Nowviskie, Bethany. “Evaluating Collaborative Digital Scholarship (or, Where Credit is Due).” Journal of Digital Humanities 1, no. 4 (Fall 2012). http://journalofdigitalhumanities.org/1-4/evaluating-collaborative-digital-scholarship-by-bethany-nowviskie/.
———. “Where Credit Is Due: Preconditions for the Evaluation of Collaborative Digital Scholarship.” Profession (2011): 169–181. http://www.mlajournals.org/doi/abs/10.1632/prof.2011.2011.1.169.
Anderson, Steve, and Tara McPherson. “Engaging Digital Scholarship: Thoughts on Evaluating Multimedia Scholarship.” Profession (2011): 136–151. http://www.mlajournals.org/doi/abs/10.1632/prof.2011.2011.1.136.
Bates, David. “Peer Review and Evaluation of Digital Resources for the Arts and Humanities.” Institute of Historical Research – Digital Resources, n.d. http://www.history.ac.uk/projects/digital/peer-review.
Brennan, Sheila. “Let the Grant Do the Talking.” Journal of Digital Humanities 1, no. 4 (Fall 2012). http://journalofdigitalhumanities.org/1-4/let-the-grant-do-the-talking-by-sheila-brennan/.
Coble, Zach. “Evaluating DH Work: Guidelines for Librarians.” Journal of Digital Humanities 1, no. 4 (Fall 2012). http://journalofdigitalhumanities.org/1-4/evaluating-digital-humanities-work-guidelines-for-librarians-by-zach-coble.
Coletta, Cristina Della. “Guidelines for Promotion and Tenure Committees in Judging Digital Work.” Evaluating Digital Scholarship – NINES/NEH Summer Institutes: 2011-2012, n.d. http://institutes.nines.org/docs/2011-documents/guidelines-for-promotion-and-tenure-committees-in-judging-digital-work/.
Cosgrave, Mike, Anna Dowling, Lynn Harding, Róisín O’Brien, and Olivia Rohan. “Evaluating Digital Scholarship: Experiences in New Programmes at an Irish University.” Journal of Digital Humanities 1, no. 4 (Fall 2012). http://journalofdigitalhumanities.org/1-4/evaluating-digital-scholarship-experiences-in-new-programmes-at-an-irish-university/.
Davidson, Cathy. “How Can A Digital Humanist Get Tenure?” HASTAC, September 17, 2012. http://hastac.org/blogs/cathy-davidson/2012/09/17/how-can-digital-humanist-get-tenure.
Harley, Diane, Jonathan Henke, Shannon Lawrence, et al. Use and Users of Digital Resources: A Focus on Undergraduate Education in the Humanities and Social Sciences. Berkeley: Center for Studies in Higher Education, University of California, April 5, 2006. http://cshe.berkeley.edu/publications/publications.php?id=211.
Harris, Katherine. “Explaining Digital Humanities in Promotion Documents.” Journal of Digital Humanities 1, no. 4 (Fall 2012). http://journalofdigitalhumanities.org/1-4/explaining-digital-humanities-in-promotion-documents-by-katherine-harris/.
Mandell, Laura. “Promotion and Tenure for Digital Scholarship.” Journal of Digital Humanities 1, no. 4 (Fall 2012). http://journalofdigitalhumanities.org/1-4/promotion-and-tenure-for-digital-scholarship-by-laura-mandell/.
Marchionini, Gary, Catherine Plaisant, and Anita Komlodi. “The People in Digital Libraries: Multifaceted Approaches to Assessing Needs and Impact.” In Digital Library Use: Social Practice in Design and Evaluation, edited by Ann Peterson Bishop, Nancy A. Van House, and Barbara P. Buttenfield, 119–160. Cambridge, MA: MIT Press, 2003.
Mattern, Shannon Christine. “Evaluating Multimodal Work, Revisited.” Journal of Digital Humanities 1, no. 4 (Fall 2012). http://journalofdigitalhumanities.org/1-4/evaluating-multimodal-work-revisited-by-shannon-mattern/.
Presner, Todd. “How to Evaluate Digital Scholarship.” Journal of Digital Humanities 1, no. 4 (Fall 2012). http://journalofdigitalhumanities.org/1-4/how-to-evaluate-digital-scholarship-by-todd-presner/.
Rockwell, Geoffrey. “On the Evaluation of Digital Media as Scholarship.” Profession (2011): 152–168. http://www.mlajournals.org/doi/abs/10.1632/prof.2011.2011.1.152.
———. “Short Guide to Evaluation of Digital Work.” Journal of Digital Humanities 1, no. 4 (Fall 2012). http://journalofdigitalhumanities.org/1-4/short-guide-to-evaluation-of-digital-work-by-geoffrey-rockwell.
Sample, Mark. “Tenure as a Risk-Taking Venture.” Journal of Digital Humanities 1, no. 4 (Fall 2012). http://journalofdigitalhumanities.org/1-4/tenure-as-a-risk-taking-venture-by-mark-sample/.
Shaw, Ryan. “On Tenure and Why Code Can’t Speak for Itself,” n.d. http://aeshin.org/thoughts/on-tenure/.
Smithies, James. “Evaluating Scholarly Digital Outputs: The 6 Layers Approach.” Journal of Digital Humanities 1, no. 4 (Fall 2012). http://journalofdigitalhumanities.org/1-4/evaluating-scholarly-digital-outputs-by-james-smithies/.
Tanner, Simon. Measuring the Impact of Digital Resources: Balanced Value Impact Model. London: King’s College London, October 2012. http://www.kdcs.kcl.ac.uk/innovation/impact.html.
Wouters, Paul, and Rodrigo Costas. Users, Narcissism and Control – Tracking the Impact of Scholarly Publications in the 21st Century. SURFfoundation, February 2012. http://www.surf.nl/nl/publicaties/Documents/Users%20narcissism%20and%20control.pdf.
AAHC. “Tenure Guidelines.” American Association for History and Computing, n.d. http://theaahc.org/about/tenure-guidelines/.
Ahlberg, Kristin, William S. Bryans, Constance B. Schulz, Debbie Ann Doyle, Kathleen Franz, John R. Dichtl, Edward Countryman, Gregory E. Smoak, and Susan Ferentinos. Tenure, Promotion and the Publicly Engaged Historian. AHA/NCPH/OAH Working Group on Evaluating Public History Scholarship, n.d. http://ncph.org/cms/wp-content/uploads/Engaged-Historian.pdf.
Center for Digital Research in the Humanities, University of Nebraska-Lincoln. “Promotion & Tenure Criteria for Assessing Digital Research in the Humanities.” Center for Digital Research in the Humanities, n.d. http://cdrh.unl.edu/articles/eval_digital_scholar.php.
MLA. “Documenting a New Media Case.” Journal of Digital Humanities 1, no. 4 (Fall 2012). http://journalofdigitalhumanities.org/1-4/documenting-a-new-media-case-evaluation-wiki-from-the-mla/.
———. “Guidelines for Editors of Scholarly Editions.” Modern Language Association, n.d. http://www.mla.org/resources/documents/rep_scholarly/cse_guidelines.
———. “Guidelines for Evaluating Work in Digital Humanities and Digital Media.” Journal of Digital Humanities 1, no. 4 (Fall 2012). http://journalofdigitalhumanities.org/1-4/guidelines-for-evaluating-work-in-digital-humanities-and-digital-media-from-the-mla/.
Purdue University. “Evaluation Criteria for the Scholarship of Engagement,” n.d. http://www.vet.purdue.edu/engagement/files/documents/Evaluationcriterion.pdf.
Unsworth, John. “Evaluating Digital Scholarship, Promotion & Tenure Cases.” University of Virginia College and Graduate School of Arts and Sciences – Office of the Dean, n.d. http://artsandsciences.virginia.edu/dean/facultyemployment/evaluating_digital_scholarship.html.
Center for Digital Research in the Humanities, University of Nebraska-Lincoln. “Recommendations for Digital Humanities Projects.” Center for Digital Research in the Humanities, n.d. http://cdrh.unl.edu/articles/best_practices.php.
Koh, Adeline. “The Challenges of Digital Scholarship.” The Chronicle of Higher Education. ProfHacker, January 25, 2012. http://chronicle.com/blogs/profhacker/the-challenges-of-digital-scholarship/38103.
Kramer, Michael. “What Does Digital Humanities Bring to the Table?” Issues in Digital History, September 25, 2012. http://www.michaeljkramer.net/issuesindigitalhistory/blog/?p=862.
Spiro, Lisa. “Tips on Writing a Successful Grant Proposal.” Digital Scholarship in the Humanities, September 9, 2008. http://digitalscholarship.wordpress.com/2008/09/09/tips-on-writing-a-successful-grant-proposal/.
Summit on Digital Tools for the Humanities. The Institute for Advanced Technology in the Humanities – University of Virginia, 2006. http://www.iath.virginia.edu/dtsummit/SummitText.pdf.
Visconti, Amanda. “‘Songs of Innocence and of Experience’: Amateur Users and Digital Texts.” University of Michigan, 2010. http://hdl.handle.net/2027.42/71380.