Evaluating Multimodal Work, Revisited

Two years ago I was preparing for a semester in which all of my classes involved “multimodal” student work — that is, theoretically-informed, research-based work that resulted in something other than a traditional paper. For years I’d been giving students in my classes the option of submitting, for at least one of their semester assignments, a media production or creative project (accompanied by a support paper in which they addressed how their work functioned as “scholarship”) — but given that this cross-platform work would now become the norm, I thought I should take some time to think about how to fairly and helpfully evaluate these projects. How do we know what’s good?

This revision of that piece adds some insights I’ve gleaned from other sources since then, including the collection of essays on “Evaluating Digital Scholarship” that came out in the MLA’s Profession late last year. In recent years the MLA and other professional organizations have made statements and produced guides regarding how “digital scholarship” should be assessed in faculty (re)appointment and review — and these statements are indeed valuable resources — but I’m more interested here in how to evaluate student work.[1]

*   *   *   *   *

A different take on multimedia evaluation. Television-set testing at Underwriters Labs.

Modeling Evaluation

In most of my classes we spend a good deal of time examining projects similar to those we’re creating — other online exhibitions, data visualizations, mapping projects, etc., both those created by fellow students and “aspirational” professional projects that we could never hope to achieve over the course of a semester — and assessing their strengths and weaknesses. Exposing students to a variety of “multimedia genres” helps them to see that virtually any mode of production can be scholarly if produced via a scholarly process (we could certainly debate what that means), and can be subjected to critical evaluation.

Steve Anderson’s “Regeneration: Multimedia Genres and Emerging Scholarship” acknowledges the various genres — and “voices” and “registers” and “modes” of presentation — that can be made into multimedia scholarship. Particularly helpful, I think, is his acknowledgment that narrative — and, I would add, personal expression — can have a place in scholarship. Some students, I imagine, might have a hard time seeing how the same technologies they use to watch entertainment media, the same crowd-sourced maps they use to rate their favorite vegan bakeries or upload hazy Instagrams from their urban dérives — the same platforms they’re frequently told to use to “express themselves” — can be used as platforms for research and theorization. Personal expression and storytelling can still play a role in these multimodal research projects, but one in service of a larger goal; as Anderson says, “narrative may productively serve as an element of a scholarly multimedia project but should not serve as an end in itself.”

The class as a whole, with the instructor’s guidance, can evaluate a selection of existing multimodal scholarly projects and generate a list of critical criteria before students attempt their own critiques — perhaps first in small groups, then individually. Asking the students to write and/or present formal “reader’s reports” — or, in my classes, exhibition or map critiques — and equipping them with a vocabulary tends to push their evaluation beyond the “I like it” / “I don’t like it” / “There’s too much going on” / “I didn’t get it” territory. The fact that users’ evaluations frequently reside within this superficial “I (don’t) like it” domain is not necessarily due to any lack of serious engagement or interest on their part, but may be attributable to the fact that they (faculty included!) don’t always know what criteria should be informing their judgment, or what language is typically used in or is appropriate for such a review.

Once students have applied a set of evaluative criteria to a wide selection of existing projects, they can eventually apply those same criteria to their own work, and to their peers’. (Cheryl Ball has designed a great “peer review” exercise for her undergraduate “Multimodal Composition” class.)

Evaluative Criteria

After reviewing a great deal of existing literature and assessment models — all of which, despite significant overlap, have their own distinctive vocabularies — I thought it best to consolidate all those models and test them against our on-the-ground experience in the classroom over the past several years, to develop a single, (relatively) manageable list of evaluative criteria.

Steve Anderson and Tara McPherson remind us of the importance of exercising flexibility in applying these criteria in our evaluation of “multimedia scholarship.” What follows should not be regarded as a checklist. Not all these criteria are appropriate for all projects, and there are good reasons some projects might choose to go against the grain. Referring to the MLA’s suggestion that projects be judged based on how they “link to other projects,” for instance, Anderson and McPherson note that linking may be a central goal for some projects, but, “linking itself should not be an inflexible standard for how multimedia scholarship gets evaluated.” Nor should the use of “open standards,” like open-source platforms — which, while generally desirable, isn’t always possible.[2]

The following is a mash-up of these sources, with some of my own insight mixed in: Steve Anderson & Tara McPherson, “Engaging Digital Scholarship: Thoughts on Evaluating Multimedia Scholarship,” Profession (2011): 136-151; Fred Gibbs, “Critical Discourse in the Digital Humanities,” FredGibbs.net (4 November 2011); Institute for Multimedia Literacy, “IML Project Parameters,” USC School of Cinematic Arts: IML (29 June 2009); Virginia Kuhn, “The Components of Scholarly Multimedia,” Kairos 12:3 (Summer 2008); MLA, “Short Guide to Evaluation of Digital Work,” wiki.mla.org (last updated 6 July 2010).

Concept & Content

  • Is there a strong thesis or argument at the core of this project? Does the project clearly articulate, or in some way make “experiential,” this conceptual “core”? Is this conceptual core effectively developed through the various segments or dimensions of the project?
  • “Does the project display evidence of substantial research and thoughtful engagement with its subject?” (IML) Does it effectively “triangulate” a variety of sources and make use of a variety of media formats?
  • Is the platform merely a substrate for a “cool data set” or a set of media objects — or are individual “pieces” of content (data and media in various formats, etc.) contextualized? Are they linked together into a compelling argument? (Of course students might be working to create their own data sets or digital archives, in which case we might apply different standards of evaluation.)
  • Is the data sufficiently “enriched”? (MLA) Is it annotated, linked, cited, supplemented with support media, etc., where appropriate?
  • Does the project exploit the “repurpose-ability” of data? Does it pull in, and effectively re-contextualize, data from other projects? (Students should also recognize that their own data can, and should, be similarly repurposed.) This recognition that individual records — a photo or video a student uploads, or a data-set they import, etc. — can serve different purposes in different projects offers students great insight into research methodology, into the politics of research, into questions regarding who gets to make knowledge, etc. As Fred Gibbs acknowledges, discussing how a project uses data also “encourages conversations about ownership [and] copyright.”
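
To make “enrichment” and “repurposing” a bit more concrete: below is a minimal, purely hypothetical sketch (written in Python, though any format would do) of what an “enriched” record in a student mapping or exhibition project might look like. The field names are invented for illustration and don’t come from any particular platform.

```python
# A hypothetical "enriched" record for a student mapping project.
# Every field name here is invented for illustration; real platforms
# (Omeka, a custom database, etc.) will have their own schemas.
record = {
    "title": "Radio transmitter site, 1937",
    "media": "transmitter_1937.jpg",              # the "raw" media object
    "annotation": "Photo shows the facility shortly after its "
                  "municipal dedication.",        # the student's contextualization
    "source": {                                   # the citation travels with the record
        "archive": "Example Municipal Archives",
        "collection": "Radio Facilities Photographs",
        "rights": "Reproduced by permission; no commercial reuse.",
    },
    "related": ["record-042", "record-117"],      # links to other records in the argument
    "license": "CC BY-NC 4.0",                    # makes the terms of repurposing explicit
}

# Because the citation, rights, and links live inside the record itself,
# another project could import and re-contextualize it without losing
# the apparatus that makes it scholarship rather than a loose image.
print(record["source"]["archive"], "-", record["title"])
```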

Concept/Content-Driven Design & Technique

  • Does the project’s form suit its concept and content? “Do structural and formal elements of the project reinforce the conceptual core in a productive way?” (IML)
  • Is the delivery system robust? Do the chosen platforms or modes of delivery “fit” and “do justice to” the subject matter? Need this have been a multimedia project, or could it just as easily have been executed on paper?
  • Does the project “exhibit an understanding of the affordances of the tools used,” and does it exploit those affordances as best possible — and perhaps acknowledge and creatively “work around” known limitations?
  • Is there a “graceful balance of familiar scholarly gestures and multimedia expression which mobilizes the scholarship in new ways?” (IML) A balance of the old and familiar, to help users feel that they can rely on their tried-and-true codes of consumption; and the new, to encourage engagement and promote reconsideration of our traditional ways of knowing?
  • At the same time, do the project creators seem to exercise control over their technology? Or does technology seem to be used gratuitously or haphazardly? “Are design decisions deliberate and controlled?” Does the project “demonstrat[e] authorial intention by providing the user with a carefully planned structure, often made manifest through a navigation scheme?” (IML)
  • Do the project creators seem to understand their potential users, and have they designed the project so it accommodates those various audiences and uses?
  • How does the interface function “rhetorically” in the project? Does it inform user experience in a way that supports the project’s conceptual core and argument? Does it effectively organize the “real estate” of the screen to acknowledge and put into logical relationships the key components — subject content, technical tools, etc. — of the project?
  • Has the project been tested? Are there plans for continual testing and iterative development? Is the project adaptable?
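
“Testing” need not mean anything elaborate: even a small automated check, re-run whenever the project’s data changes, counts as a plan for continual testing. Here is a rough sketch, assuming hypothetical records shaped like the example earlier in this post; the required fields are placeholders to adapt to whatever your platform actually stores.

```python
# A rough, hypothetical integrity check for a project's records.
# Field names are placeholders; adjust them to your own data model.
REQUIRED_FIELDS = ["title", "annotation", "source"]

def check_record(record):
    """Return a list of problems found in a single record."""
    problems = []
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            problems.append(f"missing or empty field: {field}")
    for related_id in record.get("related", []):
        if not isinstance(related_id, str) or not related_id.strip():
            problems.append(f"malformed related-record reference: {related_id!r}")
    return problems

def check_project(records):
    """Print every problem found; return True only if all records pass."""
    clean = True
    for i, record in enumerate(records):
        for problem in check_record(record):
            print(f"record {i} ({record.get('title', 'untitled')}): {problem}")
            clean = False
    return clean

if __name__ == "__main__":
    # A deliberately broken sample record, to show what the check reports.
    sample = [{"title": "Untitled", "annotation": "", "source": None, "related": [""]}]
    check_project(sample)
```

Run by hand, or hooked into whatever versioning or “ticket” workflow the class already uses, even a check this small makes iterative development visible rather than aspirational.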
Perhaps in some utopian future, when cognitive science is integrated into *all* disciplines, we can use brain scans as a form of assessment. Just kidding! (Photo: Simon Fraser/Science Photo Library via Guardian.uk)

Transparent, Collaborative Development and Documentation

  • Do project creators practice self-reflexivity? Do they “accoun[t] for the authorial understanding of the production choices made in constructing the project?” (IML)
  • Do project creators document their research and creative processes, and describe how those processes contributed to their projects’ “formal structure and thematic concerns?” (IML) McPherson and Anderson (2011) also emphasize the importance of “finely grained accounts of the processes involved in the production of multimedia scholarship in order to evaluate properly the labor required in such research” (142).
  • Do project developers document and/or otherwise communicate their process — perhaps through a “ticket” system like Trac or a service like GitHub — and make it transparent and understandable to students?
    • As Rory Solomon pointed out in a comment on my earlier blog post, the adoption of open standards also contributes to longevity: “…the open source movement provides means to help minimize these concerns in that open source projects provide many ways to evaluate a given software tool / format / platform. Any serious project will have an open, public web presence, including developer and user mailing lists, documentation, and etc. It is fairly easy then to evaluate the depth and breadth of the developer and user communities. It is useful to check, via wikipedia and other open source project websites, whether there are competing initiatives, whether the project is getting support from one of the larger foundations (eg, FSF, Apache, etc), and if there is competition then what trends there are in terms of which tools seem to be “winning out”. Once a critical mass is reached and/or once a certain level of standardization has been achieved (through things like IETF, ISO, RFC’s, etc), one can be fairly confident that a tool will be around for a very long time (eg, no one questions the particular voltage and amp levels coming out of our wall sockets) and even if a tool does become obsolete, there will be many users and developers also contending with this issue, and many well-defined and well-publicized “migration paths” to ensure continued functioning, accessibility, etc.”
  • Are students involved in the platform’s development? Does this dialogue present an opportunity for students to learn about the process of technological development, to see “inside the black box” of their technical tools, to develop a skill set and critical vocabulary that will aid them not only in their own projects, but in the collaborative process?
    • Students should be asked for feedback on technical design; this conversation needs to happen as part of a structured dialogue, so it’s made clear to students what would be required to implement their requests — and whether or not such implementation is even feasible. Students should also be encouraged to translate their technical snafus — bugs, error messages, etc. — into opportunities to learn about how technology functions, about its limits, and about how to fix it when it’s not cooperating. Ideally, students should have a sense of ownership over not only their own projects, but also the platform on which they’re built.
    • I wrote about some of these frustrations turned into positive learning experiences in regard to my Fall 2011 Urban Media Archaeology class. Besides, these hiccups — and yes, on occasion, outright disasters — are an inevitable part of any technological development process. The error-laden development process defines every project out in the “real world”; why should a technological development project taking place within the context of an academic class be artificially “smoothed out” for students, artificially error-free?

Academic Integrity & Openness

  • Does the project evidence sound scholarship, which upholds traditional codes of academic integrity (which, of course, might need to be adapted for an age in which “publishing” and “authorship” mean something quite different than they did when many of these standards were developed)?
  • Does it credit sources where appropriate, and, if possible, link out to those sources? Does it acknowledge precedents and sources of conceptual or technical inspiration?
    • For my classes, I’ve made special arrangements with several institutions for copyright clearances and waiver of reproduction fees. In other cases, students will have to negotiate (with the collections’ and my assistance) copyright clearances; this is a good experience for them!
  • Does the project include credits for all collaborators, including even those performing roles that might not traditionally be credited?
  • “Is it accessible to the community of study?” (MLA) Is the final “product” available and functional for all its intended users – and open enough to accommodate even unexpected audiences? Is the process sufficiently well documented to make the intention behind and creation of the project accessible and intelligible to its publics?
    • Telling students that their work will be publicly accessible, and that it could have potential resonance in the greater world, can be a great motivator. Of course some students might feel vulnerable about trying out new ideas and skills in public view — and teachers should consider whether certain development stages should take place in a secure, off-line area.
  • “Do others link to it? Does it link out well?” (MLA) Does the project make clear its web of influence and associations, and demonstrate generosity in giving credit where it’s due?
    • Emphasizing proper citations — of data, archival work, even human resources that have contributed to the project — reinforces the fact that academic integrity matters even within the context of a nontraditional research project, and it allows both the students and the collaborating institutions and individuals to benefit from their affiliation — e.g., the archives can show that researchers are using their material, and the students can take pride in being associated with these external organizations.

Review & Critique

  • “Have there been any expert consultations? Has this been shown to others for expert opinion?” (MLA) Given the myriad domains of expertise that most multimodal projects draw upon, those “experts” might be of many varieties: experts in the subject matter, experts in graphic design, experts in motion graphics, experts in user experience, experts in database design, etc.
  • “Has the work been reviewed? Can it be submitted for peer review?… Has the work been presented at conferences?… Have papers or reports about the project been published?” (MLA) Writing up the work for publication or presentation at conferences elicits feedback. Grant-seeking also gives one an opportunity to subject the project to critique. There are also a few publications focusing on multimodal work — e.g., Vectors, Kairos, Sensate — that have developed, or are developing, their own evaluative criteria.
    • Individual students in my classes have presented their own projects at conferences, submitted them to multimodal journals, or written about their multimodal work for more traditional journals. More informal, though no less helpful, forms of “peer review” can take place in the classroom — through design critiques with external “experts,” student peer-review, etc.

 

Originally published by Shannon Christine Mattern on August 28, 2012.


Additional Resources

Steve Anderson & Tara McPherson, “Engaging Digital Scholarship: Thoughts on Evaluating Multimedia Scholarship,” Profession (2011): 136-151.

Cheryl Ball, “Assessing Scholarly Multimedia: A Rhetorical Genre Studies Approach,” Technical Communication Quarterly 21:1 (2012): 1-17; and “Adapting Editorial Peer Review for Classroom Use,” Writing & Pedagogy (Forthcoming 2013).

Fred Gibbs, “Critical Discourse in the Digital Humanities,” FredGibbs.net (4 November 2011).

Institute for Multimedia Literacy, “IML Project Parameters,” USC School of Cinematic Arts: IML (29 June 2009).

Virginia Kuhn, “The Components of Scholarly Multimedia,” Kairos 12:3 (Summer 2008).

Shannon Christine Mattern, “Evaluating Multimodal Student Work” (11 August 2010).

Shannon Christine Mattern, “Evaluation & Critique of DH Projects” (16 October 2012).

MLA, “Short Guide to Evaluation of Digital Work,” wiki.mla.org (last updated 6 July 2010).

  1. [1] A few months after writing this, Cheryl Ball wrote to let me know that she’s written two fabulous, and highly relevant, articles about multimodal assessment: “Assessing Scholarly Multimedia: A Rhetorical Genre Studies Approach,” Technical Communication Quarterly 21:1 (2012): 1-17 and “Adapting Editorial Peer Review for Classroom Use,” Writing & Pedagogy (Forthcoming 2013). These aren’t journals I’d typically read, so I’m grateful to Cheryl for bringing these articles to my attention!
  2. [2] Steve Anderson & Tara McPherson, “Engaging Digital Scholarship: Thoughts on Evaluating Multimedia Scholarship,” Profession (2011): 136-151, 142.

About Shannon Christine Mattern

Shannon Mattern is an Associate Professor in the School of Media Studies at The New School in New York. Her research and teaching address relationships between the forms and materialities of media and the spaces (architectural, urban, conceptual) they create and inhabit. She has written about libraries and archives, media companies' headquarters, place branding, public design projects, urban media art, media acoustics, media infrastructures, and material texts. You can find her at wordsinspace.net