THATCamp Theory Presentation

This past weekend I led a workshop at THATCamp Theory, at Rutgers, on evaluating and critiquing multimodal projects. I must admit, my talk was kind of a mash-up of two older projects: my CUNY DHI talk from last October (video here) and this post. Above are my slides, and below are my notes.

I unfortunately was able to attend only Day One of the two-day conference (Rory said Day Two was quite the brain-bender). Yet I thoroughly enjoyed the sessions I was able to take part in!


This workshop will focus on developing a critical vocabulary for responding to DH and systems for providing meaningful evaluative feedback, including 1) developing critical evaluative criteria for various formats of multimodal work and 2) identifying theoretical frameworks that inform those criteria. We’ll consider both professional and student projects and spend some time considering how to make project evaluation an integral part of the DH classroom. Depending on the interests of the group, our case studies might include data visualizations, map-based projects, crowdsourced archival projects, and other interactive publications.

  • Recognize that there’s a history of considering “multimodal evaluation” in composition

[SLIDE 2] I’m not fully ensconced in the DH community – but I’m sympathetic to its interest in different forms, practices, and praxes of scholarship.

  • Craft as a useful model for considering how similar intellectual values and practices span domains – reading, writing, making in various modalities
  • But not all making is scholarship

[SLIDES 3-4] McPherson article: Multimodal Humanist – this term, still a mouthful, resonated more with me

[SLIDE 5] Scrivener on when production is research

[SLIDE 6] Question about Feedback & Evaluation — not simply so I could assign a grade, but so we could provide meaningful feedback

  • The work – particularly the technical skills involved – was sometimes outside my area of expertise
  • How to balance weighting of form and content – “rigor” in concept or execution?
  • Individual vs. Group Accountability

[SLIDE 7] Revisited the list of criteria two years later

[SLIDES 8-10] Fall 2010 / 2011 / 2012 : Urban Media Archaeology

  • [SLIDE 11] Semester Schedule – discuss theories representing each unit
  • [SLIDE 12] PROJECT PROPOSALS – not different from trendy “contracts”
    • Justify choice of “genre” and format – use of media tools as method
  • [SLIDES 13-14] Student Proposed Projects
    • Carrier pigeons, electrification of lower Manhattan, video game arcades, newspaper company headquarters, “media actors” in Atlantic Yards using actor-network theory, etc.
    • I provide individual feedback; students post to blogs and classmates comment
    • This semester’s class hasn’t yet posted their proposals online
  • [SLIDE 15] Learn Data Modeling (interface now looks a bit different)
  • [SLIDE 16] User Scenarios
  • [SLIDE 17] Look inside Black Box – Software Development
  • [SLIDE 18] Pecha Kucha
    • DH projects inherently collaborative – need experts from multiple fields
  • [SLIDE 19] All the while, we’re collectively developing criteria for evaluation:
    • [SLIDE 20] By working in small groups and as a class to evaluate other “multimodal projects” + Hypercities
    • [SLIDE 21] Through individual map critiques
    • Through Peer Review of one another’s projects
  • [SLIDE 22] Process Blogs – Self-Evaluation
    • Make public their process
      • [SLIDE 23] Discuss work w/ other public/cultural institutions – e.g., archives
    • [SLIDES 24-26] Practice “critical self-consciousness” – about their work processes, choice of methods, media formats, etc.
    • Hold themselves accountable for their choices
  • [SLIDE 27] Peer Evaluation: Paper Prototypes
  • Final Presentation: [SLIDE 28] My Feedback + [SLIDE 29] Students’ Peer Reviews

[SLIDE 30] Where was theory throughout?

  • Underlying the entire project, informing their understanding of the way cities work, informing their understanding of how maps work as media, informing how they design their data models, which are in turn shaped by how they want their projects to look for users – thus, theories about the visualization of data mix in with their theories about how databases work
  • And in order for students to know how we were going to evaluate success, these theories had to be made an integral part of our development process

[SLIDE 31] Through critique, we’ll reverse-engineer student and professional projects and identify the theories that informed them

  • [SLIDE 32] From my list of evaluative criteria – Concept + Content; Concept-/Content-Driven Design + Technique; Documentation and Transparent, Collaborative Development; Academic Integrity and Openness; Review and Critique – are backed by theories: theories central to the project’s content, theories of design, theories of knowledge production, theories of labor, etc.
  • [CLICK] But we’ll focus on the few dimensions that are overtly theoretical, and that we can potentially discern in a quick review, in the short time we have here
  • [SLIDE 33] Break up into groups and assess the Concept + Content and Concept-/Content-Driven Design + Technique of a few sample DH projects, and reverse-engineer the theories that might’ve informed their creation

[SLIDE 34] Case Studies:

  • These are the cases we choose from in my UMA class.
  • Solicit ideas for classes of projects to critique (e.g., data visualizations, map-based projects, crowd-sourced archival project, interactive publications)
  • Solicit ideas for specific projects groups can collaboratively assess






Evaluating Multimodal Work, Revisited

“Evaluating Multimodal Work, Revisited,” Journal of Digital Humanities 1:4 (Fall 2012).



Photo: Cat Sidh on Flickr

Two years ago I was preparing for a semester in which all of my classes involved “multimodal” student work — that is, theoretically informed, research-based work that resulted in something other than a traditional paper. For years I’d been giving students in my classes the option of submitting, for at least one of their semester assignments, a media production or creative project (accompanied by a support paper in which they addressed how their work functioned as “scholarship”) — but given that this cross-platform work would now become the norm, I thought I should take some time to think about how to fairly and helpfully evaluate these projects. How do we know what’s good?

So I wrote up a little pseudo-literature review, “Evaluating Multimodal Student Work,” which, I was glad to see, quite a few folks seemed to find helpful. Since I’ve been explicitly discussing the process and politics of evaluation with my students for a couple years now — and because I was invited to lead a workshop on Evaluation & Critique of DH Projects at the upcoming THATCamp Theory at Rutgers in October — I thought I should revisit the issue.

I’ll try to clean up that post from two years ago and add some insights I’ve gleaned from other sources since then, including the collection of essays on “Evaluating Digital Scholarship” that came out in the MLA’s Profession late last year. In recent years the MLA and other professional organizations have made statements and produced guides regarding how “digital scholarship” should be assessed in faculty (re)appointment and review — and these statements are indeed valuable resources — but I’m more interested here in how to evaluate student work.

[UPDATE 11/12: A few months after posting this, Cheryl Ball wrote to let me know that she’s written two fabulous, and highly relevant, articles about multimodal assessment: “Assessing Scholarly Multimedia: A Rhetorical Genre Studies Approach,” Technical Communication Quarterly 21:1 (2012): 1-17, and “Adapting Editorial Peer Review for Classroom Use,” Writing & Pedagogy (Forthcoming 2013). These aren’t journals I’d typically read, so I’m grateful to Cheryl for bringing these articles to my attention!]

Alright, then….

*   *   *   *   *

A different take on multimedia evaluation: television-set testing at Underwriters Labs.

In most of my classes we spend a good deal of time examining projects similar to those we’re creating — other online exhibitions, data visualizations, mapping projects, etc., both those created by fellow students and “aspirational” professional projects that we could never hope to achieve over the course of a semester – and assessing their strengths and weaknesses. Exposing students to a variety of “multimedia genres” helps them to see that virtually any mode of production can be scholarly if produced via a scholarly process (we could certainly debate what that means), and can be subjected to critical evaluation.

Steve Anderson’s “Regeneration: Multimedia Genres and Emerging Scholarship” acknowledges the various genres — and “voices” and “registers” and “modes” of presentation – that can be made into multimedia scholarship. Particularly helpful, I think, is his acknowledgment that narrative — and, I would add, personal expression — can have a place in scholarship. Some students, I imagine, might have a hard time seeing how the same technologies they use to watch entertainment media, the same crowd-sourced maps they use to rate their favorite vegan bakeries or upload hazy Instagrams from their urban dérives – the same platforms they’re frequently told to use to “express themselves” – can be used as platforms for research and theorization. Personal expression and storytelling can still play a role in these multimodal research projects, but one in service of a larger goal; as Anderson says, “narrative may productively serve as an element of a scholarly multimedia project but should not serve as an end in itself.”

The class as a whole, with the instructor’s guidance, can evaluate a selection of existing multimodal scholarly projects and generate a list of critical criteria before students attempt their own critiques – perhaps first in small groups, then individually. Asking the students to write and/or present formal “reader’s reports” – or, in my classes, exhibition or map critiques – and equipping them with a vocabulary tends to push their evaluation beyond the “I like it” / “I don’t like it” /“There’s too much going on” / “I didn’t get it” territory. The fact that users’ evaluations frequently reside within this superficial “I (don’t) like it” domain is not necessarily due to any lack of serious engagement or interest on their part, but may be attributable to the fact that they (faculty included!) don’t always know what criteria should be informing their judgment, or what language is typically used in or is appropriate for such a review.

Once students have applied a set of evaluative criteria to a wide selection of existing projects, they can eventually apply those same criteria to their own work, and to their peers’. (Cheryl Ball has designed a great “peer review” exercise for her undergraduate “Multimodal Composition” class.)


After reviewing a great deal of existing literature and assessment models – all of which, despite significant overlap, have their own distinctive vocabularies – I thought it best to consolidate all those models and test them against our on-the-ground experience in the classroom over the past several years, to develop a single, manageable list of evaluative criteria.

Steve Anderson and Tara McPherson remind us of the importance of exercising flexibility in applying these criteria in our evaluation of “multimedia scholarship.” What follows should not be regarded as a checklist. Not all these criteria are appropriate for all projects, and there are good reasons some projects might choose to go against the grain. Referring to the MLA’s suggestion that projects be judged based on how they “link to other projects,” for instance, Anderson and McPherson note that linking may be a central goal for some projects, but, “linking itself should not be an inflexible standard for how multimedia scholarship gets evaluated.” Nor should the use of “open standards,” like open-source platforms – which, while generally desirable, isn’t always possible (142).

The following is a mash-up of these sources, with some of my own insight mixed in: Steve Anderson & Tara McPherson, “Engaging Digital Scholarship: Thoughts on Evaluating Multimedia Scholarship,” Profession (2011): 136-151; Fred Gibbs, “Critical Discourse in the Digital” (4 November 2011); Institute for Multimedia Literacy, “IML Project Parameters,” USC School of Cinematic Arts: IML (29 June 2009); Virginia Kuhn, “The Components of Scholarly Multimedia,” Kairos 12:3 (Summer 2008); MLA, “Short Guide to Evaluation of Digital Work” (last updated 6 July 2010).


  • Is there a strong thesis or argument at the core of this project? Does the project clearly articulate, or in some way make “experiential,” this conceptual “core”? Is this conceptual core effectively developed through the various segments or dimensions of the project?
  • “Does the project display evidence of substantial research and thoughtful engagement with its subject?” (IML) Does it effectively “triangulate” a variety of sources and make use of a variety of media formats?
  • Is the platform merely a substrate for a “cool data set” or a set of media objects – or are individual “pieces” of content – data and media in various formats, etc. – contextualized? Are they linked together into a compelling argument?
  • Is the data sufficiently “enriched”? (MLA) Is it annotated, linked, cited, supplemented with support media, etc., where appropriate?
  • Does the project exploit the “repurpose-ability” of data? Does it pull in, and effectively re-contextualize, data from other projects? (Students should also recognize that their own data can, and should, be similarly repurposed.) This recognition that individual records – a photo or video a student uploads, or a data-set they import, etc. – can serve different purposes in different projects offers students great insight into research methodology, into the politics of research, into questions regarding who gets to make knowledge, etc. As Fred Gibbs acknowledges, discussing how a project uses data also “encourages conversations about ownership [and] copyright.”


  • Does the project’s form suit its concept and content? “Do structural and formal elements of the project reinforce the conceptual core in a productive way?” (IML)
  • Is the delivery system robust? Do the chosen platforms or modes of delivery “fit” and “do justice to” the subject matter? Need this have been a multimedia project, or could it just as easily have been executed on paper?
  • Does the project “exhibit an understanding of the affordances of the tools used,” and does it exploit those affordances as best possible – and perhaps acknowledge and creatively “work around” known limitations?
  • Is there a “graceful balance of familiar scholarly gestures and multimedia expression which mobilizes the scholarship in new ways?” (IML) A balance of the old and familiar, to help users feel that they can rely on their tried-and-true codes of consumption; and the new, to encourage engagement and promote reconsideration of our traditional ways of knowing?
  • At the same time, do the project creators seem to exercise control over their technology? Or does technology seem to be used gratuitously or haphazardly? “Are design decisions deliberate and controlled?” Does the project “demonstrat[e] authorial intention by providing the user with a carefully planned structure, often made manifest through a navigation scheme?” (IML)
  • Do the project creators seem to understand their potential users, and have they designed the project so it accommodates those various audiences and uses?
  • How does the interface function “rhetorically” in the project? Does it inform user experience in a way that supports the project’s conceptual core and argument? Does it effectively organize the “real estate” of the screen to acknowledge and put into logical relationships the key components – subject content, technical tools, etc. – of the project?
  • Has the project been tested? Are there plans for continual testing and iterative development? Is the project adaptable?

Perhaps in some utopian future, when cognitive science is integrated into *all* disciplines, we can use brain scans as a form of assessment. Just kidding! Photo: Simon Fraser/Science Photo Library


  • Do project creators practice self-reflexivity? Do they “accoun[t] for the authorial understanding of the production choices made in constructing the project?” (IML)
  • Do project creators document their research and creative processes, and describe how those processes contributed to their projects’ “formal structure and thematic concerns?” (IML) McPherson and Anderson (2011) also emphasize the importance of “finely grained accounts of the processes involved in the production of multimedia scholarship in order to evaluate properly the labor required in such research” (142).
  • Do project developers document and/or otherwise communicate their process – perhaps through a “ticket” system like Trac – and make it transparent and understandable to students?
    • As Rory Solomon pointed out in a comment on my earlier blog post, the adoption of open standards also contributes to longevity: “…the open source movement provides means to help minimize these concerns in that open source projects provide many ways to evaluate a given software tool / format / platform. Any serious project will have an open, public web presence, including developer and user mailing lists, documentation, and etc. It is fairly easy then to evaluate the depth and breadth of the developer and user communities. It is useful to check, via wikipedia and other open source project websites, whether there are competing initiatives, whether the project is getting support from one of the larger foundations (eg, FSF, Apache, etc), and if there is competition then what trends there are in terms of which tools seem to be “winning out”. Once a critical mass is reached and/or once a certain level of standardization has been achieved (through things like IETF, ISO, RFC’s, etc), one can be fairly confident that a tool will be around for a very long time (eg, no one questions the particular voltage and amp levels coming out of our wall sockets) and even if a tool does become obsolete, there will be many users and developers also contending with this issue, and many well-defined and well-publicized “migration paths” to ensure continued functioning, accessibility, etc.”
  • Are students involved in the platform’s development? Does this dialogue present an opportunity for students to learn about the process of technological development, to see “inside the black box” of their technical tools, to develop a skill set and critical vocabulary that will aid them not only in their own projects, but in the collaborative process?
    • Students should be asked for feedback on technical design; this conversation needs to happen as part of a structured dialogue, so it’s made clear to students what would be required to implement their requests — and whether or not such implementation is even feasible. Students should also be encouraged to translate their technical snafus — bugs, error messages, etc. — into opportunities to learn about how technology functions, about its limits, and about how to fix it when it’s not cooperating. Ideally, students should have a sense of ownership over not only their own projects, but also the platform on which they’re built.
    • I wrote about some of these frustrations-turned-into-positive learning-experiences in regard to my Fall 2011 Urban Media Archaeology class. Besides, these hiccups — and yes, on occasion, outright disasters — are an inevitable part of any technological development process. The error-laden development process defines every project out in the “real world”; why should a technological development project taking place within the context of an academic class be artificially “smoothed out” for students, artificially error-free?


  • Does the project evidence sound scholarship, which upholds all the traditional codes of academic integrity?
  • Does it credit sources where appropriate, and, if possible, link out to those sources? Does it acknowledge precedents and sources of conceptual or technical inspiration?
    • For my classes, I’ve made special arrangements with several institutions for copyright clearances and waiver of reproduction fees. In other cases, students will have to negotiate (with the libraries’ and my assistance) copyright clearances; this is a good experience for them!
  • Does the project include credits for all collaborators, including even those performing roles that might not traditionally be credited?
  • “Is it accessible to the community of study?” (MLA) Is the final “product” available and functional for all its intended users – and open enough to accommodate even unexpected audiences? Is the process sufficiently well documented to make the intention behind and creation of the project accessible and intelligible to its publics?
    • Telling students that their work will be publicly accessible, and that it could have potential resonance in the greater world, can be a great motivator. Of course some students might feel vulnerable about trying out new ideas and skills in public view — and teachers should consider whether certain development stages should take place in a secure, off-line area.
  • “Do others link to it? Does it link out well?” (MLA) Does the project make clear its web of influence and associations, and demonstrate generosity in giving credit where it’s due?
    • Emphasizing proper citations – of data, archival work, even human resources that have contributed to the project – reinforces the fact that academic integrity matters even within the context of a nontraditional research project, and it allows both the students and the collaborating institutions to benefit from their affiliation – e.g., the archives can show that researchers are using their material, and the students can take pride in being associated with these external organizations.


  • “Have there been any expert consultations? Has this been shown to others for expert opinion?” (MLA) Given the myriad domains of expertise that most multimodal projects draw upon, those “experts” might be of many varieties: experts in the subject matter, experts in graphic design, experts in motion graphics, experts in user experience, experts in database design, etc.
  • “Has the work been reviewed? Can it be submitted for peer review?… Has the work been presented at conferences?… Have papers or reports about the project been published?” (MLA) Writing up the work for publication or presentation at conferences elicits feedback. Grant-seeking also gives one an opportunity to subject the project to critique. There are also a few publications focusing on multimodal work — e.g., Vectors, Kairos, Sensate — that are developing their own evaluative criteria.
    • Individual students in my classes have presented their own projects at conferences, submitted them to multimodal journals, or written about their multimodal work for more traditional journals. More informal, though no less helpful, forms of “peer review” can take place in the classroom — through design critiques with external “experts,” student peer-review, etc.