
Evaluating Multimodal Work, Revisited

Cat Sidh on Flickr 

Two years ago I was preparing for a semester in which all of my classes involved “multimodal” student work — that is, theoretically-informed, research-based work that resulted in something other than a traditional paper. For years I’d been giving students in my classes the option of submitting, for at least one of their semester assignments, a media production or creative project (accompanied by a support paper in which they addressed how their work functioned as “scholarship”) — but given that this cross-platform work would now become the norm, I thought I should take some time to think about how to fairly and helpfully evaluate these projects. How do we know what’s good?

So I wrote up a little pseudo-literature review, “Evaluating Multimodal Student Work,” which, I was glad to see, quite a few folks seemed to find helpful. Since I’ve been explicitly discussing the process and politics of evaluation with my students for a couple years now — and because I was invited to lead a workshop on Evaluation & Critique of DH Projects at the upcoming THATCamp Theory at Rutgers in October — I thought I should revisit the issue.

I’ll try to clean up that post from two years ago and add some insights I’ve gleaned from other sources since then, including the collection of essays on “Evaluating Digital Scholarship” that came out in the MLA’s Profession late last year. In recent years the MLA and other professional organizations have made statements and produced guides regarding how “digital scholarship” should be assessed in faculty (re)appointment and review — and these statements are indeed valuable resources — but I’m more interested here in how to evaluate student work.

[UPDATE 11/12: A few months after posting this, Cheryl Ball wrote to let me know that she’s written two fabulous, and highly relevant, articles about multimodal assessment: “Assessing Scholarly Multimedia: A Rhetorical Genre Studies Approach,” Technical Communication Quarterly 21:1 (2012): 1-17, and “Adapting Editorial Peer Review for Classroom Use,” Writing & Pedagogy (forthcoming 2013). These aren’t journals I’d typically read, so I’m grateful to Cheryl for bringing these articles to my attention!]

Alright, then….

*   *   *   *   *

A different take on multimedia evaluation. Television-set testing at Underwriters Labs.

In most of my classes we spend a good deal of time examining projects similar to those we’re creating — other online exhibitions, data visualizations, mapping projects, etc., both those created by fellow students and “aspirational” professional projects that we could never hope to achieve over the course of a semester – and assessing their strengths and weaknesses. Exposing students to a variety of “multimedia genres” helps them to see that virtually any mode of production can be scholarly if produced via a scholarly process (we could certainly debate what that means), and can be subjected to critical evaluation.

Steve Anderson’s “Regeneration: Multimedia Genres and Emerging Scholarship” acknowledges the various genres — and “voices” and “registers” and “modes” of presentation – that can be made into multimedia scholarship. Particularly helpful, I think, is his acknowledgment that narrative — and, I would add, personal expression — can have a place in scholarship. Some students, I imagine, might have a hard time seeing how the same technologies they use to watch entertainment media, the same crowd-sourced maps they use to rate their favorite vegan bakeries or upload hazy Instagrams from their urban dérives – the same platforms they’re frequently told to use to “express themselves” – can be used as platforms for research and theorization. Personal expression and storytelling can still play a role in these multimodal research projects, but one in service of a larger goal; as Anderson says, “narrative may productively serve as an element of a scholarly multimedia project but should not serve as an end in itself.”

The class as a whole, with the instructor’s guidance, can evaluate a selection of existing multimodal scholarly projects and generate a list of critical criteria before students attempt their own critiques – perhaps first in small groups, then individually. Asking the students to write and/or present formal “reader’s reports” – or, in my classes, exhibition or map critiques – and equipping them with a vocabulary tends to push their evaluation beyond the “I like it” / “I don’t like it” /“There’s too much going on” / “I didn’t get it” territory. The fact that users’ evaluations frequently reside within this superficial “I (don’t) like it” domain is not necessarily due to any lack of serious engagement or interest on their part, but may be attributable to the fact that they (faculty included!) don’t always know what criteria should be informing their judgment, or what language is typically used in or is appropriate for such a review.

Once students have applied a set of evaluative criteria to a wide selection of existing projects, they can eventually apply those same criteria to their own work, and to their peers’. (Cheryl Ball has designed a great “peer review” exercise for her undergraduate “Multimodal Composition” class.)

EVALUATIVE CRITERIA

After reviewing a great deal of existing literature and assessment models – all of which, despite significant overlap, have their own distinctive vocabularies – I thought it best to consolidate all those models and test them against our on-the-ground experience in the classroom over the past several years, to develop a single, manageable list of evaluative criteria.

Steve Anderson and Tara McPherson remind us of the importance of exercising flexibility in applying these criteria in our evaluation of “multimedia scholarship.” What follows should not be regarded as a checklist. Not all these criteria are appropriate for all projects, and there are good reasons some projects might choose to go against the grain. Referring to the MLA’s suggestion that projects be judged based on how they “link to other projects,” for instance, Anderson and McPherson note that linking may be a central goal for some projects, but, “linking itself should not be an inflexible standard for how multimedia scholarship gets evaluated.” Nor should the use of “open standards,” like open-source platforms – which, while generally desirable, isn’t always possible (142).

The following is a mash-up of these sources, with some of my own insight mixed in: Steve Anderson & Tara McPherson, “Engaging Digital Scholarship: Thoughts on Evaluating Multimedia Scholarship,” Profession (2011): 136-151; Fred Gibbs, “Critical Discourse in the Digital Humanities,” FredGibbs.net (4 November 2011); Institute for Multimedia Literacy, “IML Project Parameters,” USC School of Cinematic Arts: IML (29 June 2009); Virginia Kuhn, “The Components of Scholarly Multimedia,” Kairos 12:3 (Summer 2008); MLA, “Short Guide to Evaluation of Digital Work,” wiki.mla.org (last updated 6 July 2010).

CONCEPT & CONTENT

  • Is there a strong thesis or argument at the core of this project? Does the project clearly articulate, or in some way make “experiential,” this conceptual “core”? Is this conceptual core effectively developed through the various segments or dimensions of the project?
  • “Does the project display evidence of substantial research and thoughtful engagement with its subject?” (IML) Does it effectively “triangulate” a variety of sources and make use of a variety of media formats?
  • Is the platform merely a substrate for a “cool data set” or a set of media objects – or are individual “pieces” of content – data and media in various formats, etc. – “contextualized”? Are they linked together into a compelling argument?
  • Is the data sufficiently “enriched”? (MLA) Is it annotated, linked, cited, supplemented with support media, etc., where appropriate?
  • Does the project exploit the “repurpose-ability” of data? Does it pull in, and effectively re-contextualize, data from other projects? (Students should also recognize that their own data can, and should, be similarly repurposed.) This recognition that individual records – a photo or video a student uploads, or a data-set they import, etc. – can serve different purposes in different projects offers students great insight into research methodology, into the politics of research, into questions regarding who gets to make knowledge, etc. As Fred Gibbs acknowledges, discussing how a project uses data also “encourages conversations about ownership [and] copyright.”

CONCEPT/CONTENT-DRIVEN DESIGN & TECHNIQUE

  • Does the project’s form suit its concept and content? “Do structural and formal elements of the project reinforce the conceptual core in a productive way?” (IML)
  • Is the delivery system robust? Do the chosen platforms or modes of delivery “fit” and “do justice to” the subject matter? Need this have been a multimedia project, or could it just as easily have been executed on paper?
  • Does the project “exhibit an understanding of the affordances of the tools used,” and does it exploit those affordances as best possible – and perhaps acknowledge and creatively “work around” known limitations?
  • Is there a “graceful balance of familiar scholarly gestures and multimedia expression which mobilizes the scholarship in new ways?” (IML) A balance of the old and familiar, to help users feel that they can rely on their tried-and-true codes of consumption; and the new, to encourage engagement and promote reconsideration of our traditional ways of knowing?
  • At the same time, do the project creators seem to exercise control over their technology? Or does technology seem to be used gratuitously or haphazardly? “Are design decisions deliberate and controlled?”… Does the project “demonstrat[e] authorial intention by providing the user with a carefully planned structure, often made manifest through a navigation scheme?” (IML)
  • Do the project creators seem to understand their potential users, and have they designed the project so it accommodates those various audiences and uses?
  • How does the interface function “rhetorically” in the project? Does it inform user experience in a way that supports the project’s conceptual core and argument? Does it effectively organize the “real estate” of the screen to acknowledge and put into logical relationships the key components – subject content, technical tools, etc. – of the project?
  • Has the project been tested? Are there plans for continual testing and iterative development? Is the project adaptable?

Perhaps in some utopian future, when cognitive science is integrated into *all* disciplines, we can use brain scans as a form of assessment. Just kidding! Photo: Simon Fraser/Science Photo Library via Guardian.uk

DOCUMENTATION AND TRANSPARENT, COLLABORATIVE DEVELOPMENT

  • Do project creators practice self-reflexivity? Do they “accoun[t] for the authorial understanding of the production choices made in constructing the project?” (IML)
  • Do project creators document their research and creative processes, and describe how those processes contributed to their projects’ “formal structure and thematic concerns?” (IML) McPherson and Anderson (2011) also emphasize the importance of “finely grained accounts of the processes involved in the production of multimedia scholarship in order to evaluate properly the labor required in such research” (142).
  • Do project developers document and/or otherwise communicate their process – perhaps through a “ticket” system like Trac – and make it transparent and understandable to students?
    • As Rory Solomon pointed out in a comment on my earlier blog post, the adoption of open standards also contributes to longevity: “…the open source movement provides means to help minimize these concerns in that open source projects provide many ways to evaluate a given software tool / format / platform. Any serious project will have an open, public web presence, including developer and user mailing lists, documentation, and etc. It is fairly easy then to evaluate the depth and breadth of the developer and user communities. It is useful to check, via wikipedia and other open source project websites, whether there are competing initiatives, whether the project is getting support from one of the larger foundations (eg, FSF, Apache, etc), and if there is competition then what trends there are in terms of which tools seem to be “winning out”. Once a critical mass is reached and/or once a certain level of standardization has been achieved (through things like IETF, ISO, RFC’s, etc), one can be fairly confident that a tool will be around for a very long time (eg, no one questions the particular voltage and amp levels coming out of our wall sockets) and even if a tool does become obsolete, there will be many users and developers also contending with this issue, and many well-defined and well-publicized “migration paths” to ensure continued functioning, accessibility, etc.”
  • Are students involved in the platform’s development? Does this dialogue present an opportunity for students to learn about the process of technological development, to see “inside the black box” of their technical tools, to develop a skill set and critical vocabulary that will aid them not only in their own projects, but in the collaborative process?
    • Students should be asked for feedback on technical design; this conversation needs to happen as part of a structured dialogue, so it’s made clear to students what would be required to implement their requests — and whether or not such implementation is even feasible. Students should also be encouraged to translate their technical snafus — bugs, error messages, etc. — into opportunities to learn about how technology functions, about its limits, and about how to fix it when it’s not cooperating. Ideally, students should have a sense of ownership over not only their own projects, but also the platform on which they’re built.
    • I wrote about some of these frustrations-turned-into-positive learning-experiences in regard to my Fall 2011 Urban Media Archaeology class. Besides, these hiccups — and yes, on occasion, outright disasters — are an inevitable part of any technological development process. The error-laden development process defines every project out in the “real world”; why should a technological development project taking place within the context of an academic class be artificially “smoothed out” for students, artificially error-free?

ACADEMIC INTEGRITY & OPENNESS

  • Does the project evidence sound scholarship, which upholds all the traditional codes of academic integrity?
  • Does it credit sources where appropriate, and, if possible, link out to those sources? Does it acknowledge precedents and sources of conceptual or technical inspiration?
    • For my classes, I’ve made special arrangements with several institutions for copyright clearances and waiver of reproduction fees. In other cases, students will have to negotiate (with the libraries’ and my assistance) copyright clearances; this is a good experience for them!
  • Does the project include credits for all collaborators, including even those performing roles that might not traditionally be credited?
  • “Is it accessible to the community of study?” (MLA) Is the final “product” available and functional for all its intended users – and open enough to accommodate even unexpected audiences? Is the process sufficiently well documented to make the intention behind and creation of the project accessible and intelligible to its publics?
    • Telling students that their work will be publicly accessible, and that it could have potential resonance in the greater world, can be a great motivator. Of course some students might feel vulnerable about trying out new ideas and skills in public view — and teachers should consider whether certain development stages should take place in a secure, off-line area.
  • “Do others link to it? Does it link out well?” (MLA) Does the project make clear its web of influence and associations, and demonstrate generosity in giving credit where it’s due?
    • Emphasizing proper citations – of data, archival work, even human resources that have contributed to the project – reinforces the fact that academic integrity matters even within the context of a nontraditional research project, and it allows both the students and the collaborating institutions to benefit from their affiliation – e.g., the archives can show that researchers are using their material, and the students can take pride in being associated with these external organizations.

REVIEW & CRITIQUE

  • “Have there been any expert consultations? Has this been shown to others for expert opinion?” (MLA) Given the myriad domains of expertise that most multimodal projects draw upon, those “experts” might be of many varieties: experts in the subject matter, experts in graphic design, experts in motion graphics, experts in user experience, experts in database design, etc.
  • “Has the work been reviewed? Can it be submitted for peer review?… Has the work been presented at conferences?… Have papers or reports about the project been published?” (MLA) Writing up the work for publication or presentation at conferences elicits feedback. Grant-seeking also gives one an opportunity to subject the project to critique. There are also a few publications focusing on multimodal work — e.g., Vectors, Kairos, Sensate — that are developing their own evaluative criteria.
    • Individual students in my classes have presented their own projects at conferences, submitted them to multimodal journals, or written about their multimodal work for more traditional journals. More informal, though no less helpful, forms of “peer review” can take place in the classroom — through design critiques with external “experts,” student peer-review, etc.

Urban Media Archaeology, Round 3

This was the Weekend of Websites! Last night I finished my second course site for the semester — this time for Urban Media Archaeology, which I’m teaching (with my trusty sidekick Rory) for the third semester. I have yet to work out a few details, including confirming our guest speaker for our “critical cartography” lesson (I’ve been in conversation with someone fantastic, but we haven’t yet sealed the deal) and our pecha kucha critics — but, for the most part, everything’s in place!

In a reprise of his 2010 meeting with us, Andrew Blum‘s taking us on another “walking tour of the Internet” — but this time, because the class meets a bit later at night than before, Andrew wants to make it “spooky.” I’m excited to see what that means. We also have two former students joining us to talk about their past projects, on independent bookstores and key sites in the Fluxus movement. And as usual, we’ll talk about media archaeology, “multimodal scholarship,” archival research, the history and politics of mapping, software development, data modeling, and modes of multimedia argumentation; the students will do weekly map critiques and a group-critique of a selection of prominent Digital Humanities projects; and we’ll have a mid-semester pecha kucha. Should be fun, as usual! (knock on wood)


Archival Projects from the Archive

Mormon Genealogical Archive Vaults, Little Cottonwood Canyon, UT, via http://bit.ly/IShv9J

At the end of every semester for the past few years I’ve tried to write a recap post for each of my classes. I share highlights from the students’ projects and reflect on what we accomplished, and what we might do differently next time. However, back in Spring 2011, when I first taught my Archives, Libraries & Databases class, I neglected to write a summary post — probably because I was freaking out about the tenure dossier that was due in two months.

So, for the benefit of students in my current section of ALD, who might like to see what their predecessors have done  — and for my own enjoyment, since I get to revisit some fantastic work and remember some fabulous students from the past — I’ll briefly summarize the Spring 2011 projects here:

  • Grace examined soil painting, dance, and song as archival practices among the Talaandig tribe in Bukidnon Province, the Philippines — which is where Grace is from.
  • Lily examined the influence of Belle Da Costa Greene, Pierpont Morgan’s personal librarian, in shaping not only the Morgan Library, but also the field of librarianship.
  • Sue studied various cases in which photography has been used to archive urban redevelopment.
  • Chris offered a fabulous psychoanalytic reading — using the work of Derrida and Carolyn Steedman — of the Mormon Archive.
  • Allison, who worked for the New York City Ballet, discussed historical and recent attempts to archive live dance performance, and her discussion included various approaches to dance notation.
  • Christo explored the spatiality of databases: the space occupied by databases’ technical infrastructure; the departmental spaces linked together by an institution’s (e.g., police or immigration) databases; and the geographic spaces from which data is drawn and then housed together in a database.
  • Chris, who worked for UNICEF, critically assessed his own team’s efforts to introduce digital kiosks and SMS-based systems to increase access to information and “mirror the work of public libraries” in Africa.
  • Danielle examined the evolving material form of the book, and how that morphing object necessitates changes within the institutions charged with selling, storing, and cataloging it.
  • Maria, a native of Bogotá, examined her city’s network of public libraries — comprised of dozens of architecturally significant buildings constructed within the past 15 years — and the vital role they play in civic life.
  • Stephen, who maintains his own extensive database of videogame artwork, considered the notion of “fidelity” in regard to the archived, born-digital image.
  • Kelly conducted fieldwork in public libraries in and around Crown Heights, Brooklyn, to see how teenagers were being served, if at all.
  • Ran examined the archival practices (including how various media formats are processed) and politics of the Lesbian Herstory Archives.
  • Darrell studied the creation of the Fugazi Live Series by consulting with the band and participating archivists.
  • Nick questioned the notion of the “document” in the work of Walid Raad and The Atlas Group.
  • Steve dug into the Stasi archive, focusing in particular on the epistemic shift – the intellectual “renovation” of the archives – that accompanies a regime change, as well as the political, cultural and affective consequences of that shift.
  • And Rory speculated on ways that libraries might make more material and transparent their systems for classifying and storing knowledge, particularly those forms of digital knowledge that seem to have no material body.

Fugazi Live Series from Bill McKenna on Vimeo.


New Archives, Libraries & Databases Course Website

The website for my Fall 2012 section of Archives, Libraries & Databases is up! There’s a lot of continuity since I last taught the class a year-and-a-half ago, but I did a thorough review of all the material and decided to do a little rearranging and refreshing, too. The “future of the library” debate is raging — so there’s a lot of fantastic new stuff to draw from on that topic. I refreshed some other readings and tweaked some of the assignments. There are a few details I have yet to finalize — e.g., I’m still confirming a couple field trips and guest speakers — but for the most part, it’s finished!

I have until Tuesday to make changes, so if you have any recommendations for revisions or additions — or if you’re interested or expert in any of the topics we’re addressing and you’d like to stop by the class or propose that we visit you — please write to let me know!

Now, on to my Urban Media Archaeology syllabus and website


The Tenure Verdict: My Own Personal Closing Ceremony

Ori Gersht, Blow Up, via http://bit.ly/EBIb1

…WITH TENURE
Nature is constipated the sap doesn’t flow
With tenure the classroom is empty
et in academia ego

the ketchup is stuck inside the bottle
the letter goes unanswered the bell doesn’t ring.

David Lehman’s “With Tenure”

Don’t burst my bubble, David. Let me enjoy this for a little while.

*   *   *   *   *

Last summer I was in the midst of compiling my tenure dossier. I posted about the challenges of writing the Statement of Purpose, the components that made it into my 4″ dossier binder (I also included a copy of my Table of Contents), and the elation I felt at handing over five print and five digital copies of that 882-page *@!%&. In the latter post, I added what was, in my estimation at least (undoubtedly compromised by weeks of mental anguish), a pretty clever poem — a pseudo-cento — about the whole process.


Figure/Ground Asked Me About Academic Careers, Teaching, and Other Stuff

I was interviewed by Figure/Ground Communication, a great grad-student-run website featuring media studies-related academic resources, including interviews with scholars in media and related fields. It’s a particularly useful resource for graduate students who are considering pursuing PhDs and/or work in the academy or, more generally, in education. My New School colleagues Peter Haratonik and Simon Critchley were also interviewed, as were my pal Jussi Parikka, lots of OOO-ers, and a bunch of other folks I greatly admire (e.g., Ann Blair, Elizabeth Eisenstein, etc.).


Miscellaneous Miscellany

Menno Aden, Untitled (Corner Shop II), 2011: http://bit.ly/NvEOx7 (via but does it float)

After returning from Korea early last week, I spent a few days back in New York before heading off for another short trip. I dedicated one of my New York afternoons to catching up on several exhibitions I had missed while I was away. Perhaps it’s simply because I had seen so much art in Seoul that I was quite underwhelmed by the handful of shows I saw in Chelsea, the Ecstatic Alphabets show at MoMA (which I had been looking forward to for sooo long!), and even the Ghosts in the Machine exhibition at The New Museum. I had so wanted to be blown away by the latter — especially after reading provocative previews for months, and after finding uncannily similar, and equally effusive, reviews in the WSJ and The New Yorker early last week. But, unfortunately, it didn’t quite work for me. Maybe it’s because I had encountered some of the work in the exhibition in other contexts. Or maybe it was just the way it was all put together. I appreciated that the curators regarded the show not as a coherent unit offering a tight thematic statement, but instead as an “encyclopedic cabinet of wonders: bringing together an array of artworks and non-art objects to create an unsystematic archive of man’s attempt to reconcile the organic and the mechanical.” Those ideas were just so big — and the show’s modes of address were so varied, resembling at times a pedantic science exhibition and at others an art-historical survey — that I left this “cabinet” overwhelmed not with a sense of “wonder,” but with a bit of frustration…with myself, for not being able to find a gestalt here.

One of the highlights for me was the recreation of Stan VanDerBeek’s 1963 Movie-Drome, which reminded me a bit of Le Corbusier, Xenakis, and Varèse’s 1958 Philips Pavilion.

And speaking of potentially underwhelming miscellanies: I was invited to list the first five websites I visit every morning for Tamsyn Gilbert’s fantastic Tumblr, The First Five, which offers insights into lots of interesting folks’ web diets. I cheated big time and managed to squeeze in several dozen references, but my big five were Google Reader, Design Observer, Atlantic Tech, HiLoBrow, and Pitchfork.