Reading Effectively

You already know how to read, I’m sure. You’ve been reading since first grade — or maybe even before. “Although the words, syntax, and ideas are more complex, isn’t reading in graduate school fundamentally like reading in first grade?” Gary Alan Fine and Shannon K. Fitzsimons ask in a January 2011 article in the Chronicle Review.

It isn’t, of course. Not only is reading Foucault more intellectually challenging than reading Goodnight Moon (although the two have quite a bit in common, both emphasizing omnipresent surveillance), but the application of reading differs. For the most part, earlier reading is an attempt to grasp the meaning of a text so that one can repeat it to an authority, who then judges whether one “got” the ideas. At that level, reading is regurgitation.

In graduate school, reading and the ability to discuss and interpret that reading are simultaneously a means by which a student asserts an academic identity and the basis on which a student can produce new knowledge. And while assignments before graduate school are meant to be read in full, the wise graduate student must learn how to skim in order to manage impossible demands. It is the ability to not read everything—while still reading enough—that represents success in graduate school. [I recommend taking a closer look at Fine and Fitzsimons’s article.]

So, we’re going to take a little time to talk about reading effectively. This is not meant to diminish your enjoyment of reading, to spoil the experience of getting lost in a good novel or to disregard the value of engaging deeply, and patiently, with a challenging but rewarding text. Instead, what follows is meant to help you make decisions about what’s worth that intensive investment of time and energy, and what kinds of texts can be read more efficiently.

*     *     *     *     *

After I locate relevant resources (see my previous lesson on “Finding Sources”), I decide what’s worth a skim, and what requires more patient consideration. I start reading by scanning the chapter titles and index or, if it’s an essay or article, the abstract and subheads. I read the article’s or book’s main introduction and conclusion, then return to the beginning and scan through, focusing on the introductory and concluding paragraphs in each chapter of the book or section of the article. As you read more and more, you’ll have a better and better sense of what you’re looking for, and you’ll start to read more purposefully and efficiently.

Design researchers Carole Gray and Julian Malins, authors of Visualizing Research: A Guide to the Research Process in Art and Design, recommend developing a manageable list of keywords or research descriptors, and keeping those concepts in mind as you read to help you focus and select sections that are worth your time (p. 45). (These same keywords should also help to structure your note-taking.) You should of course scan a work in its entirety to enable you to appreciate the author’s overall argument, but your keywords can help you to identify particularly relevant sections that warrant a closer reading. If you highlight – either on hard copies, or via Acrobat Reader Pro, Diigo, or some other software – consider using color-coding to track your keywords. This color-coding system might even extend to your paper files or folders on your desktop, to help you keep track of which resources apply to which themes.

Historian Michelle Murphy, of the University of Toronto, offers other potentially helpful strategies for approaching a difficult text. First, she says, expect not to “get it” on the first pass. In your first reading, “identify what you understand about the texts, and mark what you do not understand”; “periodically pause and rephrase” your understanding of the argument; list crucial concepts, and look up words you are not familiar with (n.b.: their “theoretical meaning” might be quite different from the dictionary definition; you’ll need to know where to look to understand how these words are being used in context); and “ask yourself if the style [in which] the text is written is important to the argument.”

The second time through, pay close attention to difficult passages, reread them – and if you still don’t get them, look for secondary texts that explicate these passages, or discuss them with a colleague or advisor. Take notes, and once you’ve reached the end, make a summary or map of the argument.

“Ph.D. Reading,” via RLHyde on Flickr 

There’s no substitute for working slowly and methodically through a text. But there are supplemental resources – big kids’ CliffsNotes, if you will – that you might reference after you’ve taken your best stab at a text, to confirm or revise your understanding. The most useful are reputable sources, by noted scholars – which is important. See also the meta-discussion in the key media and cultural studies anthologies: Michael Ryan’s Cultural Studies: An Anthology, Meenakshi Gigi Durham and Douglas Kellner’s Media and Cultural Studies: Keyworks, and Noah Wardrip-Fruin and Nick Montfort’s The New Media Reader. The texts introducing each of the books’ sections provide useful synopses of the anthologized texts – all seminal works in their respective fields. And check out the dictionaries: Raymond Williams’s Keywords: A Vocabulary of Culture and Society and Tony Bennett et al.’s revision, New Keywords: A Revised Vocabulary of Culture and Society.

But, as I’ve said, you’ve got to grapple with the source texts before any of these support texts can help to confirm your understanding. And, indeed, many academic and theoretical texts make for slow and laborious reading. Sometimes this is because the author is working through complex ideas, and occasionally he or she might be writing under the assumption that the reader is already an insider to the field, and thus familiar with the field’s linguistic conventions. And sometimes reading is hard because the writing’s overblown or downright awful (see Orwell’s “Politics and the English Language” and Gerald Graff’s “Scholars and Sound Bites”).

Peter Barry, author of Beginning Theory, offers some guidelines to keep in mind when we encounter intimidating readings:

Firstly, we must have some initial patience with the difficult surface of the writing. We must avoid the too-ready conclusion that [academic writing] is just meaningless, pretentious jargon (that is, that the theory is at fault). Secondly, on the other hand, we must, for obvious reasons, resist the view that we ourselves are intellectually incapable of coping with it (that is, that we are at fault). Thirdly, and crucially, we must not assume that the difficulty of theoretical writing is always the dress of profound ideas – only that it might sometimes be, which leaves the onus of discrimination on us. To sum up this attitude: we are looking, in [theory or other academic writing], for something we can use, not something which (sic) will use us. We ought not to issue theory with a blank cheque to spend our times for us… Do not, then, be endlessly patient with theory (pp. 7-8).

In other words, we need to be patient with difficult reading – but there’s a limit to that patience. It might take you a while – you might need to go through a few graduate seminars and write a few graduate papers – before your “crap detector,” to borrow Hemingway’s phrase, turns on: before you learn to discern which texts are worth your time, and which are not; which are difficult but potentially rewarding texts, and which are overblown hokum (see also Neil Postman on “crap detection”). It’s best to give the benefit of the doubt to those texts that are frequently cited or have achieved “canonical” status, and those that your instructors have vetted and listed on a syllabus — but you’ll need to cultivate your own criteria for discernment when you’re conducting research independently.

And particularly in regard to reading theory: you’ll eventually have to develop your own relationship with theory — how you read it, how you use it, etc. I think it’s particularly important for graduate students to be conscious of the “political economy” of theory — how it’s made, who gets to make it, how it circulates and gains traction, etc. — and to assess the consonances and dissonances between the form and content of various theoretical approaches and movements (see these posts on “Theoretical Humility” and “The Cultural Techniques (and Political Economy) of Theory-Making”). Make sure to consider, too, how theory can be creative and generative — how theory’s utility, or application, can go beyond analysis and critique to inspire creation (e.g., media-making, artistic work, entrepreneurship); and how the realms of “making,” “activism,” or “practice” more generally might be a means of shaping or doing theory (see Jussi Parikka, What Is Media Archaeology? (Malden, MA: Polity Press, 2012); Jussi Parikka, interview with Garnet Hertz, “Archaeologies of Media Art,” CTheory (April 1, 2010)).

Other Tips

Marc Giai-Miniet, Three machines that want to know, 60 x 45 x 13 cm, 2012
  • Alex Galarza, in “How to Read a Book” (August 29, 2011), talks about identifying and using various “clues to decide your time commitment and your goals for…reading” assignments.
  • Galarza also references Paul N. Edwards’ own “How to Read a Book,” another immensely useful guide, which Galarza calls a “gem for all academics, but especially for first years who often feel anxious or confused about what exactly they need to get out of the one thousand pages they were assigned their first week.”
  • Marie desJardins, computer scientist and electrical engineer, proposes that, “to really understand a paper, you have to understand the motivations for the problem posed, the choices made in finding a solution, the assumptions behind the solution, whether the assumptions are realistic and whether they can be removed without invalidating the approach, future directions for research, what was actually accomplished or implemented, the validity (or lack thereof) of the theoretical justifications or empirical demonstrations, and the potential for extending and scaling the algorithm up.” In considering the author’s motivation, we might ask about the historical, social, cultural, or professional context from which the author is writing, and to which he or she is speaking. What other ideas or texts is the author in dialogue with? We might also ask how the author would have answered the “so what?” question; how would he or she have explained to a reader why he or she should care about the argument in the text? Not all theory has to do things in the world – but we might consider what the theory might allow us to do, materially or symbolically, with it. What does it allow us to think through, to think with? What power does it wield? Choices might refer to methods, or the sample the researcher chooses to draw from, or the theoretical framework he or she uses. What does the author identify as potential future directions for research? Are you following any of his/her leads?
  • There are lots of other online resources that encourage you to ask particular questions of each text you encounter.
  • James F. Klumpp, “How to Read Theory”
  • See also the interviews with various scholars, who often elaborate on their own reading practices, in Figure/Ground.
  • J.K., an English graduate student, shares her own strategies for tackling “the impossible reading load” (February 23, 2014).
  • And what about the overall volume of stuff you feel compelled to read? In an interview with Figure/Ground, communication scholar John Durham Peters offers some words of wisdom: “Though there is too much to read, many minds will light on common truths.  So instead of angsting about how to encompass it all, find an angle and start digging and you will soon discover roots and branches that connect you with other perspectives.  Dig into Weber far enough, and you’ll be able to figure out Marx and Durkheim.  This is the wormhole principle: the key thing is to figure out how to access the network.  So instead of dictating a canon of specific titles, I would encourage people to find their scripture, their text that can help interpret the world for them, and then read and reread it.  It is essential to dig into something at great length that was written before you were born, if only to refute the pervasive cognitive bias that current thinking is smarter than old.  (Why should anyone be amazed that dead thinkers were just as smart as we are?  They are often actually smarter at least in terms of their effect, since their work often laid the infrastructures for ours.)  It is a better investment of time and effort to master texts that will remain in style.  Perhaps in thirty years people will still read Foucault (I have my doubts–I think he could be the Herbert Spencer of our time, the thinker who seemed to offer the key to our perplexities about sex and power that later generations will ignore, although Foucault is a more sympathetic figure than Spencer) but you can be sure that they will read Heidegger or Marx or Freud (who Foucault read)–or Moses, Plato, Jesus or Confucius.”
Finding Sources: Where to Look, and How to Decide What’s Worth Your Time

Pardon the pedanticism, but we’re going to start out with a little review. We’ll begin at a place that should look familiar to you: the New School Library’s website. The New School is part of a consortium of schools — including NYU, the New York Academy of Art, and Cooper Union — that share access to the BobCat catalog and to one another’s libraries. Start here in your search for books (remember them?) and multimedia materials (and check out the library’s tutorial videos).

Sure, you can check Google Books, too, but keep in mind that some publishers’ books are excluded, and Google rarely offers full copies of books. Typically, only those publications that are in the public domain are offered in-full. Sometimes, the book passages that are available on Google Books are sufficient for your needs, but other times, the Google excerpt might serve as a “teaser,” enticing you to locate a complete copy of the book elsewhere. The process I’m about to describe will help you do this.

Of course, researchers in our field are often looking for audio-visual material. There’s obviously lots of material now available online, through YouTube, Vimeo, UbuWeb, the Internet Archive, etc. But again, not everything has been — or will be — digitized, so it’s important to know how to track down physical copies of materials. Fortunately, NYU has a fantastic audio-visual library, the Avery Fisher Center for Music and Media.

If you can’t find what you’re looking for on BobCat, try one of the other libraries in New York — including the New York, Brooklyn, and Queens public libraries. If you’re not in New York — and I’m aware that many of you are not — try your local public and university library catalogs. And if you still can’t find it, try WorldCat and place a request through Interlibrary Loan. Materials loaned by another institution may take weeks to arrive — so it’s always best to start your resource search as early as possible so that you can build in time for material delivery.

Casting Your Net Wider

If you’re not already familiar with WorldCat, you’ll be amazed by what a wonderful resource it is. It’s the “world’s largest network of library content and services,” connecting you to thousands of libraries around the world.

You can search for popular books, music CDs and videos—all of the physical items you’re used to getting from libraries. You can also discover many new kinds of digital content, such as downloadable audiobooks. You may also find article citations with links to their full text; authoritative research materials, such as documents and photos of local or historic significance; and digital versions of rare items that aren’t available to the public (WorldCat).

In order to make sure I’m conducting an exhaustive search for book resources relevant to a particular research project, I often visit WorldCat and try every keyword combination I can think of. If, for instance, I’m looking for books on music and architecture, I search for “music” + “architecture,” “music” + “space,” “sound” + “architecture,” “sound” + “space”…. You get the picture. Once I’ve collected a list of titles, I try to locate each of those titles in the catalogs listed above.
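If your list of terms grows long, the exhaustive keyword-pairing strategy above is easy to script. A minimal sketch in Python — the term pools here are just the illustrative examples from my music-and-architecture search, not a canonical vocabulary:

```python
from itertools import product

# Two illustrative pools of search terms; swap in your own research descriptors.
sound_terms = ["music", "sound"]
space_terms = ["architecture", "space"]

# Every pairing of one term from each pool -- the combinations described above,
# quoted so each term is searched as an exact phrase.
queries = [f'"{a}" "{b}"' for a, b in product(sound_terms, space_terms)]

for q in queries:
    print(q)
```

With two terms per pool this yields four queries; the payoff comes when each pool holds five or six synonyms and you’d otherwise lose track of which combinations you’ve already tried.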

I also scan the bibliographies of books and articles that have proven useful, or that I’ve particularly enjoyed. Often, this is a great way to gather leads to hard-to-find primary sources and archival collections. In addition, if I’m reading a text and I’m particularly taken by a quotation or idea that the author attributes to someone else, I’m sure to locate the footnote, endnote, or bibliographic citation for the referenced work.

If I think I might want to buy a copy of a book, so I can mark it up, dog-ear it, make it mine, etc., I conduct further vetting by checking for excerpts on Google Books and looking for book reviews in academic journals, in one of the highly regarded book review journals (e.g., Choice, The New York Review of Books, the London Review of Books, Bookforum; sometimes — rarely — even Amazon reviews can be useful). Reviews in the aforementioned review periodicals are often comparative — the reviewer compares and contrasts two or more books on a particular topic — so I’m able to determine which books have the “cast” I’m looking for. In order to locate these and other book reviews, you can either search the review publications’ websites, search online periodicals databases (searching for the book’s title + “book review” in subject-specific databases), or search Google Scholar for the book title + “review”; this should reveal the location of various reviews in academic publications, and you can then track down the appropriate issues.
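If you run the book-review search described above often, the Google Scholar query URL can be assembled programmatically. A hedged sketch — the only assumptions are Scholar’s standard `q` search parameter, and a book title chosen purely as an example:

```python
from urllib.parse import urlencode

def scholar_review_search(book_title: str) -> str:
    """Build a Google Scholar search URL for reviews of a given book:
    the quoted title plus the word 'review', as described above."""
    query = f'"{book_title}" review'
    return "https://scholar.google.com/scholar?" + urlencode({"q": query})

url = scholar_review_search("Keywords: A Vocabulary of Culture and Society")
print(url)
```

`urlencode` handles the percent-escaping of quotation marks, colons, and spaces, so the same function works for any title you feed it.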

When I’m approaching a body of literature or a field of study that’s relatively new to me, and I don’t quite know where to begin, I often search syllabi posted online to see what texts faculty commonly assign for courses in those fields. For instance, if I wanted to find out more about contemporary feminist theory, about which I know very little, I’d try a few Google searches to locate syllabi that might offer some valuable leads. So, I’d search for “contemporary femin*” + “syllabus,” then maybe broaden out to “feminist theory” + “syllabus” to compile a list of books, articles, chapters, and web resources that faculty commonly assign in courses on contemporary feminism. I’ve found this technique particularly helpful because I can usually rely on my academic colleagues to have already screened these resources for me.

Colleagues, by the way, are excellent resources. Professors, librarians (don’t forget them!), co-workers, and fellow students are invariably chock-full of great reading recommendations, research leads, etc. This is why an “academic community” exists — so that we can share our knowledge and experience and, in the process, make a greater collective contribution to the field, and the world, than we could individually.

Finding Periodicals

Now, let’s switch gears from books to periodicals. For these materials, I usually start with Google Scholar. Once I find promising resources here, I search for them in the library’s electronic resources. If I know the particular periodicals I’m looking for, I search for the title in the “Books & More” field, to find which databases might contain full-text copies of these journals and magazines. Occasionally, there are no full-text copies of particular periodicals — which means that you need to check BobCat to see if any consortium libraries hold the publication in hard copy. Check the date ranges in the BobCat catalog entry to make sure the library holds the particular issue you’re after; if the article isn’t to be found in any local collections, you can request it via Interlibrary Loan, and you’ll be notified by Bobst Library when a PDF of your requested article is ready for download.

Remember that Google Scholar is not an exhaustive list of all scholarly publications. Nor do we know how Google’s algorithm chooses to highlight particular publications and bury others. As the Modern Language Association explained (in an article that’s no longer online, and is thus available only via the Internet Archive’s Wayback Machine),

…many important resources are not even indexed by Google: most of the rich, fee-based databases to which many academic libraries subscribe remain untouched and unavailable to Google‘s Web-crawling spiders. These databases, along with many others that are freely available, are known as “the Deep Web.” Although containing many trustworthy, well-edited, and scholarly resources, the Deep Web is frequently invisible to search engines. However valuable these resources may be, they are often difficult to access…

For these reasons, you also need to search subject-specific databases. On the library website’s “Databases” tab, you can click the “Browse Subjects” link to find a list of fields; “Media and Film Studies” and “Music” are most directly related to your work, but you’re likely to find media- and communication-related sources in lots of these subject categories — from “Anthropology” to “Gender and Sexuality Studies” to “Sociology.” Scroll down through the list of communication-related resources and read the “blurb” for each listing. Note the kinds of resources cataloged in each, the dates available — and which services offer full-text.

Here are a few resources of note: Communication Abstracts provides abstracts to an impressive list of journals relevant to our field. JSTOR and Project Muse offer access to full-text humanities and social science articles, and are particularly strong in their cultural studies offerings. ProQuest offers several services, including a database of dissertations and theses — two resources that should not be overlooked.

Searches + Reference Management

One great challenge is knowing what keywords to search for. It’s always best to try various keyword combinations to ensure that you’re being as inclusive as possible in your search. Work that’s relevant to your project won’t necessarily be framed the same way you intend to frame yours, and researchers may very well use terminology quite different than that which you’re using. Librarians can offer valuable help in your hunt for resources.

Also worth noting: through the “Databases” tab, you also have access to the whole Oxford English Dictionary, the definitive dictionary, which provides exhaustive definitions, etymological information, etc. Check out the “Visual Resources” tab for art, architecture, design, and photography resources, too.

Note also the library services listed across the bottom of the page; you can sign up here for workshops, tutorials, one-on-one research assistance, etc. And check out RefWorks, a free, web-based citation management program. Some people prefer to maintain a local database (i.e., one housed on their own computers) of their bibliographic material; these folks often use EndNote, a useful but expensive program. Other programs — some open-source, some not; some focused solely on citation management, others incorporating file management, too — include Zotero and Mendeley. Various online comparison charts can help you weigh the available platforms against one another.

But of course, not all knowledge is to be had through the New School Library’s website. The web’s full of excellent faculty and research institute websites, and a growing body of peer-reviewed online journals (e.g., the International Journal of Communication, First Monday, Invisible Culture, M/C, Amodern).

It’s important to remember, though, that not all that has been digitized is worth knowing! It’s important to be able to assess the credibility of online sources so that you’re not caught basing your research hypothesis on something you read in some high school student’s blog. Cornell University identifies several criteria for evaluating web resources: authorship, publishing body, point of view or bias, referral to other sources, verifiability, and currency.

How might you assess the “cast” or “slant” of a media research website if you didn’t know of the hosting organization’s political or religious affiliation? The Breitbart News Network makes it easy for you. But what about The Weekly Standard, or The Wall Street Journal? Imagine you’re an international student, and you’re not aware of these publications’ reputations. Or, imagine yourself accessing online archives of foreign publications: how might you assess their objectivity?

Remember: It’s Not All on Google, and It’s Not All Digital

“Film Archives at the Cinematique”; via Katcha on Flickr 

I’ve said it before; I’ll say it again: it’s important to recognize that Google does not provide impartial, comprehensive access to all the world’s knowledge. As Alison J. Head and John Wihbey write in the Chronicle of Higher Education,

…because our web experience will increasingly be personalized through algorithms that key off of everything from geolocation to our prior digital traces, students must learn to recognize the limits of their online environment and to seek information creatively outside of channels that serve up results skewed by Internet companies and other paternalistic, biased, or profit-driven gatekeepers.

Furthermore, not all that is worth knowing has been digitized! There is much to be said for the value of accessing — and handling — original materials. There are archival collections worth exploring and human resources worth tapping. The Whitney has its Andy Warhol Film Project and its research library; MoMA, its research library and circulating film and video library; and Electronic Arts Intermix, its collection of video art. And there are thousands more exciting, eclectic, but underused collections out there (check out the Library of Congress’s list, Archive Finder, and ArchiveGrid). It takes a creative and resourceful researcher to seek out these sources — but such effort is invariably repaid many times over.

Commercial Media

I want to say a few words about researching with commercial media content. What if you want to track down cable TV shows or talk radio content that’s relevant to your proposed project? Where do you start? Well, the Paley Center for Media is a great place to start. You might also check the network’s or channel’s or station’s website; some offer extensive programming archives online. In other cases, you may have to contact the network’s librarian or archivist for help, and he or she may send you to the production company that made the content. You could also do a web search to determine the production company, and contact them directly. Check old TV listings or programming schedules to determine when things aired, so that you can use this information to help others help you track down the material you’re looking for. As much as I wish there were, there’s no easy, foolproof way to go about this kind of “content hunt.” It’s a matter of following leads, and diligently following up. The contact people will vary between organizations, as will access policies.

Content is a commodity, which, unfortunately, means that you often have to pay (dearly!) for it. Yet the recent arrival of Critical Commons — “a non-profit advocacy coalition that supports the use of media for scholarship, research and teaching, providing resources, information and tools for scholars, students, educators and creators” — is incredibly promising. And don’t forget the Internet Archive, where you can find video, audio, software, even archived web content. See also Footage.net, Stock Footage Online, ITN Archive, Getty Images and Getty’s Archive Films (see also Getty’s Rights & Clearances page).

Evaluating Multimodal Student Work

Old School Evaluation: Scantron by Obscure Associate on Flickr: http://www.flickr.com/photos/obscureassociate/4088365078/

As part of my job, I’m often expected to critique works for which there is no clear-cut evaluative rubric — I grade student papers, review journal submissions, serve on selection committees — yet because I’m confident in my knowledge in these contexts, the “squishiness” of evaluation doesn’t bother me. I know a good thing when I see it, and a not-so-good one, too. That’s not how I justify my grades to students, or explain an article rejection to an author, of course. I typically offer comments within the body of a text, which allows me to address micro issues, and then I write a short “cover letter” that offers big-picture, summary comments. I almost never have to deal with disputes — and in some cases, even if I’ve had to award a not-so-hot grade or deliver the news of a rejection, people have thanked me for my helpful comments. That’s good. I want to be helpful. That’s why I dedicate all that time to offering constructive, critical feedback.

Since I began teaching graduate students at The New School, I’ve given students the option to complete work in formats that lie outside my own realm of experience. One class curated an exhibition. Another produced an online journal. And even in my more traditional classes, I give students the option of completing creative projects that critically address the subject matter of the course. I also ask them to submit a short written supplement in which they explain the ideas at the center of their work, what “argument” they hoped to make through the production, and how successful they think they were. This supplemental paper allows me to base my evaluation primarily on content (after all, my courses aren’t intended to teach them production skills; I’m simply allowing them to use their existing production skills to work through the ideas central to my courses), but also to address how the form of the project serves its content.

As I’m about to embark on two brand-new fall classes that will result in the creation of collaborative, research-based interactive projects — one, an exhibition of “material media,” the other, a map of historical urban media networks — I think it’s time to develop a more formal (though not rigid) evaluation rubric. I’m doing this not only for myself, to guide the assignment of grades, but also to aid students: to help them figure out how to evaluate their own scholarly production work and other multimodal projects — critical skills that will help them navigate a map-crazy and dataviz-obsessed popular media and design culture.

Our faux scantron wedding RSVPs.  By our friend Dan Richardson.

So, I’m investigating “how to evaluate multimodal work.” I’m still thinking through this, but here’s some of what I’ve gathered thus far. (I’m including only the stuff that’s relevant to evaluating student projects; visit the original sources for how these standards can be applied to peer-reviewed faculty work. My own comments appear in brackets.)

From the MLA’s “Short Guide to Evaluation of Digital Work” (the following are direct — but, in some cases, abridged — quotations):

  • “Is it accessible to the community of study? The most basic question to ask of digital work is whether it is accessible to its audience, be it students (in the case of pedagogical innovation) or users (in the case of a research resource). A work that is hidden and not made available is one that is typically not ready in some fashion. It is normal for digital work to be put up in “beta” or untested form, just as it is normal for digital work to be dynamically updated (as in versions of a software tool)…. [Our projects will be hosted on either Parsons or Media Studies servers, and will be made publicly accessible. In addition, our creation process will be made public; all students will be asked to keep blogs on which they chronicle their research and production processes and offer feedback to one another. These individual blogs will be aggregated on a central project blog, to which I will add summary comments.]
  • Have there been any expert consultations? Has this been shown to others for expert opinion? Given the absence of peer review mechanisms for many types of digital work, candidates should be encouraged to plan for expert consultations, especially when applying for funding…. [Our mapping tool is currently in development with expert designers at Parsons. Our exhibition platform is developed through an open-source community, to which I will ask the students to contribute. In addition, I’ll be asking faculty experts to attend class and offer constructive criticism at key points throughout the semester.]
  • Has the work been reviewed? Can it be submitted for peer review? Has the work been presented at conferences?… Have papers or reports about the project been published? [I’m currently seeking opportunities to highlight these projects, and critically assess their success/failure as pedagogical experiments, in peer-reviewed publications and at conferences. I also hope to fold particular portions of the mapping project into my next book project.]
  • Do others link to it? Does it link out well? …One indication of how a digital work participates in the conversation of the humanities is how it links to other projects and how in turn, it is described and linked to by others. With the advent of blogging it should be possible to find bloggers who have commented on a project and linked to it. While blog entries are not typically careful reviews they are a sign of interest in the professional community. [We’ll be drawing on the material housed in several local archives and special collections. Proper attribution will be a top priority in both of my student projects; we’ll provide links to relevant collections and finding aids. In addition, through the students’ “process blogs,” we’ll link out to other resources that informed their projects’ development. Finally, a data mining platform will lie behind the interactive mapping project, allowing the map to draw connections to relevant courses, relevant faculty publications, relevant student projects completed outside the context of the class, etc.]
  • If it is an instructional project, has it been assessed appropriately? A scholarly pedagogical project is one that claims to have advanced our knowledge of how to teach or learn. Such claims can be tested and there is a wealth of evaluation techniques including dialogical ones that are recognizable as being in the traditions of humanities interpretation. Further, most universities have teaching and learning units that can be asked to help advise (or even run) assessments for pedagogical innovations from student surveys to focus groups. [Ha!] …Evaluators should not look for enthusiastic and positive results – even negative results (as in this doesn’t help students learn X) are an advance in knowledge. A well designed assessment plan that results in new knowledge that is accessible and really helps others is scholarship, whether or not the pedagogical innovation is demonstrated to have the intended effect. // That said, there are forms of pedagogical innovation, especially the development of tools that are used by instructors to create learning objects, that cannot be assessed in terms of learning objectives but in terms of their usability by the instructor community to meet their learning objectives. In these cases the assessment plan would resemble more usability design and testing…. [Everything I’m posting right here pertains to my attempt to develop appropriate models for assessment. I’ll have to build in mechanisms for evaluating both individual students’ contributions and the collective class effort.]

More from the MLA: “Best Practices in Digital Work” (the following are direct — but, in some cases, abridged — quotations):

  • Appropriate Content
  • Enrichment (Has the data been annotated, linked, and structured appropriately?) One of the promises of digital work is that it can provide rich supplements of commentary, multimedia enhancement, and annotations to provide readers with appropriate historical, literary, and philosophical context. An electronic edition can have high resolution manuscript pages or video of associated performances. A digital work can have multiple interfaces for different audiences from students to researchers. Evaluators should ask about how the potential of the medium has been exploited. Has the work taken advantage of the multimedia possibilities? If an evaluator can imagine a useful enrichment they should ask the candidate whether they considered adding such materials. // Enrichment can take many forms and can raise interesting copyright problems. Often video of dramatic performances are not available because of copyright considerations. Museums and archives can ask for prohibitive license fees for reproduction rights which is why evaluators shouldn’t expect it to be easy to enrich a project with resources, but again, a scholarly project can be expected to have made informed decisions as to what resources they can include. Where projects have negotiated rights evaluators should recognize the decisions and the work of such negotiations. // In some cases enrichment can take the form of significant new scholarship organized as interpretative commentary or essay trajectories through the material. Some projects like NINES actually provide tools for digital exhibit curation so that others can create and share new annotated itineraries through the materials mounted by others…. [This is a primary concern of both of my classes. Rather than uploading data and expecting it to stand on its own, my students will be charged with contextualizing it, and linking their individual data points together into a compelling argument. I’ve already made special arrangements with several institutions for copyright clearances and waiver of reproduction fees. In other cases, students will have to negotiate (with the libraries’ and my assistance) copyright clearances; this will be a good experience for them!]
  • Technical Design (Is the delivery system robust, appropriate, and documented?) In addition to evaluating the decisions made about the representation, encoding and enrichment of evidence, evaluators can ask about the technical design of digital projects. There are better and worse ways to implement a project so that it can be maintained over time by different programmers. A scholarly resource should be designed and documented in a way that allows it to be maintained easily over the life of the project. While a professional programmer with experience with digital humanities projects can advise evaluators about technical design there are some simple questions any evaluator can ask like, “How can new materials be added?”, “Is there documentation for the technical set up that would let another programmer fix a bug?”, and “Were open source tools used that are common for such projects?” // It should be noted that pedagogical works are often technically developed differently than scholarly resources, but evaluators can still ask about how they were developed and whether they were developed so as to be easily adapted and maintained. [Project developers are focusing on this, and they’re documenting the process through a “ticket” system. We’ll ask our students for technical feedback at various points throughout the semester. Their suggestions — which they’ll elaborate upon in their blogs — will inform the development of the platforms even after the end of the semester.]
  • Interface Design and Usability (Is it designed to take advantage of the medium? Has the interface been assessed? Has it been tested? Is it accessible to its intended audience?) …Now best practices in web development suggest that needs analysis, user modeling, interface design and usability testing should be woven into large scale development projects. Evaluators should therefore ask about anticipated users and how the developers imagined their work being used. Did the development team conduct design experiments? Do they know who their users are and how do they know how their work will be used?… // It should be noted that interface design is difficult to do when developing innovative works for which there isn’t an existing self-identified and expert audience. Scholarly projects are often digitizing evidence for unanticipated research uses and should, for that reason, try to keep the data in formats that can be reused whatever the initial interface. There is a tension in scholarly digital work between a) building things to survive and be used (even if only with expertise) by future researchers and b) developing works that can be immediately accessible to scholars without computing skills. It is rare that a project has the funding to both digitize to scholarly standards and develop engaging interfaces that novices find easy. Evaluators should look therefore for plans for long term testing and iterative improvement that is facilitated by a flexible information architecture that can be adapted over time… // Finally, it should be said that interface design is itself a form of digital rhetorical work that should be encouraged. Design can be done following and innovating on practices of asking questions and imagining potential… Evaluators should expect candidates presenting digital work to have reflected on the engineering and design, even if they didn’t do it, and evaluators should welcome the chance to have a colleague unfold the challenges of the medium.

Cheryl Ball discusses the MLA’s recommendations here, in the discussion forum for her “Evaluating Digital Scholarship” workshop (2010). I especially appreciate her comments about the wide variety of projects that constitute “digital scholarship,” and which require dynamic criteria for evaluation. She also talks about a fantastic “peer review” exercise she designed for her undergrad “Multimodal Composition” class. They began with Virginia Kuhn’s “components of scholarly multimedia” — conceptual core (“controlling ideas, productive alignment with genre”); research component; form/content (do the formal elements serve the concept?); and creative realization (“does the project use appropriate register? could this have been done on paper?”) — then added two criteria of their own: audience and timeliness. In each of my fall classes we’ll spend a good deal of time examining other online exhibitions and mapping projects and assessing their strengths and weaknesses. I think asking the students to write a formal “reader’s report” — after we’ve generated a list of criteria for assessment — could push their critiques beyond the “I like it,” “I don’t like it,” “There’s too much going on,” or “This wasn’t clear” feedback they usually offer. I attribute the limitations of their feedback not to any lack of serious engagement or interest, but to the fact that they (myself included!) don’t always know what criteria should be informing their judgment, or what language is typically used in or is appropriate for such a review.

The Institute for Multimedia Literacy has created a handout on “multimedia scholarship grading parameters” that also starts from, and expands upon, Kuhn’s criteria (the following are direct — but, in some cases, abridged — quotations):

  • Conceptual Core: Is the project’s thesis clearly articulated? Is the project productively aligned with one or more of the multimedia genres outlined in the IML program? Does the project effectively engage with the primary issues raised in the project’s research?
  • Research Competence: Does the project display evidence of substantial research and thoughtful engagement with its subject? Does the project use a variety of types of sources (i.e., not just Web sites)? Does the project deploy more than one approach to its topic?
  • Form and Content: Do structural and formal elements of the project reinforce the conceptual core in a productive way? Are design decisions deliberate and controlled? Is the effectiveness of the project uncompromised by technical problems?
  • Creative Realization: Does the project approach its subject in creative or innovative ways? Does the project use media and design principles effectively? Does this project achieve significant goals that could not have been realized on paper?

Here are their additions:

  • Coherence: First and foremost, academic multimedia projects should be coherent, effectively spanning the gap between “tradition” (text) and “innovation” (multimedia) and ultimately balancing their components. A successful multimedia project, in other words, would clearly suffer if translated into a traditional essay, or, conversely, into a “purely” multimedia experience with little or no connection to the broader field within which it participates. The strong multimedia project is not merely a well-written paper with multimedia elements “pasted in”; neither is it merely a good multimedia project with more familiar textual elements “tacked on.” Coherence, then, refers to the graceful balance of familiar scholarly gestures and multimedia expression which mobilizes the scholarship in new ways.
  • Self-reflexivity: A second quality accounts for the authorial understanding of the production choices made in constructing the project. Because these may be difficult or impossible to discern by engaging with the project, we advocate post-production reflection, offering students the opportunity to reflect on and to justify the choices and decisions made during the creation of the project. We also recognize that in many instances it may be more significant for students to reckon with the process of production rather than an end product; again, reflexivity through reflection helps manifest the evolution, and gives instructors a means for gauging learning. [Students will be encouraged to use their process blogs to address these issues.]
  • Control: By control, we mean the extent to which a project demonstrates authorial intention by providing the user with a carefully planned structure, often made manifest through a navigation scheme and a design suited to the project’s argument and content. Control has to do with authorial tone / voice / cuing as well as with the quality of the project’s interactivity if it calls for user interaction. If, for example, it is the student’s intention to confuse a user, it is perfectly appropriate to build that confusion into the project’s navigation scheme; such choices, however, must always be justified in the project’s self-reflexivity.
  • Cogency: Cogency refers to the quality of the project’s argument and its reflection of a conceptual core. Cogency is not a function of an argument’s “rightness” or “wrongness.” With most assignments, students are free to take any position they like; cogency is reflected in the way the argument is made, not in what the argument is.
  • Evidence: What is the quality of the data used to support the project’s argument? Is it suited to the argument? Further, the project should reflect fundamental research competency as understood and dictated by evolving standards of multimedia research and expression.
  • Complexity: Multimedia projects often suffer in being considered somehow outside a larger discourse or context. Complexity refers to the ways in which the project acknowledges its broader context, contributes to a larger discussion and generally participates in an academic community.
  • Technique: Strong scholarly multimedia projects should exhibit an understanding of the affordances of the tools used to create the project.
  • Documentation: Finally, with a nod toward the dramatic technological shifts that characterize contemporary media practices and the fact that formats come and go with alarming rapidity, we advocate a documentation process that describes the project, its formal structure and thematic concerns, with attention to the project’s attributes and the particular needs required for either the student’s own archival process, or those of an instructor, program, or other entity. This, too, offers another stage for assessment, inviting students to consider their work within a larger context, and offering instructors a site for understanding the learning that has occurred. [Our individual and aggregated class blogs will serve this purpose.]

I’d have to adapt this for graduate classes — but it’s a great starting point.

There’s also this “Grading 2.0: Evaluation in the Digital Age” discussion thread on HASTAC, which, although I haven’t been able to wade through all the comments yet, seems to advocate for a portfolio approach. I think my students’ “process blogs” will function much like a portfolio.

Finally, I’ve found Steve Anderson’s “Regeneration: Multimedia Genres and Emerging Scholarship” essay extremely helpful in addressing my concerns about the evaluation of “self-expressive” projects. I plan to ask everyone to read Anderson’s piece early in the semester. I had been concerned that some students would assume that, because our projects make use of the same tools they use to create their (often self-expressive or experimental) student films and psychogeographic maps and impressionistic audio pieces, our multimodal scholarly projects could be narrative-based and purely expressive, too. I imagine that at least a few students are unfamiliar with using production as a research methodology: how many have conceived of geotagging as more than a means of “placing” their Flickr uploads or recording their “sensory memories” of particular places in the city? I’m not denigrating these activities — there’s definitely a place for them (including in some of my other classes) — but this fall, I want to focus on multimodal scholarship, and developing appropriate criteria for evaluating it. As Anderson says, “narrative may productively serve as an element of a scholarly multimedia project but should not serve as an end in itself.”

Categories
Blog

Chicago Sound

Chicago friend Eric to the right, blonde who’s *not* Leah to the left; Cap’n Jazz in the back

It’s an awful picture, but it captures pretty accurately the way I saw the scene through tear-blurred eyes. I had waited 16 long years! And after a few failed attempts, I finally found myself standing before the Brothers Kinsella, Davey von Bohlen, Victor Villareal, and Sam Zurick on their home turf, in Chicago.

I discovered Cap’n Jazz when I started college in 1994, a year before the band’s demise yet too late to see them play live. Having missed out on the ur-band itself, I settled for — and was really into, actually — its offspring: Joan of Arc, American Football, the Promise Ring, Owen, Owls, Make Believe, even the many, many superfluous Kinsella projects. When my brother-in-law, Patrick (who made an impulsive one-night trip this past January to catch their “secret” reunion show at the Bottom Lounge; I wish I had been so impulsive), told us of their rumored reunion late last year, I wasn’t exactly stoked. How could a bunch of thirty-something-year-olds manage to do the disheveled, kinetic, rhythmically unpredictable, adorably off-key thing that charmed my socks off back in the early 90s?

I don’t know how, but they did it. The Wicker Park show was totally awesome (as have been all of their reunion shows, as far as I can tell). Totally worth the wait. Worth suffering the indignity, the week before, of being denied entrance to their show at Brooklyn Bowl, while undeserving (I jest!) 16-year-olds, who weren’t even alive during the “Chicago sound” halcyon days, got in.

The idea of a “geographic sound” dawned on me shortly after I started working at Blue Train, an independent music store in State College, PA, at 18, and initiated a weekly habit of trading CDs. My then-boyfriend was on the university’s independent concert committee, which often brought in “caravans” of bands from the same region, who were traveling together between DC, New York, Philadelphia, and Pittsburgh. I discovered that my musical tastes were concentrating in particular geographic areas: Glasgow (Mogwai, Arab Strap), Chapel Hill (Superchunk & Merge Records), DC (Fugazi and almost all things Dischord), Austin (Mineral, Stars of the Lid, and, later on, all those “big sky” bands), Louisville (Slint, Rodan), and, most of all, Chicago (Tortoise, Shellac, all the Kinsellas’ projects). (These cities, with the exceptions of Chapel Hill and DC — and with the addition of Brooklyn and ReykjavĂ­k — still give rise to the music I like most.)

Back then, not having ventured much past the East Coast, I started to wonder about the connection between place and sound. Why did particular cities generate such vibrant music scenes? And why did particular scenes generate such distinctive “sounds”? What was it about Glasgow that engendered such melancholy? Why all the clippedness and exasperation in DC? Why all the alternating angularity and fluidity in Chicago?

August 27, 1994, via Metro

Will Straw, Alan O’Connor, and Holly Kruse have written about the political, economic, and cultural factors that together mold and sustain a music scene (for more “music scenes” literature, see my “City and Sound” syllabus). Marc Faris addresses the defining characteristics of “that Chicago sound” of the early to mid-90s: loyalty to a “workingman persona,” the centrality of Steve Albini and his commitment to “material authenticity” (my term) in recording, an emphasis on rhythm over melody and harmony, and a distinctive musical “visual culture” that is also tied to material authenticity.

I guess I’m looking for something more, though — something more than human actors, aesthetic choices, ideological loyalties. Perhaps because Chicago feels so materially distinctive to me — so solid, so securely rooted to the earth, so broad-shouldered (if you’ll pardon the cliche) and, simultaneously, towering (Wright and Mies, lake water and steel) — I want to think that this physicality somehow informs the city’s culture, including its music. Tony Mitchell writes, in an article I unfortunately can’t recommend, of Sigur Rós’s connection to Iceland: their music “could be said to express sonically both the isolation of their Icelandic location and to induce a feeling of hermetic isolation in the listener through the climactic and melodic intensity of their sound.” I like this idea; true or not, it’s a satisfying thought. Whether or not there’s a there there in Mitchell’s argument, I like that he finds a “there” in Sigur Rós’s sound.

I still can’t explain how the spatial-to-sonic translation works, but I know I hear Chicago in Thrill Jockey and Drag City. And I especially enjoyed hearing that “there” — the ur-“Chicago Sound” of the 90s — there, in Wicker Park, on a lovely Saturday night, with two long-time friends.