
Archived Course: Media Research Methods

Required graduate seminar

Although humans have been thinking about and theorizing about media since antiquity, we have only recently – within the past century – begun to systematically, even scientifically, study the media. We now consider everything from the media’s role in society to its psychological “effects” on those who consume it; from the content of the messages it disseminates to the ideologies underlying its production and consumption. In this course we will look at the past, present, and future of media research: what do researchers think worthy of study, and what methods do they use to study it? We’ll ask ourselves similar questions: What, in our mediated environment, deserves study? What can and should we study, and why should anybody care? How can we match our own intellectual and creative interests to particular research subjects and methodologies? What does “research” mean in this digital age, this era of ubiquitous information? What tools can we use to study the media, and what kinds of information and knowledge can those tools yield? How do we determine the credibility of a source or generate our own data? Furthermore, how can we use the media themselves in the study of various social or psychological phenomena? And, conversely, how can we use research to help guide our media production? Our consideration of these questions throughout the semester will prepare us to create a grant proposal for either a media studies research project or a research-based media production project.

Spring 2006: Syllabus

Fall 2005: Syllabus

Summer 2005


Archived Lessons for Summer 2005 Online Course

Exploring Topics and Beginning Research
Finding Funding
Production and Culture Industry Research
Historical Research
Critical Approaches
Discourse Analysis
Qualitative Methods
Media as Research Instruments
Media in Ethnography
Quantitative Methods


Response to Alberto Corsín Jiménez’s “Ethnography: A Prototype”

As part of GIDEST’s March 3, 2017, “Our Own Devices” workshop on ethnographic tools and techniques, I responded to Alberto Corsín Jiménez’s paper, “Ethnography: A Prototype.” Here are my comments:

An impassioned political preamble seems obligatory in even the most modest of academic addresses these days. I’m going to skip it – but I will say this: As so much of the world is wondering what forms of collective action and communication resonate amidst so much political and epistemological upheaval, Alberto Corsín Jiménez and Adolfo Estalella offer a model for thinking recursively about how we constitute and act as publics – particularly as publics in cities, which are commonly our stages for political action and are now, some believe, the only remaining spatial scale at which we can work to maintain “sanctuaries” for democratic ideals.

Drawing on work in the free and open-source software community, Jiménez and Estalella propose that the community’s commitment to sharing, to the commons, and to the “democratizing potential of technology” can be productively, if not seamlessly, transferred to the urban realm. The software developer’s operational unit, the “prototype,” is also of potential utility for ethnographers and other researchers, whether or not they’re studying software and urbanism.

The prototype is a proof-of-concept meant to be built upon. It’s a model from which we can construct things, ideas, publics, and politics. It’s a technical form and a social form encompassing a methodology, an epistemology, an ontology, and even an ideology. In free culture communities, the prototype embodies openness and adaptability, and it calls for iteration and transference. Our authors describe the Inteligencias Colectivas, for instance, who are interested in “evolutionizing” urban prototypical forms and knowledges. They acknowledge the “architectural intelligences behind mundane objects,” then imagine their “resonances, extensions, and analogies” in other contexts and environments. The portability of the prototype renders it more widely accessible, thereby potentially democratizing design – but only if the design is effectively communicated, rendered intelligible and actionable, to other communities. Thus, Jiménez notes, the archive is an integral ingredient of the prototype; it’s the “ur-design,” the “infra-ontology” of the prototype. The archive captures not only a prototype’s composition, but also its “biography”: its historical contexts, its evolution, its social relations of production and use.

Different kinds of objects and practices call for different forms of documentation. To be rendered “fully legible,” Jiménez says, “some intelligences require a multi-layered combination of iconographic techniques,” like photographs, sketches, and video recordings. The choice of particular files, formats, and languages depends not only on their representational affordances and pedagogical potential, but also their politics: proprietary software and restrictive file formats, for instance, would limit a prototype’s accessibility and mutability and contradict the whole open-source ethos. The ethnographer’s experimentation with such a range of modalities in his or her own work likewise represents an aesthetic and political choice – to extend ethnographic work into what Michael Fischer calls “third spaces of articulation.”

While we learned from our MakerBot fetish phase that prototyping doesn’t always elicit criticality, it does have the potential to engender self-reflexivity, to create what Christopher Kelty calls “recursive publics”: publics that are “vitally concerned with the material and practical maintenance and modification of the technical, legal,… and conceptual means of [their] own existence as a public.” In their conscious choices of democratic, egalitarian modes of action and communication, he says, they “speak to existing forms of power through the production of actually existing alternatives.” One would like to think that scholars and reflective practitioners are also “vitally concerned” with the material conditions of their own knowledge and cultural production, but this of course isn’t always the case: we turn a blind eye to our underpaid adjuncts, indebted graduate students, and the free editorial labor and exorbitant subscription fees that sustain our scholarly publishing systems. Yet Jiménez and Estalella found that their fieldwork with free culture activists in Madrid required a “form of ethnography that takes its own changing infrastructure as an object of inquiry.” We all would do well to consider how the evolving technological and social infrastructures of the academy, of our disciplines – and the larger culture within which they exist – necessitate new knowledge infrastructures, new methods and modes of dissemination. Jiménez and Estalella felt compelled to transform their study of free culture prototypes into “a prototype for free culture itself.” Through their “Taking Critique Out for a Walk” series, they talked about the city while talking through it, and they sought means to “open-source the very architecture of education.” Such recursive thinking generated for them new modes of scholarly practice and publicity.

I’d argue that recursion should involve “vital concern” not only with the methods and political-economic conditions of one’s own practice – but also with the temporal depth of that recursivity. What’s the history of recursion’s loop? What’s the prototype of the prototype? We tend to metaphorize complicated systems – like cities and brains – in terms of the prevailing technologies of the time. At various points we’ve likened cognition and urban operations to the workings of hydraulic or electrical systems, or computers. And we often draw parallels between these two ur-metaphors: cities seem to work an awful lot like computers, and computer programmers draw inspiration from architecture. When we see free and open culture in our cities, it bears a resemblance to open-source software.

Over the past two decades, we’ve seen several iterations – prototypes, we might say – of open-source architecture and urban design. Paperhouses and Wikihouses offer freely available, modifiable plans. Pritzker Prize winner Alejandro Aravena has released four of his “half-a-house” designs into the public domain, allowing for their unrestricted use and adaptation. Carlo Ratti and Matthew Claudel proposed their own model of “open source architecture” in 2011, and, before them, Architecture for Humanity’s Cameron Sinclair aimed to bring open-source principles to humanitarian design. In the early aughts, Usman Haque experimented with open-source architecture using inflatables, and then he and Matthew Fuller joined forces to prototype an “Urban Versioning System.” In 2003, Dennis Kaspori proposed an “open source [design] practice” that allows for the “collective,” iterative and evolutionary “development of solutions for spatial issues involving housing, mobility, greenspace, urban renewal, and so on.” He’s speaking free culture’s language.

Even well before the age of open-source, in the 1970s, Cedric Price prototyped his anticipatory architecture, and Christopher Alexander offered up his “pattern language,” which was also built on principles of democratic (albeit moralistic), evolutionary design. Stewart Brand, meanwhile, supplied a whole host of prototypes for living in his Whole Earth Catalog. And having been raised in Amish country in Pennsylvania, and having attended a few barn raisings in my time, I’d say the Amish have been prototyping free and open-source design for a few centuries. Without AutoCAD. Rahul Mehrotra tells of similarly minded design principles at the Kumbh Mela Hindu pilgrimage, which involves the construction of a massive, modular temporary city every few years – and which has, for well over a millennium, embraced evolutionary, recombinant, accessible, recursive practices.

It’s also helpful to recall that the widespread use of architectural and urban plans is a relatively recent phenomenon, as architectural historian Mario Carpo argues. Before the rise of print, designers were also craftsmen, and they typically spread ideas orally and learned their trade through apprenticeships. The idea of the architect as a professional wielding specialized drawings is a product of new professional organizations and curricula, like that at the École des Beaux-Arts, founded in the 19th century. As Michael Guggenheim argues, throughout much of history, “people could invent products at home, or produce ad-hoc solutions to practical problems…with a piece of wood and some nails. The problem,” he says, “is rather, that there are few historical sources and…little historical interest in these processes, since they do not lend themselves to the writing of histories.”[1]

Recognizing this long history of prototypes of the prototype serves not only to remind us of the historical specificity of our contemporary metaphors, like the city-as-software, but also to highlight the way those metaphors shape particular urban practices and epistemologies and politics. Those metaphors also determine how knowledges are documented and transformed into historical sources for future archival researchers – and into manuals and “instructables” for contemporary practitioners. If a city is a computer, and if its urban practices are executed like software, the archive of those urban intelligences is more likely to adopt a computational logic, too.

The Ciudad Escuela web platform invites free culture projects to “open the ‘sources’ of their own technical, legal, pedagogical, associative and political capacities,” to render them legible through those “multi-layered…iconographic techniques” we discussed earlier. They’re encouraged to “legitimize their practices vis-à-vis local authorities and neighboring communities” by “explicating and standardizing [their] tacit urban knowledge,” and by “verifying” their skills with Mozilla’s Open Badges technology. But what does it mean to tie legitimation to standardization? What happens when particular cultures – embodied, situated, perhaps performative or oral, or governed by codes of privacy – translate their knowledge into the archival logics of the web and the credentialing economies of civic tech? Do we restrict what constitutes urban knowledge and its “repertoire” if it has to make itself iconographic: YouTube-able, diagram-able, data-visualizable?[2]

I’d encourage us to also think recursively about the technological metaphors we use to make sense of things like urban cultures, or to explain the methods and media we employ as scholars and practitioners. Those metaphors embody epistemologies and politics that recursively reinscribe themselves in the archive. If culture is software, our cultural institutions and infrastructures – from universities to urban “laboratories” – seem like computers. And any knowledges that happen to be in the wrong file format just might not compute.


[1] Free urban culture has been around for quite some time, too: consider the centuries-long history of public libraries, mechanics’ institutes, and athenaeums – many of which promoted the democratization of productive knowledge, itself a prototype for “maker culture.”

[2] We’ve come to recognize that universal transparency and openness are not universal goods – particularly for vulnerable populations, indigenous groups, and marginalized communities. Visibility and openness can offer legitimation, but they can also invite exploitation.

 


Mission Control: A History of the Dashboard

“Mission Control: A History of the Urban Dashboard,” Places (March 9, 2015)

Reprinted in Rob Kitchin, Tracey Lauriault & Matthew Wilson, Eds., Understanding Spatial Media (Thousand Oaks, CA: Sage, forthcoming 2016)

Reprinted in Simon Marvin and Andres Luque-Ayala, Eds., Control Room: Nodes in the Networked City (Routledge, forthcoming 2018)


Interfacing Urban Intelligence (2014)

I spoke about “Interfacing Urban Intelligence” at the “Code + the City” workshop, which took place in Ireland on September 3-4, 2014. My talk was drawn from my article of the same title, which I published in Places last year. You can watch a video of my talk here. I have a habit of giving talks with wet hair, it seems.

See also the videos of my fellow panelists: Rob Kitchin, Adrian Mackenzie, Sophia Maalsen and Sung-Yueh Perng.


Interface Critique, Revisited: Thinking About Archival Interfaces

via LEOL30 on Flickr

Several days ago I posted drafts of a few sections of an article I’m writing for Places. I’m exploring speculative interfaces to the “smart city” — the windows that supposedly allow us to peer into, and potentially interact with, our future-cities’ operating systems. The methodological part of that work may or may not appear in the final publication — but it’ll certainly prove useful for the “Digital Archives” studio I’m teaching this semester. I’ve asked students to critique existing interfaces to archival collections as part of their preparation for our work, which involves proposing “platforms for highlighting and recontextualizing noteworthy…material [in The New School’s archives] – particularly material regarding the history of media study and media-making at [the university].”

So, here’s a revision, and “archival customization,” of my post from January 10. First, I explain how we might determine what constitutes an interface, and then I propose a methodology for critiquing interfaces — particularly archival interfaces.

IDENTIFYING INTERFACES

[Image: Nest thermostat]

In his 1997 Interface Culture, Steven Johnson explains that an interface is “software that shapes the interaction between user and computer. The interface serves as a kind of translator, mediating between the two parties, making one sensible to the other.”[1] He specifies that the interface is more semantic than concretely technological. Branden Hookway, whose own book on the topic is forthcoming from MIT Press, agrees that the interface does its work “not as a technology in itself but as the zone or threshold that must be worked through in order to be able to relate to technology.”[2] Alexander Galloway, too, in his Interface Effect, specifies that the interface is not a thing, but a “process or a translation” – one that draws its qualities from the “things” it’s translating between, but which also has its own properties that are independent from those things.[3]

Media scholar Johanna Drucker picks up on Hookway’s spatial “zone” and “threshold” metaphors; she regards the interface as an environment, a “space of affordances and possibilities” that informs how people interact with it. It’s a “set of conditions, structured relations, that allow certain behaviors, actions, readings, events to occur.”[4] Drucker, like Hookway, is focused on the human-computer interface; both scholars emphasize how the interface, through its affordances, structures the user’s agency and identity, and how it constructs him or her as a “subject,” which is different from a mere “user,” in that the subject’s identity is informed by historical, cultural, linguistic, political forces, and that identity shifts in response to contextual variations. An individual might be one “subject” when controlling her home Nest thermostat from her smartphone at work, another when interacting with an ATM at her bank, and yet another when annotating archival objects in a “participatory archive.”

via Novell

But the zone between the machine and the person – that perceptible, manipulable skin – isn’t the only zone of interface. Computers, for instance, are commonly modeled as a “stack” of protocols of varying concreteness or abstraction – from the physical Ethernet hardware to the abstract application interface.[5] There are interfaces between the various layers of this stack. As Galloway explains, “the interface is a general technique of mediation evident at all levels”; that “technique” might be graphical, sonic, motion-tracking, gestural (using hands or mice), tangible/embodied (involving the physical embodiment of data, their embeddedness in real spaces, and users’ bodily interaction), or of another variety.[6] Regardless of its means of operation, Galloway continues, the interface “facilitates the way of thinking that tends to pitch things in terms of ‘levels’ or ‘layers’ in the first place.”
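To make that layered model a bit more concrete, here’s a toy sketch of my own – the class names and behavior below are invented for illustration, not drawn from Galloway or Johnson – showing how each layer of a stack exposes only a narrow interface to the layer above it, while everything beneath that interface stays hidden:

```python
# Toy illustration of a protocol "stack": each layer talks only to the layer
# directly beneath it, through a deliberately narrow interface.

class PhysicalLayer:
    """Stands in for the Ethernet hardware: it just moves raw bytes."""
    def transmit(self, frame: bytes) -> None:
        print(f"[physical] sending {len(frame)} bytes on the wire")

class TransportLayer:
    """Stands in for TCP/IP: it chops messages into segments and hands them down."""
    def __init__(self, physical: PhysicalLayer, segment_size: int = 8):
        self.physical = physical
        self.segment_size = segment_size

    def send(self, message: bytes) -> None:
        for i in range(0, len(message), self.segment_size):
            self.physical.transmit(message[i:i + self.segment_size])

class ApplicationLayer:
    """Stands in for the GUI or application: it deals only in human-readable text."""
    def __init__(self, transport: TransportLayer):
        self.transport = transport

    def post(self, text: str) -> None:
        self.transport.send(text.encode("utf-8"))

# The "user" of this stack only ever calls post(); the segmenting and
# transmitting going on below are, for all practical purposes, black-boxed.
app = ApplicationLayer(TransportLayer(PhysicalLayer()))
app.post("Hello, interface.")
```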

Much, if not all, of what’s “beneath” or “behind” the graphical user interface (GUI) is “black boxed,” inaccessible and unintelligible to us. And that obfuscation is in large part intentional and necessary. As I write this, for instance, I’m focusing my attention on the words on-screen, on the GUI, rather than bothering myself with the chatter between my TCP/IP transport software and my Ethernet hardware. And even the ubiquity and familiarity of computer screens like the one before me, and the one I carry around in my pocket – and the intuitive means by which I interact with them – tend to naturalize and “disappear” the interface itself. That obfuscation, while necessary, is also risky; we forget just how much these layered interfaces are structuring our communication and sociality, how they’re delimiting our agency and defining our identities. As Galloway reminds us, it’s crucial to consider “the translation of ideological force into data structures and symbolic logic”; the user interface, the code, the protocols, and the physical infrastructure “beneath” them are all political.[7] 

That process of translation can call attention to itself when, say, something breaks – or when, say,  the NYPL updates its catalog and we have to learn a new visual language and means of navigation. When Johnson wrote his book in 1997, he investigated the desktop, windows, links, text, and intelligent agents as interfacing elements. But even then – before this age of smartphones and smart cities – he acknowledged that it was becoming “more and more difficult to imagine the dataspace at our fingertips.”[8] 

Representing all that information is going to require a new visual language… We can already see the first stirrings of this new form in recent interface designs that have moved beyond the two dimensional desktop metaphor into more immersive digital environments: town squares, shopping malls, personal assistants, living rooms. As the infosphere continues its exponential growth, the metaphors used to describe it will also grow in both scale and complexity.

Today, the bazaar-as-interface isn’t merely a computing metaphor; it’s not merely a trope for conceptualizing and graphically modeling an online store or a discussion board. Media facades, sensor-embedded pathways and thresholds, responsive architecture, public interactives and the like have transformed our physical environments into interfaces in their own right. But interfaces to what? What is this “city” that we’re supposed to relate to? And how “deep” does that relation go? What technical operations are taking place down the “stack” of networked urban infrastructures that we could possibly interface with?

~~~~~~~~~~~~~~~~~~

[1] Steven Johnson, Interface Culture: How New Technology Transforms the Way We Create and Communicate (New York: Harper Edge, 1997): 14.

[2] Branden Hookway, The Interface, Dissertation (Princeton University, 2011): 14.

[3] Alexander R. Galloway, The Interface Effect (Malden, MA: Polity, 2012): 33.

[4] Johanna Drucker, “Performative Materiality And Theoretical Approaches To Interface” Digital Humanities Quarterly 7:1 (2013): 31.

[5] Rory Solomon, one of my advisees at The New School, wrote a brilliant thesis – The Stack: A Media Archaeology of the Computer Program – on the history of the stack metaphor.  Part of his work appears in “Last In, First Out: Network Archaeology of the Stack” Amodern 2 (October 2013).

[6] Galloway 54. See also Paul Dourish, Where the Action Is: The Foundations of Embodied Interaction (Cambridge, MA: MIT Press, 2001); Eva Hornecker & Jacob Buur, “Getting a Grip on Tangible Interaction: A Framework on Physical Space and Social Interaction” Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems (2006): 437-446.

[7] Galloway 70.

[8] Johnson 18.

[ …In the article from which these passages are drawn, I talk here about “urban interfaces”… ]


INTERFACE CRITIQUE

Of course folks concerned with “usability” in interface design have a list of criteria that make for an “effective” and “efficient” interface. Jakob Nielsen offers ten heuristics, which I’ve collapsed into eight:

  1. Flexibility, control, and efficiency of use. Does the interface allow the user to progress efficiently in pursuit of her goal, whatever it may be? Does it present its “content” clearly, and via an organizational scheme that makes sense? Does the user feel as if she’s in control of her experience? Does she feel free to explore the interface? Does she feel “trapped” if she takes a “wrong turn”? Is the interface flexible and efficient for both novice and experienced users?
  2. Intuitive design. Does the site clearly communicate its goals, functions, and affordances? Does it provide labels and instructions for use, or are those instructions “embodied” in the platform’s design? Is it clear how the user can interact with the platform (e.g., where to click)? Does the platform employ concepts, terminology, graphics, workflows, etc., that are familiar to the user, and that help her to easily understand whether and how the platform can enable her to achieve her goals? Does the interface “follow real-world conventions, making information appear in a natural and logical order”?
  3. Consistency and standards. This is related to “intuition”: if your site uses terminology, graphics, processes, etc., consistently throughout, and if those variables are consistent with other familiar applications, the user will more likely be able to engage with the platform intuitively.
  4. Recognition rather than recall. Are instructions for how to use the system consistently visible, or easily retrieved? Does the user have to recall cues from other parts of the platform in order to navigate through a particular page or passage?
  5. Visibility of system status. Does the system provide adequate feedback about its functioning? Does the user know when something is “processing,” and how long she’ll have to wait? Does she have a sense of where she is oriented within the “grand scheme” of the system? Does she know how to get “back home”?
  6. Aesthetics. Do the platform’s “look and feel” support its functionality? Is its design as simple as possible (assuming simplicity is a desirable goal)? Are there extraneous elements that could be either eliminated or hidden? Is the platform legible, are its other sensory outputs easily discerned, and is it “accessibly” designed?
  7. Error prevention and recovery. Does the platform include “guide rails” to keep users from going down wrong paths? Does it help users recognize errors, via non-specialized language; develop a clear diagnosis; consider possible solutions; and recover relatively painlessly?
  8. Help and documentation. Can users find help — or contextual information about the platform and its “contents” — easily?

And here’s how I’ve tailored these criteria for your Archival Interface Critique assignment: I’ve asked you to examine

  • your chosen site’s composition, organization, and aesthetics;
  • how it structures the user’s experience and navigation, and how intuitive and “seamless” that interaction is;
  • furthermore, how desirable would “seamless” interaction be in this instance (perhaps it would be helpful and instructive to show some seams?);
  • how the site contextualizes the archival material (e.g., does it provide or link to robust metadata, does it “animate” the material?);
  • how the site “hierarchizes” the presentation of information (e.g., does it allow users to “dig deeper” for more data if they want it?);
  • the availability of documentation and help for users who want or need it.

Consider the needs of various user groups and user scenarios, and try to put yourself in their positions as you navigate through your site. 

[I also strongly recommend that you check out Edward Tufte’s critique of National Gallery kiosks and the old iPhone interface.]

As Drucker explains, such ways of thinking about human-computer interaction (HCI) are “framed by values central to engineering.”[9] The evaluation of interfaces involves “scenarios that chunk tasks and behaviors into carefully segmented decision trees” and “endlessly iterative cycles of ‘task specification’ and ‘deliverables’”; and it tends to equate the “human” in “human-computer interaction” with an efficiency-minded “user.” Drucker proposes instead a humanities-oriented interface theory that embraces other values and experiences – ambiguity, serendipity, productive inefficiency – and draws on insights from “interface design, behavioral cognition, and ergonomics, approaches to reading and human processing,” and the history of graphic design and communication, with particular attention to “the semantics of visual form.”[10]

"Chunking," via Raw
“Chunking,” via Raw

Yet while Drucker proposes that we move away from engineering-oriented methods of critique, we do have to acknowledge that the engineering of our material interfaces does factor into how those interfaces structure “human [and machine] processing.” We need to take into consideration the materiality, scale, location, and orientation of the interface. For instance, where is the screen sited; how big is it; is it oriented in landscape or portrait mode; what kinds of viewing practices does it promote; does it allow for interactivity, and if so, in what form? Where are the speakers, what is their reach, and what kind of listening practices do they foster? Or, where are the sensors that read our gestures, how sensitive are they, and how do they condition our movements? Furthermore, what are our possible modalities of interaction with the interface? Do we merely look at dynamically presented data? Can we touch the screen and make things happen? Can we speak into the air and expect it to hear us, or do we have to press a button to awaken Siri? Can we gesticulate “naturally,” or do we have to wear a special glove, or carry a special wand, in order for it to recognize our movements?

Now, returning to Drucker’s recommendations: we can learn a lot from comics in regard to the semantics of visual form. Scott McCloud’s canonical Understanding Comics offers a useful model for thinking about graphic reading practices. In examining interfaces, too, we should attend to variables of basic composition (e.g. the size, shape, position, etc., of elements on the screen), as well as how they work together across time and space: how we read across panels and pages, and how we trace themes and topics as we travel through the graphic interface. The temporal and spatial dimensions of our navigation could be sign-posted for us via “bread-crumb trails that mark [our] place in a hierarchy or a sequence of moves or events,” or devices that allow us to shift scales and levels of granularity, and, all the while, maintain awareness of how closely we’re “zoomed in” and how much context the interface is providing.[11] This sense of orientation – of understanding where one is within the “grand scheme” of the interface, or the landscape or timeframe it’s representing – plays a key role in determining our user-subject’s identity and agency. Margaret Hedstrom describes the archival interface as a mediating and orienting structure:

[It is] both a metaphor for archivists’ roles as intermediaries between documentary evidence and its readers[,] and a tangible set of structures and tools that place archival documents in a context and provide an interpretative framework.[12]

Speaking of frameworks: Drucker also recommends that we employ “frame analysis,” which would address how the various boxes, buttons, and applications – as well as the different modalities of presentation (audio, visual, textual, etc.) on our interfaces – conceptually and graphically “chunk, isolate, segment, [and] distinguish one activity or application from another.”[13] Ideally, these assemblages will all hang together under a coherent, overarching “conceptual organization, or graphic frame,” and a sufficient number of common reference points.[14] Such cohesion will enable us to read across “a multiplicity of worlds, phenomena, representations, arguments, presentations… and media modalities” – but in critiquing how this cohesion comes about, we should also pay attention to the “nodes, edges, tangents, trajectories, hinges, bends, pipelines, [and] portals” that frame and link – and perhaps create friction between – the components of our interfaces.[15]

Data model, via Sonoe Nakasone & Carolyn Sheffield in D-Lib

Reading “beneath” those graphic frames provides insight into the data models structuring our interaction with the technology. Those sliders, dialogue boxes, drop-down menus and other GUI elements indicate how the data has been modeled on the “back-end” – as a qualitative or quantitative value, as a set of discrete entities or a continuum, as an open field or a set of controlled choices, etc. “[C]ontent models, forms of classification, taxonomy, or information organization,” Drucker argues, “embody ideology. Ontologies are ideologies,… as naming, ordering, and parameterizing are interpretive acts that enact their view of knowledge, reality, and experience and give it form.”[16] The design of an interface thus isn’t simply about efficiently arranging elements and structuring users’ behavior; interface design also models – perhaps unwittingly, in some cases – an epistemology and a method of interpretation.
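To make that point concrete, here’s a minimal, hypothetical sketch – the field names and vocabularies are invented, not taken from any actual archive or from Drucker – of two ways the same archival object might be modeled on the back-end. One forces every object into a fixed menu of categories and a single certain date; the other leaves room for ambiguity and uncertainty. Each is an interpretive act, an enacted “view of knowledge”:

```python
# Hypothetical data models for the same archival object; the categories and
# fields below are invented for illustration.
from dataclasses import dataclass, field
from enum import Enum

class ObjectType(Enum):              # a controlled vocabulary: only these categories "exist"
    PHOTOGRAPH = "photograph"
    LETTER = "letter"
    EPHEMERA = "ephemera"            # everything else collapses into a catch-all

@dataclass
class ClosedRecord:                  # surfaces on the front end as drop-downs and required fields
    title: str
    object_type: ObjectType
    year: int                        # demands a single, certain date

@dataclass
class OpenRecord:                    # surfaces as free-text fields and optional notes
    title: str
    object_type: str                 # ambiguity is representable; consistency is lost
    date_statement: str              # e.g. "circa 1930s?"
    notes: list[str] = field(default_factory=list)

# The "open" model admits the doubt that the "closed" one structurally forbids.
OpenRecord(title="Untitled street scene", object_type="photograph? postcard?",
           date_statement="circa 1930s")
```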

The archival interface, Hedstrom argues, is “a site where power is negotiated and exercised,… consciously or unconsciously, over documents and their representations, over access to them, over actual and potential uses of archives, and over memory”; it’s a “boundary where archivists… negotiate over what constitutes legitimate evidence of the past.” She suggests that archives consider how their interfaces “might serve as devices for exposing, rather than obscuring, the imprint that archivists leave on records through appraisal and descriptive practices” — and, I would add, exposing (where appropriate) user engagement with the archives, too.[17]

Yet, returning to “the stack,” Galloway reminds us that, while the interface does serve to “translate” between the data model and the GUI, and between other levels of the stack, that translation isn’t inert. He speaks of the “fundamental incommensurability between any two points or thresholds on the continuum of layers”; we thus use allegories or metaphors – the desktop, the file folder, or even our mental image of the city-as-network – to ostensibly “resolve” the “tension between the machinic and the narrative,… the fluid and the fixed, the digital and the analog.”[18] In our interface critique, then, we should also consider what acts of interpretive translation or allegorization are taking place at those hinges or portals between layers of interfaces.[19]

The interface, as we said earlier, also shapes our identities and defines our agency as users, or subjects. We should thus examine how the interface enunciates – what language it uses to “frame” its content into fundamental categories, to whom it speaks and how, what point(s) of view are tacitly or explicitly adopted. Of course there’s an ideology to this enunciation, too: Drucker encourages us to consider “who speaks for whom”; “what is not able to be said,” “what is excluded, impossible, not present, not able to be articulated given [the interface’s] structures”?[20] How the interface addresses, or fails to address us – and how its underlying database categorizes us into what Galloway calls “cybertypes” – has the potential to shape how we understand our social roles and expected behavior. We could identify in our critique whom the interface addresses, how it does so, and how those users play into their “cybertype” subjectivities.

Hedstrom suggests that by reflecting the politics of archival practice within the archival interface, the archives could speak to a “larger community of scholars”:

By providing insights into the tensions between theory and practice, supplying information about institutional appraisal policies, and providing means for users to discover the archivists on the other side of the interface, archivists could begin to share power with a larger community of scholars… Users will be able to judge the authenticity, reliability, and weight of documentary evidence for themselves using the tools, norms, and methodologies of their time, if we provide the contextual information about appraisal and description that they will need to make these judgments[21].

Visible Archive Series Browser from Mitchell Whitelaw on Vimeo.

We also, finally, should consider what is not made visible or otherwise perceptible. What is simply not representable through a graphic or gestural user interface, on a zoomable map, via data visualization or sonification? While some content or levels of the protocol stack may be intentionally hidden – for security or intellectual property reasons, for instance – Galloway argues that some things are simply unrepresentable, in large part because we have yet to create “adequate visualizations” of our network culture and control society.[22] There’s been significant experimentation in the visualization of archival material — but particularly in light of our tendency to fetishize the data visualization, we should also consider the possibility that some aspects of our archives, and of archival experience, are simply not, and will never be, machine-readable. In our interface critique, then, we might imagine what dimensions of the historical world, of our historical record, and of human experience simply cannot be translated or interfaced. What do we not want to “make sensible” to the machine?

~~~~~~~~~~~~~~~~~~

[9] Johanna Drucker, “Humanities Approaches to Interface Theory” Culture Machine 12 (2011): 1.

[10] Drucker 2013: 27.

[11] Drucker 2011: 18.

[12] Margaret Hedstrom, “Archives, Memory, and Interfaces with the Past” Archival Science 2 (2002): 21.

[13] Drucker 2011: 15.

[14] Drucker 2011: 18.

[15] Drucker 2011: 14.

[16] Drucker 2013: ¶42.

[17] Hedstrom 22, 26, 33. She proposes that “new interfaces could serve as gateways to structured information about appraisal and selection. To build such interfaces, however, archivists would have to share their insights about how they interpreted appraisal theory, expose their debates and discussions about appraisal values, underline constraints of technology and politics hampering an ideal appraisal decision from implementation, and, most importantly, reveal their uncertainties about, and discomfort with, the choices that confront them” (37). Furthermore, the archival interface could serve as a site for archivists to reflect on how their practices of archival description — their decisions “about which records to describe in greater detail, and which to digitize for remote access,” and what vocabulary to use in describing those materials — generate an “interpretative spin” (38, 40).

[18] Galloway 76.

[19] Even what seem to be purely aesthetic decisions, or matters of style, can function allegorically or rhetorically; Galloway speaks of “windowing,” for instance – of screens dissected into panels or frames that offer multiple perspectives simultaneously, as opposed to the sequenced presentation of filmic montage – as a stylistic embodiment of the “cultural logic of computation.” While his analysis focuses on the television show 24, we can easily see similar modes of presentation on our smartphone screens and in smart cities’ control centers. This window motif represents “the distributed network as aesthetic construction”; it translates the network structure into a form, a look (Galloway 110, 117).

[20] Drucker 2013. Drew Hemment and Anthony Townsend also encourage us to pay attention to disenfranchised populations: “how can we create opportunities to engage every citizen in the development and revitalization of the Smart City?” (“Here Come the Smart Citizens,” in Hemment & Townsend, Eds., Smart Citizens (Future Everything Publications, 2013): 3).

[21] Hedstrom 37, 43.

[22] Galloway 91.


Interface Critique

via CitySDK

I’m writing a new piece for Places on prospective/speculative “interfaces to the smart city” — or points of human contact with the “urban operating system.” As I explained to the editors,

I’d like to consider these prototyped urban interfaces‘ IxD — with outputs including maps, data visualizations, photos, sounds, etc.; and inputs ranging from GUIs and touchscreens to voice and gestural interfaces — and how that interactive experience both reflects and informs urban dwellers’ relationships to their cities (and obfuscates some aspects of the city), and shapes their identities as urban “subjects.” I’m particularly interested in our single-minded focus on screens (gaaaahh!): are there other, non-“glowing rectangle” / “pictures under glass“-oriented platforms we can use to mediate our future-experiences of our future-cities?


Methodolatry and the Art of Measure: The New Wave of Urban Data Science

“Methodolatry and the Art of Measure: The New Wave of Urban Data Science,” Places (November 5, 2013).


New Places Article on Data and Methodolatry in Urban Research

MAP Architects, Svalbard Architectural Expedition, 2013. [Photo by MAP Architects]

I published a new article in Places on data science, aesthetics, and politics, and the fetishization of method in urban research. Check out “Methodolatry and the Art of Measure: The New Wave of Urban Data Science.”


Infrastructural Tourism

“Infrastructural Tourism,” Places (July 1, 2013)

on multisensorial means of “experiencing” infrastructure


Cartographic Excess

Address Is Approximate from The Theory on Vimeo.

Last week we drew to a close our second year of Urban Media Archaeology, a graduate studio in which my 15 students; my Technical Associate, the ever capable Rory Solomon; and I work together to map historic media networks. Last fall, in the inaugural section of the class, our students mapped everything from the history of walking tours, to newspaper company headquarters, to Daily News delivery infrastructure, to the social lives of East Village zines, to key sites in carrier pigeon history. This semester the projects were no less innovative; we mapped “media actors” in the debate over the Atlantic Yards development; data-driven systems of graffiti removal; the spatial history of the Young Filmmakers Foundation (intended to seed a larger map of youth media organizations in New York); the evolution of street signs in Manhattan since the  late 19th century; the old West Side Cowboys of Chelsea (this project, one of my favorites, involved “ontography“; see below); the changing landscape of independent bookstores in Manhattan and Brooklyn; the social networks of the Soho Fluxus community; 100 years’ history of theaters around Union Square; key individuals and places in the history of subway graffiti; the spatial history of the Bell Telephone system;  the forgotten histories of official memorials and murals in East Harlem; surveillance networks in Corona, Queens; locations in Woody Allen’s films; and historic jazz performance venues.

Jonathan’s Last of the West Side Cowboys: http://urt.parsons.edu/urt/research/record/938
Duncan’s Media Actors of Atlantic Yards: http://urt.parsons.edu/urt/research/project/urban-media-archaeology/atlantic-yards-media-actors

We learned this year, as we did last year, about media archaeology, about maps as media, about the spatial- and digital humanities, about archival research, and about design methods and prototyping strategies. And this year we added a new lesson on “spatial data modeling” to help students translate their conceptual models into “database language.”
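For the curious, here’s a minimal, hypothetical sketch of what that translation can look like – this is not our actual mapping system, and the table and field names are invented – with a many-to-many junction table linking mapped records to themes, and with nullable coordinates and fuzzy date ranges rather than a single required point and year:

```python
# A hypothetical "database language" rendering of a conceptual model for mapped
# records; not the class's actual system. Nullable fields let partial or fuzzy
# knowledge remain storable.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE record (
    id          INTEGER PRIMARY KEY,
    title       TEXT NOT NULL,
    latitude    REAL,               -- may be unknown, or deliberately left unmapped
    longitude   REAL,
    year_start  INTEGER,            -- a span rather than one certain date
    year_end    INTEGER
);
CREATE TABLE theme (
    id    INTEGER PRIMARY KEY,
    name  TEXT NOT NULL             -- e.g. 'delivery infrastructure'
);
CREATE TABLE record_theme (         -- many-to-many: a record can support several themes
    record_id  INTEGER REFERENCES record(id),
    theme_id   INTEGER REFERENCES theme(id),
    PRIMARY KEY (record_id, theme_id)
);
""")

# A record with an approximate date range and no fixed point still fits the model.
conn.execute("INSERT INTO record (title, year_start, year_end) VALUES (?, ?, ?)",
             ("West Side Cowboy route (approximate)", 1900, 1941))
conn.commit()
```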

We also learned quite a few things that could never be spelled out in the obligatory “learning goals” section of a syllabus. I’ll try to describe a few of those hard-to-articulate lessons here:

Learning Doesn’t Happen in 15-Week Chunks. Many students commented that they had a hard time knowing when to stop researching. They had a tough time gauging when they had enough archival images, enough data to discern a spatial pattern of some sort, enough contextual information for each of the records they plotted on the map. Many of my students spent weeks sorting through official data sets or in various archives, either frustrated that they hadn’t yet tracked down the “magic data set” or the “magic box” of archival treasure, or thrilled to have found a wealth of great material — and in many cases, eventually overwhelmed by the volume of material they gathered. Whatever their individual experiences, their projects almost invariably felt incomplete at the end of the semester. “If I had another week, I would’ve….”

We had to come to terms with the fact that learning — the most natural, meaningful kind — doesn’t stop at the end of the semester. The most exciting projects, with the most potential for future development, will inevitably remain undone — much to the benefit of those who come after us, who’ll take inspiration from our work and build upon the foundations we’ve laid. DH projects in particular require that we recalibrate our internal self-critics to take into account the fact that our work is often only one small part of a larger, longer-term endeavor. At the same time, this “recalibration” doesn’t diminish our sense of personal accountability; knowing that others — our contemporary and future collaborators — are counting on us, and knowing that our audience is larger than our professors and ourselves, we appreciate that there’s a lot more at stake than an end-of-semester grade.

Learning Can Be Deeper, and More Rewarding, When It Pushes Us Out of Our Comfort Zone. Some students commented that venturing into new research venues and employing new research skills; having to gather the pieces to construct a “multimodal,” spatial argument; and realizing that they needed to have something to show for all their work, resulted in an unprecedentedly deep level of engagement. “I’ve never been this invested in, or learned this much from, a research project before.” I suggested in our last class that most folks can BS their way through a 20-page seminar paper, but when you have to show stuff to back up your claims — when you have to plot records to support a spatial argument — your research will require getting your hands dirty.

Some students also learned not to fear the error message. We created our own mapping system, and asked students to construct their own data models, so they could see what’s behind the social media systems that they regularly use — systems that have been naturalized and seamlessly integrated into their everyday lives. Opening the black box, if you’ll pardon the cliche, requires that we test its limits, that we often push the system until it breaks. And when we do break something — when we encounter one of those ugly “TemplateSyntaxError” messages — rather than panic or give up, we can actually learn to hear what the system is telling us, and work with others in class — most likely those with a different set of technical skills than our own — to fix the problem. These small defeats and victories tell us a lot about how a system works. And ultimately we learn more from these error-pitted processes, uncomfortable though they might be, than from those that proceed perfectly smoothly.

Even the “Objective” Calls for Reflexivity. Many students came to realize that the primary materials they were gathering were shaped largely by choices they made — which streets to travel, which times of day to visit, which people to talk to, etc. Even data — either self-generated or pulled from an “open data” bank — aren’t immune to researcher bias or subjectivity. We came to be keenly aware of how data and other research materials come into being, and are discovered by ourselves and other researchers — and many students decided to build themselves, through self-reflexive methodology maps, into their own projects. As David Bodenhamer writes in “The Potential of the Spatial Humanities” (in The Spatial Humanities, Indiana University Press, 2010): “A humanities GIS-facilitated understanding of society and culture may ultimately make its contribution in this way, by embracing a new, reflexive epistemology that integrates the multiple voices, views, and memories…” (29).

Mapping Isn’t Always About Big Data — Or, Mapping ≠ GIS. Several students began their projects looking for the data “motherlode” that would reveal clear temporal and spatial patterns and allow them to make big, profound, earth-shattering claims. “I intend to correlate huge changes in socioeconomic data to movements in these massive infrastructures.” “I plan to develop a comprehensive map of all the people and places involved in this social movement.” When, by mid-semester, they hadn’t experienced the “data epiphanies” they were waiting for, many were either apologetic (for not looking hard enough or in the right places), frustrated, or defeated.

I wondered if perhaps, influenced by the prevalence of GIS and “data fetishization,” and by the way so many of us tend to use the terms “mapping” and “data visualization” interchangeably, my students assumed that their maps had to show large-scale patterns in quantitative data. Many of them had forgotten that the personal and the partial, the subjective and the speculative, are also mappable — and worthy of being mapped. The “GIS mindset” was stifling to some students. As Bodenhamer puts it, GIS can appear “reductionist in its epistemology. It forces data into categories; it defines space in limited and literal ways instead of the metaphorical frames that are equally reflective of human experience” (24).

Eventually coming to terms with the “non-systematicity” of their conclusions, accepting that they wouldn’t be creating a heat map showing conclusive evidence of quantifiable macro-scale changes, they recognized the breadth and flexibility of mapping as a method. We can map the qualitative, the necessarily incomplete and inconclusive, the fuzzy. And we can even infuse a little poetry into our data models (as many of my students did by developing creative many-to-many relationships) to capture the nuance and nebulousness of our subjects.

Our Maps Can Contain an Implicit Critique of Mapping Itself. Despite whatever opportunities we might have to detourn the map and its underlying database, we sometimes run up against the operative or epistemological limitations of these systems. Not all stories are spatial. Not everything can be plotted to a point, line, or area on a map. And not everything can be translated into a data model — at least not without losing something. Many of my students offered amazingly insightful reflections on the values and limitations of mapping as a method and a mode of presentation in their own projects:

I think proximity is a point to be made, but not the whole point, and it might push users to get caught up in spatial observations. (via)

I’ve noticed that all the presentations involved navigation tasks that would seem obscure without the author walking us through them. Why do the maps come so alive when we have a guide walking us through them? (via)

At its most basic, my conceptual point about Atlantic Yards is to look at as much as you can. When you see my map from far enough away, it looks like all of Brooklyn is covered in green circles, but zoom in further and there are gaps begging to be filled in. And I think for now at least, that’s how it’s supposed to look. (via)

They’ve come to accept that some gaps are supposed to be there, that their projects will be defined by holes and incompleteness. In recognizing what maps can and can’t do well, we’ve been able to look at them more critically as media, and at mapping as a method — as only one of myriad media and methods at our disposal.

Bodenhamer advocates for the integration of multiple media formats — “a letter, memoir, photograph, painting, oral account, video” — and types of research material — “oral testimony, anthology, memoir, biography, images, natural history and everything you might ever want to say about a place” — into what he calls “deep maps,” maps that are “visual, time-based, and structurally open” (26-8).

They are genuinely multimedia and multilayered. They do not seek authority or objectivity but involve negotiation between insiders and outsiders, experts and contributors, over what is represented and how. Framed as a conversation and not a statement, deep maps are inherently unstable, continually unfolding and changing in response to new data, new perspectives, and new insights (26-7).

Whether we regard mapping as the “umbrella” strategy encompassing these other methods and modalities, or mapping as only one component of a “deep” spatially-oriented methodology, it’s important that we think critically about each component of our “toolbox” — that we resist the temptation to fetishize the data or the map, that we appreciate what each of our tools can and can’t do, that we devise a strategy by which these various tools can work in a complementary fashion to do justice to the rich spatial and temporal dimensions of our subjects of inquiry.