Inter-Institutional Undergraduate Intensive, with M.E. Luka and Annette Markham
The acronym IRL, or “in real life,” purports to distinguish our “real,” unmediated, bodily existence from everything “unreal” that happens online. But the distinction isn’t quite so tidy; there’s hardly any aspect of our social and material worlds that remains untouched by digital technologies. How can we deploy the methods and sensibilities of ethnography, anthropology’s signature method, to better understand how the digital shapes our relationships, our institutions, our economies, and even our selves? How might we deploy digital tools *in* that investigation? And how can we supplement anthropological methods with those from media studies, critical data studies, infrastructure studies, design, creative technology, and a variety of other fields? In this intensive intersession workshop, we’ll join with the Digital Ethnography Research Centre at RMIT in Melbourne, Australia, and the University of Toronto to engage in the globally networked exploration and application of digital ethnography. Students will be invited to complete course readings and screenings, small ethnographic exercises, and an individual or collaborative final project: a multimedia documentary of, or a field guide to, a digital environment, community, or phenomenon. For the first two weeks in January, we’ll meet intensively for lecture, discussion, and collaborative exercises; students should expect to dedicate roughly five hours each day to either class meetings or asynchronous engagement, plus light homework. Students will then apply their learning through independently designed and executed digital fieldwork, which they’ll complete during the first half of the spring semester. The sensibilities and skills developed in this course will be highly relevant in a variety of fields, as most institutions and industries in the post-pandemic world will have to reimagine themselves to more integrally incorporate digital technologies.
Maps reveal, delineate, verify, orient, navigate, anticipate, historicize, conceal, persuade, and, on occasion, even lie. From the earliest spatial representations in cave paintings and on clay tablets, to the predictive climate visualizations and crime maps and mobile cartographic apps of today and tomorrow, maps have offered far more than an objective representation of a stable reality. In this hybrid theory-practice studio we’ll examine maps as artifacts, as texts, as media; and mapping as a method useful in the social sciences, humanities, arts, and design. We’ll explore the past, present, and future – across myriad geographic and cultural contexts – of our techniques and technologies for mapping space and time. In the process, we’ll address various critical frameworks for analyzing the rhetorics, poetics, politics, and epistemologies of spatial and temporal maps. Throughout the semester we’ll also experiment with a variety of critical mapping tools and methods, from techniques of critical cartography to indigenous practices to sensory mapping to time-lining, using both analog and digital approaches. Students are encouraged to use the course, which will be supported by a skilled cartographer teaching assistant, to supplement their fieldwork, to develop their own thesis / dissertation projects, or to advance other personal research and creative pursuits. Course requirements include: individual map critiques; lab exercises; and individual research-based, critical-creative “atlases” composed of maps in a variety of formats.
Although humans have been thinking about and theorizing about media since antiquity, we have only recently – within the past century – begun to systematically, even scientifically, study the media. We now consider everything from the media’s role in society to its psychological “effects” on those who consume it; from the content of the messages it disseminates to the ideologies underlying its production and consumption. In this course we will look at the past, present, and future of media research: what do researchers think worthy of study, and what methods do they use to study it? We’ll ask ourselves similar questions: What, in our mediated environment, deserves study? What can and should we study, and why should anybody care? How can we match our own intellectual and creative interests to particular research subjects and methodologies? What does “research” mean in this digital age, this era of ubiquitous information? What tools can we use to study the media, and what kinds of information and knowledge can those tools yield? How do we determine the credibility of a source or generate our own data? Furthermore, how can we use the media themselves in the study of various social or psychological phenomena? And, conversely, how can we use research to help guide our media production? Our consideration of these questions throughout the semester will prepare us to create a grant proposal for either a media studies research project or a research-based media production project.
An impassioned political preamble seems obligatory in even the most modest of academic addresses these days. I’m going to skip it – but I will say this: As so much of the world is wondering what forms of collective action and communication resonate amidst so much political and epistemological upheaval, Alberto Corsín Jiménez and Adolfo Estalella offer a model for thinking recursively about how we constitute and act as publics – particularly as publics in cities, which are commonly our stages for political action and are now, some believe, the only remaining spatial scale at which we can work to maintain “sanctuaries” for democratic ideals.
Drawing on work in the free and open-source software community, Jiménez and Estalella propose that the community’s commitment to sharing, to the commons, and to the “democratizing potential of technology” can be productively, if not seamlessly, transferred to the urban realm. The software developer’s operational unit, the “prototype,” is also of potential utility for ethnographers and other researchers, whether they’re studying software and urbanism, or not.
The prototype is a proof-of-concept meant to be built upon. It’s a model from which we can construct things, ideas, publics, and politics. It’s a technical form and a social form encompassing a methodology, an epistemology, an ontology, and even an ideology. In free culture communities, the prototype embodies openness and adaptability, and it calls for iteration and transference. Our authors describe the Inteligencias Colectivas, for instance, who are interested in “evolutionizing” urban prototypical forms and knowledges. They acknowledge the “architectural intelligences behind mundane objects,” then imagine their “resonances, extensions, and analogies” in other contexts and environments. The portability of the prototype renders it more widely accessible, thereby potentially democratizing design – but only if the design is effectively communicated, rendered intelligible and actionable, to other communities. Thus, Jiménez notes, the archive is an integral ingredient of the prototype; it’s the “ur-design,” the “infra-ontology” of the prototype. The archive captures not only a prototype’s composition, but also its “biography”: its historical contexts, its evolution, its social relations of production and use.
Different kinds of objects and practices call for different forms of documentation. To be rendered “fully legible,” Jiménez says, “some intelligences require a multi-layered combination of iconographic techniques,” like photographs, sketches, and video recordings. The choice of particular files, formats, and languages depends not only on their representational affordances and pedagogical potential, but also their politics: proprietary software and restrictive file formats, for instance, would limit a prototype’s accessibility and mutability and contradict the whole open-source ethos. The ethnographer’s experimentation with such a range of modalities in his or her own work likewise represents an aesthetic and political choice – to extend ethnographic work into what Michael Fischer calls “third spaces of articulation.”
While we learned from our MakerBot fetish phase that prototyping doesn’t always elicit criticality, it does have the potential to engender self-reflexivity, to create what Christopher Kelty calls “recursive publics”: publics that are “vitally concerned with the material and practical maintenance and modification of the technical, legal,… and conceptual means of [their] own existence as a public.” In their conscious choices of democratic, egalitarian modes of action and communication, he says, they “speak to existing forms of power through the production of actually existing alternatives.” One would like to think that scholars and reflective practitioners are also “vitally concerned” with the material conditions of their own knowledge and cultural production, but this of course isn’t always the case: we turn a blind eye to our underpaid adjuncts, indebted graduate students, and the free editorial labor and exorbitant subscription fees that sustain our scholarly publishing systems. Yet Jiménez and Estalella found that their fieldwork with free culture activists in Madrid required a “form of ethnography that takes its own changing infrastructure as an object of inquiry.” We all would do well to consider how the evolving technological and social infrastructures of the academy, of our disciplines – and the larger culture within which they exist – necessitate new knowledge infrastructures, new methods and modes of dissemination. Jiménez and Estalella felt compelled to transform their study of free culture prototypes into “a prototype for free culture itself.” Through their “Taking Critique Out for a Walk” series, they talked about the city while talking through it, and they sought means to “open-source the very architecture of education.” Such recursive thinking generated for them new modes of scholarly practice and publicity.
I’d argue that recursion should involve “vital concern” not only with the methods and political-economic conditions of one’s own practice – but also with the temporal depth of that recursivity. What’s the history of recursion’s loop? What’s the prototype of the prototype? We tend to metaphorize complicated systems – like cities and brains – in terms of the prevailing technologies of the time. At various points we’ve likened cognition and urban operations to the workings of hydraulic or electrical systems, or computers. And we often draw parallels between these two ur-metaphors: cities seem to work an awful lot like computers, and computer programmers draw inspiration from architecture. When we see free and open culture in our cities, it bears a resemblance to open-source software.
Over the past two decades, we’ve seen several iterations – prototypes, we might say – of open-source architecture and urban design. Paperhouses and WikiHouse offer freely available, modifiable plans. Pritzker Prize winner Alejandro Aravena has released four of his “half-a-house” designs into the public domain, allowing for their unrestricted use and adaptation. Carlo Ratti and Matthew Claudel proposed their own model of “open source architecture” in 2011, and, before them, Architecture for Humanity’s Cameron Sinclair aimed to bring open-source principles to humanitarian design. In the early aughts, Usman Haque experimented with open-source architecture using inflatables, and then he and Matthew Fuller joined forces to prototype an “Urban Versioning System.” In 2003, Dennis Kaspori proposed an “open source [design] practice” that allows for the “collective,” iterative and evolutionary “development of solutions for spatial issues involving housing, mobility, greenspace, urban renewal, and so on.” He’s speaking free culture’s language.
Even well before the age of open-source, in the 1970s, Cedric Price prototyped his anticipatory architecture, and Christopher Alexander offered up his “pattern language,” which was also built on principles of democratic (albeit moralistic), evolutionary design. Stewart Brand, meanwhile, supplied a whole host of prototypes for living in his Whole Earth Catalog. And having been raised in Amish country in Pennsylvania, and having attended a few barn raisings in my time, I’d say the Amish have been prototyping free and open-source design for a few centuries. Without AutoCAD. Rahul Mehrotra tells of similarly minded design principles at the Kumbh Mela Hindu pilgrimage, which involves the construction of a massive, modular temporary city every several years – and which has, for well over a millennium, embraced evolutionary, recombinant, accessible, recursive practices.
It’s also helpful to recall that the widespread use of architectural and urban plans is a relatively recent phenomenon, as architectural historian Mario Carpo argues. Before the rise of print, designers were also craftsmen, and they typically spread ideas orally and learned their trade through apprenticeships. The idea of the architect as a professional wielding specialized drawings is a product of new professional organizations and curricula, like that at the École des Beaux-Arts, founded in the 19th century. As Michael Guggenheim argues, throughout much of history, “people could invent products at home, or produce ad-hoc solutions to practical problems…with a piece of wood and some nails. The problem,” he says, “is rather, that there are few historical sources and…little historical interest in these processes, since they do not lend themselves to the writing of histories.”
Recognizing this long history of prototypes to the prototype serves not only to remind us of the historical specificity of our contemporary metaphors, like the city-as-software, but also to highlight the way those metaphors shape particular urban practices and epistemologies and politics. Those metaphors also determine how knowledges are documented and transformed into historical sources for future archival researchers – and into manuals and “instructables” for contemporary practitioners. If a city is a computer, and if its urban practices are executed like software, the archive of those urban intelligences is more likely to adopt a computational logic, too.
The Ciudad Escuela web platform invites free culture projects to “open the ‘sources’ of their own technical, legal, pedagogical, associative and political capacities,” to render them legible through those “multi-layered…iconographic techniques” we discussed earlier. They’re encouraged to “legitimize their practices vis-à-vis local authorities and neighboring communities” by “explicating and standardizing [their] tacit urban knowledge,” and by “verifying” their skills with Mozilla’s Open Badges technology. But what does it mean to tie legitimation to standardization? What happens when particular cultures – embodied, situated, perhaps performative or oral, or governed by codes of privacy – translate their knowledge into the archival logics of the web and the credentialing economies of civic tech? Do we restrict what constitutes urban knowledge and its “repertoire” if it has to make itself iconographic: YouTube-able, diagram-able, data-visualizable?
I’d encourage us to also think recursively about the technological metaphors we use to make sense of things like urban cultures, or to explain the methods and media we employ as scholars and practitioners. Those metaphors embody epistemologies and politics that recursively reinscribe themselves in the archive. If culture is software, our cultural institutions and infrastructures – from universities to urban “laboratories” – seem like computers. And any knowledges that happen to be in the wrong file format just might not compute.
 Free urban culture has been around for quite some time, too: consider the centuries-long history of public libraries, mechanics’ institutes, athenaeums – many of which promoted the democratization of productive knowledge, itself a prototype for “maker culture.”
 We’ve come to recognize that universal transparency and openness are not universal goods – particularly for vulnerable populations, indigenous groups, and marginalized communities. Visibility, openness can offer legitimation, but it can also invite exploitation.
I spoke about “Interfacing Urban Intelligence” at the “Code + the City” workshop, which took place in Ireland on September 3-4, 2014. My talk was drawn from my article of the same title, which I published in Places last year. You can watch a video of my talk here. I have a habit of giving talks with wet hair, it seems.
Several days ago I posted drafts of a few sections of an article I’m writing for Places. I’m exploring speculative interfaces to the “smart city” — the windows that supposedly allow us to peer into, and potentially interact with, our future-cities’ operating systems. The methodological part of that work may or may not appear in the final publication — but it’ll certainly prove useful for the “Digital Archives” studio I’m teaching this semester. I’ve asked students to critique existing interfaces to archival collections as part of their preparation for our work, which involves proposing “platforms for highlighting and recontextualizing noteworthy…material [in The New School’s archives] – particularly material regarding the history of media study and media-making at [the university].”
So, here’s a revision, and “archival customization,” of my post from January 10. First, I explain how we might determine what constitutes an interface, and then I propose a methodology for critiquing interfaces — particularly archival interfaces.
In his 1997 Interface Culture, Steven Johnson explains that an interface is “software that shapes the interaction between user and computer. The interface serves as a kind of translator, mediating between the two parties, making one sensible to the other.” He specifies that the interface is more semantic than concretely technological. Branden Hookway, whose own book on the topic is forthcoming from MIT Press, agrees that the interface does its work “not as a technology in itself but as the zone or threshold that must be worked through in order to be able to relate to technology.” Alexander Galloway, too, in The Interface Effect, specifies that the interface is not a thing, but a “process or a translation” – one that draws its qualities from the “things” it’s translating between, but which also has its own properties that are independent from those things.
Media scholar Johanna Drucker picks up on Hookway’s spatial “zone” and “threshold” metaphors; she regards the interface as an environment, a “space of affordances and possibilities” that informs how people interact with it. It’s a “set of conditions, structured relations, that allow certain behaviors, actions, readings, events to occur.” Drucker, like Hookway, is focused on the human-computer interface; both scholars emphasize how the interface, through its affordances, structures the user’s agency and identity, and how it constructs him or her as a “subject,” which is different from a mere “user,” in that the subject’s identity is informed by historical, cultural, linguistic, political forces, and that identity shifts in response to contextual variations. An individual might be one “subject” when controlling her home Nest thermostat from her smartphone at work, another when interacting with an ATM at her bank, and yet another when annotating archival objects in a “participatory archive.”
But the zone between the machine and the person – that perceptible, manipulable skin – isn’t the only zone of interface. Computers, for instance, are commonly modeled as a “stack” of protocols of varying concreteness or abstraction – from the physical Ethernet hardware to the abstract application interface. There are interfaces between the various layers of this stack. As Galloway explains, “the interface is a general technique of mediation evident at all levels”; that “technique” might be graphical, sonic, motion-tracking, gestural (using hands or mice), tangible/embodied (involving the physical embodiment of data, their embeddedness in real spaces, and users’ bodily interaction), or of another variety. Regardless of its means of operation, Galloway continues, the interface “facilitates the way of thinking that tends to pitch things in terms of ‘levels’ or ‘layers’ in the first place.”
Much, if not all, of what’s “beneath” or “behind” the graphical user interface (GUI) is “black boxed,” inaccessible and unintelligible to us. And that obfuscation is in large part intentional and necessary. As I write this, for instance, I’m focusing my attention on the words on-screen, on the GUI, rather than bothering myself with the chatter between my TCP/IP transport software and my Ethernet hardware. And even the ubiquity and familiarity of computer screens like the one before me, and the one I carry around in my pocket – and the intuitive means by which I interact with them – tend to naturalize and “disappear” the interface itself. That obfuscation, while necessary, is also risky; we forget just how much these layered interfaces are structuring our communication and sociality, how they’re delimiting our agency and defining our identities. As Galloway reminds us, it’s crucial to consider “the translation of ideological force into data structures and symbolic logic”; the user interface, the code, the protocols, and the physical infrastructure “beneath” them are all political.
That process of translation can call attention to itself when, say, something breaks – or when, say, the NYPL updates its catalog and we have to learn a new visual language and means of navigation. When Johnson wrote his book in 1997, he investigated the desktop, windows, links, text, and intelligent agents as interfacing elements. But even then – before this age of smartphones and smart cities – he acknowledged that it was becoming “more and more difficult to imagine the dataspace at our fingertips.”
Representing all that information is going to require a new visual language… We can already see the first stirrings of this new form in recent interface designs that have moved beyond the two dimensional desktop metaphor into more immersive digital environments: town squares, shopping malls, personal assistants, living rooms. As the infosphere continues its exponential growth, the metaphors used to describe it will also grow in both scale and complexity.
Today, the bazaar-as-interface isn’t merely a computing metaphor; it’s not merely a trope for conceptualizing and graphically modeling an online store or a discussion board. Media facades, sensor-embedded pathways and thresholds, responsive architecture, public interactives and the like have transformed our physical environments into interfaces in their own right. But interfaces to what? What is this “city” that we’re supposed to relate to? And how “deep” does that relation go? What technical operations are taking place down the “stack” of networked urban infrastructures that we could possibly interface with?
 Steven Johnson, Interface Culture: How New Technology Transforms the Way We Create and Communicate (New York: Harper Edge, 1997): 14.
 Branden Hookway, The Interface, dissertation (Princeton University, 2011): 14.
 Alexander R. Galloway, The Interface Effect (Malden, MA: Polity, 2012): 33.
 Rory Solomon, one of my advisees at The New School, wrote a brilliant thesis – The Stack: A Media Archaeology of the Computer Program – on the history of the stack metaphor. Part of his work appears in “Last In, First Out: Network Archaeology of the Stack” Amodern 2 (October 2013).
 Galloway 54. See also Paul Dourish, Where the Action Is: The Foundations of Embodied Interaction (Cambridge, MA: MIT Press, 2001); Eva Hornecker & Jacob Buur, “Getting a Grip on Tangible Interaction: A Framework on Physical Space and Social Interaction,” Proceedings of the ACM CHI 2006 Conference on Human Factors in Computing Systems (2006): 437-446.
 Galloway 70.
 Johnson 18.
[ …In the article from which these passages are drawn, I talk here about “urban interfaces”… ]
Of course folks concerned with “usability” in interface design have a list of criteria that make for an “effective” and “efficient” interface. Jakob Nielsen offers ten heuristics, which I’ve collapsed into eight:
Flexibility, control, and efficiency of use. Does the interface allow the user to progress efficiently in pursuit of her goal, whatever it may be? Does it present its “content” clearly, and via an organizational scheme that makes sense? Does the user feel as if she’s in control of her experience? Does she feel free to explore the interface? Or does she feel “trapped” if she takes a “wrong turn”? Is the interface flexible and efficient for both novice and experienced users?
Intuitive design. Does the site clearly communicate its goals, functions, and affordances? Does it provide labels and instructions for use, or are those instructions “embodied” in the platform’s design? Is it clear how the user can interact with the platform (e.g., where to click)? Does the platform employ concepts, terminology, graphics, workflows, etc., that are familiar to the user, and that help her to easily understand whether and how the platform can enable her to achieve her goals? Does the interface “follow real-world conventions, making information appear in a natural and logical order”?
Consistency and standards. This is related to “intuition”: if your site uses terminology, graphics, processes, etc., consistently throughout, and if those variables are consistent with other familiar applications, the user will more likely be able to engage with the platform intuitively.
Recognition rather than recall. Are instructions for how to use the system consistently visible, or easily retrieved? Does the user have to recall cues from other parts of the platform in order to navigate through a particular page or passage?
Visibility of system status. Does the system provide adequate feedback about its functioning? Does the user know when something is “processing,” and how long she’ll have to wait? Does she have a sense of where she is oriented within the “grand scheme” of the system? Does she know how to get “back home”?
Aesthetics. Do the platform’s “look and feel” support its functionality? Is its design as simple as possible (assuming simplicity is a desirable goal)? Are there extraneous elements that could be either eliminated or hidden? Is the platform legible, are its other sensory outputs easily discerned, and is it “accessibly” designed?
Error prevention and recovery. Does the platform include “guide rails” to keep users from going down wrong paths? Does it help users recognize errors, via non-specialized language; develop a clear diagnosis; consider possible solutions; and recover relatively painlessly?
Help and documentation. Can users find help — or contextual information about the platform and its “contents” — easily?
And here’s how I’ve tailored these criteria for your Archival Interface Critique assignment: I’ve asked you to examine
your chosen site’s composition, organization, and aesthetics;
how it structures the user’s experience and navigation, and how intuitive and “seamless” that interaction is;
how desirable “seamless” interaction would be in this instance (perhaps it would be helpful and instructive to show some seams?);
how the site contextualizes the archival material (e.g., does it provide or link to robust metadata, does it “animate” the material?);
how the site “hierarchizes” the presentation of information (e.g., does it allow users to “dig deeper” for more data if they want it?);
the availability of documentation and help for users who want or need it.
Consider the needs of various user groups and user scenarios, and try to put yourself in their positions as you navigate through your site.
As Drucker explains, such ways of thinking about human-computer interaction (HCI) are framed by “values central to engineering.” The evaluation of interfaces involves “scenarios that chunk tasks and behaviors into carefully segmented decision trees” and “endlessly iterative cycles of ‘task specification’ and ‘deliverables’”; and it tends to equate the “human” in “human-computer interaction” with an efficiency-minded “user.” Drucker proposes instead a humanities-oriented interface theory that embraces other values and experiences – ambiguity, serendipity, productive inefficiency – and draws on insights from “interface design, behavioral cognition, and ergonomics, approaches to reading and human processing,” and the history of graphic design and communication, with particular attention to “the semantics of visual form.”
Yet while Drucker proposes that we move away from engineering-oriented methods of critique, we do have to acknowledge that the engineering of our material interfaces does factor into how those interfaces structure “human [and machine] processing.” We need to take into consideration the materiality, scale, location, and orientation of the interface. For instance, where is the screen sited; how big is it; is it oriented in landscape or portrait mode; what kinds of viewing practices does it promote; does it allow for interactivity, and if so, in what form? Where are the speakers, what is their reach, and what kind of listening practices do they foster? Or, where are the sensors that read our gestures, how sensitive are they, and how do they condition our movements? Furthermore, what are our possible modalities of interaction with the interface? Do we merely look at dynamically presented data? Can we touch the screen and make things happen? Can we speak into the air and expect it to hear us, or do we have to press a button to awaken Siri? Can we gesticulate “naturally,” or do we have to wear a special glove, or carry a special wand, in order for it to recognize our movements?
Now, returning to Drucker’s recommendations: we can learn a lot from comics in regard to the semantics of visual form. Scott McCloud’s canonical Understanding Comics offers a useful model for thinking about graphic reading practices. In examining interfaces, too, we should attend to variables of basic composition (e.g. the size, shape, position, etc., of elements on the screen), as well as how they work together across time and space: how we read across panels and pages, and how we trace themes and topics as we travel through the graphic interface. The temporal and spatial dimensions of our navigation could be sign-posted for us via “bread-crumb trails that mark [our] place in a hierarchy or a sequence of moves or events,” or devices that allow us to shift scales and levels of granularity, and, all the while, maintain awareness of how closely we’re “zoomed in” and how much context the interface is providing. This sense of orientation – of understanding where one is within the “grand scheme” of the interface, or the landscape or timeframe it’s representing – plays a key role in determining our user-subject’s identity and agency. Margaret Hedstrom describes the archival interface as a mediating and orienting structure:
[It is] both a metaphor for archivists’ roles as intermediaries between documentary evidence and its readers[,] and a tangible set of structures and tools that place archival documents in a context and provide an interpretative framework.
Speaking of frameworks: Drucker also recommends that we employ “frame analysis,” which would address how the various boxes, buttons, and applications – as well as the different modalities of presentation (audio, visual, textual, etc.) on our interfaces – conceptually and graphically “chunk, isolate, segment, [and] distinguish one activity or application from another.” Ideally, these assemblages will all hang together under a coherent, overarching “conceptual organization, or graphic frame,” and a sufficient number of common reference points. Such cohesion will enable us to read across “a multiplicity of worlds, phenomena, representations, arguments, presentations… and media modalities” – but in critiquing how this cohesion comes about, we should also pay attention to the “nodes, edges, tangents, trajectories, hinges, bends, pipelines, [and] portals” that frame and link – and perhaps create friction between – the components of our interfaces.
Reading “beneath” those graphic frames provides insight into the data models structuring our interaction with the technology. Those sliders, dialogue boxes, drop-down menus and other GUI elements indicate how the data has been modeled on the “back-end” – as a qualitative or quantitative value, as a set of discrete entities or a continuum, as an open field or a set of controlled choices, etc. “[C]ontent models, forms of classification, taxonomy, or information organization,” Drucker argues, “embody ideology. Ontologies are ideologies,… as naming, ordering, and parameterizing are interpretive acts that enact their view of knowledge, reality, and experience and give it form.” The design of an interface thus isn’t simply about efficiently arranging elements and structuring users’ behavior; interface design also models – perhaps unwittingly, in some cases – an epistemology and a method of interpretation.
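To make this concrete, here is a minimal, entirely hypothetical sketch (in Python) of the kind of “back-end” data model Drucker has in mind. The record type, field names, and controlled vocabulary are all invented for illustration; the point is simply that each field type is an interpretive commitment that the GUI then mirrors as a text box, drop-down menu, or slider:

```python
from dataclasses import dataclass
from enum import Enum

class Condition(Enum):
    # A controlled vocabulary: the schema decides which states of
    # preservation are nameable at all (the drop-down menu).
    GOOD = "good"
    FAIR = "fair"
    POOR = "poor"

@dataclass
class ArchivalRecord:
    title: str            # an open field: free-form, uncontrolled text
    year: int             # a discrete quantitative value: no "circa 1920s"
    condition: Condition  # a closed set of choices: "fair" by whose judgment?
    completeness: float   # a continuum (0.0-1.0): the slider, inviting
                          # a false precision

# The model cannot express uncertainty about the date, or a condition
# outside its three sanctioned terms; naming and parameterizing are
# interpretive acts given form.
record = ArchivalRecord(
    title="Letter, author unknown",
    year=1923,
    condition=Condition.FAIR,
    completeness=0.8,
)
print(record.condition.value)
```

Each widget on the front end (text box, number field, menu, slider) is the graphic trace of one of these typed commitments.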
The archival interface, Hedstrom argues, is “a site where power is negotiated and exercised,… consciously or unconsciously, over documents and their representations, over access to them, over actual and potential uses of archives, and over memory”; it’s a “boundary where archivists… negotiate over what constitutes legitimate evidence of the past.” She suggests that archives consider how their interfaces “might serve as devices for exposing, rather than obscuring, the imprint that archivists leave on records through appraisal and descriptive practices” — and, I would add, exposing (where appropriate) users’ engagement with the archives, too.
Yet, returning to “the stack,” Galloway reminds us that, while the interface does serve to “translate” between the data model and the GUI, and between other levels of the stack, that translation isn’t inert. He speaks of the “fundamental incommensurability between any two points or thresholds on the continuum of layers”; we thus use allegories or metaphors – the desktop, the file folder, or even our mental image of the city-as-network – to ostensibly “resolve” the “tension between the machinic and the narrative,… the fluid and the fixed, the digital and the analog.” In our interface critique, then, we should also consider what acts of interpretive translation or allegorization are taking place at those hinges or portals between layers of interfaces.
The interface, as we said earlier, also shapes our identities and defines our agency as users, or subjects. We should thus examine how the interface enunciates – what language it uses to “frame” its content into fundamental categories, to whom it speaks and how, what point(s) of view are tacitly or explicitly adopted. Of course there’s an ideology to this enunciation, too: Drucker encourages us to consider “who speaks for whom”; “what is not able to be said,” “what is excluded, impossible, not present, not able to be articulated given [the interface’s] structures”? How the interface addresses, or fails to address, us – and how its underlying database categorizes us into what Galloway calls “cybertypes” – has the potential to shape how we understand our social roles and expected behavior. We could identify in our critique whom the interface addresses, how it does so, and how those users play into their “cybertype” subjectivities.
Hedstrom suggests that by reflecting the politics of archival practice within the archival interface, the archives could speak to a “larger community of scholars”:
By providing insights into the tensions between theory and practice, supplying information about institutional appraisal policies, and providing means for users to discover the archivists on the other side of the interface, archivists could begin to share power with a larger community of scholars… Users will be able to judge the authenticity, reliability, and weight of documentary evidence for themselves using the tools, norms, and methodologies of their time, if we provide the contextual information about appraisal and description that they will need to make these judgments.
We also, finally, should consider what is not made visible or otherwise perceptible. What is simply not representable through a graphic or gestural user interface, on a zoomable map, via data visualization or sonification? While some content or levels of the protocol stack may be intentionally hidden – for security or intellectual property reasons, for instance – Galloway argues that some things are simply unrepresentable, in large part because we have yet to create “adequate visualizations” of our network culture and control society. There’s been significant experimentation in the visualization of archival material — but particularly in light of our tendency to fetishize the data visualization, we should also consider the possibility that some aspects of our archives, and of archival experience, are simply not, and will never be, machine-readable. In our interface critique, then, we might imagine what dimensions of the historical world, of our historical record, and of human experience simply cannot be translated or interfaced. What do we not want to “make sensible” to the machine?
 Margaret Hedstrom, “Archives, Memory, and Interfaces with the Past,” Archival Science 2 (2002): 21.
 Drucker 2011: 15.
 Drucker 2011: 18.
 Drucker 2011: 14.
 Drucker 2013: ¶42.
 Hedstrom 22, 26, 33. She proposes that “new interfaces could serve as gateways to structured information about appraisal and selection. To build such interfaces, however, archivists would have to share their insights about how they interpreted appraisal theory, expose their debates and discussions about appraisal values, underline constraints of technology and politics hampering an ideal appraisal decision from implementation, and, most importantly, reveal their uncertainties about, and discomfort with, the choices that confront them” (37). Furthermore, the archival interface could serve as a site for archivists to reflect on how their practices of archival description — their decisions “about which records to describe in greater detail, and which to digitize for remote access,” and what vocabulary to use in describing those materials — generate an “interpretative spin” (38, 40).
 Galloway 76.
 Even what seem to be purely aesthetic decisions, or matters of style, can function allegorically or rhetorically; Galloway speaks of “windowing,” for instance – of screens dissected into panels or frames that offer multiple perspectives simultaneously, as opposed to the sequenced presentation of filmic montage – as a stylistic embodiment of the “cultural logic of computation.” While his analysis focuses on the television show 24, we can easily see similar modes of presentation on our smartphone screens and in smart cities’ control centers. This window motif represents “the distributed network as aesthetic construction”; it translates the network structure into a form, a look (Galloway 110, 117).
 Drucker 2013. Drew Hemment and Anthony Townsend also encourage us to pay attention to disenfranchised populations: “how can we create opportunities to engage every citizen in the development and revitalization of the smart city?” (“Here Come the Smart Citizens,” in Hemment & Townsend, eds., Smart Citizens (Future Everything Publications, 2013): 3).
I’m writing a new piece for Places on prospective/speculative “interfaces to the smart city” — or points of human contact with the “urban operating system.” As I explained to the editors,
I’d like to consider these prototyped urban interfaces’ IxD — with outputs including maps, data visualizations, photos, sounds, etc.; and inputs ranging from GUIs and touchscreens to voice and gestural interfaces — and how that interactive experience both reflects and informs urban dwellers’ relationships to their cities (and obfuscates some aspects of the city), and shapes their identities as urban “subjects.” I’m particularly interested in our single-minded focus on screens (gaaaahh!): are there other, non-“glowing rectangle” / “pictures under glass”-oriented platforms we can use to mediate our future-experiences of our future-cities?