Categories: Publications

Digital Frictions Series @ Urban Omnibus

I guest-edited the Digital Frictions series for the Architectural League of New York’s Urban Omnibus. With the expert assistance of editors Mariana Mogilevich and Josh McWhirter, we released 11 installments between September 2019 and January 2020. Here’s my introduction: “Where Code Meets Concrete,” Urban Omnibus (September 4, 2019)

Categories: Presentations

Mapwashing: Co-Opting Civic Design (2019)

“Mapwashing: Co-Opting Civic Design,” Columbia Graduate School of Architecture, Planning and Preservation Lectures in Planning, September 17, 2019

“Mapwashing,” Keynote Address, “Radical Cartography Now: Digital, Artistic, and Social Justice Approaches to Mapping,” Brown University, September 27, 2019

“Mapwashing,” Department of Urban Studies and Planning, MIT, November 22, 2019

[Scribd embed: Mattern Mapwashing2]

Categories: Publications

Networked Dream Worlds (on 5G)

“Networked Dream Worlds,” Real Life Magazine (July 8, 2019).

I wasn’t able to include acknowledgments in this piece, but lots of folks deserve thanks:

Categories: Presentations

Mapping Urban Media Infrastructures (2019)

PhD Workshop with Rebecca Ross, Birkbeck, University of London, May 30, 2019

Categories: Presentations

Data Fantasies and Operational Facts: 5G’s Infrastructural Epistemologies (2019)

Society for Cinema and Media Studies Conference, March 17, 2019

Designing for the Unknown Series, CONNECT Research Centre, Trinity College, Dublin, Ireland, May 26, 2019

Data Materiality Series, Birkbeck, University of London, May 31, 2019

[Scribd embed: Mattern 5GTalk DublinLondon Updated]

Categories: Blog

Sidewalk Labs’s Material Co-Design

In 2014 Google debuted Material Design, a set of user-interface design guidelines “inspired by the physical world and its textures, including how they reflect light and cast shadows. Material surfaces reimagine the mediums of paper and ink” by presenting layers of content as if they’re solid surfaces of equal thickness, stacked atop one another. The system unified the user experience across all Google products; as Mark Wilson wrote in Fast Company, “Google has become a second reality inside touch-screen devices – complete with its own rules of logic and physics – and if Google has its way, it will eventually break free of touch screens to quite literally shape the world around us.”

Well, Google has been having its way. The following year, before it metamorphosed into Alphabet, Inc., Google launched Sidewalk Labs, an “urban innovation” division dedicated to solving urban problems with technology. Sidewalk adopted its own form of Material Design for the “physical world,” but their materials were concrete and cameras rather than paper and ink. Sidewalk’s leaders broadcast their intention to reimagine cities “from the Internet up.” I speculated back in 2016 that Sidewalk might be involved in the master-planning of New York’s new Hudson Yards development, home to the company’s headquarters, but the following year, Sidewalk shifted its attention to Toronto, where they’re currently working with Waterfront Toronto, a government organization, to build a new district infused with Alphabet tech (from tall timber buildings to sensor-embedded streets).

That effort has faced significant resistance. Critics have challenged the terms of the public-private partnership, the opacity of the development process, and the lack of clarity over data governance. Given the obfuscatory nature of Sidewalk’s closed-door deals and proprietary technologies, it might be surprising that they’ve adopted relatively transparent, intelligible, accessible tools of self-defense: paper and ink. Material Design as rhetorical redoubt.

Cards and Chipboard

Consider Sidewalk’s Toronto HQ 307, where the company tests out and solicits ideas for the smart neighborhood taking shape nearby. This “experimental workspace” features an assortment of unassuming analog media – post-its, index cards, markers (not to mention face-to-face communication between trained liaisons and visitors) – amidst scaffoldings of unfinished plywood, chipboard, and cork. Paper signage is affixed to walls, tacked to bulletin boards, or hung from movable metal dividers. These layered sheets of white and pastel – which resemble a Material Design interface – mix simple stencil and sans serif typefaces to tell us about everything from the project concept, job creation, and Ontario’s timber industry, to Sidewalk’s proposals for data use, environmental sustainability, accessibility, and social infrastructures – including myriad hypothetical health and wellness spaces.

All 307 photos by me

Embedded among the printed text and still images we find the occasional screen, where we can explore maps, animations, and Sidewalk’s design tools. Humble sawhorse tables host an array of annotated cards, each offering feedback or questions from fellow 307 “co-designers.” A nearby stack of blank cards and pile of markers call upon us to add our own voices: “A question I have about facial recognition is: how will sensors and cameras be used wrt safety / security in data collection,” one visitor noted. An “efficient unit prototype,” a sample apartment made of cardboard, invites visitors to annotate its features with colored post-its: “corridor too narrow for wheelchairs,” “I like looking at something while I cook.”

The design studio Daily tous les jours says it has created a “set of colorful, rolling, modular, stackable, playful interactive tools” that turn 307 into a place for “input + discussion + experimentation + new ideas + action.” Not everything is interactive, however. Some features simply announce their presence and expect us to deal with it. Perched high in a corner is a pair of gadgets, a white cylinder and a grey box, which, a stencil-and-sans-serif sign-on-a-stand informs us, is a Numina sensor: “Hello,” it says. “A low-resolution image of you may be taken in this area. This image is immediately de-identified and is only used to calibrate and validate pedestrian and vehicle movement operating in the area.” No opt-in or opt-out. No solicitation of your opinion or request for consent. Numina simply is, as-is, and it may have already taken your photo – before you even knew of its existence.

The open room is painted primer-white, and at the center of its concrete floor is a platform composed of tessellated chipboard hexagons, each with an embedded LED light. Those hexagons reappear outside, as pavers made from concrete infused with powderized post-consumer glass (thus, the signs say, keeping glass waste out of landfills, and decreasing the CO2 emissions generated through concrete production). Inspired by the work of the French Institute of Science and Technology for Transport, Development and Networks, these pavers deliver “four key features”: modularity, which allows for easy removal and replacement; heating, which hastens the melting of snow and ice; lighting, which can be programmed to offer real-time cues to regulate traffic and communicate street use; and permeability, which aids with stormwater management. A sign affixed to the side of a shipping container – the modern model of modularity – identifies these hexagons as the “holy grail of pavement.” It’d be sacrilegious not to exploit their virtues.

307’s humble material objects – concrete and chipboard, post-its and powderized glass – are rhetorical tools suggesting that, here, cities are “workshopped,” urban futures and data plans are co-created from a mix of proprietary platforms and public knowledge. Except for that Numina sensor: that’s registering your presence and taking your picture, whether you like it or not.

Cells and Sensors

“Sensors have become a part of our daily lives. CCTVs. Traffic cameras. Transit card readers. Bike lane counters. Wi-Fi access points. Occupancy sensors that activate lights or open doors. These are all examples of how digital technologies can integrate into our physical world to help make our public spaces more comfortable, responsive, and efficient.” So acknowledged Sidewalk Labs in its Designing for Digital Transparency in the Public Realm project, developed through Spring of 2019 and formally released on April 19.

Open government advocate Bianca Wylie decried Sidewalk’s “passive language, without agency.” They speak “as though the sensors and cameras just sprouted into the world, without creators or purchasers, without contracts or decisions.” Of course Sidewalk, its parent company Alphabet, and partners like Numina have put – and are putting – many of those sensors in place. This act of “subtle [sensor] normalization,” Wylie says, “works to guarantee that a quantified city becomes a social norm.” It’s already happening, Sidewalk reminds us; we might as well accept the facts: “Look around you as you go about your day, and you’ll start to see how much sensing and data collection infrastructure is already all around you — but there is very little transparency around what data is being collected, by whom, and for what purpose.”

Think back to that Numina privacy notice inside 307. “Hello! A low-resolution image of you may be taken in this area.” Or it may already have been taken by the time you read this sign. What if we applied a similar tell-don’t-ask / declaration-without-consent strategy to the thousands of outdoor sensors around Quayside – and all across Toronto and other cities and towns? Boston and London are already posting signage that alerts people to the presence of digital technologies in the public realm, but those signs are often text-heavy, and they require visiting a website to gather more information. As Sidewalk team members Jacqueline Lu, Patrick Keenan, and Chelsey Colbert explain, user research shows that “few users read long, jargon-filled privacy and data collection policies.” Sidewalk asks: “What if you could quickly communicate what technology was in use in the public realm in a way that was transparent and clear without being overwhelming?”

[April 8 – DTPR Release Shareout / Creative Commons Attribution 4.0 International (CC BY 4.0). All images are linked to their sources.]
Sidewalk’s team (which contains lots of talented folks with impressive design, civic tech, and government experience) responded to this question by, first, examining existing labels and signage – from nutrition labels to street signs to surveillance camera notices – and the relevant International Organization for Standardization guidelines. They then convened co-design sessions in Toronto, New York, San Francisco, and London, which made use of cards and forms like those available in 307 to assess participants’ questions and concerns about existing technology (as much recent research has shown, those tech concerns vary widely in relation to users’ race and class, which raises questions about the demographic diversity of workshop participants). Sidewalk’s facilitators first asked participants what questions they had about existing technologies, like Bluetooth beacons and infrared sensors, then identified concerns shared within the group. They then asked participants to imagine signage systems and icons that would provide useful information about those technologies. Sidewalk polished those user proposals into their own set of icons, which they further refined through user-testing. They held periodic online “shareouts,” too, to present the evolving design.

Through this iterative user research process (undertaken in collaboration with the Soofa signage team), Sidewalk learned that the questions most important to people concern the purpose of various digital infrastructures (e.g., emergency, mobility, accessibility, enforcement, energy efficiency, and so forth; it’s not the most ontologically consistent classification system), the entity accountable for each technology, and whether that technology collects identifiable information. The co-design process seemed to begin by assuming that the presence of these technologies is a given; [edit:] while some resistance was apparently permitted, workshops seemed to be structured such that little time was spent debating whether particular digital infrastructures belonged in the public realm in the first place. The co-design process ultimately frames sensing technologies as a necessity, an inevitability. Might as well normalize them by getting to know them.

Sidewalk developed a set of icons (derived from Google’s Material icons) for each variable and decided to set those icons within a hexagon badge. “We chose the hexagon,” they explained, “because this perfect shape that occurs naturally” – from honeycombs to snowflakes – “is the most efficient way to fill a space with the least amount of material.” Plus, it’s “currently unused in our vocabulary of signage shapes and slightly resembles a stop sign – giving users a slight ‘Hey, check this out’ without forcing a stop.” (edit: Alex Gekker also noted hexagonal resonances in sci-fi gaming.) And as with the hexagonal concrete pavers planned for use on Toronto’s streets, these hexagonal icons tessellate easily, in standard patterns (again, in accordance with Material Design principles, and as James Birch notes, in a fashion quite similar to that of the NFPA 704 hazard placard).

The “purpose” and “accountable entity” cells are black and white, while the “identifiability” cell is in color: if its captured data is de-identified (e.g., by blurring recognizable information in photos), the cell is blue; if users are identifiable (as in surveillance camera footage), the cell is yellow. If a particular technology collects no identifiable data, there’s no colored cell. I must admit: I’m not quite sure what it means to be “air-” or “light-de-identified” (a breeze could dissipate any identifiable chemicals?).

A fourth cell offers a QR code and URL through which users can access, via their mobile devices, a “digital channel” that offers more detail. This transition from physical signage to digital interface again exploits the cross-platform continuity of Material Design. Online, the information is organized linearly, via a set of chained icons identifying, first, the responsible organization, the technology’s purpose, and the tech type – the same information embedded in hexagons on public signage. The next set of icons, enclosed in circles, addresses the data and its processing: the type of data, whether it’s identifiable, how it’s processed. The final set, framed in squares, pertains to data storage and access: how long, where, and how it’s stored, and who has access to it. Plus, as the Sidewalk Team explained in their April 8, 2019, shareout, “You’ll see in the digital channel [that] there’s always an area to give feedback, … a way for people to express their opinions about the technology. And hopefully that feedback actually goes to someone who can make a difference.” Yes, hopefully the platform channels concerns to an accountable entity, and that feedback prompts real dialogue – rather than merely enabling the performance of “co-design.”
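To make that signage logic concrete, here is a rough sketch of how the taxonomy described above might be modeled in code. This is my own illustrative schema, not the published DTPR standard; the field names, values, and the Numina example are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical model of the signage taxonomy described above; the field
# and value names are illustrative, not the published DTPR schema.

@dataclass
class TechnologySign:
    purpose: str                     # e.g., "mobility", "energy efficiency"
    accountable_entity: str          # who answers for the system
    identifiability: Optional[str]   # "identifiable", "de-identified", or None
    details_url: str                 # the QR code / URL "digital channel"

    def badge_color(self) -> Optional[str]:
        """Map identifiability to the colored cell described above."""
        if self.identifiability == "identifiable":
            return "yellow"
        if self.identifiability == "de-identified":
            return "blue"
        return None  # no identifiable data collected -> no colored cell

# A made-up example in the spirit of the Numina notice at 307:
numina = TechnologySign(
    purpose="pedestrian counting",
    accountable_entity="Numina",
    identifiability="de-identified",
    details_url="https://example.org/dtpr/numina",  # placeholder URL
)
print(numina.badge_color())  # -> "blue"
```

Note how, in this scheme, the “no identifiable data” case simply disappears from the badge: the absence of a colored cell is itself doing communicative work.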

“We’re not planning this just for Quayside,” the team says. “The intention from the beginning was that this would be implemented well before Quayside, and that this can really forward provocations about digital literacy and understanding the public realm at large.” They intend for their Digital Transparency in the Public Realm (DTPR) project designs – and the whole design process – to extend well beyond Toronto, too. All workshop materials, icons, and design standards are available on GitHub under a Creative Commons license. Sidewalk wants folks in other towns and cities to adopt and adapt them – and thus further advance the cause of “digital literacy.” “Our goal is to co-develop meaningful design patterns that will eventually be adopted by cities, private partners, and other institutions that are interested in improving digital transparency in public spaces,” they noted in their March 4, 2019, shareout.

Transparency implies disclosing the presence of urban technologies and the harvesting of what Sidewalk calls “urban data,” which encompass data collected in public spaces, private spaces accessible to the public, and private spaces that aren’t controlled by their occupants (what a conveniently wide jurisdiction). By “providing transparency,” Sidewalk helps people “increase their awareness” – their “literacy” – “of how digital technology in the public realm works.” That’s not an ignoble goal. Transparency, Lu, Keenan, and Colbert explain, “can empower users to meaningfully engage in what is fast-becoming a critical conversation of our time. And, equally important, transparency can nudge both users and data collectors towards best practices.” Yet I can’t help but wonder in whose interest “best” is defined here.

DTPR, an acronym likely to evoke GDPR – a recent landmark EU regulation on data protection and privacy – isn’t really about privacy or consent, or about fundamental questions regarding the presence of extractive and surveillant technologies in the public realm. Instead, DTPR, like the 307 experience, marshals Material Co-Design – its stacked colored cards and tessellated hexagons, minimalist icons and ISO standards – to aestheticize and rationalize coercion (co-optation? hegemonic persuasion? I can’t find the right word!), to frame “performative ethics” as political action. As Rob Kitchin argues, “citizen-centric” design, or co-design, “has largely acted as an empty signifier, designed to silence detractors or bring them into the fold while not altering the technocratic workings, profit-driven orientation, or ethos of stewardship … and civic paternalism … of smart city schemes.”

By becoming literate in design methods, by framing deliberation within a totalizing Material Design system, by learning how to spot urban technology and celebrating our “awareness” of its operations (undoubtedly valuable skills!), we can lose sight of the bigger questions: about whether we want cities built in the image of a corporate internet, whether all digital infrastructures are necessarily in the public interest, and whether a public’s acts of citizenship should be reduced to filling out co-design cards and training Alphabet’s algorithms.

Categories: Presentations

Sidewalk’s Sidewalk

On March 4 I joined Bianca Wylie, Beth Coleman, and Sarah Sharma — three women I admire greatly — at the McLuhan Centre for Culture and Technology at the University of Toronto for “A Pedestrian View of Sidewalk Toronto” (what’s Sidewalk Toronto, you ask?).

Here are my slides and text:

[Scribd embed: Mattern McLuhanCentrePPT]

Sidewalk’s Sidewalk: A Pedestrian View

[2] When the motorcar was new, it exercised the typical mechanical pressure of explosion and separation of functions. It broke up family life, or so it seemed, in the 1920s. It separated work and domicile, as never before. It exploded each city into a dozen suburbs, and then extended many of the forms of urban life along the highways until the open road seemed to become nonstop cities. It created the asphalt jungles, and caused 40,000 square miles of green and pleasant land to be cemented over… Streets, and even sidewalks, became too intense a scene for the casual interplay of growing up. As the city filled with mobile strangers, even next-door neighbors became strangers.

“This is the story of the motorcar,” Marshall McLuhan explained in 1964, “and it has not much longer to run.” [3] He noted in Understanding Media a “growing uneasiness about the degree to which cars have become the real population of our cities, with a resulting loss of human scale, both in power and in distance. The town planners are plotting ways and means to buy back our cities for the pedestrian from the big transportation interests.”

[4] And the arrival of a new medium was making it possible for those planners to conceive of a new, less auto-centric form of urbanity. Television – that “hot, explosive medium of social communication” – would render meaningless such phrases as “going to work” or “going shopping,” McLuhan predicted, because we would soon be able to do these things from our own homes, via video-telephone or two-way TV. Television was collapsing distance in such a way that the car’s extension of our corporeal mobility was no longer necessary – or desirable. “The tide of taste and tolerance has turned, since TV,” he explained, “to make the hot-car medium increasingly tiresome. [5] Witness the portent of the crosswalk, where the small child has power to stop a cement truck.”

[6] If we update McLuhan’s 1964 vision for Toronto of 2028, when networked, ambient technologies will have supplanted the television, that small child needn’t even worry about finding a crosswalk. Autonomous vehicles, with their lidar and sonar and multidirectional cameras, will constantly be watching for his approach, ready to stop on a dime. The crosswalk could potentially migrate, too, depending upon traffic patterns and time of day. [7] Those shifting functions would be signaled by colored LEDs embedded in the pavers. [8] And our small child’s caretakers needn’t worry about shielding him from the curb, either – because there won’t be one. Quayside’s “shared streets,” inspired by the Dutch woonerf principle, will call on pedestrians, drivers, cyclists, wheelchair users, skaters, and other moving bodies to share space and negotiate around one another’s presence. Such negotiations – of power and privilege, publicity and privacy – have long distinguished the sidewalk as a species of space. [9] And as we examine Sidewalk Labs from a pedestrian perspective, I thought it might be useful to think critically about the pedestrian commitments implied in their name.

I must admit that I’ve adopted this conceptual approach in part because I’m not an expert on the Quayside project. It’s too fast-moving a target, whose trajectory is informed by too many local politics and legacies, for me to claim any expertise from my station in New York. [10] Yet I have written quite a bit over the past several years about smart cities and urban infrastructure and, on a few occasions, Sidewalk Labs and Alphabet’s other geospatial initiatives. [11] And in the spring of 2016 I published a big piece about New York’s version of Quayside, Hudson Yards, where Sidewalk Labs was one of the first tenants, and where I speculated that the Labs might be playing some role in developing urban technology for the 28-acre, US$25-billion project, the largest private development project in U.S. history. [12] I took my Urban Intelligence class on field trips there in 2017 and 2018, where our tour guides confirmed that Sidewalk Labs was developing “digital master plans” for Hudson Yards and other cities.

[13] While Sidewalk ultimately chose you over us to build their city “from the Internet up,” New York is dealing with its own tech-urbanism drama. As you might know, last month, after facing weeks of opposition from activists, union leaders, and lawmakers, Amazon pulled out of a deal to build one of its two HQ2 campuses in Long Island City, Queens. [14] Then, just a little over a week ago, Governor Cuomo, Mayor de Blasio, and dozens of legislators and CEOs published a full-page open letter in the Times, begging Amazon to come back. Like any passionate courtship, break-up, and attempted reconciliation, the New York-Amazon negotiation has migrated across platforms – from Twitter tirades and international press, to letter-writing campaigns, [15] paper flyers and sidewalk protests.

The sidewalks here served as a medium. I’m sure McLuhan would approve of such a designation. And while this spatial medium might’ve played a critical role in quelling Amazon’s project, [16] sidewalks are also a source of inspiration for Sidewalk CEO Dan Doctoroff, who said that “Sidewalk Labs is a nod to both the rich mix of personal intersections that give great city streets their vitality and also the incredible ability throughout the history of cities to solve local challenges through innovation.”[1] He thus acknowledges that sidewalks themselves are a medium for sociality, but also suggests that, like other media technologies, sidewalks and the cities they compose are maintained and improved through disruption.

Doctoroff’s observation is, of course, nothing novel. [17] Jane Jacobs, another Torontonian, regarded streets and sidewalks as “the main public places of a city,” its “most vital organs,” whose sociality helped to maintain safety and stability. [18] And the rudimentary spatial form has been around for quite some time, as Anastasia Loukaitou-Sideris and Renia Ehrenfeucht write in their book Sidewalks.[2] [19] The first sidewalks reportedly appeared around 2000 B.C. in central Anatolia, or modern Turkey. Ancient Corinth supposedly had sidewalks, and the Romans their semita, too. [20] Yet in medieval cities and towns, pedestrians “mingled with horses, carts, and wagons.” These were among the original “shared streets,” an innovation for which we typically credit 20th-century Dutch traffic engineer Hans Monderman.

[21] By the mid-18th century, some Parisian streets had foot pavements and trottoirs, and elevated walkways lined many boulevards. [22] A few decades later, “when sidewalks were increasingly common in London,” a “border territory” emerged “between the footway and the carriageway”: the gutter, a textbook liminal space, became the place for all those who don’t belong.[3] [23] And by the early 19th century, many large cities, as part of larger public works projects, were paving and legislating sidewalks and occasionally including tax assessments for their provision. Wood and gravel eventually turned into concrete. “[S]idewalks had become important elements of the urban infrastructure, and thousands of miles of [them] were paved in American cities.”[4]

[24] The sidewalk is indeed a concrete site with its own materiality and sociality, which are shaped largely by its in-between-ness – and this very liminality demonstrates just how much is negotiated, or mediated, in this zone. A sidewalk is, and has been, home to sandwich board-hawkers and newsstands, window displays and stock tickers, graffiti and surveillance cameras. But it also mediates between ontological conditions and political positions that parallel those central to the Sidewalk Labs debate – and data-driven urbanism and private development more generally.

[25] In the nineteenth century, because street improvements were known to increase property value, they were typically paid for by abutting property owners.[5] This is why one would commonly see breaks in the sidewalk along a given block, reflecting crests and troughs of entrepreneurialism. Those sidewalks also often served as extensions of commercial space, where grocers could display their produce, peddlers could lay out their wares, and plyers of licentious trades could advertise their services. [26] But even now, when sidewalks are commonly regarded as public works, abutting property owners are still responsible for keeping them clear and in good repair. Some of those owners privatize the sidewalks, restricting particular uses and limiting access to particular people (typically those who can pay). [27] And in many cities, particularly in the developing world, sidewalks are the conduits for vibrant shadow economies and cultural networks. Annette Kim documents such uses in Ho Chi Minh City in her fantastic book, Sidewalk City.[6] For these reasons, Loukaitou-Sideris and Ehrenfeucht argue, “sidewalks are simultaneously public and parochial – open to all and yet a space of which a group feels ownership.”[7]

[28] We see similar tensions at Sidewalk Toronto, where critics and champions debate the proper division of labor between public and private – not only in the physical realm, but in the virtual one, too. There’s concern over the ownership and monetization of private data – and proposals to create data trusts or commons, [29] or to involve a trusted public institution like the Toronto Public Library. I won’t say much more about this because I assume(d) that one of my fellow panelists would be focusing on these issues.

[30] One of the primary concerns over data management is the risk incurred by already-marginalized populations. Urban sidewalks have also historically harbored their own forms of risk. They’ve mediated between subjects’ identities and either reinforced, or allowed for challenges to, traditional social hierarchies. As Loukaitou-Sideris and Ehrenfeucht describe the 19th-century sidewalk: “Acts of deference or domination were negotiated as people passed each other.”[8] People of color and low socioeconomic class were expected to step aside, into the gutter, so as to avoid impeding the smooth passage of the elite. Yet the streets were also where the oppressed practiced small acts of resistance or engaged in political demonstrations to demand equality. The sidewalks were sites of contestation and media for resistance.

[31] Meanwhile, 19th-century “women who wished to maintain middle-class propriety were relegated to private realms.” If they wished to walk the sidewalk, they typically needed an escort. But by the middle of the century, some cities had passed ordinances against sexual harassment. [32] Geographer Ayona Datta is still working today to highlight and mitigate gender-based violence on the streets of the informal settlements in India’s smart cities.[9] She’s not alone; sidewalks everywhere are still home to hate crimes and harassment – regardless of how “smart” a city is. [33] Here and elsewhere, the presence or absence of curb cuts and other accessibility measures can open up or close off a city to someone in a wheelchair, or a caretaker pushing a stroller, or a homeless person pushing all of her belongings in a cart. Sidewalk politics are issues of access and equity – and, for those in underserved areas, they’re emblems of environmental justice, too. These issues of equity and accessibility and security pertain to the virtual realm, too – and to the cloud of data that hovers over our networked urban neighborhoods.

[34] These are just a few of the ways in which, as McLuhan might say, a sidewalk serves as a medium. [35] And by considering the history and politics of this old-school medium, we might help to contextualize the ways in which Sidewalk Labs has adopted both the opportunities and risks of its namesake. Sidewalk’s sidewalk is one of modular pavers and movable street furnishings, timber canopies and traffic patterns that respond to immediate needs. None of it’s all that innovative – all of these individual technologies have existed before. [36] What’s novel here is not the TV, as McLuhan proclaimed in his time, but the data harvested and processed both to render these streets and sidewalks so hospitable and responsive, and to render their inhabitants trackable and targetable.

[37] The new crosswalk portent in tomorrow’s city is perhaps not the small child with the power to stop a cement truck, but the sovereign subject – or the collective – with the power to curtail surveillance capitalism disguised in timber and sustainable concrete, [38] pin-up boards and post-its. [39]


[1] Quoted in Alissa Walker, “The Case Against Sidewalks,” Curbed (February 7, 2018): https://www.curbed.com/2018/2/7/16980682/city-sidewalk-repair-future-walking-neighborhood.

[2] Anastasia Loukaitou-Sideris and Renia Ehrenfeucht, Sidewalks: Conflict and Negotiation Over Public Space (MIT Press, 2009).

[3] Ibid., 16.

[4] Ibid., 17–18.

[5] Ibid., 18.

[6] Annette Kim, Sidewalk City: Remapping Public Space in Ho Chi Minh City (University of Chicago Press, 2015).

[7] Loukaitou-Sideris and Ehrenfeucht, Sidewalks, 6.

[8] Loukaitou-Sideris and Ehrenfeucht, Sidewalks, 86.

[9] “Gendering the Smart City: A Subaltern Curation Network on Gender-Based Violence In India”: https://gtr.ukri.org/projects?ref=AH%2FR003866%2F1.

 

Categories: Blog

The Ethics of Automating Design

I was grateful to be invited to write about the “ethics of AI in design” — a huuuuuge topic — for the forthcoming Oxford Handbook of Ethics of Artificial Intelligence, co-edited by Markus Dubber, Frank Pasquale, and Sunit Das. And just last week I was glad to be able to share a draft of this project at the Institute for Sensing and Embedded Network Systems Engineering at Florida Atlantic University, thanks to a kind invitation from Gerald Sim and Jason Hallstrom. I’ll post my slides below, followed by the unedited manuscript, which the Oxford editors have permitted me to share.

Thanks to Kevin Rogan, who aided with all stages of research and writing, and to Ajla Aksamija, David Benjamin, Rune Madsen, and Andrew Witt, who generously responded to our queries about their own practice.

Calculative Composition: The Ethics of Automating Design

[Scribd embed: Mattern CalculativeComposit…]

For as long as fashion designers, graphic artists, industrial designers, and architects have been practicing their crafts – and even before they were labeled as such – those practices and their products have been shaped by the prevailing tools and technologies of their ages, from paper patterns to computer-aided design.[1] Artificial intelligence is merely the latest agitator, and myriad design professionals have already begun exploring its potential to transform the conceptualization, design, prototyping, production, and distribution of their work, whether menswear or modular homes. Fashion labels are mining social media to forecast trends and building intelligent apps to help consumers compare styles. Architects are amassing data – engineering requirements, CAD geometries, building performance data – to automate phases of their work. Likely to the chagrin of many graphic designers, programmers have created web platforms that allow clients to upload text and images, input a few parameters, and – voilà! – a website appears. Still other practitioners, from across the disciplines, have employed AI toward more humanitarian or sustainable ends, like custom-designing prosthetic devices, mapping out less energy-intensive supply chains, or prototyping climate-responsive architectures.

While some designers have committed to applying AI toward more ethical ends, they’ve paid comparatively little attention to the ethical means of its application – precisely those methodological issues that are of concern to organizations like AI Now and FAT (Fairness, Accountability, and Transparency in AI).[2] What, for instance, are the implications of hoovering up architectural and urban data in order to aid in the future design of more efficient buildings and neighborhoods? What are we to make of graphic design tools that normalize particular facial features or allow for the suturing of various images into new composites? And what are the implications for designers’ self-identities as professionals and political subjects when their core creative questions are turned over to the machine? This chapter will examine the ethical ends and means toward which AI-driven design has been, and perhaps could be, applied. In surveying representative design fields – fashion, product, graphic, and architectural design – I’ll examine what ethical opportunities and risks we might face when AI-driven design practice is programmed to serve the needs and desires of laborers, consumers, and clients – and when it’s applied in generating everything from luxury goods to logos to library buildings.[3]

AUTOMATING FASHION, PRODUCT, AND GRAPHIC DESIGN

We’ll start close to the body, with clothing. Fashion designers, manufacturers, and retailers are using artificial intelligence to track trends, to offer shopping advice, to test garments on different body shapes and sizes, and to allow customers to mix and match items in their wardrobes.[4] With Amazon’s Echo Look, users can document their outfits and, via its Style Check service, draw on the combined expertise of human stylists and AI (trained on social media fashion posts) to choose the most flattering options. Champions argue that these developments facilitate the representation of non-standard body types and allow consumers to fully exploit the garments in their drawers and closets, thus (hypothetically) curbing wasteful consumption.

Meanwhile, Amazon’s Lab126 team is using a generative adversarial network, or GAN, to learn about particular styles by scanning lots of examples, so that it can then generate its own rudimentary designs. IBM’s Cognitive Prints, a suite of tools developed for the fashion industry, could likewise enable designers (or even manufacturers who simply bypass human designers) to create textile patterns based on any image data set – snowflakes or rainforests, for instance – or to generate designs based on a set of parameters, whether Mandarin collars or pleats. Such capabilities raise questions about labor displacement, which has long been of concern in fashion manufacturing, where machines have been replacing human workers since the rise of the mechanized loom. Of course labor is, and has long been, a huge issue in popular and scholarly discussions of AI and automation.[5]

While automation has indeed extended from the shop floor to the design studio, few fashion ateliers fear obsolescence. Designer Zac Posen doubts that any GAN could capture the “situational, spontaneous moments of beauty,” or exploit the fortuitous accidents and aesthetic irrationalities, that are part of any organic design process.[6] What’s more, AI technologies, some say, could reinforce the unique contributions of human designers by protecting intellectual property. IBM’s Cognitive Prints, which trained on 100,000 print swatches from winning Fashion Week entries, allows designers both to search for inspiration and to “make sure their inspiration is really their own and not inadvertent plagiarism.”[7] Automated tools could also allow for bespoke design and fabrication – 3D-printed garments that are customized to fit models’ or athletes’ bodies, as well as prosthetics and rehabilitative gear.[8]

Yet of course most fashion is still mass-produced. Labor and environmental advocates argue that, in these contexts, AI could enable brands to better monitor their supply chains and thus hold themselves accountable for where they source their materials and labor. Then again, well monitored and lubricated supply chains could also simply speed up the already-unsustainably speedy world of fast fashion.

Product designers are applying similar techniques – using AI to comb social media to identify trends in sunglasses, toys, and tableware; automating the production of multiple iterations of projects for user-testing; and even exploiting users’ behavioral data to simulate those user tests or quality assurance evaluations. Such applications allow designers and manufacturers to respond to global demands for shorter product cycles and fast-changing consumer needs and desires.[9] In other words, AI helps us generate more stuff, more cheaply and quickly, and more in line with consumers’ perhaps unstated or even unrealized demands.

AI’s influence is even more immediate in the world of digital products, like e-books and apps and chatbots. Like their analog counterparts, digital designers can set particular parameters and create models based on their preferences, and algorithmic tools can churn out hundreds of options, which users can then test and designers can tweak. Seasoned interaction designer Rob Girling imagines a digital-product future in which AI is capable of modeling cultural and psychological variables through all stages of design development and use. He envisions

a future where our personal AI assistants, armed with a deep understanding of our influences, heroes, and inspirations, constantly critique our work, suggesting ideas and areas of improvement. A world where problem-solving bots help us see a problem from a variety of perspectives, through different frameworks. Where simulated users test things we’ve designed to see how they will perform in a variety of contexts and suggest improvements, before anything is even built. Where A/B testing bots are constantly looking for ways to suggest minor performance optimizations to our design work.[10]

For designers and developers aspiring to build digital products that trade in affect, Chris Butler, Director of AI at Philosophie, a software development studio, offers workshops on “problem framing, ideation, empathy mapping for the machine, confusion mapping, and prototyping.”[11] Even emotion is operationalizable in the design process and optimizable in its products.

Those AI-informed digital products then reach the market, where they perform social, cultural, and psychological work. Voice assistants call doctors and hairstylists to make appointments, and chatbots provide therapy and tutoring to clients who can’t afford – or would rather not deal with – human service providers.[12] Yet when Google unveiled its Duplex voice assistant in 2018, some observers were outraged that the technology had little empathy for the product’s human interlocutors: Duplex deceived those on the other end of the line by failing to disclose its artificiality. As Natasha Lomas lamented in TechCrunch, Google clearly lacked a “deep and nuanced appreciation of the ethical concerns at play around AI technologies that [can pass] as human – and thereby [play] lots of real people in the process.”[13] Echoing the Institute of Electrical and Electronics Engineers’ general principles for ethically aligned design, Lomas called for digital products that respect human rights and operate transparently, and for developers that hold themselves accountable for the automated decisions their products make.[14] Girling’s utopic wish list implies a whole tangle of potential accountability loopholes; his hypothetical development scenarios rely on an assemblage of simulated subjects, sites, and situations of engagement. They involve fabricated frameworks and imagined futures – each of which presents opportunities for algorithmic bias to set in, for limitations in the training data set to become reified in real-world applications.

Luckily, Girling’s firm, Artefact, recognizes that “the effects of our most celebrated products are not always positive. When you ‘move fast and break things,’ well, things get broken – or worse.”[15] So, Artefact offers a set of tarot cards that helps creators “to think about the outcomes technology can create, from unintended consequences to opportunities for positive change.” We should pause to contrast the epistemologies embedded in tarot and machine learning, to consider what it means to apply esoteric practices to atone for the shortcomings of AI’s positivism.

In the parallel field of graphic design, one of those “unintended consequences” is the potential obsolescence of the web designer altogether. “We have already seen a templatization of digital products” via “design systems,” or coded standards with defined components, like Google’s Material Design, artist-designer Rune Madsen told me. “So what happens when we start to rely on algorithms to make creative decisions?”[16] Platforms like Logojoy and Tailor Brands automate the production of logos, and Wix ADI (Artificial Design Intelligence) churns out websites.[17] Another platform, The Grid, prompts novice users to input text and imagery and to tell “Molly,” its “AI web designer,” about their goals for reach and impact. Molly will then automatically retouch and crop your photos, search through all your media to choose a complementary color palette, select layouts to fit your mix of content, and conduct a few A/B tests to assess your preferences. Molly, we’re told, is “quirky, but will never ghost you, never charge more, never miss a deadline”; in all these respects, she’s more reliable and agreeable than a human designer.[18]

But critics have found her design work to be less than inspiring. Because machine-learning algorithms “operate on historic data,” Madsen said, “they always give us more of the same” – or some new hybrid that exists in the “latent space of all existing designs,” a compression of what existed before.[19] Such derivations, he told me, are typically devoid of the affect and aspirations embedded in our most compelling logos and layouts. And they commonly bear the marks of the programs used to create them; you know a Squarespace or Wix site when you see one. For such reasons, most human graphic designers, like their counterparts in fashion, anticipate that, for the foreseeable future at least, machines and people will partner in styling the world’s websites and artbooks.[20]

AI like Google’s AutoDraw can transform designers’ moodboards and diagrams into templates and polished renderings. At Airbnb, technologists are using AI to turn their whiteboard sketches into live code, to “translate high-fidelity mock[ups] into component specifications for our engineers, and… production code into design files for iteration by our designers” – an automation of sequences that not only smooths the workflow between one design specialist and another, but also allows each contributor to spend “less time pushing pixels, more time creating.”[21] Echoing an oft-repeated theme among automation’s humanist-futurists (or are they apologists?), designer Jason Tselentis proposes that AI-driven design tools, rather than obviating human laborers, instead promise better working conditions for them: they give sedentary organic bodies “a chance to step away from the computer, whether to work by hand or just take a break from the screen.” In this second desktop revolution – after the arrival of Aldus PageMaker and other first-wave desktop publishing software in the 1980s – our new-millennium algorithms could “save human designers time and make more room in their lives for reflection and creativity.”[22] Nevertheless, designer Paula Scher predicts that, as more basic skills are automated, “entry level jobs may be lost.”[23]

Yet perhaps those entry-level skills aren’t quite as rote and rudimentary as they seem. Consider the services provided by several intelligent imaging applications: Tools like Artisto or Prisma use image recognition to identify the content in photos and videos, and then apply matching visual-effects filters. Depending on the specific data sets training our AI assistants, we could very well see a lot more walk-on-the-beach scenes draped in Gaussian blur – or many faces of color that simply don’t register as faces at all.[24] Adobe’s Sensei AI is behind product features like Adobe Scene Stitch, which allows users to patch and edit images by swapping in features from similar files in its image library; and its Face-Aware Liquify feature, which uses face recognition to “enhance a portrait or add creative character.”[25] We might question the ethical implications of reinventing photographic scenes in this age of deepfakes. And we might wonder what faces composed the training set from which Adobe’s AI learned to identify a facial norm. Whose noses and lips set the standard? What facial features are deemed to have “character,” and what sorts of sculpting constitute “enhancement”?

We might also inquire about the ethics of using AI to transform user subjectivities and user behavior into dynamic user experience (UX), which, while seeming to create more personalized products that thoughtfully anticipate user desires, also coerces longer and more predictable user engagement. As Fabricio Teixeira explains, “Websites are getting smarter and taking multiple user data points into consideration to enable more personalized experiences for visitors: time of day, where users are coming from, type of device they are accessing from, day of the week – and an ever-growing list of datapoints and signals users don’t even know about.”[26] “We could extract behavioral patterns and audience segments,” Yury Vertov proposes, “then optimize the UX for them. It’s already happening in ad targeting, where algorithms can cluster a user using implicit and explicit behavior patterns.”[27] We’re a long way from Web 1.0. Today’s websites are designed to be artificially intelligent, opportunistic, fine-tuned coercion machines.
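For a sense of how little machinery that kind of segmentation requires, here is a minimal sketch of the behavioral clustering Vertov describes, using synthetic visitor features and an off-the-shelf k-means routine. The features, segment count, and data are invented for illustration; production systems draw on far richer (and far more opaque) signals.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for the behavioral signals mentioned above.
# Columns: hour of visit, sessions per week, pages per session, mobile (0/1).
rng = np.random.default_rng(0)
visitors = np.column_stack([
    rng.integers(0, 24, 500),
    rng.poisson(3, 500),
    rng.poisson(5, 500),
    rng.integers(0, 2, 500),
]).astype(float)

# Cluster visitors into four "audience segments" (in practice you would
# scale the features first and choose the segment count more carefully).
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(visitors)

# A site could then branch its layout, copy, or nudges per segment,
# which is the personalization (or coercion) the passage describes.
for s in range(4):
    print(f"segment {s}: {np.sum(segments == s)} visitors")
```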

The application of AI across these disparate design fields raises several categories of recurring questions. First, questions about labor: will AI improve labor conditions by automating rote tasks and make it easier for creative practitioners to protect their intellectual property, or will it facilitate the pirating of others’ creative labor and eliminate jobs? And how might the automation of even “rote” tasks embed particular ideologies and biases – about what constitutes norms and standards, and for whom – and introduce the possibility of manipulation: doctored images that lie, robot voices that deceive? Second, questions about production: will AI allow for the ethical oversight of supply chains, promoting more ethical sourcing and labor; or will it simply speed up the production process, promoting ever more wasteful extraction and manufacturing, and ever more rampant consumption? And third, our survey of these design fields raises recurring questions about users’ agency and protection: do tracked behaviors and simulated testing and “empathy mapping” serve users by better meeting their needs, and even supplying custom products and services for non-normative bodies and tastes; or does such customization constitute exploitation? Are these dichotomous conditions? Or can we find a compromise?

ALGORITHMIC ARCHITECTURES

It shouldn’t be surprising that so much of our virtual experience is designed and choreographed by virtual agents. AI, after all, is the new colonial power, indiscriminate in its invasion of digital terrains. Yet AI’s influence spills over into the physical domain, too. As we saw in the worlds of fashion and product design, designs take shape in AI-informed digital plans, and are then made material in the form of garments and gadgets. Or even buildings and cities. Artificial intelligence scales up to embed its logics in the material world writ large. Such a translation – from invisible, bit-sized algorithmic operations to massive steel-and-glass structures – represents a radical crossing of scales and materialities and ontologies. And because architecture has traditionally been such a slow, visceral medium, it affords us a unique opportunity to observe and assess the translation from digital to physical, the embodiment of artificially intelligent operations in concrete form. In what follows, we’ll examine how AI informs the operations and ethics of architecture’s multiple stages of development – from planning and project management to design and construction.

Planning and Project Management

Gathering information about a design site has traditionally required visiting that site, surveying, photographing, collecting local data, and creating maps. Now, much of that work can be automated by drawing on a vast abundance of available datasets and software – like EcoDesigner STAR and SketchUp plugins – that automate data-processing. Architects Hannah Wood and Rron Bequiri regard such developments as liberating: automated data analysis enables the architect to “simulate the surrounding site without ever having to engage with it physically,” to “do all the necessary building and environmental analysis without ever having to leave our computers.”[28] Designers can take on international commissions that would’ve previously presented logistical challenges. While such disembodied assessments of site might afford new opportunities to smaller, more geographically marginalized firms – and might signal community needs that aren’t empirically observable – we should wonder what spatial knowledges, what localized understandings of place and the people in it, are lost when designers “never have to leave their computers.” Yet perhaps on-site-vs.-remote is a false dichotomy; we might instead ask how vast banks of spatial data and their automated processing could responsibly supplement on-site surveys, interviews, and local ethnographies.

Those spatial databases are the products of a great deal of human and computational labor – of individual designers, design firms, tech companies, and professional organizations invested in the accumulation, storage, cross-referencing, and sharing of data about sites and buildings. In a 2018 report for the American Institute of Architects, Kathleen O’Donnell interviewed several designers who corroborated her recommendations to “start accumulating as much [data] as possible,” including data used in Building Information Modeling platforms or post-occupancy evaluations – and to develop platforms for sharing data among architects, contractors, and property owners.[29] In order for those data to serve the purposes of automation, however, they must be rendered interoperable, which is quite a challenge when translating place into data involves different methodologies and epistemologies for different professionals. Public health officials, environmental scientists, and real-estate developers all operationalize “site” differently. Raghav Bharadwaj reports that the Architecture, Engineering, and Construction (AEC) industry is “attempting to leverage ML (machine learning)…to identify and mitigate clashes between the different models” employed by architects, various engineers, and plumbers – not to mention the conceptual and data models of other professionals who think about space differently, and whose insights could inform architecture.[30] Can machine learning reconcile such diverse conceptions of place? And can it mediate the disparate methodological, epistemological, and ethical frameworks embedded in these different datasets? Even the AEC data enthusiasts, O’Donnell reports, recognize that “regulations, security, and ethics all come into play – and [that] there are no major legal standards for data in AEC yet.” Ajla Aksamija, a building technology specialist who leads Perkins + Will’s Tech Lab, is convinced that a governing body like the American Institute of Architects needs to step in to set standards and institution-wide best practices for the use of data and AI in design.[31]

AI can also help to automate administrative operations – organizing schedules, managing payroll, overseeing documentation, and even, after a period of careful training, evaluating conformance with safety and zoning guidelines.[32] Architectural historian Molly Wright Steenson notes that, “as early as the 1950s, architects at Skidmore Owings and Merrill (SOM) and Ellerbe & Associates used computers for risk calculations and cost estimates.”[33] Today, too, AI can function as an “’enforcer’ of code and best practices,” keeping human laborers aligned with their own self-imposed algorithm.[34] In short, computers handle the boring work, the rote tasks, the complex calculations, leaving creativity to human experts (and most likely eliminating some of those human laborers in the front office). We’ve heard such promises before, too. In 1964, Bauhaus founder Walter Gropius advocated for architects to use computers as “means of superior mechanical control which might provide us with ever-greater freedom for the creative process of design.”[35] Today’s computers still “aren’t particularly good at heuristics or solving wicked problems,” Phil Bernstein says, “but they are increasingly capable of attacking the ‘tame’ ones, especially those that require the management of complex, interconnected quantitative variables like sustainable performance, construction logistics, and cost estimations.”[36] Andrew Witt, co-founder of “design science” office Certain Measures, suggested to me that AI could even serve as an “ethical broker” between competing stakeholder interests – which raises questions about the methods and ethics of automating ethical mediation.[37]

Design

AI is already shaping the creative process, too. Flagship architectural design software like AutoCAD, Rhino, and Revit has long automated the design process to some degree. A door placed in a wall, for example, is known by the program for what it is – a door – not just a collection of lines, planes, or solids. Neural networks can then mine the oeuvre of an individual designer or a group of designers, identify “commonly-used sequences of low-level features,” and then “dynamically synthesize purpose-built features” that are relevant to the designer’s task at hand.[38] Nicholas Negroponte, architect and founder of the MIT Media Lab, predicted such functionality in the late 1960s, when, as Steenson explains, AI could allow a system to “[learn] from its users and [develop] in tandem with them, with the idea that the system would evolve from how the computer was originally programmed, and from what both the architect and the user might imagine on their own.”[39]

By the 1980s, software originally created for use in automotive, aeronautical, and industrial design made its way into architecture, inciting the rise of parametric design, in which the architect sets parameters that are then algorithmically translated into a range of forms. Today, Autodesk offers Dreamcatcher, a “generative design system that enables designers to craft a definition of their design problem through goals and constraints” – from material types and manufacturing methods to performance goals and cost restrictions – which are then used to process multiple data sets and generate thousands of alternative design solutions.[40] Designers can iteratively tweak the parameters and assess the performance data for each proposed option.
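
In spirit, such a generative system is a loop: sample a candidate from the parameter space, discard it if it violates a constraint, score the survivors against the stated goals, and surface the best trade-offs. The sketch below uses random search over invented parameters and invented cost and carbon models; it stands in for the general idea, not for Autodesk’s actual solver.

```python
import random

def sample_design():
    """Draw one candidate from an (invented) parameter space."""
    return {
        "material": random.choice(["steel", "timber", "concrete"]),
        "beam_depth_mm": random.uniform(200, 800),
        "span_m": random.uniform(4, 12),
    }

def feasible(d):
    # Invented structural rule of thumb: deeper members for longer spans.
    return d["beam_depth_mm"] >= 50 * d["span_m"]

def cost(d):
    # Invented cost model (relative units).
    rate = {"steel": 3.0, "timber": 2.0, "concrete": 1.5}[d["material"]]
    return rate * d["beam_depth_mm"] * d["span_m"] / 100

def embodied_carbon(d):
    # Invented carbon model (relative units).
    rate = {"steel": 2.5, "timber": 0.8, "concrete": 1.8}[d["material"]]
    return rate * d["beam_depth_mm"] / 100

candidates = [d for d in (sample_design() for _ in range(2000)) if feasible(d)]
scored = [(cost(d), embodied_carbon(d), d) for d in candidates]

# Keep the non-dominated options: no other candidate beats them on both goals.
pareto = [s for s in scored
          if not any(o[0] < s[0] and o[1] < s[1] for o in scored)]
print(f"{len(candidates)} feasible candidates, {len(pareto)} on the cost/carbon frontier")
```

Everything interesting – which goals are stated, which constraints are binding, whose costs count – happens in the functions the designer writes, not in the loop itself.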

WeWork has developed a “suite of procedural algorithms” to automate the planning of its shared workspaces. The company’s research team employs data and social scientists to better understand “how spaces can enhance people’s happiness, productivity, and connection to their community.”[41] Fed data on “functional and experiential considerations, building code requirements, and client expectations,” their planning tool generates all possible desk layouts for each floorplan, even those with quirky columns and other obstructions.[42] Designers found that, 97% of the time, the tool handled such variations – maximizing the ratio of desk count to floor area – at least as well as humans did. In the future, the tool is meant to adjust for regional differences, “such as members in China preferring large conference rooms.” Andrew Witt, from Certain Measures, imagined that many designers could eventually use “preference sets, like sentiment analysis databases,” that model “how people consume or relate to architecture.”[43]
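
The cited paper (see note 42) gives the real detail; the toy version below only conveys the shape of the problem – pack as many desks as possible onto a gridded floor plate while respecting obstructions like columns. The dimensions, grid, and placement rule are all invented.

```python
# Toy desk-layout generator: pack 1x2 desks onto a grid, skipping obstructed cells.
FLOOR_W, FLOOR_H = 12, 8                      # floor plate in grid cells (invented)
columns = {(3, 2), (3, 5), (8, 2), (8, 5)}    # obstructed cells (invented)

def layout_desks(width, height, blocked):
    """Greedy row-by-row placement of horizontal 1x2 desks; returns desk anchor cells."""
    occupied = set(blocked)
    desks = []
    for y in range(height):
        for x in range(width - 1):
            cells = {(x, y), (x + 1, y)}
            if not (cells & occupied):
                desks.append((x, y))
                occupied |= cells
    return desks

desks = layout_desks(FLOOR_W, FLOOR_H, columns)
usable_cells = FLOOR_W * FLOOR_H - len(columns)
print(f"{len(desks)} desks placed; {len(desks) * 2 / usable_cells:.0%} of usable cells occupied")
```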

As Mark Sullivan explains on the WeWork company blog, their planning tool “does more than save time. It frees up architects to use their creativity in other ways, such as designing an eye-catching central staircase or covered courtyard where members can mix and mingle.”[44] When Autodesk hired design firm The Living to design its new Toronto office, they worked with a similar array of parameters: solo vs. collaborative work styles, available views, light, and so forth. The Living’s David Benjamin insists that “it wasn’t the computer telling us what to do. We made the decisions based on human values.”[45]

While designer Hannah Wood predicts that future architects are less likely to be “in the business of drawing and more into specifying [problem] requirements,” there are plenty of AI aficionados ready with reassurance that architects needn’t fear that they’ll be reduced to data entry clerks.[46] AI will “streamline design processes without taking creative control”; “the designer will lead the tool,” Adobe’s Patrick Hebron says.[47] Humans must maintain control because AI, Hebron continues, “has limited purview into the nature and proclivities of human experience.”[48] Any fully-AI-generated environment, we’re reminded, would be unlivable. Yet architects do need to better articulate to clients, and the broader public, why that’s true. As Benjamin explained to Dwell magazine, it’s already the case that most building projects aren’t designed by a trained architect; now, “we have to advocate for why we want the built environment not to be self-driving architecture. Cookie-cutter results are convenient, but we have to argue for why they’re insufficient” – or unjust.[49] Benjamin predicted that developers could create automated designs keyed toward the maximization of profit, resulting in an “automated design of a city that’s both uniform and unequal.”[50] Thus, Witt said, it’s important that we consider the “ethical dimensions of how we train designers” to partner with automated systems.[51]

AI can offer evidence to help humans choose from all those cookie-cutter options and adapt them. For instance, Space Syntax’s depthmapx spatial network analysis software allows designers to assess the “visual accessibility” of a design in its site, or to model pedestrian behavior.[52] Building System Planning’s ClashMEP reads Revit models to detect clashes among a building’s mechanical, electrical, and plumbing systems.[53] (AI could also enable those building systems to communicate with one another in the built structure, Aksamija proposes.[54]) And Unity 3D, originally created as a game engine, can be used to analyze the distance to fire exits – or to generate 3D, augmented-reality, or virtual-reality models for user testing. Such modes of presentation have the potential to make design legible, and experiential, for users and other stakeholders who might not know how to read a plan or a construction drawing.[55] And they enable designers to test “user experience,” assessing even dynamic variables like light, sound, and ergonomics. Designer Jim Stoddart explains:

We can put someone in VR, and they can be inside the space and we can ask them, ‘Is this exciting or not? Is it inviting? Is it beautiful?’… Then we can feed that into a machine-learning system as a supervised learning problem and actually have that software help us predict, from the thousands of designs we’re generating, which ones are doing interesting things with high-level spatial and material qualities that are worthy of further investigation.[56]
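
Mechanically, what Stoddart describes is a standard supervised-learning setup: ratings gathered in VR become labels, quantified spatial qualities become features, and a model ranks the thousands of unrated candidates. A minimal sketch with scikit-learn follows; the features, labels, and choice of logistic regression are my assumptions, not any firm’s actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each design described by invented features: [ceiling_height_m, daylight_factor, sightline_len_m]
rated_designs = np.array([
    [2.7, 0.02, 8.0],
    [4.5, 0.05, 22.0],
    [3.0, 0.04, 15.0],
    [2.4, 0.01, 6.0],
    [5.0, 0.06, 30.0],
    [3.2, 0.02, 9.0],
])
# 1 = a reviewer in VR called it "exciting/inviting," 0 = not (invented labels).
labels = np.array([0, 1, 1, 0, 1, 0])

model = LogisticRegression().fit(rated_designs, labels)

# Score thousands of unrated, generated designs and surface the most promising ones.
rng = np.random.default_rng(0)
generated = rng.uniform([2.2, 0.0, 5.0], [5.5, 0.08, 35.0], size=(5000, 3))
scores = model.predict_proba(generated)[:, 1]
top = generated[np.argsort(scores)[-5:]]
print("Candidates predicted most likely to feel 'inviting':\n", top.round(2))
```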

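And to return to clash detection: at bottom, a tool like ClashMEP is performing geometric interference checks between building systems. Commercial tools work on full solid geometry inside Revit; the sketch below reduces every duct, pipe, and tray to an axis-aligned bounding box simply to show the principle.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Element:
    name: str
    system: str                          # e.g., "mechanical", "plumbing", "electrical"
    min_pt: tuple[float, float, float]   # bounding-box corner (x, y, z), meters
    max_pt: tuple[float, float, float]

def boxes_overlap(a: Element, b: Element) -> bool:
    """Axis-aligned bounding boxes overlap iff they overlap on every axis."""
    return all(a.min_pt[i] <= b.max_pt[i] and b.min_pt[i] <= a.max_pt[i] for i in range(3))

elements = [
    Element("supply duct A", "mechanical", (0.0, 0.0, 3.0), (6.0, 0.6, 3.6)),
    Element("waste pipe 12", "plumbing",   (2.0, 0.3, 3.4), (2.2, 4.0, 3.5)),
    Element("cable tray 3",  "electrical", (0.0, 2.0, 3.8), (6.0, 2.3, 3.9)),
]

clashes = [(a.name, b.name) for a, b in combinations(elements, 2)
           if a.system != b.system and boxes_overlap(a, b)]
print("Clashes between systems:", clashes)
```
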
Michael Bergin from Autodesk proposes that automated technologies will ultimately make architecture “far more inclusive with respect to client and occupant needs and orders of magnitude more efficient when considering environmental impact, energy use, material selection and client satisfaction.”[57]

Perhaps more important than “interesting” and “beautiful” designs are ethical ones – designs aligned with those “human values” that informed Benjamin’s decision-making in Toronto, values of more consequence than optimal desks-per-square-foot. Benjamin has found that, for nearly the last decade, his firm and others have been adding a “bio” framework – bio-processing, bio-sensing, and bio-manufacturing – to computational design, “combining the machine and the natural world” in order to facilitate “design with dynamic systems and uncertainty,” to embrace diversity and robustness, and to allow for design outside of “master models and complete all-knowingness.”[58] This is one way of infusing computational design with a set of values oriented more toward ethics than efficiency.

Architect Christopher Alexander, whose practice had been informed by AI since the 1960s, long believed that computational patterns had a “moral component,” and, according to Steenson, that “moral goodness was something that could be explicitly defined and empirically tested in architecture.”[59] Alexander offered a vision of the future in which “computers play a fundamental role in making the world – and above all the built structure of the world – alive, humane, ecologically profound, and with a deep living structure.”[60] How might we operationalize such ethical parameters? How might we test for humanity and ecological profundity in our buildings, as Alexander proposes? Such values are often aestheticized and, in the case of some bio-computational generative designs, made performative – through gratuitous breathing facades or kinetic oculi. We can also use building automation systems to monitor HVAC, energy, and lighting systems, which are perhaps proxies for “ecological profundity.” And AI could help building occupants better understand how their uses of a building influence its energy consumption, Aksamija suggests.[61] How else might we “pattern” particular ethical codes into our parametrics? We might be able to monitor the presence of these values in the making of architecture, too.
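
Before turning to fabrication, one literal – and admittedly reductive – way to “pattern” values into a parametric workflow is to treat them as hard constraints rather than weighted trade-offs: an option that fails a carbon or accessibility floor is rejected outright, however cheap it is. The thresholds and options below are invented; the point is only that the values have to be made explicit and testable somewhere.

```python
# Invented candidate options from a generative run.
options = [
    {"id": "A", "cost_usd_m2": 1400, "embodied_carbon_kgco2e_m2": 520, "step_free_access": False},
    {"id": "B", "cost_usd_m2": 1650, "embodied_carbon_kgco2e_m2": 310, "step_free_access": True},
    {"id": "C", "cost_usd_m2": 1900, "embodied_carbon_kgco2e_m2": 260, "step_free_access": True},
]

# Value commitments expressed as non-negotiable floors and ceilings (invented thresholds).
ETHICAL_CONSTRAINTS = {
    "max_embodied_carbon_kgco2e_m2": 400,
    "require_step_free_access": True,
}

def passes_values(opt: dict) -> bool:
    if opt["embodied_carbon_kgco2e_m2"] > ETHICAL_CONSTRAINTS["max_embodied_carbon_kgco2e_m2"]:
        return False
    if ETHICAL_CONSTRAINTS["require_step_free_access"] and not opt["step_free_access"]:
        return False
    return True

admissible = [o for o in options if passes_values(o)]
cheapest_admissible = min(admissible, key=lambda o: o["cost_usd_m2"])
print(cheapest_admissible["id"])  # "B": the cheapest option that clears the ethical floor
```

Who sets those floors, and whether they survive value engineering, is the political question the code cannot answer.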

Fabrication

In 1974, Marvin Minsky predicted that, by the mid-1990s, the machine could “handle not only the planning but the complete mechanical assembly of things as well.”[62] We’re not quite there yet, but we do have robots piecing together brick facades, dispensing concrete, welding, and handling the dangerous work of demolition.[63] We’re 3D-printing those bricks and other much more geometrically complex building materials, too.[64] Yet there are limits to what these automated technologies can do; for instance, they’re not so great with non-uniform, unpredictable materials like low-grade timber or expanding foam.[65] Still, architectural historian Mario Carpo sees great potential environmental and economic benefits in the future of “micro-designing” and precision-installation, which “can save plenty of building material, energy, labor, and money, and can deliver buildings that are better fit to specs.”[66] Certain Measures developed a process that uses pattern recognition to algorithmically generate new structures from scrap material; Witt described it to me as a means of “radical resource reuse.”[67] And of course the buildings generated through intelligent fabrication processes can themselves be made intelligent, too, through the inclusion of smart technologies, responsive furnishings, and kinetic facades – which, again, can purportedly help to optimize energy use.[68]
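
Certain Measures’ process is far more sophisticated than anything shown here, but a toy stand-in can convey the basic move of “radical resource reuse”: match an inventory of offcuts to the members a design calls for, minimizing what gets trimmed away. The lengths and the greedy strategy are my own invention.

```python
# Toy stand-in for scrap matching: assign offcut lengths (cm) to required member lengths.
scrap = [240, 185, 122, 95, 310, 150, 60]
required = [230, 120, 90, 145]

def assign_scrap(scrap_lengths, required_lengths):
    """Greedily give each required member the smallest offcut that still fits."""
    pool = sorted(scrap_lengths)
    plan, waste = [], 0
    for need in sorted(required_lengths, reverse=True):
        piece = next((p for p in pool if p >= need), None)
        if piece is None:
            plan.append((need, None))     # nothing in the pile fits; order new stock
            continue
        pool.remove(piece)
        plan.append((need, piece))
        waste += piece - need
    return plan, waste

plan, waste = assign_scrap(scrap, required)
print(plan)                 # e.g., [(230, 240), (145, 150), (120, 122), (90, 95)]
print(waste, "cm of offcut trimmed away")
```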

As with fashion, AI can help to manage architecture’s supply chains, particularly as more and more materials are prefabricated and modularized. AI can optimize project planning and scheduling.[69] Armed with camera and drone images and sensor data harvested from the construction site, automated systems can identify unsafe site conditions and worker behaviors; they can also cross-reference those images with construction models to identify errors and defects.[70] Autodesk’s BIM 360 IQ scans and tags all safety issues on the jobsite and assigns “risk scores” to various subcontractors.[71] The contracting firm Suffolk is using machine learning to scan construction images and identify when workers are wearing hardhats and safety vests, and, eventually, to recognize ladders, clutter, and other safety risks.[72]
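
In such safety systems, the hard part is the detector itself, trained on labeled jobsite imagery; the surrounding logic is comparatively mundane. The sketch below assumes a hypothetical detector that returns labeled boxes for each frame (no real library call is implied) and shows only the flagging step; a production system would also associate hardhats with individual workers rather than with whole frames.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g., "person", "hardhat", "safety_vest", "ladder"
    confidence: float
    box: tuple[int, int, int, int]   # x, y, width, height in pixels

def flag_frame(detections: list[Detection], min_conf: float = 0.6) -> list[str]:
    """Return human-readable safety flags for one frame of jobsite footage."""
    labels = {d.label for d in detections if d.confidence >= min_conf}
    flags = []
    if "person" in labels and "hardhat" not in labels:
        flags.append("worker detected without visible hardhat")
    if "person" in labels and "safety_vest" not in labels:
        flags.append("worker detected without visible safety vest")
    return flags

# Example output from a hypothetical detector on one frame:
frame_detections = [
    Detection("person", 0.91, (120, 40, 60, 180)),
    Detection("safety_vest", 0.72, (125, 80, 50, 70)),
]
print(flag_frame(frame_detections))   # ['worker detected without visible hardhat']
```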

Meanwhile, Komatsu, the Japanese heavy-machinery manufacturer, is partnering with NVIDIA, maker of graphics processing units, to incorporate its Jetson AI computing platform into construction equipment, allowing for full-surround vision and real-time video analytics, which can be used to optimize the use of on-site tools and equipment, monitor job progress, and flag risks.[73] Of course such exhaustive data collection – as is commonly advocated during the planning phase, too – presents myriad methodological challenges and privacy risks (not to mention its potential to create a culture of paranoia). We see similar risks in smart buildings, with their ubiquitous cameras and sensors and voice interfaces. We might also wonder if remote, automated data collection will minimize the need for planners and construction foremen to monitor conditions on-site.

We have buildings planned, designed, and fabricated with the aid of artificial intelligence. They’re infused with AI, in accordance with the recurring design dream of buildings that can think for themselves. And at the end of their functional lives, they could very well be demolished by an artificially-intelligent automaton.[74] Through these phases of architectural design, we encounter many familiar questions about the ethics of automation. Will automation liberate designers from the drudgery of drafting and data-crunching, will it eliminate their jobs, or will it allow for a complementary blending of human and machinic skills? When payroll and scheduling are robotized, what happens to the clerical staff? How might designers create automated design tools that balance efficiency and economy with other “human values,” like ecological stewardship and accessibility, in multiple senses of the term? How might AI-generated models promote sensitivity to environmental impact and the sustainable sourcing of materials; allow designers to attend to the full embodied experience of a building, including its acoustic and thermal conditions; and render the design process more open to diverse stakeholders or user groups? And how might contractors deploy robot fabricators to promote resource and energy conservation, while also improving human laborers’ working conditions – that is, if those laborers are still around? Finally, whose values and interests are built into those algorithms – and which bodies do we find in the studio, on the construction site, or in the fabrication lab or factory, altering and actualizing the algorithms’ output in polymers and plasterboard? This final question – about which and whose intelligences are embedded in AI – pertains to every sector of design we’ve explored here.

To ensure the ethical application of AI in design, we have to make sure we’re both defining responsible parameters and operationalizing those parameters responsibly – and creatively. Where might human designers intervene in an automated workflow? Where might they reassert their agency? Could designers apply their design skills in designing subversive algorithms that generate aberrant aesthetics or embody radical politics? Will we eventually come to regard our Squarespace websites and Dreamcatcher edifices as aesthetically and politically retrograde – a form of AI authoritarianism, machine-learning mannerism, or GAN neo-Gothic?[75] In the calculative composition of our apps and architectures and apparel, we need to carefully consider both the ends and means of automation, to continually audit the algorithms and apparatuses through which our material worlds are made.


[1] I am grateful to my research assistant Kevin Rogan, who aided with all stages of research and writing. I also owe a great debt of gratitude to Caroline Sinders, who offered valuable advice as I embarked on this project; to Ajla Aksamija, David Benjamin, Rune Madsen, and Andrew Witt, who generously responded to our queries about their own practice – and to Gerald Sim, Jason Hallstrom, and the Institute for Sensing and Embedded Network Systems Engineering at Florida Atlantic University, who kindly invited me to share my research-in-progress.

[2] AI Now: https://ainowinstitute.org/; FAT: https://fatconference.org/.

[3] Much has been written about the application of AI in urban design and planning, too. See for instance, the voluminous research on “smart cities.” I have written several pieces on the topic. See, for instance, “A City Is Not a Computer,” Places Journal (February 2017): https://placesjournal.org/article/a-city-is-not-a-computer/; and “Databodies in Codespace,” Places Journal (April 2018): https://placesjournal.org/article/databodies-in-codespace/.

[4] See the Stitch Fix online personal shopping service; the Pureple closet organizer and outfit planner; Vue.ai’s suite of AI-generated models and styling applications; and Kim Kardashian’s Screenshop, which allows users to upload photos of looks they like and find where those items are for sale. These fashion examples are drawn from Sissi Cao, “Zac Posen Talks Fashion in the Era of Artificial Intelligence,” Observer (April 13, 2018): http://observer.com/2018/04/zac-posen-fashion-artificial-intelligence; Will Knight, “Amazon Has Developed an AI Fashion Designer,” MIT Technology Review (August 24, 2017): https://www.technologyreview.com/s/608668/amazon-has-developed-an-ai-fashion-designer/; Emily Matchar, “Artificial Intelligence Could Help Generate the Next Big Fashion Trends” Smithsonian Magazine (May 3, 2018): https://www.smithsonianmag.com/innovation/artificial-intelligence-could-help-generate-next-big-fashion-trends-180968952/; Devorah Rose, “Commentary: AI’s Next Victim: Your Closet,” Fortune (March 15, 2018): http://fortune.com/2018/03/15/fashion-ai-artificial-intelligence-future-kim-kardashian/; Arthur Zackiewicz, “AI, Visual Search and Retail’s Next Big Step,” WWD (April 16, 2018): https://wwd.com/business-news/technology/ai-clarifai-retail-brands-1202650318/

[5] See, for instance, Darrell M. West, The Future of Work: Robots, AI, and Automation (Washington, D.C.: Brookings Institution Press, 2018).

[6] Quoted in Sissi Cao, “Zac Posen Talks Fashion in the Era of Artificial Intelligence,” Observer (April 13, 2018): http://observer.com/2018/04/zac-posen-fashion-artificial-intelligence/. See also Maghan McDowell, “Will AI Kill Creativity?” Business of Fashion (March 14, 2018): https://www.businessoffashion.com/articles/fashion-tech/will-ai-kill-creativity.

[7] Emily Matchar, “Artificial Intelligence Could Help Generate the Next Big Fashion Trends” Smithsonian Magazine (May 3, 2018): https://www.smithsonianmag.com/innovation/artificial-intelligence-could-help-generate-next-big-fashion-trends-180968952/.

[8] Western Bonime, “Get Personal, The Future of Artificial Intelligence Design at Bitonti Studios,” Forbes (July 7, 2017): https://www.forbes.com/sites/westernbonime/2017/07/07/get-personal-the-future-of-ai-design-at-bitonti-studios/#1ecb8785b0de

[9] Anand Adhikari, “Titan Experimenting with Artificial Intelligence Led Product Design,” Business Today (December 15 2017): https://www.businesstoday.in/lifestyle/off-track/titan-experimenting-with-artificial-intelligence-led-product-design/story/266111.html; Rob Metheson, “Design Tool Reveals a Product’s Many Possible Performance Tradeoffs,” MIT News (August 15, 2018): https://news.mit.edu/2018/interactive-design-tool-product-performance-tradeoffs-0815; Sergii Shanin, “How Artificial Intelligence Is Transforming Product Development and Design,” eTeam (December 18, 2017): https://eteam.io/blog/ai-and-product-development-design/.

[10] Rob Girling, “AI and the Future of Design: What Will the Designer of 2025 Look Like?” O’Reilly (January 4, 2017): https://www.oreilly.com/ideas/ai-and-the-future-of-design-what-will-the-designer-of-2025-look-like.

[11] “Design Thinking for AI,” Artificial Intelligence Conference, New York, April 29 – May 2, 2018: https://conferences.oreilly.com/artificial-intelligence/ai-ny-2018/public/schedule/detail/65105.

[12] Yaniv Leviathan, “Google Duplex: An AI System for Accomplishing Real-World Tasks Over the Phone,” Google AI Blog (May 8, 2018): https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html; Clive Thompson, “May A.I. Help You?” New York Times Magazine (November 14, 2018): https://www.nytimes.com/interactive/2018/11/14/magazine/tech-design-ai-chatbot.html.

[13] Natasha Lomas, “Duplex Shows Google Failing at Ethical and Creative AI Design,” TechCrunch (May 10, 2018): https://techcrunch.com/2018/05/10/duplex-shows-google-failing-at-ethical-and-creative-ai-design/.

[14] “The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems,” IEEE Standards Association: https://standards.ieee.org/industry-connections/ec/autonomous-systems.html. See also Alan F.T. Winfield and Marina Jirotka, “Ethical Governance is Essential to Building Trust in Robotics and Artificial Intelligence Systems,” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences (October 15, 2018): https://doi.org/10.1098/rsta.2018.0085.

[15] “The Tarot Cards of Tech,” Artefact: https://www.artefactgroup.com/case-studies/the-tarot-cards-of-tech/.

[16] Rune Madsen, personal communication, February 1, 2019. See also Madsen, “The User Experience of Design Systems,” RuneMadsen.com (2017): https://runemadsen.com/talks/uxcampcph/.

[17] “About Wix ADI,” Wix: https://support.wix.com/en/article/about-wix-adi; Logojoy: https://logojoy.com/; Tailor Brands: https://www.tailorbrands.com/. See also Yury Vetrov, “Algorithm-Driven Design”: https://algorithms.design/.

[18] The Grid: https://thegrid.io.

[19] Rune Madsen, personal communication, February 1, 2019.

[20] Chris Constandse, “How AI-Driven Website Builders will Change the Digital Landscape,” UX Collective (October 12, 2018): https://uxdesign.cc/how-ai-driven-website-builders-will-change-the-digital-landscape-a5535c17bbe.

[21] MIX, “Airbnb Built an AI That Turns Design Sketches Into Product Source Code,” The Next Web (October 25, 2017): https://thenextweb.com/artificial-intelligence/2017/10/25/airbnb-ai-sketches-design-code/.

[22] Jason Tselentis, “When Websites Design Themselves,” Wired (September 20, 2017): https://www.wired.com/story/when-websites-design-themselves/.

[23] Quoted in Tselentis.

[24] Simone Browne, Dark Matters: On the Surveillance of Blackness (Durham, NC: Duke University Press, 2015); The Open Data Science Community, “The Impact of Racial Bias in Facial Recognition Software,” Medium (October 15, 2018): https://medium.com/@ODSC/the-impact-of-racial-bias-in-facial-recognition-software-36f37113604c; Tom Simonite, “How Coders are Fighting Bias in Facial Recognition Software,” Wired (March 29, 2018): https://www.wired.com/story/how-coders-are-fighting-bias-in-facial-recognition-software/. See also the work of Joy Buolamwini and Timnit Gebru, including Joy Bulamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of Machine Learning Research 8:1 (2018): 1-15.

[25] “Adjust and Exaggerate Facial Features,” Adobe (n.d.) [accessed January 15, 2019): https://helpx.adobe.com/photoshop/how-to/face-aware-liquify.html; James Vincent, “Adobe’s Prototype AI Tools Let You Instantly Edit Photos and Videos,” The Verge (October 24, 2017): https://www.theverge.com/2017/10/24/16533374/ai-fake-images-videos-edit-adobe-sensei.

[26] Fabricio Teixeira, “How AI Has Started to Impact Our Work as Designers,” UX Collective (October 31, 2017): https://uxdesign.cc/how-ai-will-impact-your-routine-as-a-designer-2773a4b1728c.

[27] Yury Vetrov, “Algorithm-Driven Design: How Artificial Intelligence Is Changing Design,” Smashing Magazine (January 3, 2017): https://www.smashingmagazine.com/2017/01/algorithm-driven-design-how-artificial-intelligence-changing-design/.

[28] Rron Beqiri, “A.I. Architectural Intelligence,” Future Architecture (May 4, 2016): http://futurearchitectureplatform.org/news/28/ai-architecture-intelligence/; Hannah Wood, “The Architecture of Artificial Intelligence,” Archinect (March 8, 2017): https://archinect.com/features/article/149995618/the-architecture-of-artificial-intelligence.

[29] Kathleen M. O’Donnell, “Embracing Artificial Intelligence in Architecture,” AIA (March 2, 2018): https://www.aia.org/articles/178511-embracing-artificial-intelligence-in-archit. Design agency CEO Nate Miller proposes that “BIM is often positioned as a production tool, a way to generate a deliverable, but these are actually data-rich resources tied to a firm’s particular knowledge base that can be used to make informed decisions about a portfolio or future design prospects.” One existing platform for industry-wide data collection and sharing is the Building Research Information Knowledgebase.

[30] Raghav Bharadwaj, “AI Applications in Construction and Building – Current Use-Cases,” Emerj (November 29, 2018): https://emerj.com/ai-sector-overviews/ai-applications-construction-building/.

[31] Kevin Rogan, personal communication with Ajla Aksamija, February 7, 2019 (Rogan is my research assistant).

[32] Phil Bernstein, “How Can Architects Adapt to the Coming Age of AI?” Architect’s Newspaper (November 22, 2017): https://archpaper.com/2017/11/architects-adapt-coming-ai/.

[33] Molly Wright Steenson, Architectural Intelligence: How Designers and Architects Created the Digital Landscape (Cambridge, MA: MIT Press, 2017): 9.

[34] Sébastien Lucas, “Artificial Intelligence (AI) in Architecture. What are the Practical Applications?” futur archi (July 2017): http://www.futurearchi.org/t/artificial-intelligence-ai-in-architecture-what-are-the-practical-applications/364.

[35] Quoted in Steenson: 13.

[36] Phil Bernstein, “How Can Architects Adapt to the Coming Age of AI?” Architect’s Newspaper (November 22, 2017): https://archpaper.com/2017/11/architects-adapt-coming-ai/.

[37] Shannon Mattern and Kevin Rogan, personal communication with Andrew Witt, February 4, 2018. Witt referenced architect Yona Friedman’s 1967 Flatwriter computer program, which, he says, enabled “sets of people to ethically design an apartment complex,” with each person’s input “creat[ing] a set of trade-offs and choices for other people.” There was a “sociological model encapsulate[d] in the software system,” which created a “political framework [for] felicitous housing development.” I’m indebted to Bryan Boyer for directing me to Certain Measures’ work.

[38] Patrick Hebron, “Rethinking Design Tools in the Age of Machine Learning,” Artists and Machine Intelligence (April 26, 2017): https://medium.com/artists-and-machine-intelligence/rethinking-design-tools-in-the-age-of-machine-learning-369f3f07ab6c.

[39] Molly Wright Steenson, Architectural Intelligence: How Designers and Architects Created the Digital Landscape (Cambridge, MA: MIT Press, 2017): 9-10.

[40] Autodesk Dreamcatcher: https://autodeskresearch.com/projects/dreamcatcher.

[41] Mark Sullivan, “This Algorithm Might Design Your Next Office,” WeWork Blog (July 31, 2018): https://www.wework.com/blog/posts/this-algorithm-might-design-your-next-office.

[42] Carl Anderson, Carlo Bailey, Andrew Heumann, and Daniel Davis, “Augmented Space Planning: Using Procedural Generation to Automate Desk Layouts,” International Journal of Architectural Computing 16:2 (2018): 165. The authors write: “Firms do not often treat their collective work as queryable data, and typical contractual models in the architecture, engineering, and construction industry rarely permit the design team to monitor or evaluate post-construction design performance. This is why we believe this type of research is currently best suited to certain architectural types, such as retail, offices, and healthcare: spaces where the designs are consistent, the success metrics clear, and the layouts somewhat repeatable” (175). See also Certain Measures’ Spatial Insight and Spatial Optioneering projects: https://certainmeasures.com/spatial_insight.html; https://certainmeasures.com/spatial_optioneering.html.

[43] Shannon Mattern and Kevin Rogan, personal communication with Andrew Witt, February 4, 2018.

[44] Mark Sullivan, “This Algorithm Might Design Your Next Office,” WeWork Blog (July 31, 2018): https://www.wework.com/blog/posts/this-algorithm-might-design-your-next-office.

[45] Quoted in Sam Lubell, “Will Algorithms Be the New Architects?” Dwell (July 27, 2018): https://www.dwell.com/article/will-algorithms-be-the-new-architects-095c9d41.

[46] Hannah Wood, “The Architecture of Artificial Intelligence,” Archinect (March 8, 2017): https://archinect.com/features/article/149995618/the-architecture-of-artificial-intelligence.

[47] Italics mine. Patrick Hebron, “Rethinking Design Tools in the Age of Machine Learning,” Artists and Machine Intelligence (April 26, 2017): https://medium.com/artists-and-machine-intelligence/rethinking-design-tools-in-the-age-of-machine-learning-369f3f07ab6c.

[48] Quoted in Kathleen M. O’Donnell, “Embracing Artificial Intelligence in Architecture,” AIA (March 2, 2018): https://www.aia.org/articles/178511-embracing-artificial-intelligence-in-archit.

[49] Quoted in Sam Lubell, “Will Algorithms Be the New Architects?” Dwell (July 27, 2018): https://www.dwell.com/article/will-algorithms-be-the-new-architects-095c9d41.

[50] Kevin Rogan, personal communication with David Benjamin, December 17, 2018.

[51] Shannon Mattern and Kevin Rogan, personal communication with Andrew Witt, February 4, 2018.

[52] “depthmapx: visual and spatial network analysis software,” The Bartlett School of Architecture: https://www.ucl.ac.uk/bartlett/architecture/research/space-syntax/depthmapx.

[53] ClashMEP: https://buildingsp.com/index.php/products/clashmep. See also Certain Measures’ Topological Wiring: https://certainmeasures.com/topological_wiring.html.

[54] Kevin Rogan, personal communication with Ajla Aksamija, February 7, 2019.

[55] See the MIT Media Lab’s Materiable haptic interface: https://tangible.media.mit.edu/project/materiable/ and Hannah Wood, “The Architecture of Artificial Intelligence,” Archinect (March 8, 2017): https://archinect.com/features/article/149995618/the-architecture-of-artificial-intelligence.

[56] Quoted in Wasim Muklashy, “How Machine Learning in Architecture is Liberating the Role of the Designer,” Redshift (May 3, 2018): https://www.autodesk.com/redshift/machine-learning-in-architecture/. Matter Design Studio uses computational methods to explore ancient knowledge and sensory experience. See http://www.matterdesignstudio.com/. I am indebted to @KeysWalletPh0ne for recommending their work.

[57] Quoted in Hannah Wood, “The Architecture of Artificial Intelligence,” Archinect (March 8, 2017): https://archinect.com/features/article/149995618/the-architecture-of-artificial-intelligence.

[58] Kevin Rogan, personal communication with David Benjamin, December 17, 2018 (Rogan is my research assistant). See also Cristina Cogdell, Toward a Living Architecture? Complexism and Biology in Generative Design (Minneapolis: University of Minnesota Press, 2018).

[59] Molly Wright Steenson, Architectural Intelligence: How Designers and Architects Created the Digital Landscape (Cambridge, MA: MIT Press, 2017): 61.

[60] Quoted in Molly Wright Steenson, Architectural Intelligence: How Designers and Architects Created the Digital Landscape (Cambridge, MA: MIT Press, 2017): 61.

[61] Kevin Rogan, personal communication with Ajla Aksamija, February 7, 2019.

[62] Quoted in Steenson: 13.

[63] Otis Harley, “The Architecture of Artificial Intelligence,” Archinect (May 8, 2018): https://archinect.com/features/article/150062492/a-5-part-video-series-on-the-architecture-of-artificial-intelligence; Niall Patrick Walsh, “Carlo Ratti Associati’s Proposed Milan Science Campus Features Robotically-Assembled Brick Facades,” ArchDaily (August 7, 2018): https://www.archdaily.com/899777/carlo-ratti-associatis-proposed-milan-science-campus-features-robotically-assembled-brick-facades.

[64] Some predict that 3D printing will catalyze a “resurgence of detail and ornamentation.” Hannah Wood, “The Architecture of Artificial Intelligence,” Archinect (March 8, 2017): https://archinect.com/features/article/149995618/the-architecture-of-artificial-intelligence. See also the work of Michael Hansmeyer and Benjamin Dillenburger, and of Fabio Gramazio and Matthias Kohler.

[65] Richard Moss, “Creative AI: Algorithms and Robot Craftsmen Open New Possibilities in Architecture,” New Atlas (February 23, 2015): https://newatlas.com/creative-ai-algorithmic-architecture-robot-craftsmen/36212/.

[66] Mario Carpo, “Excessive Resolution: Artificial Intelligence and Machine Learning in Architectural Design,” Architectural Record (June 1, 2018): https://www.architecturalrecord.com/articles/13465-excessive-resolution-artificial-intelligence-and-machine-learning-in-architectural-design.

[67] Shannon Mattern and Kevin Rogan, personal communication with Andrew Witt, February 4, 2018; Certain Measures, “Mine the Scrap Installation”: https://certainmeasures.com/mts_installation.html.

[68] See the work of AI SpaceFactory: https://www.aispacefactory.com/; Eric Baldwin, “Architecture Startup AI SpaceFactory Reveals Smart Skyscrapers that Integrate Technology and Design,” ArchDaily (October 17, 2018): https://www.archdaily.com/904163/architecture-startup-ai-spacefactory-reveals-smart-skyscrapers-that-integrate-technology-and-design.

[69] See, for instance, the ALICE scheduling technology, which allows users to optimize their construction schedules, “bid more aggressively, win more bids, and amaze your customers”: Alice: https://alicetechnologies.com/.

[70] Jose Luis Blanco, Steffen Fuchs, Matthew Parsons, and Maria Joao Ribeirinho, “Artificial Intelligence: Construction Technology’s Next Frontier,” McKinsey & Company (April 2018): https://www.mckinsey.com/industries/capital-projects-and-infrastructure/our-insights/artificial-intelligence-construction-technologys-next-frontier; Jenny Clavero, “Artificial Intelligence in Construction: The Future of Construction,” esub: construction software (January 23, 2018): https://esub.com/artificial-intelligence-construction-future-construction/. The SmartVid.io image management platform uses machine learning to review and tag photos and videos of the jobsite, and then suggest safety measures. All this footage is stored and made searchable, rendering it a useful resource in potential lawsuits. SmartVid.io: https://www.smartvid.io/.

[71] Anand Rajagopal, “The Rise of AI and Machine Learning in Construction,” Autodesk University (December 21, 2017): https://medium.com/autodesk-university/the-rise-of-ai-and-machine-learning-in-construction-219f95342f5c.

[72] Elizabeth Woyke, “AI Could Help the Construction Industry Work Faster — and Keep Its Workforce Accident-Free,” MIT Technology Review (June 12, 2018): https://www.technologyreview.com/s/611141/ai-could-help-the-construction-industry-work-faster-and-keep-its-workforce-accident-free/.

[73] Kevin Krewell and Tirias Research, “NVIDIA and Komatsu Partner on AI-Based Intelligent Equipment for Improved Safety and Efficiency,” Forbes (December 12, 2017): https://www.forbes.com/sites/tiriasresearch/2017/12/12/nvidia-and-komatsu-partner-on-ai-based-intelligent-equipment/#1f0bdc1e665b; Raghav Bharadwaj, “AI Applications in Construction and Building – Current Use-Cases,” Emerj (November 29, 2018): https://emerj.com/ai-sector-overviews/ai-applications-construction-building/.

[74] I’m grateful to Kevin Rogan for the conversations that generated much of this concluding section.

Categories
Publications

A Map That Tracks Everything

A Map That Tracks Everything,” The Atlantic (November 30, 2018)

Categories
Publications

Scaffolding, Hard and Soft: Media Infrastructures as Critical and Generative Structures

Scaffolding, Hard and Soft: Media Infrastructures as Critical and Generative Structures” In Jentery Sayers, Ed., The Routledge Companion to Media Studies and Digital Humanities (Routledge, 2018) [unedited draft with slides available here]