To those who seek refuge from the extravagant windows and ornate arches of Starr, a bright, glass-filled room off to the side of Sterling Memorial Library beckons. Before 2015, this room was yet another austere, old-Yale reading room lined with long tables and cushioned wooden chairs. Now the room has been re-christened the Franke Family Digital Humanities Laboratory, or ‘DHLab,’ a space that, according to the Sterling website, “supports Yale scholars in their pursuit of humanistic questions by way of user-centered digital expression and computational methods.” Eight-person tables and wheeled spinning chairs fill the front space; conspicuous glass cubicles occupy the center of the room. Throughout the day, students flock around these large tables, chattering at a low hum. But the cubicles remain empty. “I’ve never seen anyone work in those ‘collaborative spaces,’” says a friend who clocks about twelve hours a week studying at the tables of the DHLab.
In an attempt to prove my friend wrong, I logged onto the Sterling website one morning to find out when I could participate in this state-of-the-art experience. Instead, I was greeted with the message “In-person office hours returning late spring 2023.” Given that I had already begun drafting my final papers and that the daily Cross Campus flock had begun to fill every inch of grass between noon and five p.m., I wondered how late “late spring” might be.
I decided to pop in without an appointment. Approaching a long table marked as “reserved for staff use only,” I asked the student working there for help with a project. Without looking up from his laptop, the student said, “I don’t work here.”
According to the DHLab website, Yale has fostered discussions about the role of computation in humanistic research for over fifty years. The lab traces the origin of the “digital humanities” at Yale to two key events in the 1950s and ’60s. The first was a 1956 campus visit by Father Roberto Busa, an Italian Jesuit priest and academic. Nine years before, Busa had transformed the methodology of humanistic scholarship by using machine-readable punch cards to create a searchable index of Saint Thomas Aquinas’ writings, the first such attempt to apply technology to literary analysis.
Records of just what he did at Yale remain vague; the only evidence of his visit within Sterling’s collection is an image of him at a campus talk, laying an ancient scroll atop a scanner-like IBM machine. But there are strong archival records of the second key event listed on the DHLab website: a 1965 conference titled “Computers for the Humanities,” sponsored by Yale on a grant from IBM. With a few clicks on my laptop, I scheduled a time to read the records of the conference in the library’s Manuscripts and Archives room.
For all the effort that went into examining this source—a strict no-pens, no-water rule, a combination lock for my backpack and a pillow for the book, an alarmed door that I accidentally set off when trying to get to the bathroom—the physical text I looked at was unremarkable. Rather than a decaying scroll of parchment or an ancient manuscript, the text was bound and sturdy, looking like any other book I could have gotten from the stacks. White capital letters adorn the spine: “COMPUTERS FOR THE HUMANITIES?”
I flipped open the book and read the foreword, written by the history professor and Director of the Humanities G. W. Pierson. “What have the humanities and computers to say to each other? Are they not strangers, perhaps enemies, at heart?” Pierson pondered. A list of twenty-five featured participants, ranging from the president of Yale to MIT professors and IBM employees, filled the final pages of the book.
The scholars at the conference espoused an impressive variety of views. Their timelines for the progress of computation were comically diverse: Linguist Sidney M. Lamb estimated that machine translation would be perfected “in another five or ten years,” while Provost Jacques Barzun declared that “machine translation is forever impossible.” Some hailed technological progress as revolutionary within their academic discipline—musician and professor Ercolinio Ferretti, for example, claimed that computation had ushered in an era of “entirely new principles” in the history of music. But others believed that within their field, the excitement was uncalled for. In his speech titled “Gods in Black Boxes,” historian of science Derek J. de Solla Price pointed out that computers are not intuitively “more divorced from humanists than other aids such as typewriters and microfilms.”
In his lecture “Computers and the Testaments,” Professor John W. Ellison espoused a perspective between these extremes. “I am enthusiastic about the use of computers in the humanities,” he opened. “But I seriously question whether, just because a computer was involved, we should be overwhelmed by the results. The computer is stupid. It does exactly what it is programmed to do. It can be no smarter than the man who programs it.” Following this logic, Ellison suggested applications of computational techniques for menial tasks: alphabetization, indexing, and word frequency calculations.
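Ellison’s “menial tasks” remain the easiest to automate. A minimal sketch of one of them, word-frequency counting, using only the Python standard library (his quoted sentence serves merely as sample input):

```python
from collections import Counter
import re

def word_frequencies(text):
    """Count how often each word occurs, ignoring case and punctuation."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)

freqs = word_frequencies(
    "The computer is stupid. It does exactly what it is programmed to do."
)
# freqs.most_common() lists words by descending frequency
```

This is exactly the sort of mechanical tabulation Ellison had in mind: the program does precisely what it is told, and nothing more.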
But with advances in artificial intelligence and unsupervised learning (a machine-learning technique that finds patterns in unlabeled data, without human guidance), Ellison’s declaration that the computer does merely what it is told no longer rings true. Algorithms can now find information that programmers don’t even know to look for, picking up data patterns at scales far beyond human capabilities. A recent research project at the Stanford Literary Lab, for example, runs analyses on (apparently) all nineteenth-century British literature to study the usage of gendered language in descriptions of domesticity. Moreover, the introduction of high-level programming tools like Voyant, Bookworm, and NLTK means that researchers need not understand exactly what the computer is doing—one can even ask ChatGPT to write the requisite code.
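The shape of such corpus-scale analysis can be caricatured in a few lines. A toy sketch, assuming invented vocabulary lists—these are not the Stanford Literary Lab’s actual lexicons or methods, just stand-ins to show the kind of counting involved:

```python
import re
from collections import Counter

# Illustrative stand-in lexicons, NOT the Stanford project's actual word lists.
DOMESTIC = {"kitchen", "hearth", "needle", "parlor", "nursery"}
GENDERED = {"he", "she", "him", "her", "his"}

def gendered_domestic_counts(text, window=5):
    """Count gendered pronouns within `window` words of domestic vocabulary."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for i, word in enumerate(words):
        if word in DOMESTIC:
            for neighbor in words[max(0, i - window): i + window + 1]:
                if neighbor in GENDERED:
                    counts[neighbor] += 1
    return counts

counts = gendered_domestic_counts(
    "She sat by the hearth while he read in the study."
)
```

Scaled from one sentence to thousands of novels, counts like these surface patterns no single reader could tabulate—which is precisely where the measurement ends and, as Lamb would insist, the judgment must begin.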
At the end of his foreword, Pierson writes, “If what is here recorded may help to develop the useful applications, but restrict the popular abuse of analytical engines in humanistic scholarship, it will have served its purpose.” What should we take from this academic conference almost sixty years ago, and what can it tell us about what the DHLab at Yale gets wrong?
The most powerful musings on the perils of digital scholarship were put forth by Professor Lamb, who, for all his optimism about the timeline of machine translation, expressed doubts about the “barrier that lies between the realms of measurement and of judgment.” It is one thing, Lamb argued, to calculate a certain feature of a song or literary corpus; explaining what those results entail is another, uniquely human skill.
Lamb reminded his audience that the humanistic disciplines are necessarily relational and communicative. Technology can serve this purpose by, for example, creating clear visualizations of messy data or facilitating communication among professors across the world. But it can also hamper it. Rather than making knowledgeable people available as resources, Yale’s DHLab takes its own message too literally, forcing most intellectual discovery onto the impersonal internet as a result of its opaque scheduling process. While the cubicles remain empty and the DHLab’s resources inaccessible, true progress in the digital humanities at Yale remains unlikely.