Digital Humanities 2.0: A Report on Knowledge

Module by: Todd Presner. Edited by: Frederick Moody, Melissa Bailar, Ben Allen, Mary Ngolovoi, Deborah Fay

Thirty years ago, the French philosopher and literary theorist Jean-François Lyotard published a prescient “report on knowledge” called The Postmodern Condition. Originally commissioned by the Conseil des Universités of the government of Quebec, the report was an investigation of “the status of knowledge” in “computerized societies” (3). Lyotard's working hypothesis was that the nature of knowledge—how we know, what we know, how knowledge is communicated, what knowledge is communicated, and, finally, who “we” as knowers are—had changed in light of the new technological, social, and economic transformations that have ushered in the post-industrial age, what he calls, in short, postmodernism. Much more than just a periodizing term, postmodernism, for Lyotard, bespeaks a new cultural-economic reality as well as a condition in which “grand narratives” or “meta-narratives” no longer hold sway: the progress of science, the liberation of humanity, the spread of Enlightenment and rationality, and so forth are meta-narratives that have lost their cogency. This itself is not an original observation; after all, Nietzsche, Benjamin, Adorno, Horkheimer, Foucault, and others have variously shown where the fully enlightened world ends up. What sets Lyotard apart is his focus on how knowledge has been transformed into many “small” (and even competing and contradictory) narratives and how scientific knowledge in particular has become transformed into “bits of information” with the rise of cybernetics, informatics, information storage and databanks, and telematics, rendering knowledge something to be produced in order to be sold, managed, controlled, and even fought over (3-5). In these computerized societies (remember this is 1979: the web didn't exist and the first desktop computers were just being introduced), the risk, he claims, is the dystopian prospect of a global monopoly of information maintained and secured by private companies or nation-states (4-5). 
Needless to say, Google was founded about twenty years later, although ostensibly with a somewhat different mission: to make the world's information universally accessible and useful.

Lyotard articulated one of the most significant contemporary struggles—namely, that between the proprietary control of information technologies, access and operating systems, search and retrieval technologies, and, of course, content, on the one hand, and the “open source” and “creative commons” movement on the other. Beyond that, he drew attention to several other changes that have affected what he considered to be the state of knowledge in postmodernism: first, the dissolution of the social bond and the disaggregation of the individual or the self (15); second, the interrogation of the university as the traditional legitimator of knowledge; and third, the idea that knowledge in this new era can only be legitimated by “little narratives” based on what he calls “paralogy” (a term that refers to paradox, tension, instability and the capacity to produce “new moves” in ever-shifting “language games”). While I will not evaluate Lyotard's argument extensively here, I do think it's worth underscoring these points because, perhaps surprisingly, they apply just as much to 2009 as they did to 1979. After all, the social bond today is fundamentally realized through interactions with distributed and equally abstracted networks such as email, IM, text messaging, and Facebook that are accessed through computers, mobile phones, and other devices connected to “the grid.” It has become impossible to truly “de-link” from these social networks and networking technologies, as the self exists “in a fabric of relations that is now more complex and mobile than ever before . . . located at 'nodal points' of specific communication circuits. . . . Or better [Lyotard says] one is always located at a post through which various kinds of messages pass” (15).

Lyotard’s discussion of the role of the university in postmodernism has become increasingly relevant over the last three decades. The university is no longer the sole, and perhaps not even the privileged, site of knowledge production, curation, stewardship, and storage. Traditionally an exclusive, walled-in institution, the university legitimates knowledge while reproducing rules of admission to and control over discourses. Not just anyone can speak (one must first be sanctioned through lengthy and decidedly hierarchical processes of authorization), and the knowledge that is transmitted is primarily circulated within and restricted to relatively closed communities of knowers (Foucault calls them “fellowships of discourse”). True statements are codified, repeated, and circulated through various kinds of disciplinary and institutional forms of control that legitimize what a “true statement” is within a given discipline: before a statement can even be admitted to debate, it must first be, as Foucault argued repeatedly, “within the true” (224). For an idea to fall “within the true,” it must not only cite the normative truths of a given discipline but—and this is the crux of this essay—it must look “within the true” in terms of its methodology, medium, and mode of dissemination. Research articles can't look like Wikipedia entries; monographs can't be exhibitions curated in Second Life. At least not yet . . .

Thankfully, universities are far from static or monolithic institutions and, as Lyotard and others point out, there is plenty of room for an imaginative reinvention of the university, of disciplinary structures, and research and pedagogical practices. This imaginative investment lies in the ability to “make a new move” or change the rules of the game by, perhaps, arranging or curating data in new ways, thereby developing new constellations of thought that “disturb the order of reason” (61). The next part of this article will address precisely what this might mean for the work of the Digital Humanities today.

For now, I want to articulate the third and final point that I adopt from Lyotard, namely the problem of legitimation. Wikipedia can stand as a synecdoche for the problems of knowledge legitimation: who can create knowledge, who monitors it, who authorizes it, who disseminates it, and whom does it influence and to what effect? Legitimation is always, of course, connected to power, whether the power of a legal system, a government, a military, a board of directors, an information management system, the tenure and promotion system, the book publishing industry, or any oversight agency. Not only are modes of discourse (utterances, statements, arguments) legitimized by the standards established by a given discipline, by its practitioners, and by its history, but so are the media in which these discursive statements are formulated, articulated, and disseminated. The normative medium for conveying humanities knowledge (certainly in core disciplines such as literature and literary studies, history, and art history, but also philosophy and the humanistic social sciences, as they have been codified since the nineteenth century) is print: the printed page—linear, paginated prose supported by a bibliographic apparatus—is the naturalized medium, and the knowledge it conveys is legitimated by the processes of peer review, publication, and citation. This is not necessarily a problem—it certainly works, makes sense, and is authoritative. But we should also remember that this medium wasn't always used and won't always be: think, for example, of rhetoric and philosophy, grounded in oral and performative traditions, or of “practice-based” disciplines such as dance, design, film, and music, in which the intellectual product is not a print artifact. In much the same vein, Digital Humanities denaturalizes print, awakening us to the importance of what N.
Katherine Hayles calls “media-specific analysis” in order to focus attention on the technologies of inscription, the material support, the system of writing down (“aufschreiben,” as Friedrich Kittler would say), the modes of navigation (whether turning pages or clicking icons), and the forms of authorship and creativity (not only of content but also of typography, page layout, and design). In this watershed moment of transformation, awareness of media-specificity is nearly inescapable.

Far from suggesting that new technologies are better or that they will save us (or resuscitate our “dying disciplines” or "struggling universities"), Lyotard concludes his report with a call for the public to have “free access to the memory and data banks” (67). He grounds the argument dialectically, as technologies have the potential to do many things at once: to exercise exclusionary control over information as well as to democratize information by opening up access and use. This, I would argue, is the persistent dialectic of any technology, ranging from communications technologies (print, radio, telephone, television, and the web) to technologies of mobility and exchange (railways, highways, and the Internet). These technologies of networking and connection do not necessarily bring about the ever-greater liberation of humankind, as Nicholas Negroponte asserted in his wildly optimistic book Being Digital (1995), for they always have a dialectical underbelly: mobile phones, social networking technologies, and perhaps even the hundred-dollar computer will not only be used to enhance education, spread democracy, and enable global communication but will also likely be used to perpetrate violence and even orchestrate genocide in much the same way that the radio and the railway did in the last century (despite the belief that both would somehow liberate humanity and join us all together in a happy, interconnected world that never existed before [Presner]). Indeed, this is why any discussion about technology cannot be separated from one about power, legitimacy, and authority.

Rather than making predictions, I would like to turn to the state of knowledge in the humanities in 2009. My relatively recent arrival in this discussion, after centuries of thought on this topic, constitutes, in fact, a unique vantage point from which I can begin: today, the changes brought about by new communication technologies—including but hardly limited to web-based media forms, locative technologies, digital archives, social networking, mixed realities, and now cloud computing—are so proximate and so sweeping in scope and significance that they may appropriately be compared to the print revolution.1 But our contemporary changes are happening on a very rapid timescale, taking place over months and years rather than decades and centuries. Because of the rapidity of these developments, the intellectual tools, methodologies, disciplinary practices, and institutional structures have just started to emerge for responding to, engaging with, and interpreting the massive social, cultural, economic, and educational transformations happening all around us. Digital Humanities explores a universe in which print is no longer the exclusive or normative medium in which knowledge is produced and/or disseminated; instead, print finds itself absorbed into new, multimedia configurations, alongside other digital tools, techniques, and media that have profoundly altered the production and dissemination of knowledge in the Arts, Humanities, and Social Sciences (see, for example, the Digital Humanities Manifesto, Figs. 1 and 2).

I consider “Digital Humanities” to be an umbrella term for a wide array of practices for creating, applying, interpreting, interrogating, and hacking both new and old information technologies. These practices—whether conservative, subversive, or somewhere in between—are not limited to conventional humanities departments and disciplines, but affect every humanistic field at the university and transform the ways in which humanistic knowledge reaches and engages with communities outside the university. Digital Humanities projects are, by definition, collaborative, engaging humanists, technologists, librarians, social scientists, artists, architects, information scientists, and computer scientists in conceptualizing and solving problems, which often tend to be high-impact, socially engaged, and of broad scope and duration. At the same time, Digital Humanities is an outgrowth and expansion of the traditional scope of the humanities, not a replacement for or rejection of humanistic inquiry. I firmly believe that the role of the humanist is more critical at this historic moment than ever before, as our cultural legacy as a species migrates to digital formats and our relation to knowledge, cultural material, technology, and society is radically re-conceptualized. As Jeffrey Schnapp and I articulated in various instantiations of the Digital Humanities Manifesto, it is essential that humanists assert and insert themselves into the twenty-first-century culture wars (which are largely being defined, fought, and won by corporate interests). Why, for example, were humanists, foundations, and universities conspicuously—even scandalously—silent when Google won its book search lawsuit and effectively won the right to transfer copyrights of orphaned books to itself? Why were they silent when the likes of Sony and Disney essentially engineered the Digital Millennium Copyright Act, radically restricting intellectual property, copyright, and sharing?
The Manifesto is a call to humanists for a much deeper engagement with digital culture production, dissemination, access, and ownership. If new technologies are dominated and controlled by corporate and entertainment interests, how will our cultural legacy be rendered in new media formats? By whom and for whom? These are questions that humanists must urgently ask and answer.

Like all manifestos, especially those that came out of the European avant-garde in the early twentieth century, the Digital Humanities Manifesto is bold in its claims, fiery in its language, and utopian in its vision. It is not a unified treatise or a systematic analysis of the state of the humanities; rather, it is a call to action and a provocation that has sought to perform the kind of debate and transformation for which it advocates. As a participatory document circulated throughout the blogosphere, the three major iterations of the Digital Humanities Manifesto are available in many forums online: Versions 1.0 and 2.0 exist primarily as Commentpress blogs, and Version 3.0 is an illustrated, print-ready PDF file, which, as of this writing, has been translated into four languages and widely cited, cribbed, remixed, and republished on numerous blogs. The rationale for using Commentpress was to make some of the more incendiary ideas in the Manifesto available for immediate public scrutiny and debate, something that is facilitated by the blogging engine's paragraph-by-paragraph commenting feature, resulting in a richly interlinked authoring/commenting environment. In a Talmudic vein, the comments and critiques quickly overtook the original “text,” creating a web of commentary and a multiplication of voices and authors. By Versions 2.0 and 3.0, the authorship of the Manifesto had extended in multiple directions, with substantial portions authored by scholars in the field, students, and the general public. Moreover, since the Manifesto was widely distributed in the blogosphere and on various Digital Humanities listservs, it instantiated one of the key things that it called for: participatory humanities scholarship in the expanded public sphere.

Figure 1
Figure 1 (graphics1.png)

Figure 2
Figure 2 (graphics2.png)

Reflecting on the Manifesto nine months later, I believe it is not only a call for humanists to be deeply engaged with every facet of the most recent information revolution (Robert Darnton points out that we are living through the beginnings of the fourth Information Age, not the first), but also a plea for humanists to guide the reshaping of the university—curricula, departmental and disciplinary structures, library and laboratory spaces, the relationship between the university and the greater community—in creative ways that facilitate the responsible production, curation, and dissemination of knowledge in the global cultural and social landscapes of the twenty-first century. Far from providing "right answers," the Manifesto is an attempt to examine the explanatory power, relevance, and cogency of established organizations of knowledge that were inherited from the nineteenth and twentieth centuries and to imagine creative possibilities and futures that build on long-standing humanistic traditions. It is not a call to throw the proverbial baby out with the bathwater, but rather to interrogate disciplinary and institutional structures, the media of knowledge production, and modes of meaning making, and to take seriously the challenges and possibilities set forth by the advent of the fourth Information Age. The Manifesto argues that the work of the humanities is absolutely critical and, in fact, more necessary than ever for developing thoughtful responses, purposeful interpretations, trenchant critiques, and creative alternatives—and that this work cannot be done while locked into restrictive disciplinary and institutional structures, singular media forms, and conventional expectations about the purview, function, and work of the humanities.

The Manifesto in no way declares the humanities "dead" or placed in peril by new technologies; rather, it argues, the humanities are more necessary and relevant today than perhaps at any other time in history. It categorically rejects Stanley Fish's lament of “the last professor” and the work of his students, such as Frank Donoghue, which claims that the humanities will soon die a quiet death. To be sure, we must be vigilant about the “corporate university” and distinguish our Digital Humanities programs from the “digital diploma mills” (Noble), but we must also demonstrate that the central work of the humanities—creation, interpretation, critique, comparative analysis, historical and cultural contextualization—is absolutely essential as our cultural forms migrate to digital formats and new cultural forms are produced that are “natively digital.” Fish and Donoghue base their assessment of the end of the humanities on the fact that its research culture, curricular programs, departmental structures, tenure and promotion standards, and, most of all, publishing models are based on paradigms that are quickly eroding. Indeed, they are not wrong, and their assessment is quite convincing when we start from the crisis of the present and look backwards: academic books in the humanities barely sell enough copies to cover the cost of their production, and the job market—as the 2009 MLA report attests—reveals the worst year on record for PhDs hoping to land tenure-track positions in English or Foreign Literature departments (Jaschik). What this evidences is certainly a crisis, but the way out is neither to surrender nor to attempt to replicate the institutional structures, research problems, disciplinary practices, and media methodologies of the past; rather, it may be to recognize the liberating—and profoundly unsettling—possibilities afforded by the imminent disappearance of one paradigm and the emergence of another.
The humanities, rather than disappear as Fish predicts, can instead guide this paradigm shift by shaping the look of learning and knowledge in this new world.

Instead of facilely dismissing either the critical work of the humanities or the potentialities afforded by new technologies, we must be engaged with the broad horizon of possibilities for building upon excellence in the humanities while also transforming our research culture, our curriculum, our departmental and disciplinary structures, our tenure and promotion standards, and, most of all, the media and format of our scholarly publications. While new technologies may threaten to overwhelm traditional approaches to knowledge and may, in fact, displace certain disciplines, scholarly fields, and pedagogical practices, they can also revitalize humanistic traditions by allowing us to ask questions that weren't previously possible. We see this, for example, in fields such as classics and archaeology, which have widely embraced digital tools such as Geographic Information Systems (GIS) and 3D modeling to advance their research in significant and unexpected ways. We also see it in text-based fields like history and literature, which have begun to draw on new authoring, data-mining, and text-analysis tools for dissecting complex corpora on a scale and with a level of precision never before possible.

While the first wave of Digital Humanities scholarship in the late 1990s and early 2000s tended to focus on large-scale digitization projects and the establishment of technological infrastructure, the current second wave of Digital Humanities—what can be called “Digital Humanities 2.0”—is deeply generative, creating the environments and tools for producing, curating, and interacting with knowledge that is “born digital” and lives in various digital contexts. While the first wave of Digital Humanities concentrated, perhaps somewhat narrowly, on text analysis (such as classification systems, mark-up, text encoding, and scholarly editing) within established disciplines, Digital Humanities 2.0 introduces entirely new disciplinary paradigms, convergent fields, hybrid methodologies, and even new publication models that are often not derived from or limited to print culture.

Let me provide a couple of examples based on my own work on a web-based research, educational, and publishing project called HyperCities. Developed through a collaboration between UCLA and USC, HyperCities is a digital media platform for exploring, learning about, and interacting with the layered histories of city spaces such as Berlin, Rome, New York, Los Angeles, and Tehran. It brings together scholars from fields such as geography, history, literary and cultural studies, architecture and urban planning, and classics to investigate the fundamental idea that all histories “take place” somewhere and sometime and that these histories become more meaningful and valuable when they interact with other histories in a cumulative, ever-expanding, and interactive platform. Developed using Google's Map and Earth APIs, HyperCities features research and teaching projects that bring together the analytic tools of GIS, the geo-markup language KML, and traditional methods of humanistic inquiry.2 The central theme is geo-temporal analysis and argumentation, an endeavor that cuts across a multitude of disciplines and relies on new forms of visual, cartographic, and time/space-based narrative strategies. Just as the turning of the page carries the reader forward in a traditionally conceived academic monograph, so, too, the visual elements, spatial layouts, and kinetic guideposts guide the “reader” through the argument situated within a multi-dimensional, virtual cartographic space. HyperCities currently features rich content on ten world cities, including more than two hundred geo-referenced historical maps, hundreds of user-generated maps, and thousands of curated collections and media objects created by users in the academy and general public.
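To give a sense of what "geo-temporal" markup looks like in practice, the following is a minimal KML sketch of a single geo-referenced, time-stamped media object. The element names follow the open KML 2.2 standard mentioned above, but the placemark name, dates, and coordinates are invented for illustration and are not drawn from HyperCities itself:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <!-- A hypothetical curated object: a name, a description,
         a temporal extent, and a geographic anchor -->
    <name>Potsdamer Platz (hypothetical example)</name>
    <description>A media object situated in space and time.</description>
    <!-- TimeSpan gives the object its temporal extent, which is what
         enables diachronic (time-slider) navigation alongside
         synchronic, map-based browsing -->
    <TimeSpan>
      <begin>1920-01-01</begin>
      <end>1933-01-30</end>
    </TimeSpan>
    <!-- Coordinates are longitude,latitude,altitude -->
    <Point>
      <coordinates>13.3759,52.5096,0</coordinates>
    </Point>
  </Placemark>
</kml>
```

Because every object carries both coordinates and a time span, collections built from such records can be layered, filtered by period, and compared across projects; this is the technical basis of the "geo-temporal argumentation" the platform is built around.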

As a Digital Humanities 2.0 project, HyperCities is a participatory platform that features collections that pull together digital resources via network links from countless distributed databases. Far from a single container or meta-repository, HyperCities is the connective tissue for a multiplicity of digital mapping projects and archival resources that users curate, present, and publish. What they all have in common is geo-temporal argumentation. For example, the digital curation project “2009-10 Election Protests in Iran” (see Fig. 3) meticulously documents, often minute-by-minute and block-by-block, the sites where protests emerged in the streets of Tehran and other cities following the elections in mid-June. With more than one thousand media objects (primarily geo-referenced YouTube videos, Twitter feeds, and Flickr photographs), the project is possibly the largest single digital collection to trace the history of the protests and their violent suppression. It is a digital curation project that adds significant value to these individual and dispersed media objects by bringing them together in an intuitive, cumulative and open-ended geo-temporal environment that fosters analytic comparisons through diachronic and synchronic presentations of spatialized data. In addition to organizing, presenting, and analyzing the media objects, the creator of the project, Xarene Eskandar, is also working on qualitative analyses of the data (such as mappings of anxiety and shame) as well as investigating how media slogans used in the protests were aimed at many different audiences, especially Western ones.

Figure 3
Election Protests in Iran
Election Protests in Iran (graphics3.png)

Another project, “Ghost Metropolis” by Philip Ethington (see Fig. 4), is a digital companion to his forthcoming book on the history of Los Angeles, which starts in 13,000 BCE and extends through the present. Ethington demonstrates how history, experienced with complex visual and cartographic layers, “takes” and “makes” place, transforming the urban, cultural, and social environment as various “regional regimes” leave their impression on the landscape of the global city of Los Angeles. The scholarship of this project can be fully appreciated only in a hypermedia environment that allows a user to move seamlessly between global and local history, overlaying datasets, narratives, cartographies, and other visual assets in a richly interactive space. Significantly, this project—a scholarly publication in its own right—can be viewed side-by-side with and even “on top of” other projects that address cultural and social aspects of the same layered landscape, such as the video documentaries created in 2008-09 by immigrant youth living in Los Angeles' historic Filipinotown. The beauty of this approach is that scholarly research intersects with and is enhanced by community memories and archiving projects that tend, at least traditionally, to exist in isolation from one another.

Figure 4
Ghost Metropolis
Ghost Metropolis (graphics4.png)

HyperCities is also used for pedagogical purposes to help students visualize and interact with the complex layers of city spaces. Student projects exist side-by-side with scholarly research and community collections and can be seen and evaluated by peers. These projects, such as those created by my students for a General Education course at UCLA, “Berlin: Modern Metropolis,” demonstrate a high degree of skill in articulating a multi-dimensional argument in a hypermedia environment and bring together a wide range of media resources ranging from 2D maps and 3D re-creations of historical buildings to photographs, videos, and text documents (see Fig. 5). What all of these projects have in common is an approach to knowledge production that underscores the distributed dimension of digital scholarship (by dint of the fact that all of the projects make use of digital resources from multiple archives joined together by network links), its interdisciplinary, hypermedia approach to argumentation, and its open-ended, participatory approach to interacting with and even extending and/or remixing media objects. Moreover, with the exception of the last, all of these HyperCities projects are works-in-progress, something that underscores the processual, iterative, and exploratory nature of Digital Humanities scholarship.

Figure 5
The Controversy over Rebuilding the Royal Palace in Berlin (Student Project)
The Controversy over Rebuilding the Royal Palace in Berlin (Student Project) (graphics5.png)

This transformation in Digital Humanities scholarship is something that roughly parallels the development of the web from relatively static, read-only portals and stand-alone applications for the display of content to participatory platforms that foster collaborative production across media environments through the repurposing of both content and software. The birth of Web 2.0 has been well articulated by technology gurus such as Tim O'Reilly as well as leaders in the field of Digital Humanities such as Cathy Davidson and David Theo Goldberg, the co-founders of the virtual consortium HASTAC (Humanities, Arts, Science, and Technology Advanced Collaboratory), both of whom are fierce advocates of “Humanities 2.0.” Humanities 2.0 refers to generative humanities, a humanistic practice anchored in creation, curation, collaboration, experimentation, and the multi-purposing or multi-channeling of humanistic knowledge. By rejecting the affect-neutral, Enlightenment myth of simply relaying disembodied information and, instead, emphasizing design, multimediality, and the experiential, Digital Humanities 2.0 seeks to expand the affective range to which scholarship can aspire.

Let me now return to and further elaborate the three observations about the state of knowledge that I distilled from Lyotard's report on “computerized societies”: first, the status of the social bond; second, the status of the university and, in particular, the place of the Humanities; and third, the question of knowledge legitimation. The three are deeply entangled and cannot be fully disaggregated. By social bond, at least in the context of the university, I am not referring to uses of social networking applications like Facebook (which, in my opinion, is as much of a data-harvester and miner of personal information as it is a social technology); rather, I mean the transformation of scholarly practice from individuals working and writing in isolation to team-based approaches to research problems that cannot be conceptualized, let alone solved, by single scholars. Here, we are beginning to see the emergence of finite, flexible, and nimble “knowledge problematics” that do not derive from or reflect entrenched disciplinary lines, methodological assumptions, or scholarly silos. I see these knowledge problematics as “virtual departments,” which exist only for a finite period of time, are agile, and are constantly built and dismantled. To use a term from the emergent field of digital cultural mapping,3 they might function as “overlays” on existing departments and institutions, connecting distant scholars and communities together and creating new feedback loops among them. It is imperative that we imagine concrete ways of reinventing the “social bond” at the university as an expanding set of networks that can be variously mobilized and also dispersed. The finitude of the social bond is just as important as its mobilization, because departmental and disciplinary structures must be flexible, nimble, adaptable, and, ultimately, mortal in order to foster innovation.

This affects not only the state of disciplines but also the status and role of the university in scholarly creation. In its best form, the university does not merely “store” and “transmit” knowledge but rather is a site of contestation, experimentation, and imaginative creation and re-creation. Today, however, many of the most innovative and impactful research technologies (at least for humanists and probably social scientists as well) are being developed by private industry, leaving scholars and librarians to be consumers and users of these technologies. As Johanna Drucker has cogently argued, the design of new research environments cannot be left to technical staff and private corporations, as if this were somehow not intellectual work or not something in which we as scholars should be invested. The very word-processing tools that I use to write this paper are not value-neutral; Word, PowerPoint, Outlook, web browsers, web applications like Google, Wikipedia, Facebook, and Second Life, and even markup and programming languages like HTML, XML, Java, and C++ are all culturally contingent technologies for knowledge production and dissemination. You may not agree with or care about the knowledge being produced here, but regardless, we—as humanities scholars—have barely begun to grapple with the assumptions built into these technologies and languages, with their implications, and with the social and cultural practices that surround them. We barely know what can be thought using these technologies, let alone what cannot be thought. The only thing that can be said with certainty is that these technologies were not designed by and for humanists. I suggest that if we apply the same rigorous media-specific, social, cultural, and economic analyses that we have honed in studying print culture to technologies that are not just emerging but already prevalent, we can begin to understand the status of knowledge in our “computerized societies” of 2009.
Beyond “studying” such technologies, we must actively engage with, design, create, and even hack the environments and technologies that facilitate humanities research and knowledge production.

Finally, let me turn to the issue of knowledge legitimation, which is where Digital Humanities encounters the most resistance, skepticism, and denial. Most humanities scholars have been trained in “Normal Humanities” (to apply Thomas Kuhn's formulation somewhat loosely to our disciplines). “Normal Humanities” means clearly defined and legitimated research based on past achievements, on stabilized ways of knowing and communicating this knowledge, and on general agreement about what counts as and what looks like a research problem. I would venture that the vast majority of scholarship is not really novel but falls under “Normal Humanities,” obeying both the tacit and explicit rules of disciplinarity, media form, scholarly citation, and the accepted theoretical and methodological paradigms of a given field. Many people earn tenure and promotion by doing “Normal Humanities” well. What we are seeing today, however, is much more than a “paradigm shift.” We are at the beginning of a shift in “standards governing permissible problems, concepts, and explanations” (Kuhn, 106), and in the midst of a transformation of the institutional and conceptual conditions of possibility for the generation, transmission, accessibility, and preservation of knowledge. To be sure, “traditional” humanities knowledge will not go the way of Ptolemy's computations of planetary position or phlogiston theory, since the transformation in humanities paradigms is not, strictly speaking, based upon “incommensurability” with what came before. Rather, the transformation alters the ways in which the humanities articulates and investigates problems, as well as the institutional and media structures that facilitate problem-solving in the first place.
Within the transition period, there will, of course, be much searching for the fundamentals of the field as well as the emergence of competing and overlapping paradigms, but when the transition is complete, as Kuhn predicts with regard to scientific revolutions, “the profession will have changed its view of the field, its methods, and its goals” (85). A new “Normal Humanities” will have emerged.

Let me end by throwing down the gauntlet and arguing that Wikipedia is not only a model for the humanities but also for the university today. To be sure, there are other examples that I might have mentioned, but Wikipedia is probably the most pervasive, non-corporate, digital technology platform for knowledge generation. Far from a web-based encyclopedia for “intellectual sluggards” engaged in a “flight from expertise” (to quote Michael Gorman, former President of the American Library Association [qtd. in Stothart]), Wikipedia, I believe, represents a truly innovative, global, multilingual, collaborative, knowledge-generating community and platform for authoring, editing, distributing, and versioning knowledge.  To date, it has more than three million content pages, more than three hundred million edits, over ten million registered users, and articles in 47 languages (Wikipedia Statistics). This is a massive achievement for eight years of work. Wikipedia could, in fact, be a model for rethinking collaborative research and the dissemination of knowledge at institutions of higher learning, which are all too often fixated on “individual training, discrete disciplines, and isolated achievement and accomplishment” (Davidson and Goldberg, 14). 

Wikipedia represents a dynamic, flexible, and open-ended network for knowledge creation and distribution that underscores process, collaboration, access, interactivity, and creativity, with an editing model and versioning system that documents every contingent decision made by every contributing author. But you perhaps object: the content is amateurish, open to anyone, and hence cannot be trusted. Why would we want to abandon credentialing and expertise? And I reply: the point is not credentialing versus amateurism (or expertise versus crowd-sourcing); it is that expertise and credentialing are distributed and shared in a way that increases the depth, scope, duration, and impact of both. Moreover, consensus never finally arrives, because the system keeps an ongoing and ever-expanding record of each change and, significantly, always exposes its own conditions of possibility for knowledge production. At this moment in its short life, Wikipedia is already the most comprehensive, representative, and pervasive participatory platform for knowledge production ever created by humankind. That is worth some pause and reflection.

The point here is not that Wikipedia is “the answer” to the crisis of the humanities or that humanities scholarship should turn into Wikipedia entries; rather, it is that Wikipedia represents a very different model for creating, authorizing, and distributing knowledge. Google Earth and HyperCities represent others; social technologies, virtual worlds, and creative-commons authoring environments offer still others. A central part of the work of the humanities must be to create and interrogate new models for knowledge production in our “computerized” societies of 2009. Not only do we have to rethink how knowledge gets created, we also have to rethink what knowledge looks (or sounds, feels, or tastes) like, who gets to create it, when it is “done” or transformed, how it gets legitimated and authorized, and how it is made accessible to a significantly broader (and potentially global) audience. The twenty-first-century university has the potential to generate, legitimate, and disseminate knowledge in radically new ways and on a scale never before realized, involving technologies and communities that rarely (if ever) have been engaged in a global knowledge-creation enterprise. We have just begun to do this. And that is what Digital Humanities 2.0 is fundamentally about.


Darnton, Robert. “The Library in the New Age.” New York Review of Books 55.10, June 12, 2008.

Davidson, Cathy. “Humanities 2.0: Promise, Perils, Predictions.” PMLA 123.3 (2008): 707-17.

Davidson, Cathy and David Theo Goldberg. The Future of Learning Institutions. Cambridge: MIT Press, 2009.

“Digital Humanities Manifesto.” Commentpress versions 1.0 and 2.0. Institute for Future of the Book. (accessed September 9, 2009).

Donoghue, Frank. The Last Professors: The Corporate University and the Fate of the Humanities. New York: Fordham University Press, 2008.

Drucker, Johanna. “Blind Spots.” The Chronicle of Higher Education. April 3, 2009. (accessed September 9, 2009).

Fish, Stanley. “The Last Professor.” The New York Times. January 18, 2009. (accessed September 9, 2009).

Foucault, Michel. “The Discourse on Language.” The Archaeology of Knowledge. Translated by A.M. Sheridan Smith. New York: Pantheon Books, 1972.

Hayles, N. Katherine. Writing Machines. Cambridge: MIT Press, 2002.


Jaschik, Scott. “Disappearing Jobs.” Inside Higher Ed. December 17, 2009. (accessed December 22, 2009).

Kittler, Friedrich. Discourse Networks 1800/1900. Translated by Michael Metteer with Chris Cullens. Stanford: Stanford University Press, 1990.

Kuhn, Thomas S. The Structure of Scientific Revolutions. Chicago: University of Chicago Press, 1996.

Lyotard, Jean-François. The Postmodern Condition: A Report on Knowledge. Translated by Geoff Bennington and Brian Massumi. Foreword by Fredric Jameson. Minneapolis: University of Minnesota Press, 1991.

Negroponte, Nicholas. Being Digital. New York: Vintage, 1996.

Noble, David. Digital Diploma Mills: The Automation of Higher Education. New York: Monthly Review Press, 2001.

O'Reilly, Timothy. “What is Web 2.0: Design Patterns and Business Models for the Next Generation of Software.” 2005. (accessed September 9, 2009).

Presner, Todd. Mobile Modernity: Germans, Jews, Trains. New York: Columbia University Press, 2007.

Stothart, Chloe. “Web Threatens Learning Ethos.” Times Higher Education. June 22, 2007. (accessed September 9, 2009).

Wikipedia Statistics. (accessed September 9, 2009).


  1. Web-based media forms refer to any media produced and broadcast on the web, ranging from YouTube videos to Wikipedia. Many of these media are also viewable on mobile devices and, increasingly, search technologies are keyed to physical location. With a GPS-enabled mobile phone, for example, geographically relevant content can be uploaded and downloaded. Digital archives are steadily moving from being “digital silos” to becoming interoperable repositories, allowing for materials to be aggregated and integrated across collections. With the innovations of Google and Amazon, to name two examples, cloud computing no longer stores data on single machines or a limited number of servers but in the (virtually) infinite “cloud,” rendering data accessible anywhere, at anytime. Finally, the explosion of social networking sites allows for real-time interaction with friends, creating online communities composed of personal networks. Mixed reality applications such as Second Life integrate real-world social networking with embodied experiences of navigation, gaming, and moving through virtual spaces.
  2. An Application Programming Interface (API) allows programmers to build on, customize, and incorporate existing software code into their own applications. In 2005, Google released its map API, which let programmers invent their own mapping “mash-ups” using the basic content and technologies developed by Google.
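To make the notion of a mapping “mash-up” concrete, here is a minimal sketch of how a scholar's own dataset could be combined with a third-party mapping service by building a request URL for a static-map endpoint. The endpoint follows the general shape of Google's Static Maps service, but the `mashup_url` function, the sample sites, and the parameter choices are illustrative assumptions, not a documented recipe from this article.

```python
from urllib.parse import urlencode

# Base endpoint for a static-map web service (Google's Static Maps API
# uses this general form; treat the details as illustrative).
BASE_URL = "https://maps.googleapis.com/maps/api/staticmap"

def mashup_url(sites, zoom=12, size="640x480"):
    """Overlay a humanities dataset of (name, lat, lon) records onto a
    base map by encoding each site as a map marker in the query string."""
    markers = ["{:.4f},{:.4f}".format(lat, lon) for _, lat, lon in sites]
    params = {"zoom": zoom, "size": size, "markers": "|".join(markers)}
    return BASE_URL + "?" + urlencode(params)

# Example: two Berlin landmarks from a hypothetical cultural-mapping dataset
sites = [("Brandenburg Gate", 52.5163, 13.3777),
         ("Museum Island", 52.5169, 13.4010)]
url = mashup_url(sites)
```

The point of the sketch is the division of labor footnote 2 describes: the base cartography and rendering come from the provider's API, while the humanities content (the sites and their coordinates) is supplied and repurposed by the scholar.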
  3. The emergent field of Digital Cultural Mapping brings together the analytic tools of Geographic Information Systems (GIS) and traditional methods of humanistic inquiry in order to investigate a wide-range of cultural, historical, and social dynamics through space-time visualizations. See, for example, UCLA's new program in Digital Cultural Mapping:
