Underpinnings of the Social Edition

Module by: Ray Siemens. Edited by: Frederick Moody.

From the collection The Shape of Things to Come (Rice University Press).

Underpinnings of the Social Edition? A Narrative, 2004-9, for the Renaissance English Knowledgebase (REKn) and Professional Reading Environment (PReE) Projects

Ray Siemens, Mike Elkink, Alastair McColl, Karin Armstrong, James Dixon, Angelsea Saby, Brett D. Hirsch and Cara Leitch, with Martin Holmes, Eric Haswell, Chris Gaudet, Paul Girn, Michael Joyce, Rachel Gold, and Gerry Watson, and members of the PKP, Iter, TAPoR, and INKE teams.

Abstract

The Renaissance English Knowledgebase (REKn) is an electronic knowledgebase consisting of primary and secondary materials (text, image, and audio) related to the Renaissance period. The limitations of existing tools to accurately search, navigate, and read large collections of data in many formats, coupled with the findings of our research into professional reading, led to the development of a Professional Reading Environment (PReE) to meet these needs. Both were conceived as necessary components of a prototype textual environment for an electronic scholarly edition of the Devonshire Manuscript. This article offers an overview of the development of both REKn and PReE at the Electronic Textual Cultures Laboratory (ETCL) at the University of Victoria, from proof of concept through to their current iteration, concluding with a discussion about their future adaptation, implementation, and integration with other projects and partnerships.

1. Introduction and Overview

The Renaissance English Knowledgebase (REKn) is a prototype research knowledgebase consisting of a large dynamic corpus of both primary (15,000 text, image, and audio objects) and secondary materials (some 100,000 articles, e-books, etc.). Each electronic document is stored in a database along with its associated metadata and, in the case of many text-based materials, a light XML encoding. The data is queried, analyzed, and examined through a stand-alone prototype document-centered reading client called the Professional Reading Environment (PReE), written for initial prototyping in .NET and, in a more recent implementation, with key parts modeled in Ruby on Rails.

Recently, both projects have moved into new research developmental contexts, requiring some dramatic changes in direction from our earlier proof of concept. For the second iteration of PReE, our primary goal continues to be to translate it from a desktop environment to the Internet. By following a web-application paradigm, we are able to take advantage of superior flexibility in application deployment and maintenance, the ability to receive and disseminate user-generated content, and multi-platform compatibility. As for REKn, experimentation with the prototype has seen the binary and textual data transferred from the database into the file system, affording gains in manageability and scalability and the ability to deploy third-party index and search tools.

As initial proofs of concept, REKn and PReE evoked James Joyce’s apt comment that “a man of genius makes no mistakes”; rather, that “his errors are volitional and are the portals of discovery” (1986: 9.228-29). In our case, we set out to develop a “project of genius” and found that our errors (volitional or, as was more often the case, accidental) certainly provided the necessary direction to pursue a more usable and useful reading environment for professional readers.1

This article offers a brief outline of the development of both REKn and PReE at the Electronic Textual Cultures Laboratory (ETCL) at the University of Victoria, from proof of concept through to their current iterations, concluding with a discussion about their future adaptations, implementations, and integrations with other projects and partnerships. This narrative situates REKn and PReE within the context of prototyping as a research activity, and documents the life cycle of a complex digital humanities research program that is itself part of larger, ongoing, iterative programs of research.2

2. Conceptual Backgrounds and Critical Contexts

2.1. Conceptual Backgrounds

The conceptual origins of REKn may be located in two fundamental shifts in literary studies in the 1980s—the emergence of New Historicism and the rise of the sociology of the text—and in the proliferation of large-scale text-corpus humanities computing projects in the late 1980s and early 1990s.

2.1.1. New Historicism

New Historicism situated itself in opposition to earlier critical traditions that dismissed historical and cultural context as irrelevant to literary study, and proposed instead that “literature exists not in isolation from social questions but as a dynamic participant in the messy processes of cultural formation.” Thus, New Historicism eschewed the distinction between text and context, arguing that both “are equal partners in the production of culture” (Hall 2007: vii). In Renaissance studies, as elsewhere, this ideological shift challenged scholars to engage not only with the traditional canon of literary works but also with the whole corpus of primary materials at their disposal. As New Historicism blurred the lines between the literary and non-literary, its proponents were quick to illustrate that all cultural forms—literary and non-literary, textual and visual—could be freely and fruitfully “read” alongside and against one another.3

2.1.2. The Sociology of Text

A concurrent paradigm shift in bibliographical circles was the rise of the social theory of text, exemplified in the works of Jerome J. McGann (1983) and D. F. McKenzie (1986). “If the work is not confined to the historically contingent and the particular,” the social theory of text posited, “it is nevertheless only in its expressive textual form that we encounter it, and material conditions determine meanings” (Sutherland 1997: 5). In addition to being “an argument against the notion that the physical book is the disposable container,” as Kathryn Sutherland has suggested, “it is also an argument in favor of the significance of the text as a situated act or event, and therefore, under the conditions of its reproduction, necessarily multiple” (1997: 6).

In other words, the social theory of text rejected the notion of individual literary authority in favor of a model where social processes of production disperse that authority. According to this view, the literary “text” is not solely the product of authorial intention, but the result of interventions by many agents (such as copyists, printers, publishers) and material processes (such as revision, adaptation, publication). In practical terms, the social theory of text revised the role of the textual scholar and editor, who (no longer concerned with authorial intention) instead focused on recovering the “social history” of a text—that is, the multiple and variable forms of a text that emerge out of these various and varied processes of mediation, revision, and adaptation.4

2.1.3. Knowledgebases

The proliferation of Renaissance text-corpus humanities computing projects in North America, Europe, and New Zealand during the late 1980s and early 1990s5 might be considered the inevitable result of the desire of Renaissance scholars, spurred on by the project of New Historicism, to engage with a vast body of primary and secondary materials in addition to the traditional canon of literary works; the rise of the sociology of text in bibliographical circles; and the growing realization that textual analysis, interpretation, and synthesis might be pursued with greater ease and accuracy through the use of an integrated electronic database.

A group of scholars involved in such projects, recognizing the value of collaboration and centralized coordination, engaged in a planning meeting towards the creation of a Renaissance Knowledge Base (RKB).6 Consisting of “the major texts and reference materials […] recognized as critical to Renaissance scholarship,”7 the RKB hoped to “deliver unedited primary texts,” to “allow users to search a variety of primary and secondary materials simultaneously,” and to stimulate “interpretations by making connections among many kinds of texts” (Richardson & Neuman 1990: 1-2). Addressing the question of “Who needs RKB?” the application offered the following response:

Lexicographers [need the RKB] in order to revise historical dictionaries (the Oxford English Dictionary, for example, is based on citation slips, not on the original texts). Literary critics need it, because the RKB will reveal connections among Renaissance works, new characteristics, and nuances of meaning that only a lifetime of directed reading could hope to provide. Historians need the RKB, because it will let them move easily, for example, from biography to textual information. The same may be said of scholars in linguistics, Reformation theology, humanistic philosophy, rhetoric, and socio-cultural studies, among others. (1990: 2)

The need for such a knowledgebase was (and is) clear. Since each of its individual components was deemed “critical to Renaissance scholarship,” and because the RKB intended to “permit each potentially to shed light on all the others,” the group behind the RKB felt that “the whole” was “likely to be far greater than the sum of its already-important parts” (1990: 2).

Recommendations following the initiative’s proposal suggested a positive path, drawing attention to the merit of the approach and suggesting further ways to bring about the creation of this resource to meet the research needs of an even larger group of Renaissance scholars. Many of the scholars involved persevered, organizing an open meeting on the RKB at the 1991 ACH/ALLC Conference in Tempe to determine the next course of action. Also present at that session were Eric Calaluca (Chadwyck-Healey), Mark Rooks (InteLex), and Patricia Murphy, all of whom proposed to digitize large quantities of primary materials from the English Renaissance.

From here, the RKB project as originally conceived took new (and largely unforeseen) directions. Chadwyck-Healey was to transcribe books from the Cambridge Bibliography of English Literature and publish various full-text databases now combined as Literature Online. InteLex was to publish its Past Masters series of full-text humanities databases, first on floppy disk and CD-ROM and now web-based. Murphy’s project to scan and transcribe large numbers of books in the Short-Title Catalogue to machine-readable form was taken up by Early English Books Online and later the Text Creation Partnership. In the decade since the scholars behind the RKB project first identified the need for a knowledgebase of Renaissance materials, its essential components and methodology have been outlined (Lancashire 1992). Moreover, considerable related work was soon to follow, some by the principals of the RKB project and much by those beyond it, such as R. S. Bear (Renascence Editions), Michael Best (Internet Shakespeare Editions), Gregory Crane (Perseus Digital Library), Patricia Fumerton (English Broadside Ballad Archive), Ian Lancashire (Lexicons of Early Modern English), and Greg Waite (Textbase of Early Tudor English); by commercial publishers such as Adam Matthew Digital (Defining Gender, 1450–1910; Empire Online; Leeds Literary Manuscripts; Perdita Manuscripts; Slavery, Abolition and Social Justice, 1490–2007; Virginia Company Archives), Chadwyck-Healey (Literature Online), and Gale (British Literary Manuscripts Online, c.1660–c.1900; State Papers Online, 1509–1714), and by consortia such as Early English Books Online–Text Creation Partnership (University of Michigan, Oxford University, the Council of Library and Information Resources, and ProQuest) and Orlando (Cambridge University Press and University of Alberta).

As part of the shift from print to electronic publication and archiving, work on digitizing necessary secondary research materials has been handled chiefly, but not exclusively, by academic and commercial publishers. Among others, these include Blackwell (Synergy), Cambridge University Press, Duke University Press (eDuke), eBook Library (EBL), EBSCO (EBSCOhost), Gale (Shakespeare Collection), Google (Google Book Search), Ingenta, JSTOR, netLibrary, Oxford University Press, Project MUSE, ProQuest (Periodicals Archive Online), Taylor & Francis, and University of California Press (Caliber). Secondary research materials are also being provided in the form of (1) open access databases, such as the Database of Early English Playbooks (Alan B. Farmer and Zachary Lesser), the English Short Title Catalogue (British Library, Bibliographical Society, and the Modern Language Association of America), and the REED Patrons and Performance Web Site (Records of Early English Drama and the University of Toronto); (2) open access scholarly journals, such as those involved in the Public Knowledge Project or others listed on the Directory of Open Access Journals; and, (3) printed books actively digitized by libraries, independently and in collaboration with organizations such as Google (Google Book Search) or the Internet Archive (Open Access Text Archive).

Even with this sizeable amount of work on primary and secondary materials accomplished or underway, a compendium of such materials is currently unavailable, and, even if it were, there is no system in place to facilitate navigation and dynamic interaction with these materials by the user (much as one might query a database) and by machine (with the query process automated or semi-automated for the user). There are, undoubtedly, benefits in bringing all of these disparate materials together with an integrated knowledgebase approach. Doing so would facilitate more efficient professional engagement with these materials, offering scholars a more convenient, faster, and deeper handling of research resources. For example, a knowledgebase approach would remove the need to search across multiple databases and listings, facilitate searching across primary and secondary materials simultaneously, and allow deeper, full-text searching of all records, rather than simply relying on indexing information alone—which is often not generated by someone with field-specific knowledge. An integrated knowledgebase—whether the integration were actual (in a single repository) or virtual (via federated searching and/or other means)—would also encourage new insights, allowing researchers new ways to consider relations between texts and materials and their professional, analytical contexts. This is accomplished by facilitating conceptual and thematic searches across all pertinent materials, via the incorporation of advanced computing search and analysis tools that assist in capturing connections between the original objects of contemplation (primary materials) and the professional literature about them (secondary materials).

2.2. Critical Contexts

2.2.1. Knowledge Representation

Other important critical contexts within which REKn is situated arise out of theories and methodologies associated with the emerging field of digital humanities. When considering a definition of the field, Willard McCarty warns that we cannot “rest content with the comfortably simple definition of humanities computing as the application of the computer to the disciplines of the humanities,” for to do so “fails us by deleting the agent-scholar from the scene” and “by overlooking the mediation of thought that his or her use of the computer implies” (1998: n. pag.). After McCarty, Ray Siemens and Christian Vandendorpe suggest that digital humanities or “humanities computing” as a research area “is best defined loosely, as the intersection of computational methods and humanities scholarship” (2006: xii).8

A foundation for current work in humanities computing is knowledge representation, which John Unsworth has described as an “interdisciplinary methodology that combines logic and ontology to produce models of human understanding that are tractable to computation” (2001: n. pag.). While fundamentally based on digital algorithms, as Unsworth has noted, knowledge representation privileges traditionally held values associated with the liberal arts and humanities, namely: general intelligence about human pursuits and the human social/societal environment; adaptable, creative, analytical thinking; critical reasoning, argument, and logic; and the employment and conveyance of these in and through human communicative processes (verbal and non-verbal communication) and other processes native to the humanities (publication, presentation, dissemination). With respect to the activities of the computing humanist, Siemens and Vandendorpe suggest that knowledge representation “manifests itself in issues related to archival representation and textual editing, high-level interpretive theory and criticism, and protocols of knowledge transfer—all as modeled with computational techniques” (2006: xii).

2.2.2. Professional Reading and Modeling

A primary protocol of knowledge transfer in the field of the humanities is reading. However, there is a substantial difference between the reading practices of humanists and those readers outside of academe—put simply, humanists are professional readers. As John Guillory has suggested, there are four characteristics of professional reading that distinguish it from the practice of lay reading:

First of all, it is a kind of work, a labor requiring large amounts of time and resources. This labor is compensated as such, by a salary. Second, it is a disciplinary activity, that is, it is governed by conventions of interpretation and protocols of research developed over many decades. These techniques take years to acquire; otherwise we would not award higher degrees to those who succeed in mastering them. Third, professional reading is vigilant; it stands back from the experience of pleasure in reading […] so that the experience of reading does not begin and end in the pleasure of consumption, but gives rise to a certain sustained reflection. And fourth, this reading is a communal practice. Even when the scholar reads in privacy, this act of reading is connected in numerous ways to communal scenes; and it is often dedicated to the end of a public and publishable “reading” (2000: 31-32).

Much recent work in the digital humanities has focused on modeling professional reading and other activities associated with conducting and disseminating humanities research.9 Modeling the activities of the humanist (and the output of humanistic achievement) with the assistance of the computer has identified the exemplary tasks associated with humanities computing: the representation of archival materials; analysis or critical inquiry originating in those materials; and the communication of the results of these tasks.10 As computing humanists, we assume that all of these elements are inseparable and interrelated, and that all processes can be facilitated electronically.

Each of these tasks will be described in turn. In reverse order, the communication of results involves the electronic dissemination of, and electronically facilitated interaction about the product of, archival representation and critical inquiry, as well as the digitization of materials previously stored in other archival forms.11 Communication of results takes place via codified professional interaction, and is traditionally held to include all contributions to a discipline-centered body of knowledge—that is, all activities that are captured in the scholarly record associated with the shared pursuits of a particular field. In addition to those academic and commercial publishers and publication amalgamator services delivering content electronically, pertinent examples of projects concerned with the communication of results include the Open Journal Systems and Open Monograph Press (Public Knowledge Project) and Collex (NINES), as well as services provided by Synergies and the Canadian Research Knowledge Network / Réseau Canadien de Documentation pour la Recherche (CRKN/RCDR).

Critical inquiry involves the application of algorithmically facilitated search, retrieval, and critical processes that, although originating in humanities-based work, have been demonstrated to have application far beyond.12 Associated with critical theory, this area is typified by interpretive studies that assist in our intellectual and aesthetic understanding of humanistic works, and it involves the application (and applicability) of critical and interpretive tools and analytic algorithms on digitally represented texts and artifacts. Pertinent examples include applications such as Juxta (NINES), as well as tools developed by the Text Analysis Portal for Research (TAPoR) project, the Metadata Offer New Knowledge (MONK) project, the Software Environment for the Advancement of Scholarly Research (SEASR), and by Many Eyes (IBM).

Archival representation involves the use of computer-assisted means to describe and express print-, visual-, and audio-based material in tagged and searchable electronic form. Associated as it is with the critical methodologies that govern our representation of original artifacts, archival representation is chiefly bibliographical in nature and often involves the reproduction of primary materials such as in the preparation of an electronic edition or digital facsimile.13 Key issues in archival representation include considerations of the modeling of objects and processes, the impact of social theories of text on the role and goal of the editor, and the “death of distance.”

Ideally, object modeling for archival representation should simulate the original object-artifact, both in terms of basic representation (e.g. a scanned image of a printed page) and functionality (such as the ability to “turn” or otherwise “physically” manipulate the page). However, object modeling need not simply be limited to simulating the original. Although “a play script is a poor substitute for a live performance,” Martin Mueller has shown that “however paltry a surrogate the printed text may be, for some purposes it is superior to the ‘original’ that it replaces” (2005: 61). The next level of simulation beyond the printed surrogate, namely the “digital surrogate,” would similarly offer further enhancements to the original. These enhancements might include greater flexibility in the basic representation of the object (such as magnification and otherwise altering its appearance) or its functionality (such as fast and accurate search functions, embedded multimedia, etc.).

Archival representation might then involve modeling the process of interaction between the user and the object-artifact. Simulating the process affords a better understanding of the relationships between the object and the user, particularly as that relationship reveals the user’s disciplinary practices—discovering, annotating, comparing, referring, sampling, illustrating, representing.14

2.2.3. The Scholarly Edition

The recent convergence of social theories of text and the rise of the electronic medium has had a significant impact on both the function of the scholarly edition and the role of the textual scholar. As Susan Schreibman has argued, “the release from the spatial restrictions of the codex form has profoundly changed the focus of the textual scholar’s work,” from “publishing a single text with apparatus which has been synthesized and summarized to accommodate to codex’s spatial limitations” to creating “large assemblages of textual and non-textual lexia, presented to readers with as little traditional editorial intervention as possible" (2002: 284). In addition to acknowledging the value of the electronic medium to editing and the edition, such “assemblages” also recognize the critical practice of “unediting,” whereby the reader is exposed to the various layers of editorial mediation of a given text,15 as well as an increased awareness of the “materiality” of the text-object under consideration.16

Perfectly adaptable to, and properly enabling of, social theories of text and the role of editing, the electronic medium has brought us closer to the textual objects of our contemplation, even though we remain at the same physical distance from them. Like other enabling communicative and representative technologies that came before it, the electronic medium has brought about a “death of distance.” This notion of a “death of distance,” as discussed by Paul Delany, comes from a world made smaller by travel and communication systems, a world in which we have “the ability to do more things without being physically present at the point of impact” (1997: 50). The textual scholar, accumulating an “assemblage” of textual materials, does so for those materials to be, in turn, re-presented to those who are interested in those materials. More and more, though, it is not only primary materials—textual witnesses, for example—that are being accumulated and re-presented. The “death of distance” applies also to objects that have the potential to shape and inform further our contemplation of those direct objects of our contemplation: namely, the primary materials.17

We understand, almost intuitively, the end-product of the traditional scholarly edition in its print codex form: how material is presented, what the scope of that material is, how that material is being related to us and, internally, how the material presented by the edition relates to itself and to materials beyond those directly presented—secondary texts, contextual material, and so forth. Our understanding of these things as they relate to the electronic scholarly edition, however, is only just being formed. We are at a critical juncture for the scholarly edition in electronic form, where the “assemblages” and accumulation of textual archival materials associated with social theories of text and the role of editing meet their natural home in the electronic scholarly edition; and such large collections of primary materials in electronic form meet their equivalent in volume in the world of secondary materials, that ever-growing body of scholarship (Siemens 2001: 426).

To date, two models of the electronic scholarly edition have prevailed. One is the notion of the “dynamic text,” which consists of an electronic text and integrated advanced textual analysis software. In essence, the dynamic text presents a text that indexes and concords itself and allows the reader to interact with it in a dynamic fashion, enacting text analysis procedures upon it as it is read.18 The other, often referred to as the “hypertextual edition,” exploits the ability of encoded hypertextual organization to facilitate a reader’s interaction with the apparatus (textual, critical, contextual, and so forth) that traditionally accompanies scholarly editions, as well as with relevant external textual and graphical resources, critical materials, and so forth.19

Advances over the past decade have made it clear that electronic scholarly editions can in fact enjoy the best of both worlds, incorporating elements from the “dynamic text” model—namely, dynamic interaction with the text and its related materials—while at the same time reaping the benefits of the fixed hypertextual links characteristically found in “hypertextual editions.”20 At present, there is no extant exemplary implementation of this new dynamic edition, an edition that transfers the principles of interaction afforded by a dynamic text to the realm of the full edition, comprising that text and all of its extra- and para-textual materials—textual apparatus, commentary, and beyond.21

2.2.4. Prototyping as a Research Activity

In addition to the aforementioned critical contexts, it is equally important to situate the development of REKn and PReE within a methodological context of prototyping as a research activity. The process of prototyping in the context of our work involves constructing a functional computational model that embodies the results of our research, and, as an object of further study itself, undergoes iterative modification in response to research and testing. A prototype in this context is an interface or visualization that embodies the theoretical foundations our work establishes, so that the theory informing the creation of the prototype can itself be tested by having people use it.22

Research prototypes, such as those we set out to develop, are distinct from prototypes designed as part of a production system in that the research prototype focuses chiefly on providing limited but research-pertinent functionality within a larger framework of assumed operation.23 Production systems, on the other hand, require full functionality and are often derived from multiple prototyping processes.

3. The Proof of Concept

REKn was originally conceived as part of a wider research project to develop a prototype textual environment for a dynamic edition: an electronic scholarly edition that models disciplinary interaction in the humanities, specifically in the areas of archival representation, critical inquiry, and the communication of results. Centered on a highly encoded electronic text, this environment facilitates interaction with the text, with primary and secondary materials related to it, and with scholars who have a professional engagement with those materials. This ongoing research requires (1) the adaptation of an exemplary, highly-encoded and properly-imaged electronic base text for the edition; (2) the establishment of an extensive knowledgebase to exist in relation to that exemplary base text, composed of primary and secondary materials pertinent to an understanding of the base text and its literary, historical, cultural, and critical contexts;24 and (3) the development of a system to facilitate navigation and dynamic interaction with and between materials in the edition and in the knowledgebase, incorporating professional reading and analytical tools; to allow those materials to be updated; and to implement communicative tools to facilitate computer-assisted interaction between users engaging with the materials.

The electronic base-text selected to act as the initial focal point for the prototype was drawn from Ray Siemens’ SSHRC-funded electronic scholarly edition of the Devonshire Manuscript (BL MS Add. 17492). Characterized as a “courtly anthology” (Southall 1964) and as an “informal volume” (Remley 1994: 48), the Devonshire Manuscript is a poetic miscellany consisting of 114 original leaves, housing some 185 items of verse (complete poems, fragments, extracts from larger extant works, and scribal annotations). Historically privileged in literary history as a key witness of Thomas Wyatt’s poetry, the manuscript has received new and significant attention of late, in large part because of the way in which its contents reflect the interactions of poetry and power in early Renaissance England and, more significantly, because it offers one of the earliest examples of the explicit and direct participation of women in the type of literary and political-poetic discourses found in the document.25

While editing the Devonshire Manuscript as the base text was underway, work on REKn began by mapping the data structure in relation to the functional requirements of the project, selecting appropriate tools and platforms, and outlining three objectives: to gather and assemble a corpus of primary and secondary texts to make up the knowledgebase; to develop automated methods for data collection; and to develop software tools to facilitate dynamic interaction between the user(s) and the knowledgebase.

3.1. Data Structure and Functional Requirements

We felt that the database should include tables to store relations between documents; that is, if a document includes a reference to another document, whether explicitly (such as in a reference or citation) or implicitly (such as in keywords and metadata), the fact of that reference or relation should be stored. Thus, the document-to-document relationship would be a many-to-many relationship.
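
A minimal sketch of what such a structure might look like in PostgreSQL, created here through PHP's PDO extension (the table and column names are illustrative assumptions, not the project's actual schema):

```php
<?php
// Illustrative only: a documents table plus a join table expressing the
// many-to-many document-to-document relationships described above.
$db = new PDO('pgsql:host=localhost;dbname=rekn', 'rekn_user', 'secret');

$db->exec("
    CREATE TABLE documents (
        doc_id   SERIAL PRIMARY KEY,
        author   TEXT,
        title    TEXT,
        doc_type TEXT,   -- e.g. 'primary' or 'secondary'
        body     TEXT    -- light XML encoding, where available
    );
    CREATE TABLE document_relations (
        source_id INTEGER REFERENCES documents(doc_id),
        target_id INTEGER REFERENCES documents(doc_id),
        relation  TEXT,  -- e.g. 'citation' (explicit) or 'keyword' (implicit)
        PRIMARY KEY (source_id, target_id, relation)
    );
");

// Recording that document 42 explicitly cites document 97:
$stmt = $db->prepare(
    'INSERT INTO document_relations (source_id, target_id, relation)
     VALUES (?, ?, ?)'
);
$stmt->execute(array(42, 97, 'citation'));
```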

In addition to a web service for public access to the database, it was proposed that there should be a standalone data entry and maintenance application to allow the user(s) to create, update, and delete database records manually. This application should include tools for filtering markup tags and other formatting characters from documents; allow for automating the data entry of groups of documents; and allow for automating the data entry of documents where they are available from web services, or by querying electronic academic publication amalgamator services (such as EBSCOhost).

Finally, a scholarly research application to query the database in read-only mode and display documents—along with metadata where available (such as author, title, publisher)—was to be developed. The appearance and operation of the application should model the processes of scholarly research, with many related documents visible at the same time, easily moved and grouped by the researcher. The application should display the document in as many different forms as are available—plain text, marked-up text, scanned images, audio streams, and so forth. Users should also be able to easily navigate between related documents; to easily search for documents that have similar words, phrases or word patterns; and to perform text analysis on the document(s)—word list, word frequency, word collocation, word concordance—and display the results.
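
By way of illustration, the simplest of the analyses named above, a word list with frequencies, can be sketched in a few lines of PHP (the function name and the plain-text input file are assumptions for the example):

```php
<?php
// Illustrative sketch: build a descending word-frequency list from the
// plain-text form of a document held in the knowledgebase.
function word_frequencies($plainText) {
    // Split on anything that is not a letter, lower-casing as we go.
    $words = preg_split('/[^\p{L}]+/u', mb_strtolower($plainText), -1,
                        PREG_SPLIT_NO_EMPTY);
    $freq = array_count_values($words);
    arsort($freq);            // most frequent words first
    return $freq;
}

$freq = word_frequencies(file_get_contents('devonshire_item_042.txt'));
foreach (array_slice($freq, 0, 20, true) as $word => $count) {
    echo "$word\t$count\n";   // top twenty words and their counts
}
```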

3.2. Tools and Platforms

The database management system chosen for the REKn prototype was PostgreSQL. As a standard system commonly used by the academic community, PostgreSQL allows for future collaboration with other researchers and integration with other projects. PostgreSQL’s open source status caters to the possibility of writing custom functions and indexes that cannot be supplied by other means. Moreover, PostgreSQL offers scaling and clustering of database systems and the data in the systems. Redundancy is also possible with PostgreSQL—that is, if one server in a cluster crashes, the others will continue processing queries and data uninterrupted.

A similar rationale dictated writing the web service in PHP, since PHP is a commonly used and well-understood framework for database access via the Internet, in addition to being open source. The data-entry application was likewise built on Perl scripts that use the web service as a database access proxy since, in addition to being open source software, Perl is well suited to string processing.
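
A minimal sketch of what one endpoint of such a PHP web service might look like, reusing the illustrative documents table above (the script name, parameters, and JSON response are assumptions, not the project's actual interface):

```php
<?php
// search.php -- illustrative database-access proxy: accept a metadata query
// over HTTP and return matching document records as JSON.
$db = new PDO('pgsql:host=localhost;dbname=rekn', 'rekn_user', 'secret');

$author = isset($_GET['author']) ? $_GET['author'] : '';
$title  = isset($_GET['title'])  ? $_GET['title']  : '';

$stmt = $db->prepare(
    'SELECT doc_id, author, title, doc_type
       FROM documents
      WHERE author ILIKE ? AND title ILIKE ?
      ORDER BY title
      LIMIT 100'
);
$stmt->execute(array("%$author%", "%$title%"));

header('Content-Type: application/json');
echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));
```

A Perl data-entry script, or the reading client itself, could then request, for example, search.php?author=wyatt and work with the returned records.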

3.3. Gathering Primary and Secondary Materials

The gathering of primary materials for the knowledgebase was initially accomplished by pulling down content from open-access archives of Renaissance texts, and by requesting materials from various partnerships (researchers, publishers, scholarly centers) interested in the project. These materials included a total of some 12,830 texts in the public domain or otherwise generously donated by EEBO-TCP (9,533), Chadwyck-Healey (1,820), Text Analysis Computing Tools (311), the Early and Middle English Collections from the University of Virginia Electronic Text Centre (273 and 27 respectively), the Brown Women Writers Project (241), the Oxford Text Archive (241), the Early Tudor Textbase (180), Renascence Editions (162), the Christian Classics Ethereal Library (65), Elizabethan Authors (21), the Norwegian University of Science and Technology (8), the Richard III Society (5), the University of Nebraska School of Music (4), Project Bartleby (2), and Project Gutenberg (2).26 The harvesting and initial integration of these materials took a year, during which time almost 4 gigabytes of files in various formats were standardized into a basic TEI-compliant XML format. Roughly a dozen different implementations of XML, SGML, COCOA, HTML, plain text, and more eclectic encoding systems were accommodated.

For example, accommodating the TEI P4-conformant XML documents obtained from the University of Virginia Electronic Text Center’s Early English Collection required the following three-step process (a sketch of invoking such a transformation follows the list):

  1. EarlyUVaStepOne.xsl: Application of an XSL transformation to remove the unnecessary XML tags, and to restructure the document using our internal-use tags. This step also derived a minimal set of metadata necessary for identifying the document with bibliographic MARC records.
  2. EarlyUVaStepTwo.xsl: This step, applied to the cleaned, stripped, and possibly restructured documents resulting from step one, transformed the XML list of our metadata into an HTML list, built links to the HTML and XML files, and provided some rudimentary navigation and statistics.
  3. EarlyUVaToHTML.xsl: Applied to either the source document or to the result of the first transformation, this process produced HTML suitable for web browsers. These transformations are very simple, producing only a minimum of HTML tagging. When we wish to serve more polished products to web browsers, this XSLT will serve as a starting point.
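
As a rough illustration of how a transformation such as step one might be driven from a script, the sketch below applies the step-one stylesheet to a single source document with PHP's XSLTProcessor (the file paths and the surrounding script are assumptions):

```php
<?php
// Illustrative sketch: apply EarlyUVaStepOne.xsl to one source document and
// save the restructured, internally tagged result.
$xml = new DOMDocument();
$xml->load('source/early_uva/document001.xml');

$xsl = new DOMDocument();
$xsl->load('EarlyUVaStepOne.xsl');

$proc = new XSLTProcessor();
$proc->importStylesheet($xsl);

file_put_contents('converted/document001.xml', $proc->transformToXML($xml));
```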

The bulk of the primary material was so substantial that harvesting the secondary materials manually would be too onerous a task—clearly, automated methods were desirable and would allow for continual and ongoing harvesting of new materials as they became available. Ideally, these methods should be general enough in nature so that they can be applied to other types of literature, requiring minimal modification for reuse in other fields. This emphasis on transportability and scalability would ensure that the form and structure of the knowledgebase could be used in other fields of scholarly research.

Initially, the strategy was to assemble a sample database of secondary materials in partnership with the University of Victoria Libraries, gathering materials harvested automatically from electronic academic publication amalgamator services (such as EBSCOhost). An automated process was developed to retrieve relevant documents and store them in a purpose-built database. This process would query remote databases with numerous search strings, weed out erroneous and duplicate entries, separate metadata from text, and store both in a relational database. The utility of our harvesting methods would then be demonstrated to the amalgamators and other publishers with the intent of fostering partnerships with them.
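
A highly simplified sketch of that pipeline follows; fetch_results() is a hypothetical stand-in for whatever supplier-specific retrieval mechanism is in play, and the author-title fingerprint used for de-duplication is an assumption for the example:

```php
<?php
// Illustrative harvesting loop: run a list of search strings against a remote
// service, drop duplicate hits, separate metadata from full text, store both.
$db   = new PDO('pgsql:host=localhost;dbname=rekn', 'rekn_user', 'secret');
$seen = array();   // fingerprints of items already stored in this run

foreach (file('search_strings.txt', FILE_IGNORE_NEW_LINES) as $query) {
    foreach (fetch_results($query) as $item) {          // hypothetical call
        $fingerprint = md5($item['author'] . '|' . $item['title']);
        if (isset($seen[$fingerprint]) || $item['fulltext'] === '') {
            continue;                                    // duplicate or empty
        }
        $seen[$fingerprint] = true;

        $stmt = $db->prepare(
            'INSERT INTO documents (author, title, doc_type, body)
             VALUES (?, ?, ?, ?)'
        );
        $stmt->execute(array($item['author'], $item['title'],
                             'secondary', $item['fulltext']));
    }
}
```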

3.4. Building a Professional Reading Environment

At this stage REKn contained some 12,830 primary text documents and an ongoing collection of secondary texts in excess of 80,000 documents. Text data in the knowledgebase was roughly 80 gigabytes; text and image data combined was estimated to be in the 2 to 3 terabyte range. Given its immense scale, development of a document viewer with analytical and communicative functionality to interact with REKn was a pressing issue. The inability of existing tools to accurately search, navigate, and read large collections of data in many formats, later coupled with the findings of our research into professional reading, led to the development of a Professional Reading Environment (PReE) to meet these needs.

Initially designed as a desktop GUI to the PostgreSQL database containing REKn, the PReE proof of concept was developed as a .NET Windows Forms application. Very little consideration was given to further use of the code at this stage—the focus was solely on testing whether it all could work. Using the .NET Framework was justified on the grounds that it is the standard development platform for Microsoft Windows machines, presumably used by a large portion of our potential users. Developing the proof of concept in the .NET Framework meant that the application could use the resources of the client’s machine to a greater extent than if the application were housed in a browser. Local processing would be necessary if, for example, users were to use image-processing tools on scanned manuscript pages.

As demonstrated in the movie below (Movie 1), the proof of concept built in .NET sported a number of useful features. Individual users were able to log in, open as many separate document-centered instances of the GUI as they desired simultaneously, and perform search, reading, analytical, and composition and communication functions. These functions, in turn, drew on our modeling of professional reading and other activities associated with conducting and disseminating humanities research. Searches could be conducted on document metadata and citations (by author, title, and keyword) for both primary and secondary materials (Figure 1). A selected word or phrase could also spawn a search of documents within the knowledgebase, as well as a search of other Internet resources (such as the Oxford English Dictionary Online and Lexicons of Early Modern English) from within PReE. Similarly, the user could use TAPoR Tools to perform analyses on the current text or selected words and phrases in PReE (Figure 2).

Figure 1: Metadata Search and Search Results.
Searching on document metadata

Figure 2: Spawned Search and Analytical Functions.
TAPoR tools

The proof-of-concept build could display text data in a variety of forms (plain-text, HTML, and PDF) and display images of various formats (Figures 3 and 4). Users could zoom in and out when viewing images, and scale the display when viewing texts (Figure 5). If REKn contained different versions of an object—such as images, transcriptions, translations—they were linked together in PReE, allowing users to view an image and corresponding text data side-by-side (Figure 6).

Figure 3: Reading Text Data
Text display

Figure 4: PDF Display

Figure 5: Zoom and Pan Images.

Figure 6: Side-by-Side Display of Texts and Images

This initial version of PReE also offered composition and communication functions, such as the ability for a user to select a portion of an image or text and to save this to a workflow, or the capacity to create and store notes for later use. Users were also able to track their own usage and document views, which could then be saved to the workflow for later use. Similarly, administrators were able to track user access and use of the knowledgebase materials, which might be of interest to content partners (such as academic and commercial publishers) wishing to use the data for statistical analysis.

Movie 1: Demonstration of REKn/PReE proof of concept

4. Research Prototypes: Challenges and Experiments

After the success of our proof of concept, we set out to imagine the next steps of modeling as part of our research program. Indeed, growing interest amongst knowledge providers in applying the concept of a professional reading environment to their databases and similar resources brought us to consider how to expand PReE beyond the confines of REKn. After evaluating our progress to date, we realized that we needed to take what we had learned from the proof of concept and apply that knowledge to new challenges and requirements. Our key focus would be on issues of scalability, functionality, and maintainability.

4.1. Challenge: Scalable Data Storage

In the proof-of-concept build, all REKn data was stored in binary fields in a database. While this approach had the benefit of keeping all of the data in one easily accessible place, it raised a number of concerns—most pressingly, the issue of scalability. Dealing with several hundred gigabytes is manageable with local infrastructure and ordinary tools. However, we realized that we had to reconsider the tools when dealing in the range of several terabytes. Careful consideration would also have to be given to indexing and other operations which might require exponentially longer processing times as the database increased in size.

Even with a good infrastructure, practical limitations on database content are still an important consideration, especially were we to include large corpora (the larger datasets of the Canadian Research Knowledge Network were discussed, for example) or significant sections of the Internet (via thin-slicing across knowledge domain-specific data). Setting practical limitations required us to consider what was essential and what needed to be stored—for example, did we have to store an entire document, or could it be simply a URL? Storing all REKn data in binary fields in a database during the proof-of-concept stage posed additional concerns. Incremental backups, for example, required more complicated scripts to look through the database to identify new rows added. Full backups would require a server-intensive process of exporting all of the data in the database. This, of course, could present performance issues should the total database size reach the terabyte range. Equally, distributing the database in its current state amongst multiple servers would be no mean feat.

Indexing full text in a relational database does not give optimum performance or results: performance degrades sharply as the database grows. Keeping both advantages and disadvantages in mind, it was proposed that all REKn binary data be stored in a file system rather than in the database. File systems are designed to store files, whereas the PostgreSQL database is designed to store relational data. Mixing the two defeats the advantages of each. Moreover, in testing the proof of concept, users found speed to be a significant issue, with many unwilling to wait five minutes between operations. In its proof-of-concept iteration, the computing interaction simply could not keep pace with the cognitive functions it was intended to augment and assist. We recognized that this issue could be resolved in the future by recourse to high-performance computing techniques—in the meantime, however, we decided to reduce the REKn data to a subset, which would allow us to imagine and work on functionality at a smaller scale.

Having decided to store all binary data in a file system, we had to develop a standardized method of storing and linking the data, one that accounted for both linking the relational data to the file-system data and keeping the data mobile (allowing, for example, migration to a new server or distribution of the files over multiple servers). Flexibility was also flagged as an important design consideration, since the storage solution might eventually be shared with many different organizations, each with their own particular needs. This method would also require the implementation of a search technology capable of performing fast searches over millions of documents. In addition to the problem posed by the sheer volume of documents, the variety of file types stored would require an indexing engine capable of extracting text from encoded files. After a survey of the existing software tools, it was decided that Lucene was the perfect fit for our project requirements: it is an open source full-text indexing engine capable of handling millions of files of various types without any major degradation in performance, and it is extensible with plug-ins to handle additional file types should the need arise.
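
Lucene itself is a Java library. Purely to illustrate the indexing pattern in the PHP used elsewhere in this narrative, the sketch below substitutes Zend_Search_Lucene, a PHP port of the Lucene index format; the substitution and the file layout are assumptions, not the project's actual implementation:

```php
<?php
// Illustrative sketch: index file-system documents, storing only the database
// key in the index and indexing (without storing) the full text.
require_once 'Zend/Search/Lucene.php';

$index = Zend_Search_Lucene::create('/data/rekn/index');

foreach (glob('/data/rekn/files/*.txt') as $path) {
    $doc = new Zend_Search_Lucene_Document();
    // Keep the database id so a hit can be joined back to PostgreSQL.
    $doc->addField(Zend_Search_Lucene_Field::Keyword('doc_id',
                                                     basename($path, '.txt')));
    // Index the body for searching, but do not store it in the index itself.
    $doc->addField(Zend_Search_Lucene_Field::UnStored('contents',
                                                      file_get_contents($path)));
    $index->addDocument($doc);
}
$index->commit();

// A full-text query across everything indexed above.
foreach ($index->find('sonnet AND devonshire') as $hit) {
    echo $hit->doc_id, ' (score ', round($hit->score, 3), ")\n";
}
```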

4.2. Challenge: Document Harvesting

The question of how to go about harvesting data for REKn, or indeed any content-specific knowledgebase, turned out to be a question of negotiating with the suppliers of document collections for permission to copy the documents. Since each of these suppliers (such as the academic and commercial publishers and the publication amalgamator service providers) has structured access to the documents differently, scripts to allow for harvesting their documents had to be tailored individually for each supplier. For example, some suppliers provide an API to their database, others use HTTP, and still others distribute their documents via tapes or CDs of files. Designing an automated process for harvesting documents from suppliers could be accomplished by combining all of these different scripts together with a mechanism for automatically detecting the various custom access requirements and selecting the correct script to use.
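
One way to picture that combining mechanism is a small dispatcher that matches each supplier's access method to its tailored harvesting routine (the configuration file and routine names below are hypothetical):

```php
<?php
// Illustrative dispatcher: choose the harvesting routine that matches how a
// given supplier exposes its documents. All routine names are hypothetical.
$harvesters = array(
    'api'  => 'harvest_via_api',    // supplier provides an API to its database
    'http' => 'harvest_via_http',   // supplier exposes documents over HTTP
    'disc' => 'harvest_from_media', // supplier ships tapes or CDs of files
);

foreach (parse_ini_file('suppliers.ini', true) as $name => $conf) {
    $method = $conf['access_method'];             // 'api', 'http', or 'disc'
    if (!isset($harvesters[$method])) {
        fwrite(STDERR, "No harvester for $name ($method)\n");
        continue;
    }
    call_user_func($harvesters[$method], $conf);  // run the tailored script
}
```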

Inserting documents into REKn offered technical challenges as well. Documents from different sources often had different XML structures. Even TEI-standard documents from various sources had different markup tags and elements, depending on the goals of the projects supplying the documents and the particular TEI DTDs used.

4.3. Challenge: Standalone vs. Web Application

Developed as a down-and-dirty solution to the original project requirements, PReE at the proof-of-concept stage was built as an installable standalone Windows application written in Microsoft .NET. For the second version of PReE, we considered whether to translate it from a desktop environment to the Internet.

The main advantages of following a web-application (or rich Internet application) paradigm are superior flexibility in application deployment and maintenance, the ability to receive and disseminate user-generated content, and multi-platform compatibility. The main disadvantage is that browsers impose limitations on the design of applications and usually restrict access to the resources (file system and processing) of the local machine.

A major advantage that standalone applications have over web applications is that performance and functionality are not dependent on the speed or availability of an Internet connection. Further, standalone desktop applications are able to use all of the resources of the local machine with very few design restrictions, other than those imposed by the target hardware and software tools. However, standalone applications must be installed by each individual user and, as a result, involve a level of training, familiarization, and support, which may discourage some users. Perhaps most importantly, given the goals of the project, standalone applications simply do not offer the same level of multi-platform compatibility or flexibility in application deployment and maintenance.

Essentially the question came down to identifying the features or services users would require, and whether those could be accommodated in the client application. For example, if users required the ability to create files and store them locally on their own machines, it might not have been feasible for the client application to be a web browser. After weighing the pros and cons, it was decided that PReE would be further developed as a web application. This decision was followed by a survey of the relevant applications, platforms, and technologies in terms of their applicability, functionality, and limitations (Appendix 3).

4.4 Experiment: Shakespeare's Sonnets

As outlined above, to facilitate faster prototyping and development of both REKn and PReE it was proposed that REKn should be reduced to a limited dataset. Work was already underway on an electronic edition of Shakespeare’s Sonnets, so limiting REKn data to materials related to the Sonnets would offer a more manageable dataset.

Modern print editions of the Sonnets admirably serve the needs of lay readers. For professional readers, however, print editions simply cannot hope to offer an exhaustive and authoritative engagement with the critical literature surrounding the Sonnets, a body of scholarship that is continually growing. Even with the considerable assistance provided by such tools as the World Shakespeare Bibliography and the MLA International Bibliography, the sheer volume of scholarship published on Shakespeare and his works is difficult to navigate. Indeed, existing databases such as these only allow the user to search for criticism related to the Sonnets through a limited set of metadata, selected and presented in each database according to different editorial priorities, and often by those without domain-specific expertise. Moreover, while select bibliographies such as these have often helped to organize specific areas of inquiry, the last attempt to compile a comprehensive bibliography of scholarly material on Shakespeare’s Sonnets was produced by Tetsumaro Hayashi in 1972. Although it remains an invaluable resource in indicating the volume and broad outlines of Sonnet criticism, Hayashi’s bibliography is unable to provide the particularity and responsiveness of a tool that accesses the entire text of the critical materials it seeks to organize.

Without the restrictions of print, an electronic edition of Shakespeare’s Sonnets could be both responsive to the evolution of the field, updating itself periodically to incorporate new research, and more flexible in the ways in which it allows users to navigate and explore this accumulated knowledge. Incorporating the research already undertaken toward an edition of Shakespeare’s Sonnets, we sought to create a prototype knowledgebase of critical materials reflecting the scholarly engagement with Shakespeare’s Sonnets from 1972 to the present day.

The first step was the acquisition of materials to add to the knowledgebase. A master list of materials was compiled through consultation with existing electronic bibliographies (such as the MLA International Bibliography and the World Shakespeare Bibliography) and standard print resources (such as the Year’s Work in English Studies). Criteria were established to dictate which materials were to be included in the knowledgebase. To limit the scope of the experiment, materials published before 1972 (and thus already covered in Hayashi’s bibliography) were excluded. It was also decided to exclude works pertaining to translations of the Sonnets, performances of the Sonnets, and non-academic discussions of the Sonnets. Monograph-length discussions of the Sonnets were also excluded on the basis that they were too unwieldy for the purposes of an experiment.

The next step was to gather the materials itemized on the master list. Although a large number of these materials were available in electronic form, and therefore much easier to collect, the various academic and commercial publishers and publication amalgamator service providers delivered the materials in different file formats. A workable standard would be required, and it was decided that regularizing all of the data into Rich Text format would preserve text formatting and relative location, and allow for any illustrations included to be embedded. Articles available only in image formats were fed through an Optical Character Recognition (OCR) application and saved in Rich Text format.

Materials unavailable in electronic form were collected, photocopied, and scanned as grayscale TIFF images. A resolution of 400 dpi was agreed upon as maintaining a balance between image clarity and file size. As a batch, the scanned images were enhanced with reduced brightness and slightly increased contrast in order to throw the type characters into relief against the page background. In addition to being stored in this format, the images were then processed through an OCR application and saved in Rich Text format.

The next step will involve applying a light common encoding structure to all of the Rich Text files and importing them into REKn. The resulting knowledgebase will be responsive to full-text electronic searches, allowing the user to uncover swiftly, for example, all references to a particular sonnet. License agreements and copyright restrictions will not allow us to make access to the knowledgebase public. However, we will be exploring a number of possible output formats that could be shared with the larger research community. Possibilities might include the use of the Sonnet knowledgebase to generate indices, concordances, or even an exhaustive annotated bibliography. For example, a dynamic index could be developed to query the full-text database and return results in the form of bibliographical citations. Since many users will come from institutions with online access to some or most of the journals, and with library access to others, these indices will serve as a valuable resource for further research.
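
For instance, the dynamic index described above might amount to little more than a full-text query that returns citation-level metadata rather than the licensed article bodies. A sketch, assuming PostgreSQL's built-in full-text search and the illustrative documents table used earlier, extended here with journal and year columns:

```php
<?php
// Illustrative sketch: search the full text of the Sonnets knowledgebase but
// return only bibliographical citations, so licensed bodies stay private.
$db = new PDO('pgsql:host=localhost;dbname=rekn', 'rekn_user', 'secret');

$stmt = $db->prepare(
    "SELECT author, title, journal, year
       FROM documents
      WHERE to_tsvector('english', body) @@ plainto_tsquery('english', ?)
      ORDER BY year, author"
);
$stmt->execute(array('sonnet 129'));

foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    echo "{$row['author']}. \"{$row['title']}.\" {$row['journal']} ({$row['year']}).\n";
}
```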

Ideally, such endeavors will prompt a reassessment of the initial exclusion criteria for knowledgebase materials. The increasing number of books published and republished in electronic format, for example, means that including monograph-length studies of the Sonnets is no longer so onerous a task as to be prohibitive. Indeed, large-scale digitization projects such as Google Books and the Internet Archive are making a growing number of books, both old and new, available in digital form.

4.5. Experiment: The REKn Crawler

We recognized that the next stages of our work would be predicated on the ability to create topic- or domain-specific knowledgebases from electronic materials. This work pointed to the need for a better Internet resource discovery system: one that allowed topic-specific harvesting of Internet-based data, returning results pertinent to targeted knowledge domains, and that integrated with existing collections of materials (such as REKn) operating in existing reading systems (such as PReE), so that established tools could be applied to the harvested results. To investigate this further, we collaborated with Iter, a not-for-profit partnership created to develop and support electronic resources to assist scholars studying European culture from 400 to 1700 CE.27

4.5.1. Premises

We thought we could use technologies like Nutch and models from other more complex harvesters (such as DataFountains and the Nalanda iVia Focused Crawler)28 to create something that would suit our purposes and be freely distributable and transportable among our several partners and their work. In using such technologies, we hoped also to explore how best to exploit representations of ontological structures found in bibliographic databases to ensure that the material returned via Internet searches was reliably on-topic.

4.5.2. Method

The underlying method for the prototype REKn Crawler is straightforward. An Iter search returns bibliographic (MARC) records, which in turn provide the metadata (such as author, title, and subject) to seed a web search, the results of which are returned to the knowledgebase. In the end, the original corpus is complemented by a collection of web pages related to the same subject. While not all of these web materials will be directly relevant, they may still be useful.

The method ensures accuracy, scalability, and utility. Accuracy is ensured insofar as the results are disambiguated by comparison against Iter’s bibliographic records—that is, against domain-specific ontological structures. Scalability is ensured in that individual searches can be automatically sequenced, drawing bibliographic records from Iter one at a time so that the harvester covers all parts of an identified knowledge domain. Utility is ensured because the resulting materials are drawn into the reading system and bibliographic records are created for them (from the original records, or using Lemon8-XML).

4.5.3. Workflow

From a given corpus or record set, the basic workflow for the REKn Crawler is as follows:

  1. Extract keywords from every document in a given corpus. For the prototype, we used a large MARC file from Iter as our record set and used PHP-MARC, an open source software package built in PHP that allows for the manipulation of MARC records and the extraction of data from them.
  2. Build search strings from the extracted keywords. The following combinations were used in our experimentation: author; author and title; title; author and subject; subject. (A sketch of steps 1 and 2 follows the worked example below.)
  3. Query the web using each constructed search string. Up to fifty web page results per search are collected and stored in a site list. Search engines that follow the OpenSearch standard can be queried from the back end of a software application, and the REKn Crawler employs this technique; OpenSearch-compatible search engines provide access to a wide variety of materials. (A minimal sketch of such a query follows this list.)
  4. Send a crawler out to harvest web pages from the site list generated in step 3. We are currently exploring implementation strategies for this stage of the project; Nutch is the leading candidate because it is an open source web-search software package that builds on Lucene Java.
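
The following is a minimal sketch of the kind of back-end OpenSearch query described in step 3. The endpoint URL is hypothetical and an RSS 2.0 response format is assumed; the engines actually queried by the Crawler are not specified here.

    require 'open-uri'
    require 'rexml/document'
    require 'cgi'

    OPENSEARCH_URL = 'https://search.example.org/opensearch' # hypothetical endpoint

    # Query an OpenSearch-compatible engine and collect up to fifty result
    # URLs into a site list for the harvesting stage.
    def site_list_for(query, limit = 50)
      xml = URI.open("#{OPENSEARCH_URL}?q=#{CGI.escape(query)}&count=#{limit}").read
      doc = REXML::Document.new(xml)
      urls = []
      # Assumes an RSS 2.0 response: each <item> carries a <link> holding the page URL.
      doc.elements.each('//channel/item/link') { |link| urls << link.text }
      urls.first(limit)
    end

    # Example: build the site list for one generated search string.
    # site_list_for('Religious drama, French, History and criticism')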

Consider the following example. A user views a document in PReE; for instance, Edelgard E. DuBruck, “Changes of Taste and Audience Expectation in Fifteenth-Century Religious Drama.”29 Viewing this document triggers the crawler, which begins crawling via the document’s Iter MARC record (record number, keywords, author, title, subject headings). Search strings are then generated from the Iter MARC record data (in this particular instance the search strings will include: DuBruck, Edelgard E.; DuBruck, Edelgard E. Changes of Taste and Audience Expectation in Fifteenth-Century Religious Drama; DuBruck, Edelgard E. Religious drama, French; DuBruck, Edelgard E. Religious drama, French, History and criticism; Changes of Taste and Audience Expectation in Fifteenth-Century Religious Drama; Religious drama, French; Religious drama, French, History and criticism). The Crawler conducts searches with these strings and stores the results for the later process of weeding out erroneous returns.
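
The worked example above can be approximated in code. The sketch below uses the ruby-marc gem (rather than the PHP-MARC package used in the prototype) and assumes conventional MARC tags, 100 for author, 245 for title, and 650 for subject headings, to build the search-string combinations listed in step 2; the input filename is illustrative.

    require 'marc'   # ruby-marc gem: gem install marc

    # Build the author / title / subject search-string combinations for one record.
    def search_strings(record)
      author   = record['100'] && record['100']['a']               # main entry, personal name
      title    = record['245'] && record['245']['a']               # title statement
      subjects = record.fields('650').map { |f| f['a'] }.compact   # topical subject headings

      strings = []
      strings << author               if author
      strings << "#{author} #{title}" if author && title
      strings << title                if title
      subjects.each do |subject|
        strings << "#{author} #{subject}" if author
        strings << subject
      end
      strings.compact.uniq
    end

    # Read records from an Iter-style MARC file and print the strings that
    # would seed the web searches.
    MARC::Reader.new('iter_records.mrc').each do |record|
      puts search_strings(record)
    end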

In the example given above, which took under an hour to run, the Crawler generated 291 unique results relating to the article and its subject matter to add to the knowledgebase. In our current development environment, the Crawler is able to harvest approximately 35,000 unique web pages in a day. We are currently experimenting with a larger seed set of 10,000 MARC records, which still amounts to only a 1% subset of Iter’s bibliographic data.

4.5.4. Application

The use of the REKn Crawler in conjunction with both REKn and PReE suggests some interesting applications: increasing the scope and size of the knowledgebase; analyzing the results of the Crawler’s harvesting to discover document metadata and document ontology; and harvesting blogs and wikis for community knowledge on any given topic, among other possibilities.

5. Moving into Full Prototype Development: New Directions

5.1. Rebuilding

Our rebuilding process was driven primarily by the questions generated by our earlier proof of concept. The proof of concept pointed us toward a web-based user interface to meet the needs of the research community. Building human knowledge into our application also becomes more feasible in a web environment, since we can depend on centralized storage and the ability to share information easily. The proof of concept also suggested that we rethink our document storage framework, since exponential slow-downs in full-text searching quickly render the tool unusable in environments with millions of documents. For long-term scalability, a new approach was needed.

In order to move into full prototype development, we first had to rebuild the foundations of both the REKn and PReE applications, as outlined in detail in the previous section. To summarize:

  1. We are rebuilding the PReE user interface. A web-based environment allows us to be agile in our development practices and to incorporate emerging ideas and visions quickly.
  2. The Ruby programming language has been selected as the new development platform. While it can be considered the “new kid on the block” among web-scripting languages, the benefits it offers make it an enticing choice. The Ruby on Rails application framework provides a rapid prototyping environment that cuts a great deal of development time out of our overhead, and it allows us to add “Web 2.0” user interface features to our project simply and easily.
  3. We are developing a “one-stop” administrative interface for harvesting and processing new documents. Rather than having bits and pieces scattered around, we propose to use an extensible model for adding processing abilities to our application. Once the model has been built, processing a new type of document will simply require adding a new plug-in to bring that document type into the application. (A minimal sketch of this model follows the list.)
  4. We decided to keep the relational database for application-specific data (such as user information and user-created content) and to add a dedicated full-text indexing engine to search both the text and the associated metadata. An application that offers time-efficient full-text searching of documents is greatly valued by its users. To this end we decided to enlist the “granddaddy” of open-source full-text indexing engines, Lucene, which gives us fast, robust, and scalable full-text searching. The Solr layer on top of Lucene allows us to “talk” to Lucene from any programming language we choose and adds powerful capabilities such as basic text analysis and the ability to identify a document uniquely. While Fedora Commons might prove to be a better alternative to Solr, the switch will have to wait until the Fedora GSearch tool has been built into the RubyFedora library.
  5. We are working toward centralizing document processing. Until now, a different stand-alone tool has processed each style of document. We plan to pull all of these tools together in one place and to allow new tools to be added easily, giving administrators a single route for adding new documents to the knowledgebase attached to PReE.
  6. We are rebuilding the interconnections between PReE and other related community tools. From metadata lookup tools to applications providing data analysis, the next development of PReE will be designed with flexibility and long-term scalability in mind.
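
As a sketch of the extensible processing model described in items 3 and 5, the following shows one way a plug-in registry might look in Ruby; the class and method names are illustrative, not those used in PReE itself.

    # Central registry: each document type is handled by a plug-in class,
    # so supporting a new format means only adding another plug-in.
    module DocumentProcessor
      @plugins = {}

      def self.register(extension, klass)
        @plugins[extension] = klass
      end

      def self.process(path)
        plugin = @plugins[File.extname(path)] or
          raise "No plug-in registered for #{File.extname(path)}"
        plugin.new.extract_text(path)
      end
    end

    class PlainTextPlugin
      DocumentProcessor.register('.txt', self)
      def extract_text(path)
        File.read(path)
      end
    end

    class RtfPlugin
      DocumentProcessor.register('.rtf', self)
      def extract_text(path)
        # Crude illustration: strip RTF control words and braces to
        # recover plain text for indexing.
        File.read(path).gsub(/\\[a-z]+-?\d* ?|[{}]/, '')
      end
    end

    # Usage: DocumentProcessor.process('article.rtf')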

With new development paths come new questions and concerns. For example, how would we provide consistent metadata for widely disparate sources? To address this, we are investigating the possibility of using natural language processing (NLP) tools to discover key information points within a document and then using this information to perform a lookup against a robust metadata database. At the time of writing, metadata for our documents is stored inside the database structures. The documents are transformed into HTML or plain-text equivalents, which are then fed into Solr through its REST web interface. PReE uses Solr’s REST API to provide full-text searching, handing off each search request to Solr and converting the search results into HTML for the browser.
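
The following is a minimal sketch of the round trip just described, using the rsolr gem to talk to Solr’s REST interface; the core URL, document identifier, file path, and field names (which assume Solr’s common *_t dynamic text fields) are placeholders rather than PReE’s actual configuration.

    require 'rsolr'   # gem install rsolr

    solr = RSolr.connect(url: 'http://localhost:8983/solr/rekn') # assumed core name

    # Index one document: metadata from the relational database plus the
    # plain-text equivalent extracted from the source file.
    solr.add(id:       'dubruck-1983',
             author_t: 'Edelgard E. DuBruck',
             title_t:  'Changes of Taste and Audience Expectation in Fifteenth-Century Religious Drama',
             body_t:   File.read('texts/dubruck-1983.txt'))
    solr.commit

    # Full-text search: PReE hands the request to Solr and would convert
    # each returned document into HTML for the browser.
    response = solr.get('select', params: { q: 'body_t:"religious drama"', rows: 10 })
    response['response']['docs'].each { |doc| puts doc['title_t'] }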

Figure 7: High Level Architectural Diagram of REKn/PReE

A high-level architectural diagram (Figure 7) was created to situate the Crawler (marked ‘Harvester’) within the intended rebuild of REKn and PReE. As the diagram suggests, we maintained the belief that integration with Fedora Commons was the ideal solution (see Appendix 3), but that we would have to wait until the technology allowed it.

5.2. New Directions: Social Networking

Users are beginning to expect more from web applications than ever before. Social networking tools and the “Web 2.0” pattern of design have given web application developers many new ways of building knowledge into their applications. By adopting a web-application model for PReE, we could tie into existing social networking tools and begin to innovate with the creation of new tools designed specifically for the professional reader. The decision to include social networking capabilities in the PReE design was based on research conducted by the Public Knowledge Project (PKP) into the reading strategies of domain-expert readers (a subset of professional readers).30 As with PReE, the goal of the reading tools developed by PKP was to provide access to research and scholarship and to support critical engagement with those materials. During interviews conducted by PKP and ETCL researchers, expert readers identified the ability to communicate with other researchers as an important benefit of an online reading environment. These readers also expressed interest in contextual information that would help them judge the value of an author’s work. From these observations, researchers concluded that future online reading environments would need to provide the kind of communication and profile-management features currently offered by social networking tools.

Before adding social networking components to the PReE features list, we researched existing social networking tools and their use by expert readers (Leitch et al. 2008). Based on evidence gathered during the PKP study, we determined that as expert readers became adept at using online tools, they would demand a higher level of sophistication from an online reading environment. In order to respond to this increasing awareness of the potential of social networking tools for scholarly research, a successful online reading environment should integrate social networking tools in a way that extends readers’ existing research strategies. We identified three key strategies that readers used as part of their research: evaluating, communicating, and managing. Our survey found that no single social networking tool supported all three of these strategies. An environment able to facilitate all three strategies would be of immense value to the expert reader, who would not be forced to use a variety of disjointed social networking tools. Instead, he or she would be able to perform the same tasks from within the reading environment.

How could we incorporate these findings into PReE? In answering that question we were effectively reconceptualizing PReE as social software, “loosely defined” by Tom Coates as software that “supports, extends, or derives added value from, human social behaviour” (2005: n. pag.). If we could outline the common elements of the social networking tools we wished to incorporate, the task of combining them would be more streamlined. For Ralph Gross and Alessandro Acquisti, the feature common to all social networking applications is the ability to create a user-generated identity (or “profile”) for other users to peruse “with the intention of contacting or being contacted by others” (2005: 71). Acknowledging the importance of identity, Judith Donath and danah boyd have proposed that “a core set of assumptions” underlies all social networking applications, all of them emphasizing the making of connections: “there is a need for people to make more connections, that using a network of existing connections is the best way to do so, and that making this easy to do is a great benefit” (2004: 71).

5.2.1. Identity and Evaluation

The “Digital Footprints” report prepared by the Pew Internet and American Life Project found that “one in ten internet users have a job that requires them to self-promote or market their name online,” and that “voluntarily posted text, images, audio, and video has become a cornerstone of engagement with Web 2.0 applications” to the point that “being ‘findable and knowable’ online is often considered an asset in participatory culture where one’s personal reputation is increasingly influenced by information others encounter online” (Madden et al. 2007: iii, 4). Similar assertions have been made by other scholars: Andreas Girgensohn and Alison Lee suggest that one of the benefits of creating and maintaining a profile on a social networking site is the opportunity to create a “persistent and verifiable identity” (2002: 137), whereas danah boyd and Nicole B. Ellison note that “what makes social network sites unique is not that they allow individuals to meet strangers, but rather that they enable users to articulate and make visible their social networks” (2007: n. pag.).

Given the importance expert readers place on markers of authority such as credentials and past publications, it is in the individual’s best interest to exert some control over his or her online identity. The ability to create and maintain an online profile as part of PReE allows users to include the kind of information expert readers look for when assessing the value of research material.

5.2.2. Connections and Communication

Expert readers learn about new ideas and develop existing ones by engaging in scholarly communication with their peers and colleagues. Online, these readers participate in discussion forums and mailing lists, and use commenting tools on blogs and other social networking sites. As Kathleen Fitzpatrick observes:

Scholars operate in a range of conversations, from classroom conversations with students to conference conversations with colleagues; scholars need to have available to them not simply the library model of texts circulating amongst individual readers but also the coffee house model of public reading and debate. This interconnection of individual nodes into a collective fabric is, of course, the strength of the network, which not only physically binds individual machines but also has the ability to bring together the users of those machines, at their separate workstations, into one communal whole. (2007: n. pag.)

Likewise, Christopher M. Hoadley and Peter G. Kilner have asserted that conversation is the method by which information becomes knowledge, suggesting that “knowledge-building communities are a particular kind of community of practice focused on learning,” where the “explicit goal [is] the development of individual and collective understanding” (2005: 32). Adopting this definition, PReE models a knowledge-building community of practice by combining content with communication through the use of social networking tools.

5.2.3. User and Content Management

Searching, retrieving, classifying, and organizing research material is a primary activity of professional readers. Expert readers employ a variety of strategies ranging from simple filing systems to elaborate systems of classification and storage. Reference management tools allow users to find, store, and organize research materials online. The use of folksonomy tagging in reference management tools can improve on a reader’s existing research strategies by providing him or her with a flexible and easily accessible way of organizing research according to his or her own criteria.31 These tools also allow users to share research collections with colleagues and find material relevant to their interests in other collections. Moreover, as Bryan Alexander has observed, social bookmarking functions in a higher education context as a tool for “collaborative information discovery” (2006: 36). As Alexander suggests, “finding people with related interests” through social bookmarking “can magnify one’s work by learning from others or by leading to new collaborations,” and “the practice of user-created tagging can offer new perspectives on one’s research, as clusters of tags reveal patterns (or absences) not immediately visible” (2006: 36). User incentives for tagging include the ability to quickly retrieve research material, to share relevant material with colleagues, and to express an opinion or make a public statement about one’s interests (Marlow et al. 2006: 34-35). The planned inclusion of similar tools in PReE extends expert readers’ existing management strategies by simplifying the organization process and creating new opportunities for collaborative categorization.
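
One way such folksonomy tagging could be modeled in the Rails-based rebuild is with a polymorphic join table, so that a reader’s tags can attach to documents, searches, or bookshelves alike. This is a sketch under that assumption; the model and column names are illustrative, not PReE’s actual schema.

    require 'active_record'

    class User < ActiveRecord::Base            # the reader applying the tag
    end

    class Tag < ActiveRecord::Base             # columns: name
      has_many :taggings
    end

    class Tagging < ActiveRecord::Base         # columns: tag_id, user_id, taggable_id, taggable_type
      belongs_to :tag
      belongs_to :user
      belongs_to :taggable, polymorphic: true
    end

    class Document < ActiveRecord::Base        # any taggable object in the knowledgebase
      has_many :taggings, as: :taggable
      has_many :tags, through: :taggings
    end

    # A reader files a document under a term from his or her own vocabulary:
    #   doc.taggings.create(tag: Tag.find_or_create_by(name: 'sonnet 18'),
    #                       user: current_user)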

5.3. Designing the PReE Interface

When the original interface was designed for the proof of concept of REKn in .NET, very little consideration was given to further use of the code. The focus was solely on producing a down-and-dirty prototype. The decision to translate PReE from a desktop application to a web application promised a whole host of new benefits: superior flexibility in application deployment and maintenance, the ability to receive and disseminate user-generated content, and multi-platform compatibility. These new benefits, however, came with new challenges.

Migrating the application from desktop to Internet also offered us an opportunity to completely rethink the appearance and functionality of the interface. This gave us the chance to consult with prominent researchers working in the field of professional reading and the design of such interfaces, as well as the opportunity to conduct our own usability surveys in order to better accommodate professional readers of various disciplinary backgrounds and levels of expertise.

5.3.1. User Needs: Analyzing the Audience

Before embarking on a new interface design, it was necessary to identify the features and functions that users would expect and desire from PReE. Surveys and interviews were conducted, and the results led us to distinguish between users of PReE in terms of their backgrounds, goals, and needs. Of course, it was recognized that the usefulness of these user profiles was limited, particularly with respect to the needs of interdisciplinary users and users from less text-centric disciplines (such as Fine Arts). These limitations notwithstanding, this initial discussion allowed us to identify three general user profiles: graduate students (“students”), teaching professors (“teachers”), and research professors (“researchers”).

“Student” users were characterized as coming from potentially broad disciplinary backgrounds. Their goals were to conduct self-directed research for the purposes of acquiring a thorough knowledge of a particular field; to complete their doctoral or master’s theses; and to build their scholarly reputations. Needs and desires dictated by these goals included access to citations and bibliographies; a way of assessing the impact of a given article, topic, or researcher in a particular field; and a system to facilitate both formal and informal peer review of their research.

“Teacher” users were characterized as potentially belonging to broad disciplinary backgrounds (such as history) and/or specific fields (such as late medieval English military history). Their goals included recommending readings to students, undertaking self-directed research for the purpose of compiling knowledge-area bibliographies (often annotated), and writing and delivering lectures. These goals required access to citations and surveys of new and recent research in their particular field(s).

“Researcher” users were similarly characterized as potentially coming from a broad field and/or a more specific field of research expertise. Their goals included self-directed research for the purpose of building knowledge-area bibliographies (often annotated), writing and presenting conference papers, writing and delivering lectures, engaging in scholarly publication, and building and maintaining their scholarly reputations.

As a whole, these results suggested three key user requirements: the facilitation of high-level research, the facilitation of collaboration, and the achievement of recognition in their field of study. Although additional features were suggested, meeting these key requirements would be the driving force behind the design of the new PReE interface.

5.3.2. Design Principles, Processes, and Prototypes

A series of design principles was also agreed upon, dictating that the interface design should focus on providing efficient ways to complete tasks (efficiency), on managing higher- and lower-priority objects (visual balance), on testing usability (prototyping), and on the ability to rapidly execute tasks in an agile work environment (flexibility). These principles suggested a design process of four steps. The first step was to conduct environmental scans in order to survey successful features offered by other web applications and assess their applicability for our present needs. The next step was to construct workflow sketches. The third step was to develop simple prototypes, and the fourth, to develop initial designs.

Movie 2: Design Processes of the PReE User Interface

Environmental scans focusing on the search and display functions of existing web applications highlighted a number of useful user features. A useful feature of some applications is the suggestion of search terms to the user, either by way of a drop-down list or by auto-completion of the search string. Other applications offer “bookshelves” of saved search items, allowing their users to group items together and to tag, rate, and comment on them (Figure 8). The survey of reader and display functions similarly suggested useful features that we could implement in the PReE user interface. As outlined in more detail above (see 5.2), there is growing interest in the research application of social annotations and annotation tools (Figure 9).32 Other web applications enrich their content through the inclusion of user-contributed data, such as comments, tags, links, ratings, and other media (Figure 10). As in the original proof-of-concept, the capacity for viewing images and texts side-by-side was also expected to be included (Figure 11). As indicated in the movie above (Movie 2), all of these features were included in the PReE workflow sketches, simple prototypes, and initial designs of the user interface.

Figure 8: Interface design: bookshelves

Figure 9: Interface design: annotations and bookmarks

Figure 10: Interface design: annotations, bookmarks, and user comments

Figure 11: Interface design: side-by-side text and image display

6. New Insights and Next Steps

6.1. Research Insights and the Humanities Model of Dissemination

While we have learned much about humanistic engagement with the technologies under consideration, we recognize also that we have gained significant experience and understanding about the nature of the work itself from a disciplinary perspective.

One unexpected insight involved where the research lies in our endeavor. Our original approach to the project was to work toward a reading environment that suited the needs of professional readers, with the belief that we understood our own needs best and could therefore contribute to the development of professional reading tools through our active participation in pertinent research processes. Conceptualizing and theorizing the foundations of and rationales for humanist tools and their features was an important part of our role, as was modeling the features and functions computationally so that it was clear that what we wished to do could be done. Indeed, we had particular success in amalgamating previously unconnected (but research-pertinent) database contents so that a researcher could speed workflow by not having to enter search terms across several unconnected databases and interfaces. By modeling these processes we were better able to understand the problems and to suggest possible solutions. From our perspective as researchers, developing the prototype that proved the concept was our primary goal—anything beyond this was more production- than research-oriented, and it was unclear to us whether production was part of our endeavor.

In the second instance, we found that the most valuable point of impact for our research work manifested in ways that our humanities disciplines could not readily understand, evaluate, and appreciate. Our research-related successes often involved (1) the identification of a key area of intervention pertaining to our larger program of research; (2) understanding this area and modeling it with the computer; (3) testing and refining the model until we achieved acceptable functionality in proof of concept; (4) delivering a conference paper on this as quickly as possible (because computational fields, their tools, and the possibilities they enable advance rapidly) and engaging in further discussions with those who were interested in carrying this work further; and either (5a) working with a partner who was interested in putting our research into production within their own work; (5b) watching others involved in adjacent programs of research implement similar features in their own work and advancing our own research in that way; or (5c) noting the adoption of our procedures without our involvement by other area stakeholders. As a progression from idea to point of impact, this is ideal in every way except one: our home disciplines in the humanities find it difficult to document this impact in professional terms. It simply does not fit the article- and book-focused publication and dissemination model favored by humanities scholarship, and most digital humanities venues do not integrate conference presentation and publication in a way that provides immediate publication on presentation (as is common in the sciences). As a result, work related to this project has, for the most part, been disseminated without publication, and is therefore largely unquantifiable in humanities disciplinary terms.

6.2. Partnerships and Collaborations

The second phase of our development of REKn and PReE is at a crossroads. Over the course of some five years, we have been working on REKn and PReE in various ways. During this time we have presented our findings at conferences and discussed our methodology of modeling and prototyping with other research groups. The professional and pedagogical value of this work has been immense, driven at its core by a consistent aim to explore document-centered reading environments and to work toward the production of a functional tool for a variety of professional readers. As with any project of this nature, our research experience has been (and continues to be) attended by successes and fraught with apparent dead-ends. However, as the preceding project narrative has made clear, even these seemingly inconclusive pursuits are in fact evidence of an active pedagogical process and a professional evolution in design and implementation—something privileged in all academic pursuit—where each step has led to a better understanding of how our overall research goals could be accomplished.

In light of the insights gained and lessons learned, our next steps are firmer and more secure, and we bring our experience to a series of very fruitful partnerships in which elements of our research are being extended in ways not initially considered. Moreover, we are incorporating our research experience into a large collaborative initiative, Implementing New Knowledge Environments (INKE), sponsored by the Social Sciences and Humanities Research Council of Canada MCRI program, as well as contributing to further developments associated with the Text Analysis Portal for Research (TAPoR).

Our research on interfaces, annotation, social interaction, and document-centered reading environments has also been incorporated into more focused research partnerships with groups like the Public Knowledge Project (PKP) and Synergies. Our collaboration with PKP has seen work toward the integration of professional reading tools into the PKP Open Journal Systems (OJS). As outlined briefly above, our partnership began with conducting user experience surveys to identify and assess elements of users' engagement with texts and the OJS interface.33 Work was then undertaken towards the identification of basic principles for an OJS interface redesign to respond to needs identified by the study; the carrying out of more precise user analysis and profiling; the design of wireframes (sketch prototypes) to emulate workflows; and consultation about technological facilitation for interaction that was imagined (including the integration of social networking technologies). These processes led to iterative computational modeling and testing, aimed at the creation of a proof-of-concept prototype. This prototype was presented to PKP in early 2008, in order that they might consider integrating it into their current development cycle—and also in more traditional research dissemination.34 The next step of this conjoint research program is to build on earlier work carried out toward provision of a knowledgebase approach to speed professional readers’ workflow through better access to pertinent critical textual resources. In turn, this new work draws on earlier and ongoing work with Iter, another of our research partners, to further develop the concept of enriched domain-specific knowledgebases, as well as ongoing research as part of a collaboration with the Transliteracies and BlueSky working groups at the University of California, Santa Barbara, towards the prototyping of an interface with document-centered professional reading tools and advanced social networking capabilities.

To return to the words of James Joyce with which this article began, our experience in developing REKn and PReE thus far has shown that the errors we encountered on the way truly were “portals of discovery” (1986: 9.229). As we embark on new directions and build new partnerships and collaborations, we expect many more portals in the immediate future, and beyond.

Works and Resources Cited

Alexander, Bryan. “Web 2.0: A New Wave of Innovation for Teaching and Learning?” Educause Review 41.2 (2006): 32-44. Print.

Austin, David. “How Google Finds Your Needle in the Web’s Haystack.” Feature Column. American Mathematical Society. Dec. 2006. Web. 24 Apr. 2009. http://www.ams.org/featurecolumn/archive/pagerank.html.

Bolton, Whitney. “The Bard in Bits: Electronic Editions of Shakespeare and Programs to Analyze Them.” Computers and the Humanities 24.4 (1990): 275-87. Print.

Boot, Peter. “Mesotext: Digitised Emblems, Modelled Annotations and Humanities Scholarship.” Diss. U Utrecht, 2009. Print.

boyd, danah. “The Significance of Social Software.” BlogTalks Reloaded: Social Software Research and Cases. Ed. Thomas N. Burg and Jan Schmidt. Norderstedt: Books on Demand, 2007. 15-30. Print.

———, and Nicole B. Ellison. “Social Network Sites: Definition, History, and Scholarship.” Journal of Computer-Mediated Communication 13.1 (2007): n. pag. Web. 24 Apr. 2009.

Bowen, William R. "Iter: Building an Effective Knowledge Base." New Technologies and Renaissance Studies. Ed. William R. Bowen and Ray Siemens. New Technologies in Medieval and Renaissance Studies 1. Toronto and Tempe: Iter and Arizona Center for Medieval and Renaissance Studies, 2008. 101-9. Print.

———. "Iter: Where Does the Path Lead?" Early Modern Literary Studies 5.3 (2000): 2.1-26. Web. 24 Apr. 2009. http://extra.shu.ac.uk/emls/05-3/bowiter.html

Brown, Susan, Stan Ruecker, Jeffrey Antoniuk, Sharon Balasz, Patricia Clements, and Isobel Grundy. “Designing Rich-Prospect Access to a Feminist Literary History.” Women Writing and Reading 2.1 (2007): 12-17. Print.

Canadian Research Knowledge Network / Réseau Canadien de Documentation pour la Recherche (CRKN/RCDR). Web. 24 Apr. 2009. http://researchknowledge.ca/

Coates, Tom. “An Addendum to a Definition of Social Software.” Plasticbag.org. 5 Jan. 2005. Web. 24 Apr. 2009. http://www.plasticbag.org/archives/2005/01/an_addendum_to_a_definition_of_social_software/.

Collex. NINES. Web. 24 Apr. 2009. http://www.collex.org/

Data Fountains. iVia Project. U of California, Riverside. Web. 24 Apr. 2009. http://datafountains.ucr.edu/.

De Grazia, Margreta, and Peter Stallybrass. “The Materiality of the Shakespearean Text.” Shakespeare Quarterly 44 (1993): 255-83. Print.

Delany, Paul. “Virtual Universities and the Death of Distance.” TEXT Technology 7 (1997): 49-64. Print.

Donath, Judith, and danah boyd. “Public Displays of Connection.” BT Technology Journal 22.4 (2004): 71-82. Print.

Drucker, Johanna, and Geoffrey Rockwell. "Introduction: Reflections on the Ivanhoe Game." TEXT Technology 12.2 (2003): vii-xviii. Print.

DuBruck, Edelgard E. “Changes of Taste and Audience Expectation in Fifteenth-Century Religious Drama.” Fifteenth-Century Studies 6 (1983): 59-91. Print.

Early Modern English Dictionaries Database. Ed. Ian Lancashire. U of Toronto. Web. 24 Apr. 2009. http://www.chass.utoronto.ca/~ian/emedd.html.

Erickson, Peter. “Rewriting the Renaissance, Rewriting Ourselves.” Shakespeare Quarterly 38 (1987): 327-37. Print.

eXist. Wolfgang Meier, et al. Web. 24 Apr. 2009. http://exist.sourceforge.net.

eXist XML-RPC API. Mike Elkink and James Dixon. Web. 24 Apr. 2009. http://www.rubyforge.org/projects/exist-xml-rpc/.

Faulhaber, Charles B. “Textual Criticism in the 21st Century.” Romance Philology 45 (1991): 123-48. Print.

Fedora. Fedora Commons. Web. 24 Apr. 2009. http://www.fedora-commons.org/.

Fitzpatrick, Kathleen. “CommentPress: New (Social) Structures for New (Networked) Texts.” Journal of Electronic Publishing 10.3 (2007): n. pag. Web. 24 Apr. 2009. http://dx.doi.org/10.3998/3336451.0010.305.

Fortier, Paul. “Babies, Bathwater, and the Study of Literature.” Computers and the Humanities 27 (1993-94): 375-85. Print.

Girgensohn, Andreas, and Alison Lee. “Making Web Sites Be Places for Social Interaction.” Proceedings of the 2002 ACM Conference on Computer Supported Cooperative Work. New York: ACM, 2002. 136-45. Print.

Google Book Search. Google. Web. 24 Apr. 2009. http://books.google.com/.

Greetham, D. C. Theories of the Text. Oxford: Oxford UP, 1999. Print.

Gross, Ralph, and Alessandro Acquisti. “Information Revelation and Privacy in Online Social Networks.” Proceedings of the 2005 ACM Workshop on Privacy in the Electronic Society. New York: ACM, 2005. 71-80. Print.

Guillory, John. “The Ethical Practice of Modernity: The Example of Reading.” The Turn to Ethics. Ed. Marjorie Garber, Beatrice Hanssen, and Rebecca L. Walkowitz. New York: Routledge, 2000. 29-46. Print.

Hall, Kim F. “About This Volume.” Othello: Texts and Contexts. Ed. Kim F. Hall. New York: Bedford/St. Martin’s, 2007. vii-xii. Print.

Hayashi, Tetsumaro. Shakespeare’s Sonnets: A Record of Twentieth-Century Criticism. Metuchen: Scarecrow P, 1972. Print.

Hoadley, Christopher M., and Peter G. Kilner. “Using Technology to Transform Communities of Practice into Knowledge-Building Communities.” SIGGROUP Bulletin 25.1 (2005): 31-40. Print.

Hockey, Susan. Electronic Texts in the Humanities: Principles and Practice. Oxford: Oxford UP, 2000. Print.

Howard, Jean E. “The New Historicism in Renaissance Studies.” English Literary Renaissance 16 (1986): 13-43. Print.

Internet Shakespeare Editions. Coordinating Ed. Michael Best. U of Victoria. Web. 24 Apr. 2009. http://internetshakespeare.uvic.ca/.

Iter. Renaissance Society of America, U of Toronto Centre for Reformation and Renaissance Studies, Arizona Center for Medieval and Renaissance Studies. Web. 24 Apr. 2009. http://www.itergateway.org/.

Joyce, James. Ulysses. Ed. Hans Walter Gabler. New York: Random House, 1986. Print.

Juxta. NINES. Web. 24 Apr. 2009. http://www.juxtasoftware.org/

Lancashire, Ian. “Bilingual Dictionaries in an English Renaissance Knowledge Base.” Historical Dictionary Databases. Ed. T. R. Wooldridge. CCH Working Papers 2 (1992): 69-88. Print.

———. “Computer Tools for Cognitive Stylistics.” From Information to Knowledge: Conceptual and Content Analysis by Computer. Ed. Ephraim Nissan and Klaus M. Schmidt. Oxford: Intellect, 1995. 28-47. Print.

———. “Working with Texts.” IBM Academic Computing Conference. Anaheim, California. June 1989. Address.

Leitch, Cara, Ray Siemens, James Dixon, Mike Elkink, Angelsea Saby, and Karin Armstrong. “Social Networking and Online Collaborative Research with REKn and PReE.” Society for Digital Humanities/Société pour l'étude des médias interactifs, Congress of the Canadian Federation of Humanities and Social Sciences. U of British Columbia, Vancouver. 3 Jun. 2008. Poster presentation.

Lemon8-XML. Public Knowledge Project. U of British Columbia, Stanford U, and Simon Fraser U. Web. 24 Apr. 2009. http://pkp.sfu.ca/lemon8.

Lexicons of Early Modern English. Ed. Ian Lancashire. U of Toronto Library and U of Toronto P. Web. 24 Apr. 2009. http://leme.library.utoronto.ca/.

Literature Online. Chadwyck-Healey Literature Online. ProQuest. Web. 24 Apr. 2009. http://lion.chadwyck.com/.

Lucene. Apache Software Foundation. Web. 24 Apr. 2009. http://lucene.apache.org/.

Machan, Tim William. “Late Middle English Texts and the Higher and Lower Criticisms.” Medieval Literature: Texts and Interpretation. Ed. Tim William Machan. Medieval and Renaissance Texts and Studies 79. Binghamton: Center for Medieval and Renaissance Studies, 1991. 3-16. Print.

Madden, Mary, Susannah Fox, Aaron Smith, and Jessica Vitak. “Digital Footprints: Online Identity Management and Search in the Age of Transparency.” Pew Internet and American Life Project. 16 Dec. 2007. Web. 24 Apr. 2009. http://pewinternet.org/Reports/2007/Digital-Footprints.aspx.

Many Eyes. IBM Collaborative User Experience Research Group, Visual Communication Lab. Web. 24 Apr. 2009. http://manyeyes.alphaworks.ibm.com/manyeyes/

Marcus, Leah S. Unediting the Renaissance: Shakespeare, Marlowe, Milton. New York: Routledge, 1996. Print.

Marlow, Cameron, Mor Naaman, danah boyd, and Marc Davis. “HT06, Tagging Paper, Taxonomy, Flickr, Academic Article, To Read.” Proceedings of the Seventeenth Conference on Hypertext and Hypermedia. New York: ACM, 2006. 31-40. Print.

McCarty, Willard. “Modeling: A Study in Words and Meanings.” A Companion to Digital Humanities. Ed. Susan Schreibman, Ray Siemens, and John Unsworth. Malden: Blackwell, 2004. 257-70. Print.

———. “Knowing . . . : Modeling in Literary Studies.” A Companion to Digital Literary Studies. Ed. Ray Siemens and Susan Schreibman. Malden: Blackwell, 2008. 391-401. Print.

———. “What is Humanities Computing? Toward a Definition of the Field.” Address. Reed College, Portland. Mar. 1998. Web. 24 Apr. 2009. http://staff.cch.kcl.ac.uk/~wmccarty/essays/McCarty, What is humanities computing.pdf.

McLeod, Randall. “Information on Information.” Text 5 (1991): 240-81. Print.

———. “UnEditing Shakespeare.” Sub-Stance 33-34 (1982): 26-55. Print.

McGann, Jerome J. A Critique of Modern Textual Criticism. Chicago: U of Chicago P, 1983. Print.

———, and Johanna Drucker. “The Ivanhoe Game: An Introduction.” 2000-1. Web. 24 Apr. 2009. http://jefferson.village.virginia.edu/~jjm2f/old/IGamehtm.html.

———, and Lisa Samuels. “Deformance and Interpretation.” New Literary History 30 (1999): 25-56. Print.

McKenzie, D. F. Bibliography and the Sociology of Texts. London: British Library, 1986. Print.

Miall, David S. “The Library versus the Internet: Literary Studies Under Siege?” PMLA 116 (2001): 1405-14. Print.

Michigan Early Modern English Materials. Eds. Richard W. Bailey, Jay L. Robinson, James W. Downer, and Patricia V. Lehman. U of Michigan. Web. 24 Apr. 2009. http://quod.lib.umich.edu/m/memem/.

Mitchell, Steve. “Machine-Assisted Metadata Generation and New Resource Discovery: Software and Services.” First Monday 11.8 (2006): n. pag. Web. 24 Apr. 2009. http://firstmonday.org/issues/issue11_8/mitchell/.

Metadata Offer New Knowledge (MONK) Project. Web. 24 Apr. 2009. http://monkproject.org/

Mueller, Martin. “The Nameless Shakespeare.” TEXT Technology 14.1 (2005): 61-70. Print.

Nalanda iVia Focused Crawler. iVia Project. U of California, Riverside. Web. 24 Apr. 2009. http://ivia.ucr.edu/projects/Nalanda.

Nutch. Apache Software Foundation. Web. 24 Apr. 2009. http://lucene.apache.org/nutch/.

Open Access Text Archive. Internet Archive. Web. 24 Apr. 2009. http://www.archive.org/details/texts.

Open Journal Systems. Public Knowledge Project. U of British Columbia, Stanford U, and Simon Fraser U. Web. 24 Apr. 2009. http://pkp.sfu.ca/ojs

Open Monograph Press. Public Knowledge Project. U of British Columbia, Stanford U, and Simon Fraser U. Web. 24 Apr. 2009. http://pkp.sfu.ca/omp

Oxford English Dictionary Online. Oxford UP. Web. 24 Apr. 2009. http://dictionary.oed.com/.

Oxford Text Archive. Oxford University Computing Services, Oxford U. Web. 24 Apr. 2009. http://www.ota.ox.ac.uk/.

Pechter, Edward. “The New Historicism and Its Discontents: Politicizing Renaissance Drama.” PMLA 102 (1987): 292-302. Print.

PostgreSQL. PostgreSQL Global Development Group. Web. 24 Apr. 2009. http://www.postgresql.org/.

Public Knowledge Project. U of British Columbia, Stanford U, and Simon Fraser U. Web. 24 Apr. 2009. http://pkp.sfu.ca/.

Remley, Paul. “Mary Shelton and Her Tudor Literary Milieu.” Rethinking the Henrician Era: Essays on Early Tudor Texts and Contexts. Ed. Peter C. Herman. Urbana: U of Illinois P, 1994. 40-77. Print.

Richardson, David A., and Michael Neuman. “Application for NEH Funding: A Planning Conference for a Renaissance Knowledge Base.” Funding Application, 1990. Print.

Rockwell, Geoffrey. “Is Humanities Computing an Academic Discipline?” Humanities Computing Seminar. U of Virginia, Charlottesville. Address. 19 Nov. 1999. Web. 24 Apr. 2009. http://www.iath.virginia.edu/hcs/rockwell.html.

Ruby on Rails. David Heinemeier Hansson. Web. 24 Apr. 2009. http://www.rubyonrails.org/.

RubyFedora. MediaShelf. Web. 24 Apr. 2009. http://yourmediashelf.com/rubyfedora/.

Ruecker, Stan. “The Electronic Book Table of Contents as a Research Tool.” Congress of the Humanities and Social Sciences: Consortium for Computers in the Humanities / Consortium pour Ordinateurs en Sciences Humaines (COCH/COSH) Annual Conference. U of Western Ontario, London. 30 May 2005. Address.

———, Milena Radzikowska, Susan Brown, Thomas M. Nelson, Isobel Grundy, Patricia Clements, Sharon Balasz, Jeff Antoniuk, and Stéfan Sinclair. “The Dynamic Table of Contents: Extending a Venerable List in a Digital Context.” The Potential and Limitations of a List: An International Transdisciplinary Workshop. Prague, Czech Republic. Nov. 2007. Address.

Schreibman, Susan. “Computer-Mediated Texts and Textuality: Theory and Practice.” Computers and the Humanities 36 (2002): 283-93. Print.

Shakespeare Database Project. Dir. H. Joachim Neuhaus. Westfälische Wilhelms-U, Münster. Web. 24 Apr. 2009. http://www.shkspr.uni-muenster.de/.

Siemens, Ray. “Text Analysis and the Dynamic Edition? Some Concerns with an Algorithmic Approach in the Electronic Scholarly Edition.” TEXT Technology 14.1 (2005): 91-98. Print.

———. “Unediting and Non-Editions: The Death of Distance, the Notion of Navigation, and New Acts of Editing in the Electronic Medium.” Anglia 119.3 (2001): 423-55. Print.

———, and Cara Leitch. “Editing the Early Modern Miscellany: Modeling and Knowledge [Re]Presentation as a Context for the Contemporary Editor.” New Ways of Looking at Old Texts IV. Ed. Michael Denbo. Tempe: Arizona Center for Medieval and Renaissance Studies, 2008. 115-30. Print.

———, and Christian Vandendorpe. “Canadian Humanities Computing and Emerging Mind Technologies.” Mind Technologies: Humanities Computing and the Canadian Academic Community. Ed. Ray Siemens and David Moorman. Calgary: U of Calgary P, 2006. xi-xxiii. Print.

———, William R. Bowen, Jessica Natale, Karin Armstrong, Alastair McColl, and Greg Newton. “Iter Database: Research Report on the Inclusion of Electronic Resources.” Whitepaper. Electronic Textual Cultures Laboratory, University of Victoria. 2006. Web. 24 Apr. 2009. http://etcl-dev.uvic.ca/public/iter-report/.

———, Johanne Paquette, Karin Armstrong, Cara Leitch, Brett D. Hirsch, and Eric Haswell, “Drawing Networks in the Devonshire Manuscript (BL Add MS 17492): Toward Visualizing a Writing Community’s Shared Apprenticeship, Social Valuation, and Self-Validation.” Digital Studies/Le Champ Numérique. In Press.

———, John Willinsky, Analisa Blake, Karin Armstrong, Lindsay Colahan, and Greg Newton. “A Study of Professional Reading Tools for Computing Humanists.” Report. Electronic Textual Cultures Laboratory, U of Victoria. May 2006. Web. 24 Apr. 2009. http://etcl-dev.uvic.ca/public/pkp_report/.

———, John Willinsky, Cara Leitch, and Analisa Blake. “It May Change My Understanding of the Field: Understanding Reader Tools for Scholars and Professional Readers.” Digital Humanities Quarterly. In Press.

Sinclair, Stéfan, and Geoffrey Rockwell. “Reading Tools, or Text Analysis Tools as Objects of Interpretation.” Digital Humanities 2007. U of Illinois at Urbana-Champaign, Illinois. June 2007. Address.

Solr. Apache Software Foundation. Web. 24 Apr. 2009. http://lucene.apache.org/solr/.

Southall, Raymond. “The Devonshire Manuscript Collection of Early Tudor Poetry, 1532–41.” Review of English Studies, new series 15 (1964): 142-50. Print.

———. The Courtly Maker: An Essay on the Poetry of Wyatt and His Contemporaries. Oxford: Blackwell, 1964. Print.

Sutherland, Kathryn. “Introduction.” Electronic Text: Investigations in Method and Theory. Ed. Kathryn Sutherland. Oxford: Oxford UP, 1997. 1-18. Print.

———. “Revised Relations? Material Text, Immaterial Text, and the Electronic Environment.” Text 11 (1998): 17-30. Print.

Synergies. Web. 24 Apr. 2009. http://www.synergiescanada.org/

Tanselle, G. Thomas. “Textual Criticism and Literary Sociology.” Studies in Bibliography 44 (1991): 83-143. Print.

TAPoR Tools. Text Analysis Portal for Research (TAPoR) Project. Web. 24 Apr. 2009. http://portal.tapor.ca/.

Textbase of Early Tudor English. Eds. Alistair Fox and Greg Waite. U of Otago. Web. 24 Apr. 2009. http://www.hlm.co.nz/tudortexts/.

Unsworth, John. “Knowledge Representation in Humanities Computing.” eHumanities NEH Lecture Series on Technology and the Humanities. Washington. Address. Apr. 2001. Web. 24 Apr. 2009. http://www.iath.virginia.edu/~jmu2m/KR/KRinHC.html.

———. “Documenting the Reinvention of Text: The Importance of Failure.” Journal of Electronic Publishing 3.2 (1997): n. pag. Web. 24 Apr. 2009. http://dx.doi.org/10.3998/3336451.0003.201.

———. “Scholarly Primitives.” Humanities Computing: Formal Methods, Experimental Practice. King’s College, London. Address. May 2000. Web. 24 Apr. 2009. http://www.iath.virginia.edu/~jmu2m/Kings.5-00/primitives.html.

Vander Wal, Thomas. “Folksonomy Coinage and Definition.” Off the Top. 2 Feb. 2007. Web. 24 Apr. 2009. http://www.vanderwal.net/folksonomy.html.

Warwick, Claire. “Print Scholarship and Digital Resources.” A Companion to Digital Humanities. Ed. Susan Schreibman, Ray Siemens, and John Unsworth. Malden: Blackwell, 2004. 366-82. Print.

Women Writers Project. Women Writers Project. Brown U. Web. 24 Apr. 2009. http://www.wwp.brown.edu/.

Zotero. Center for History and New Media, George Mason U. Web. 24 Apr. 2009. http://www.zotero.org/.

Footnotes

  1. On the importance of imperfection and failure, especially as it pertains to a digital humanities audience, see Unsworth 1997.
  2. Much of the content of the present article has been presented in other forms elsewhere. See Appendix 1 for a list of addresses and presentations from which the present article is drawn.
  3. It is outside the purview of this article to evaluate the claims of New Historicism. Interested readers are directed to the following early critical assessments of New Historicism: Erickson 1987, Howard 1986, and Pechter 1987.
  4. As with New Historicism, it is outside the purview of this article to critically evaluate the claims of social textual theory. Interested readers are directed to critical assessments by Tanselle (1991) and Greetham (1999: 397-418).
  5. Representative examples include: the Women Writers Project; the Century of Prose Corpus; the Early Modern English Dictionaries Database; the Michigan Early Modern English Materials; the Oxford Text Archive; the Riverside STC Project; the Shakespeare Database Project; and the Textbase of Early Tudor English.
  6. Richardson and Neuman 1990. In addition to the authors of the application itself, other investigators involved with the group included David A. Bank, Jonquil Bevan, Lou Burnard, Thomas N. Corns, Michael Crump, R. J. Fehrenback, Alistair Fox, Roy Flannagan, S. K. Heniger Jr., Arthur F. Kinney, Ian Lancashire, George M. Logan, Willard McCarty, Louis T. Milic, Barbara Mowat, Joachim Neuhaus, Michael Neuman, Henry Snyder, Frank Tompa, and Greg Waite.
  7. As outlined in the application, the materials intended for inclusion and integration in the RKB were “old-spelling texts of major authors (Sidney, Marlowe, Spenser, Shakespeare, Jonson, Donne, Milton, etc.), the Short-Title Catalogue (1475–1640), the Dictionary of National Biography, period dictionaries (Florio, Elyot, Cotgrave, etc.), and the Oxford English Dictionary” (Richardson and Neuman 1990: 2).
  8. See also Rockwell (1999).
  9. On the importance of reading as an object of interest to humanities computing practitioners, and for a brief discussion of representative examples, see Warwick (2004). For a discussion of professional reading tools, see Siemens et al. (2006) and the forthcoming “It May Change My Understanding of the Field.”
  10. On modeling in the humanities, see McCarty (2004). On modeling as it pertains to literary studies in particular, see McCarty (2008).
  11. See Miall (2001).
  12. Representative examples include Lancashire (1995) and Fortier (1993-94).
  13. For a detailed discussion of electronic archival forms, see Hockey (2000). In addition to the projects mentioned above (such as the English Broadside Ballad Archive) and others, pertinent examples of projects concerned with archival representation include digitization projects undertaken by the Internet Archive and Google, and by libraries, museums, and similar institutions.
  14. See Unsworth (2000).
  15. On this sense of “unediting,” see Marcus (1996); on “unediting” as the rejection of critical editions in preference to the unmediated study of originals or facsimiles, see McLeod (1982).
  16. On the materiality of the Renaissance text, see De Grazia and Stallybrass (1993), and Sutherland (1998).
  17. See also Siemens (2001).
  18. Lancashire (1989). See also the exemplary illumination of three early “dynamic text” Shakespeare editions in Bolton (1990).
  19. The elements of the hypertextual edition were rightly anticipated in Faulhaber (1991).
  20. Indeed, scholarly consensus is that the level of dynamic interaction in an electronic edition itself—if facilitated via text analysis in the style of the “dynamic text”—could replace much of the interaction that one typically has with a text and its accompanying materials via explicit hypertextual links in a hypertextual edition.
  21. See the discussion of these issues in Siemens (2005).
  22. For example, see Sinclair and Rockwell (2007); see also the discussion of modeling in this context in McCarty (2004, 2008).
  23. An example of a prototypical tool that performs an integral function in a larger digital reading environment is the Dynamic Table of Contexts, an experimental interface that draws on interpretive document encoding to combine the conventional table of contents with an interactive index. Readers use the Dynamic Table of Contexts as a tool for browsing the document by selecting an entry from the index and seeing where it is placed in the table of contents. Each item also serves as a link to the appropriate point in the file. See Ruecker (2005); Ruecker et al. (2007); and, Brown, et al. (2007).
  24. An important distinction between REKn and the earlier RKB project is the scope of the primary and secondary materials contained. While RKB set out to include “old-spelling texts of major authors (Sidney, Marlowe, Spenser, Shakespeare, Jonson, Donne, Milton, etc.), the Short-Title Catalogue (1475–1640), the Dictionary of National Biography, period dictionaries (Florio, Elyot, Cotgrave, etc.), and the Oxford English Dictionary” (Richardson & Neuman 1990: 2), REKn is not limited to “major authors” but seeks to include all canonical works (in print and manuscript) and most extra-canonical works (in print) of the period.
  25. On the editing of the Devonshire Manuscript in terms of modeling and knowledge representation, see Siemens and Leitch (2008). See also the forthcoming Siemens et al., “Drawing Networks in the Devonshire Manuscript.”
  26. A master list of the primary text titles and their sources is included as Appendix 2.
  27. On the mandate, history, and development of Iter, see Bowen (2000, 2008). For a more detailed report on this collaborative experiment, see Siemens, et al. (2006).
  28. See also Mitchell (2006).
  29. DuBruck (1983).
  30. See Siemens et al. (2006) and “It May Change My Understanding of the Field,” forthcoming.
  31. For the origin of the term folksonomy and its use to describe the practice of socially derived content tagging, see Vander Wal (2007).
  32. For a useful survey and assessment of existing annotation tools and their implementation in electronic editions of literary texts, see Boot (2009).
  33. The results of this process have been published in Siemens et al. (2006) and “It May Change My Understanding of the Field,” forthcoming, and presented at a number of conferences and symposia.
  34. See the list of presentations delivered in 2008 in Appendix 1, in particular those presented in June 2008.
