This module is included in a lens by Digital Scholarship at Rice University, as part of the collection “Online Humanities Scholarship: The Shape of Things to Come.”


# EVIA, Sustainability, and Mission-Creep

Module by: John Unsworth. Edited by: Frederick Moody and Ben Allen.

EVIA (Ethnographic Video for Instruction and Analysis) is a Mellon-funded project that collects video made by ethnographers in order to preserve it and make it available to scholars and teachers. EVIA is a joint effort of Indiana University and the University of Michigan, and Alan Burdette’s paper for this conference describes it as “unique in its combination of preservation, annotation, and scholarly publishing.” The order of those words is significant; in the “Premise and Mission” section of his paper, Burdette elaborates:

The primary mission of the EVIA Project is to preserve ethnographic field video created by scholars as part of their research. The secondary mission is to make those materials available in conjunction with rich, descriptive annotations, creating a unique resource for scholars, instructors, and students. The EVIA Project was initially driven by a realization that a large amount of research video had not been deposited in institutional archives and was instead stored in personal collections in improper conditions with little or no access to anyone besides the scholar who made the recordings. . . . The ability to preserve these recordings and make them available to other scholars is a cornerstone of the EVIA Project. (190-1)

It makes sense to put preservation first in the mission statement because if these video recordings aren’t preserved, they can’t be made available, can’t be annotated, etc. That much is true of any cultural heritage information you might choose to preserve—but there’s an added urgency in the case of the material that EVIA deals with, because it is observational data, and because it is recorded on ephemeral media. Unlike books, magazines, recorded music, commercial films, or many other facets of the cultural record, these ethnographic videos are unique documents of a unique event. In that respect, they are like some kinds of scientific data, which can only be gathered as an event unfolds. And, as Burdette notes,

…the archival shelf life of videotape is extremely short. Although based on formats similar to audiotape in principle, the density of the magnetic information on videotape and the more complex manner with which it must be retrieved result in more rapid deterioration of the signal than we see in audio. Video recording formats have also experienced a higher level of obsolescence compared to audio. (198)

However, as we read Burdette’s paper, an interesting thing happens: the preservation objective recedes into the background, and it is replaced by two other topics, each energetically described, but I think described by different authors with some unreconciled differences in their perspectives and priorities. In all, I think I can trace three camps in this one paper: in the work the paper describes, the preservation camp seems to have lost out to the other two, which are focused, respectively, on peer-review and software development. Reading the account of the project in Burdette’s paper, I would predict that preservation, should it happen, will occur in spite of these other two activities, and not because of them.

EVIA was funded as a stand-alone project from 2001 to 2009 with $2.5 million from Mellon and $1.5 million in match from Indiana and Michigan. As someone in the Burdette paper admits, “The EVIA Project has been an expensive endeavor by humanities project standards. Much of this support went towards software development. . . . Our software will need to be maintained at least, and of course, we have many ideas for extending and improving it” (203). While much of EVIA’s $4 million in funding was going to software development, the project managed to complete ingestion of only seventy hours of video. That’s ten hours a year, or about one hour a month. Granted, there are 1200 more hours in the pipeline, but at the currently established rate of completion, that 1200 hours will take about 118 years to ingest. What slows things down to this rate? Perhaps it is the activity of annotation and the related activity of peer review:
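The back-of-the-envelope arithmetic above can be checked directly. The assumption here is roughly seven active project years; with those round numbers the backlog comes out to about 120 years, close to the essay’s figure of “about 118,” which implies a marginally higher annual rate.

```python
# Sketch of the ingestion-rate arithmetic, assuming ~7 active project years.
funded_years = 7        # roughly 2001-2009, allowing for start-up time
hours_ingested = 70     # completed ingestion reported in the paper
backlog_hours = 1200    # hours still in the pipeline

rate = hours_ingested / funded_years    # hours ingested per year
years_to_clear = backlog_hours / rate   # years to clear the backlog

print(f"{rate:.1f} hours/year; {years_to_clear:.0f} years to clear the backlog")
```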

Annotation involves taking an assembled corpus of unedited video files, segmenting it using a three-level hierarchical scheme, and annotating each segment. Annotation also typically includes developing a lengthy glossary, citations, and transcriptions. When given the tools, scholars were annotating their recordings in much more detail than we originally anticipated. As a response we recognized that we had to provide scholarly credit for the kind of work they were doing and that implied implementing some kind of peer review. . . . Once a collection is completed, a designated editor evaluates it with assistance from a managing editor, and if acceptable, suitable peer reviewers are found and the project is sent for review. Usually, there are some small changes that the author will be required to make as part of the peer review dialog and once those changes are complete, we send the project to be copy-edited. Some of these collections contain annotations that are equivalent in length to a small monograph. (192-3)

Well, no wonder it’s hard to move video through that pipeline. I can understand how this happens, of course: one thing leads to another. You want to preserve video, but to preserve it you have to collect it, and to collect it some ethnographer has to give it to you. When she does, you also need her to give you some descriptive metadata. She will need some kind of tool to do that, and as you develop that tool, you realize that you could go further, and allow ethnographers to embed notes about the subject matter into the video record itself—but once you put that tool in the hands of the ethnographers, it turns out they have lots and lots to say, so much, in fact, that they can’t justify their level of effort unless the results count for tenure and promotion, so now you need to implement a peer-review and publishing process in order to provide them with professional credit. The only problem is that lately your rate of accessioning video has slowed to a crawl, between the tool development, the endless annotations, and building a peer-review and publishing operation. But tool development is more fundable than preservation because it seems more finite (even though it isn’t), and publishing could provide some income that would help to make the preservation activity sustainable (though it probably won’t), and day by day the preservation activities just seem to get pushed to the back burner, always with the best of intentions.

It seems reasonable to ask, with respect to preservation, if peer review is a necessary or appropriate precondition of collection. The paper admits that “our practice is to continue to work with scholars in cases when peer review has not been entirely favorable and to keep moving towards an acceptable final product” (193), and it also admits that “some EVIA Project collections are not peer reviewed because they are accepted through a method other than the scholarly collection proposal and the EVIA Fellowship” (193-4). Even annotation isn’t, apparently, a necessary precondition of collection: some of those collections accepted through alternate methods “are quite large, making it impossible to annotate video with the level of detail common in smaller ten-hour scholarly projects” (194). If preservation is the first goal, then for heaven’s sake lower the bar as far as possible to get everything in, and worry about curation, peer-review, annotation, even cataloging, later on—especially if, as is said later in the paper, “the rapid deterioration and obsolescence of video recordings requires that we act now to make preservation transfers” (198). Some sense of this seems to be dawning by about page 10:

The summer institute process we utilized during project development is a model we found to be incredibly productive and satisfying. It is also very expensive. While we hope to do more summer institutes in the future, we know we cannot be entirely dependent on them for collection development. (201)

In fact, I think the whole model here is inverted: a cynic might say that the urgent need for preservation, and the potential benefit of access, are being used as a stalking horse for the funding of software development, and that domain experts who flesh out content to demonstrate the usefulness of that software are being bought off with peer review, probably at a price much lower than what’s going into programming. A more sympathetic respondent, one who had experienced mission creep in his own projects, might say that preservation/access, scholarship, and software development each have their own imperatives, and although they may seem potentially complementary in some ideal world, in a world of limited resources they are necessarily in competition. In the end, the only way you can tell which is driving the bus they’re all on is to look at which costs the most money and consumes the most time and attention.

Let’s assume that preservation and access remain the problems that EVIA needs to solve, and let’s see what that solution might look like, if the imperatives of these activities actually trumped those of peer review and software development. In other words, what would EVIA look like if the archival process were not turned upside down?

First, you’d establish a minimal metadata set that was required for deposit, and you’d make it as easy as possible for creators to contribute their materials. You wouldn’t require much at all; you’d ask them to fill out a web-based form with collection-level, not item-level, information, including generalizations about rights in the collection that were designed to err on the conservative side, and the result would be a record in your database and a sheet they could print out and enclose with the tapes when they mailed them in. You’d copy or transfer the tapes and mail them back. You might actually run this part through a commercial digitization service, since your ethnographers are using commercial video formats and there is, therefore, a consumer market for conversion services, with attendant economies of scale.
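The minimal, collection-level deposit record described above might look something like the sketch below. The field names and the conservative default rights statement are illustrative assumptions, not EVIA’s actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical collection-level deposit record: just enough metadata to
# accession the tapes, with rights defaulting to the conservative side.
# Field names are illustrative assumptions, not EVIA's actual schema.
@dataclass
class DepositRecord:
    depositor: str                    # the contributing ethnographer
    collection_title: str
    tape_count: int
    recording_years: str              # free text, e.g. "1994-1998"; item dates come later
    formats: list = field(default_factory=list)   # e.g. ["Hi8", "MiniDV"]
    rights_statement: str = "All rights reserved; no public access until cleared"

    def deposit_sheet(self) -> str:
        """Printable sheet to enclose with the tapes when mailing them in."""
        return (
            f"Depositor: {self.depositor}\n"
            f"Collection: {self.collection_title}\n"
            f"Tapes: {self.tape_count} ({', '.join(self.formats)})\n"
            f"Recorded: {self.recording_years}\n"
            f"Rights: {self.rights_statement}"
        )
```

The point of the sketch is the asymmetry: everything the depositor must supply fits on one form, and everything expensive (item-level description, annotation, rights clearance) is deferred until after the tapes are safely copied.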

Next, as soon as you had these materials in digital form, you’d make them available online, if the conservative rights declaration seemed to allow that. You’d encourage crowd-sourced annotation and comment, and you’d use that as input to a peer-reviewed product that would be distinguished from the pre-review one—but you would also make peer-reviewed products available for comment and annotation by readers, on the Wikipedia model, and perhaps actually using Wikipedia and Wikimedia as platforms, since ethnography of all subjects cries out for a plurality of perspectives. In other words, as soon as content could be made available, it would be made available, and while a peer-review process would be conducted, it would benefit, before and after, from public review, comment, and contribution: each video would be an evolving resource in a rapidly expanding collection. Access would draw in an audience, probably some from unanticipated quarters, and that audience would provide information, not just appreciation. Much of this could be accomplished by thoughtfully folding EVIA content into sites, services, and communities that already exist. However, this is not the likely future for EVIA. Looking forward, we can see that EVIA is now part of five other grant-funded projects, described on page 12 of Burdette’s paper. Of the projects described there, four are about software development (with one of those four also being about publishing), and only one is focused on accessioning new video collections.

As for crowd-sourcing or the dynamic development of annotated resources, EVIA believes that “the process of peer review implies that a piece of writing is fixed in time and that revisions void the validity of the peer review” (207). That seems to me like a remarkably stunted vision of peer review: isn’t it actually a process rather than a prize? Even when something’s peer reviewed for print, is the review by peers really complete at that point? What about reception? What about citation? What about impact? Some of this the EVIA participants seem to realize:

By choosing the weight of peer review and the functionality of persistent URLs, however, we cement our written content and create a static object that cannot fully utilize the dynamic capabilities of online publishing. We think that meta-annotation functionalities are perhaps the best solution, and so this is an avenue we continue to explore. (207)

Well, perhaps meta-annotation could allow the people, the audience, the public, to participate in a role other than that of consumer of packaged goods.

On the other hand, EVIA seems willing to envision authorship as an everlasting and exclusive activity: “We have the kind of material that the depositing scholars could spend the rest of their lives describing and analyzing and indeed, this has presented challenges to the completion of annotations because an individual’s ethnographic understanding usually continues to evolve” (207). By this logic, especially if completion of authorial annotation is a precondition for final accessioning and publication, EVIA is not likely to actually collect much until the creator dies—at which point, peer review, unless performed by Saint Peter, is probably irrelevant.

Preservation will always turn on sustainability: if the institution that runs the library or the archive should perish, then the collection is likely to disappear. However, it seems to me that audience and use are the keys to sustainability, for any collection and for any institution. If you demonstrate value to many people, you will be sustained, and your content will be preserved. Academic publishing always seems to make the mistake of assuming that its interests are too esoteric for the hoi polloi, and that the riffraff, if admitted into authorship, would trash the place. Automated spam in poorly designed and maintained blogs, wikis, etc. is a real problem, but that is just one more argument for contributing your content in an environment maintained by many, for a very large audience. EVIA can do this. It won’t result in the death of scholarship or the desecration of content. It might help to shift the balance in favor of preservation, while at the same time providing a more direct justification (based on use) for devoting resources to software development, and a more compelling justification (based on impact) for tenure.
