December 25, 2007

Booking Images

So last night I had an interesting conversation with an acquaintance. He had a small problem. To wit, he had used images from another published work in a book that he had just published. The images in this case were all famous works of art, all in the public domain. His question was how much he ought to worry about using the images from this previously published work, which was still in print and therefore in copyright.

My initial response was a simple one: "Did you use a large enough number of images from this work to infringe its 'anthology' copyright?" (That is, the copyright implicit in merely selecting and arranging already created content.) Since his batting average was roughly 20% of the total from the previous publication, he appeared to straddle the line.

Next I asked, "Are images of these masterworks commonly available from other publications--museum catalogs, coffee-table books, and so forth?" Most were, which offered the additional protection of plausible deniability. In brief, it would be difficult for the publisher of the source work to detect reuse of the images without a smoking gun in hand (i.e., a memo from the author stating where she got the works).

His concern was not a surprising one for someone new to publishing. He wanted to be open about the sources of his information. Yet he also wanted to know that he would be protected from infringement claims, something that publishing contracts do not always offer outright to authors. (After all, publishers trust their authors not to plagiarize and do not want to be in the position of having to police this basic expectation.) Was he being dishonest by failing to provide credit where credit was ostensibly due?

Full disclosure seems the ethical thing to do. And yet crediting sources that offered little more than exact reproductions of already public domain images may be granting unearned credit. Years ago, I built American history CD-ROM products rich in textual and pictorial primary sources that I commonly, dare I say, pilfered liberally from documentary readers and anthologies. As beholden as I was to these readers, their contribution was primarily reproduction. They had no legal claim to copyright in the underlying works. Mere reproduction of an image, or transcription of a textual document, does not meet the minimum threshold of creativity needed to support a copyright claim, so there was nothing to infringe. Crediting the "reprint" would have been, for all intents and purposes, a courtesy, though one I extended in most instances.

However, let's say for the sake of argument that I were producing a digital product on the formation of the Constitution. I might use twenty critical documents--many short and self-contained, such as Federalist essay no. 19--from a source that had assembled roughly 100 documents. Here the question of crediting the source becomes more complicated. On the one hand, I have used a seemingly substantial portion of the work, and presumably I should clear rights to use this combination of otherwise public domain documents. On the other hand, my digital product will encompass some 500 documents on the formation of the American Constitution, and the twenty I include are not all that rare. Would I be inviting trouble from a publisher if I were to request permission that the publisher does not necessarily have the right to grant? (Oh yes, publishers will sometimes try to assume rights that they may not possess.) Should I even bother crediting the publisher, which has done no more than reprint works that themselves might have been gathered, without credit, from earlier reprint anthologies?

If I seem to be crossing a line, think of it this way. Let's say I include the Declaration of Independence, several essays from the Federalist Papers, and several anti-Federalist essays from this work. It seems I ought to credit the source anthology. But nearly all of these items have been reproduced elsewhere by other publishers. Not only is it nearly impossible to know where I sourced the documents in question, but the anthology publisher is more than likely guilty of having pilfered them in turn. For these reproduced transcriptions of public domain works--in the case of the Federalist Papers, reproduced from the newspapers in which they first appeared--the more logical and useful citation is the work's original source. In fact, in citing unmodified primary sources, I commonly credit not the reprint anthology but the original source referenced in that anthology. Even when one reprint credits another reprint (it does happen), I have traditionally skipped over both reprints and cited the original source. It is, after all, the more useful bit of bibliographic information for the scholar seeking to better understand the provenance of the primary source.

Note, of course, the critical term above: unmodified. If the anthology I use has significantly edited the original texts (often through ellipses), then I must both clear permission and provide credit. In those instances, the publisher can rightly claim copyright on the modified version of the original work by virtue of those editorial elisions, which constitute the creative input needed for a copyright claim. The same goes for still images: modification would place a copyright stamp on the image that would require both permission and credit. (Think Andy Warhol silkscreening a Velázquez painting.) In the case of my interlocutor, whose concerns precipitated this entry, there was no modification by the publisher of the works. And so, in my view, there was no necessary obligation to cite the reprinter as the source for his own work.

November 15, 2007

New Venture in Literary Publishing

About two months ago, a friend with whom I play tennis mentioned an article that had appeared in the local business paper. It recorded the advent of a new literary journal in my hometown of New Haven. It was tentatively named the New Haven Review of Books, and she thought it appropriate to share because two years earlier I had tried the idea of just such a publication out on her. My reasoning then had been that New Haven, lying midway between New York and Boston, with Yale serving as its local fount of culture, could easily support a quality book review publication. New Haven is rife with intellectuals and artists, and perhaps its greatest problem is retaining them before they head off to, well, New York and Boston.

The magazine was intended as a bit of a lark. I thought it might be fun to do, as a way of reinvigorating my interest in things intellectual. The major drawback for me was quite simply lack of time—I was then a full-time editorial director at Thomson Gale (now Gale Cengage Learning), and my range of contacts among writers was limited. Moreover, my background was stronger as a publisher than as an editor. When I was an undergraduate at the University of Chicago, I had served as an editor for the student-run Chicago Review. During my doctoral studies, I then worked as editor and publisher for Response: A Jewish Contemporary Review, a nationally published quarterly journal of Jewish fiction, nonfiction, and poetry that had some rather impressive contributors over its forty years of existence. Because of the journal’s small size, I handled pretty much everything, from working with the printer to cashing the checks. It was an interesting experience and my introduction to the business of publishing.

Fast-forwarding to the present: as I was reading through the article in Business New Haven about this new publication, I learned that its founder was a fellow synagogue member (and a fine writer, to boot). When I caught up with him during Saturday services, I mentioned the article and expressed my interest in helping out.

“Do you write?” he asked.

“Not really,” I replied. “In fact, that’s not how I want to help. I’m really more interested in the business of publishing, so I’d like to assist on that front.”

“Really? That’s great. Publishing is the part I know little about, and that’s exactly the type of help we can use now. We published our first issue, and we want to become more established. We’re just not sure how to go about doing that.”

“That’s not too hard,” I responded. “In fact, that’s pretty much what I did years ago when I started working with literary journals.”

And this is how I come to write about the New Haven Review (no longer the New Haven Review of Books). So, over the next few months, I will record some of the work I am doing with respect to this endeavor, hoping what I write can serve as a bit of a primer for others thinking of starting their own literary publications. Let me point out that I will be concentrating on the dynamics of setting up shop and publishing a journal—not on editorial policy. That is left to the wiser heads in the volunteer group that has gathered around this effort. If you’d like to see what I’m talking about, go ahead and check out the online edition of the New Haven Review. The print version exists, for now, as a privately published and distributed, side-stapled edition.

November 14, 2007

Options, Embargoes, and Exemptions in Commercial Microfilm Publishing


As I’ve described elsewhere, commercial microfilm products largely comprise content from libraries or print publishers (such as newspapers or periodicals). Just a day ago, I had a phone call with an institutional contact about a proposed microfilm venture. In the course of our conversation, I reviewed several key issues affecting products of this nature. A summary of some of these should illuminate the available options and trade-offs for institutions and publishers about to engage with one another in a microfilm publishing arrangement.

ROFOs and ROFRs


Prior to the mid-1980s, and sometimes even later, micropublishers rarely included digital rights in their microfilm contracts. In the last decade, they no longer overlook this opportunity, especially those with the financial wherewithal to go digital or hopeful of eventually getting there. As a consequence, commercial microfilm publishing contracts commonly include provisions concerning the possibility of a digital edition. The simplest approach to addressing a digital version of a microfilm product is to include a “digital option.” Options come in various forms and follow standard legal practice. My contracts often included a right of first offer (ROFO), which grants the publisher the right to bid first, in good faith, on a digital project in the event that the content owner opts to digitize the content or a third party offers to do so, whether for free or for a fee. A variant is the right of first refusal (ROFR). This tends to be more restrictive because in this arrangement the content owner must give the publisher the right to match or meet any possible offer, especially from a third party. In the current climate, library sources find ROFOs more acceptable because they want the latitude to publish their content for free and not be subject to the restrictions (and obligations) implied in an ROFR.
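Because the two options are easy to confuse, here is a minimal sketch, in Python, of how each clause routes a digitization opportunity. Everything in it (names, structure, return strings) is invented for illustration and drawn from no actual contract.

```python
# Hypothetical sketch contrasting a ROFO with a ROFR.
# Names and structure are illustrative only, not drawn from any real contract.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Offer:
    bidder: str   # who proposes to digitize ("publisher", "third party", etc.)
    terms: str    # summary of the proposed terms


def rofo_process(owner_wants_digital: bool, publisher_bid: Optional[Offer]) -> str:
    """Right of first offer: before shopping the project around (or going it
    alone), the owner must first solicit a good-faith bid from the publisher."""
    if not owner_wants_digital:
        return "No digitization contemplated; the option lies dormant."
    if publisher_bid is None:
        return "Owner must invite the publisher to bid before proceeding."
    # The owner may accept or decline the bid, then pursue other routes freely.
    return f"Owner considers the publisher's bid ({publisher_bid.terms}) and may decline it."


def rofr_process(third_party_offer: Optional[Offer], publisher_matches: bool) -> str:
    """Right of first refusal: any outside offer must be presented to the
    publisher, which may match (or meet) it and thereby take the deal."""
    if third_party_offer is None:
        return "No outside offer; nothing for the publisher to refuse."
    if publisher_matches:
        return "Publisher matches the offer and takes the digitization deal."
    return f"Publisher declines; owner may accept the offer from {third_party_offer.bidder}."
```

The practical difference the paragraph above describes falls out of the two signatures: a ROFO only obliges the owner to ask the publisher first, while a ROFR ties every subsequent offer, even a free open-access route, back to the publisher.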

Cannibalization

In my own experience, library sources have rarely raised serious objections to granting a digital option. After all, the provision is just an option and not a binding agreement. More important to content owners, especially libraries, are restrictions on the digitization of the data. For the publisher, these restrictions have a simple purpose: to prevent the “cannibalization” of the microfilm product by the availability of a digital version. (Cannibalization is the common term for sales lost to the dissemination of an alternate edition or version of the same content.) However, many libraries—especially research libraries, which are often the main sources for scholarly microfilm sets—have public and scholarly missions that emphasize expanded access to knowledge and, therefore, to their holdings. (I am perhaps too bold here, but this argument, which I have heard frequently, seems disingenuous in the extreme. Many of the institutions claiming this free knowledge mandate are private and do not grant free on-site access to their holdings to unaffiliated individuals, so why the rush to give it all away free to the teeming masses?)

The expanded access mission of these institutions generally translates into “open access” (read “free”) digital versions of their collections. Where once upon a time funding agencies favored microfilm preservation, providing grant money either for equipping micrographics departments within libraries or for paying vendors under the direction of the grantee, today those same funds have been aggressively financing the provision of digital equipment and services. So here we have a fundamental conflict of interests. The microfilm publisher would, of course, see any open access digital version of the content from the prospective product as anathema. Libraries, on the other hand, see opportunities to upload digital versions of their holdings—much simplified by the provision of a microfilm version of their content from which to work—as critical to their mission to augment access and, equally important, to raise the visibility of the collection and its host library.

Embargoes & Exemptions

The result for most microfilm contracts is the arrangement of a limited “embargo” on the digitization of content from the microfilm, especially by the source institution and for free. Few institutions, however, will tolerate a complete embargo. As a consequence, it is not uncommon to include an exemption of up to ten percent of the content for any form of dissemination by the source institution. (Indeed, for publishers, this allowance may favor the collection’s distribution in microfilm by serving up a digital teaser for the more complete collection.) Outside of that exemption, embargoes vary. With the Library of Congress, I negotiated a 36-month embargo on digital conversion and dissemination from the date of publication of the microfilm edition of the content. Columbia University’s Rare Book and Manuscript Library, on the other hand, opted for a yearly addition of ten percent to the first-year exemption, so that by year ten, the entire collection would be available for digitization and dissemination without restriction. Note that embargoes are especially important to the publisher if it decides—as some do today—to create the microfilm collection from a digital imaging process. (Yes, as backwards as this seems, there are a number of good reasons for creating microfilm collections in just this manner.) Since digital images, along with gratis copies of the microfilm, will be supplied to the source institution, the embargo is critical to the publisher. On the other hand, the gratis provision of a complete set of digital images represents a strong temptation to the library.
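To make the Columbia-style schedule concrete, here is a small sketch of an exemption that starts at ten percent and grows by ten percentage points a year, releasing the whole collection by year ten. The percentages come from the arrangement described above; the code itself is only an illustration.

```python
# Sketch of the escalating-exemption schedule described above:
# a 10% first-year exemption that grows by 10 percentage points each year,
# so the entire collection is free of the embargo by year ten.

def exemption_for_year(year: int,
                       initial_pct: float = 10.0,
                       annual_increase_pct: float = 10.0) -> float:
    """Percentage of the collection the source institution may digitize
    and disseminate in a given contract year (capped at 100%)."""
    if year < 1:
        raise ValueError("Contract years start at 1.")
    return min(100.0, initial_pct + annual_increase_pct * (year - 1))


if __name__ == "__main__":
    for year in range(1, 11):
        print(f"Year {year:2d}: {exemption_for_year(year):5.1f}% of the collection exempt")
    # Year  1:  10.0% ... Year 10: 100.0%
```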

For the publisher, of course, provision of an exemption does not necessarily result in digitization and dissemination of the content by the content owner. First, it is possible that the publisher may “go digital” with the product before the embargo period finishes. Second, and more likely, the content owner will not have digitized the content by the time the selling arc of the product has ended (usually 3 to 5 years), owing to a lack of funds, interest, staff, or what have you.

Nonetheless, content owners often do not want to relinquish the ability to digitize, should a grant funder or a more generous library director come waltzing through the door. Libraries want these exemptions and, in an ideal world, to limit the embargo as much as is reasonable. In the course of negotiation, a content owner can always just stand firm on its open access interests—seeking large exemptions or minuscule embargoes. But I recommend trading on the negotiable portions of the contract. For example, the source institution might trade a greater exemption or a shortened embargo for reduced royalty payments or some other cost-saving concession that sweetens the bottom line for the publisher. This give-and-take at least provides some form of compensation for revenues potentially sacrificed to cannibalization. Content owners that care little about exemptions or embargoes—because they have no internal plans to digitize the content—may adopt the opposite tactic: in exchange for a reduced exemption or a longer embargo, they seek a higher royalty, a larger advance, or a guaranteed yearly minimum.

As I’ve recommended to content owners with whom I’ve negotiated, publishing contracts are not written in stone. They actually comprise a set of moveable parts that may be added, revised, or discarded, and since each of these parts represents a set of obligations—if you will—each has opportunity costs that can be traded. In brief, contracts are “barterable” and should be treated accordingly, especially for commercial microfilm products, which are technically “print-on-demand” products with notoriously long lifespans.

October 12, 2007

Microfilm Primer: Close Calls and Tragedies in Microfilm Publishing

Previous Post: Microfilm Publishing Primer: Rumors of Its Death

Previous Post: Microfilm Primer: Why Microfilm Publishing Still Pays…in More Ways Than One

In the course of sniffing out new collections for publication at Primary Source Microfilm, I stumbled across a record in Columbia University’s online public access catalog describing its Federated Press collection. I had no idea what this collection represented, although its size, at roughly 250 rolls, made it an attractive prospect. A little research revealed that the Federated Press had been a relatively short-lived wire service for labor-friendly publications during the heyday of the 20th-century labor movement. (It boasted Betty Friedan as one of its early journalists.) The collection comprised news releases, organized by chronology and subject, as well as biographies. When I asked Columbia staff about the collection, imagine my dismay when I learned that they had no real idea where the master negative was located. Since the master negative is critical to any high-quality reproduction of the set for prospective customers, I faced the grim prospect of working from the heavily scratched microfilm positive that sat in the Rare Book and Manuscript Library. Not good. Only after several weeks of searching did Columbia’s library staff discover the master negative at Preservation Resources, which had stored it, in turn, at one of Iron Mountain’s many facilities. In the end, this little-known and little-used collection was successfully published, resulting in a revamped catalog, a new print negative placed in Primary Source Microfilm’s cold vault, another negative and positive delivered to Columbia University for local safekeeping, and, of course, the distribution of a number of positive copies among major research libraries throughout the United States with strong programs in labor history. A close call, that one!

Sadly, not all stories turn out so well. Around that same period I discovered another collection, this time a gathering of manuscripts, serial publications, and monographs—many quite rare—from and about the Yucatan region of Mexico. Based on a bibliography I discovered, I began to root around for its location. I started at the University of Alabama, which owned an old and heavily worn positive set of the more than 140 rolls that comprised this 35-millimeter microfilm collection. With the assistance of the bibliography’s editor, Edward Terry, who still taught at the university, I learned that a master negative resided in the Yucatan, little attended to by its owners. There was much to concern me. The Yucatan is notorious for its humidity, and at the time of my visit, the region had just weathered a devastating hurricane that tore up much of the countryside and damaged parts of Merida, where the film set resided. The microfilm itself was stored in a dilapidated building with no means of controlling temperature or humidity. And need I add that the microfilm was acetate stock?

After laborious negotiation, we brought the negative to the United States and began the difficult process of creating a second-generation print negative on silver halide polyester microfilm in order to give this collection of very rare material a fighting chance of survival. Fortunately, the microfilm had somehow managed to survive without suffering the baleful effects of vinegar syndrome. This was good. But, lo and behold, as we worked our way frame by frame through the film, we were shocked to discover a host of imaging problems that, despite our best efforts, we could not remedy. The microfilm, which had been shot in the Yucatan, featured, among other things, the thumbs of the operator holding down book pages, indicating the absence of a glass plate. Moreover, there had been no effort to control the lighting. As a consequence, density readings varied widely—sometimes from one photographic frame to the next—with dark images sidling up next to overexposed shots. Finally, out-of-focus pages started showing up on a regular basis. This final blemish was the straw that broke the back of the product. Primary Source Microfilm, in good faith, could not distribute it. Even worse, the original library of materials had been dismantled some years after the creation of the microfilm, making any prospect of reshooting the collection impossible. The money that the University of Alabama had spent so many years ago to create this microfilm set was all for nought.

Although there is no guarantee of it, had the microfilming been performed under the auspices of a publisher, which must always look over its shoulder at the customer’s reception, many of these problems might never have happened. In the end, despite the painful decision not to redistribute the collection because of its many flaws, Primary Source Microfilm nobly--and at my urging--created gratis negative and positive copies of the entire microfilm set on polyester stock for the Instituto de Lengua y Cultura de Yucatán and the University of Alabama, respectively, to prevent further deterioration of the acetate originals.

Microfilm Primer: Why Microfilm Publishing Still Pays…in More Ways Than One

Previous Post: Microfilm Publishing Primer: Rumors of Its Death

The purchase of any good or service requires an agreement on the need for it between the seller and the buyer. Microfilm publishers and customers are no different in that regard. The substance of that agreement rests on the following reasons.

  1. Microfilm is a preservation medium, capable of lasting hundreds of years with proper care;
  2. It is relatively inexpensive to duplicate;
  3. Although cumbersome to use, the basic technology to view the data is simple, requiring little more than a light and a lens;
  4. Security of the original material from theft or wear and tear is supplied without having to restrict access to the content;
  5. Space savings can be very real, especially for libraries in metropolitan areas where the cost of new space can be formidable unless you resort to an annex site.

Unfortunately, while these reasons may explain why microfilm is still a useful medium to own, they do not explain why we need commercial micropublishers. After all, many libraries and archives own microfilm equipment for the capture of data internally—for the very reasons stated above. So why micropublish?

Here is where a second set of reasons comes into play.

  1. Micropublishers bear not only the cost of creating the microfilm set, but may even finance improvements to conservation of the original materials; creation, correction, or deepening of the collection’s cataloging; and duplication and dissemination of the microfilm set.
  2. Micropublishers raise the visibility of the original collection through their sales and marketing efforts, thereby augmenting the source institution’s reputation, informing scholarly users about the collection’s existence, and placing hard-to-access content in users’ hands.
  3. Micropublishers serve as an added layer of security for the long-term preservation of the content by serving as a back-up repository for master or print negatives of the microfilm set.
  4. Micropublishers become a revenue source—through royalty payments—for the source institution.


This added layer of reasons draws on the muscle of the marketplace to fund creation of the microfilm and motivate the micropublisher to preserve and disseminate the content in this format. And this is no small matter. I have witnessed firsthand what can happen to microfilm collections that were born without the lever of the market to ensure their proper creation and care. Here are just two stories...

July 2, 2007

University Presses: Going Digital

University presses are remarkable, in some ways, for how utterly behind the times they are. This little item regarding Ohio State University Press was brought to my attention. It concerns the decision to take selected backlist titles, digitize them, and put them on the Web for free. This seeming innovation, however, is hardly new at all and pales in comparison to the plunge the National Academies Press took when it decided to digitize and make available all of its titles. Of course, the digitization of university press publications, such as by Google, is a complicated issue because the knowledge base among university presses concerning their digitization options and requirements—with respect to workflow, print-on-demand technology, long-term electronic preservation, and so many other issues—is so limited.

And so, a discussion of those issues will become the substance of the next series of posts that I make to this blog.

March 20, 2007

Microfilm Publishing Primer: Rumors of Its Death

I’ve worked in commercial microfilm publishing for nearly a decade, developing products in that medium for consumption by researchers. In 1998, I joined Primary Source Media, which had been founded in the 1960s as Research Publications. The change from Research Publications to Primary Source Media presumably foretold the rapid decline of commercial micropublishing among libraries as they switched over to electronic products. Eight years later, just before my departure in early 2006, only part of the prophecy had come to pass.

Electronic products had, indeed, taken off, but not necessarily at the expense of microfilm products. (This no doubt explained the change in name from Primary Source Media to Primary Source Microfilm five years later). Lesson to be learned? Generally speaking, while commercial micropublishers have seen attrition in the sale of microfilm, the rate of attrition has been far less than anticipated. So how has this shaken out in the market over the last decade?

Although grateful for the slowness of the attrition, micropublishers continue to bemoan the lack of growth in their ("top-line") total revenues. One result over the last ten years has been the gobbling up of smaller firms. In late 1999, ProQuest acquired Chadwyck-Healey. In early 2003, it followed up by purchasing Norman Ross Publishing. In May 2004, Gale Cengage (formerly Thomson Gale) acquired Scholarly Resources. The rationale for acquiring these properties is not hard to see. First, microfilm publishing is very profitable, making microfilm divisions classic "cash cows." Second, microfilm imprints carry large swaths of content easily "repurposed"--that is, converted--into potentially lucrative digital products. (The cost of converting certain types of microfilmed content to digital form is far less than that of working from originals, but that is another subject.) The net result is that rumors of micropublishing’s imminent death remain...well...just that: rumors. Explaining why requires a little more investigation.

January 9, 2007

Back Issue Digitization Projects (BIDPs): Accessing the Whole Megillah

Several publishers already recognize that the most promising buyers of electronic versions of their back issues are institutional in nature, and those institutions will most likely be libraries. Driven by the needs of their patrons, public, academic, corporate, and even secondary school libraries "channel" research by raising the visibility of sources that the World Wide Web and search engines effectively flatten. Libraries are more than mere customers; they are promoters. They operate as a market for intellectual goods and as a partner for the redistribution of those goods, which can benefit publishers seeking to monetize their back issues.

Take the example of JSTOR, which successfully capitalized on the research needs of library patrons by recognizing those needs long before the publishers whose content it has digitized did. Seeded by money from the Andrew W. Mellon Foundation, JSTOR’s mission was to preserve and disseminate in digital form those academic serials, particularly in the arts, humanities, and social sciences, whose publishers lacked the fiscal wherewithal, vision, or interest to support a BIDP. For these publishers, JSTOR provided an enormous service in salvaging their journals. However, JSTOR is a third-party distributor. This means that in exchange for bearing the cost of digitization, it pays royalties based on "usage," which can decline over time into a fairly limited revenue stream. (I will blog about "usage royalty" arrangements another day to explain how they work.) While publishers managed to shed the risk of investment in a BIDP, they may have given up greater returns to their top and bottom lines as a consequence.

JSTOR's role as a third-party distributor is not unique. The variety and subtlety of its pricing models are. Instead of tiering its costs solely by FTEs or materials acquisition budgets, it cannily drew on specialized or weighted systems to classify customers. This modeling process started with JSTOR's use of the Carnegie Classification scheme for academic libraries (its main market). This tiered system offered a new way to calibrate and target an academic library's level of interest in the kinds of publications JSTOR featured. Since then, JSTOR has tailored its pricing for community colleges, public libraries, secondary schools, and museums in order to make its product offerings more responsive to the subscriber's mission and more sensitive to its fiscal ability. For community colleges, it uses the "Associate of Arts" designation in the Carnegie Classification scheme; for public libraries, it balances the size of the population served, materials budgets, and numbers of active serial subscriptions; for secondary schools, it divides them into three classes according to their "college-enrolling rate"; and for museums, it weighs operating budgets, materials acquisition budgets, numbers of active serial subscriptions, and numbers of FTE curators and librarians.
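To illustrate the general shape of such weighted classification, a sketch might look like the following. The tier names, weights, benchmarks, and dollar figures are invented for the example; they are not JSTOR's actual formula.

```python
# Illustrative sketch of tiered, weighted customer classification for pricing.
# All tier names, weights, benchmarks, and fees are hypothetical; JSTOR's
# actual models are not reproduced here.

def classify_public_library(population_served: int,
                            materials_budget: float,
                            active_serials: int) -> str:
    """Combine the three factors mentioned above for public libraries into a
    single weighted score, then map that score to a pricing tier."""
    # Normalize each factor against an assumed "large library" benchmark.
    score = (0.4 * min(population_served / 1_000_000, 1.0)
             + 0.4 * min(materials_budget / 5_000_000, 1.0)
             + 0.2 * min(active_serials / 2_000, 1.0))
    if score > 0.66:
        return "large"
    if score > 0.33:
        return "medium"
    return "small"


ANNUAL_FEE_BY_TIER = {"small": 1_500, "medium": 4_000, "large": 8_500}  # hypothetical

tier = classify_public_library(population_served=250_000,
                               materials_budget=1_200_000,
                               active_serials=400)
print(tier, ANNUAL_FEE_BY_TIER[tier])  # -> "small 1500" with these made-up weights
```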

A number of JSTOR’s targeted publishers, some belatedly recognizing the success of this business model, have jumped into the market with their own offerings, a list of which appears on the website of the American Association of University Presses. These include the University of California Press’s CALIBER, Cornell University’s EUCLID, and Duke University Journals Online. (One of the few early entrants prior to JSTOR was Johns Hopkins University Press's Project Muse.) Moreover, even though BIDP offerings may end up spotty (i.e., incomplete), more publishers have begun to recognize the value of staking their own claim instead of collecting royalties, which can range from middling to marginal, from third-party distributors.

Until the last few years, online serial products, including those with relatively long backfiles, have been supported through subscription models. More recently, however, publishers have begun to explore the outright sale of back issues. Take the case of Sage Publishing, which not only offers subscriptions to its backfiles but also permits institutions to purchase them outright. What is unusual about Sage's model is the way in which it pitches its offering. Third-party distributors typically provide serial subscriptions (and sometimes even "archival purchases") prepackaged by topic (e.g., religion, science, health) or target audience (e.g., high school student, college student, academic researcher). Aggregators like Gale, EBSCO, ProQuest, JSTOR, and sundry others all offer packages like this. Sage, however, has taken the unusual step of allowing customers to purchase individual title backfiles. Alas, it is not clear whether this purchase is in lieu of or in addition to a subscription. As technology continues to permit microtargeted customization, it may be only a matter of time before prefab packages themselves become a thing of the past for serial subscribers and buyers.

In the meantime, the economics of subscriptions and archival purchases represent two different business models. Each has its respective risks and benefits. Archival purchases feature higher price tags because the transaction is structured as an outright “buy” of the content. The customer obtains ownership rights to the content, which obliges the publisher or distributor to ship copies of the digital files to the customer on a hard storage medium (e.g., a USB hard drive) for offline storage or to create a “dark archive” with a trusted third party (such as the Library of Congress). In the end, this transfer permits customers to locally load, host, and disseminate the data from their own or contracted servers in the event of the product’s discontinuation by the publisher. Unlike electronic subscriptions, should the product go away, the customer is left with something more than a mere history of access.

For publishers, archival purchases can offer a more aggressive return on the investment, but the publisher or distributor will still have hosting and access management obligations. These are currently recouped through “access fees.” For libraries, archival purchases offer two benefits: one is ownership of the content, something that online subscriptions simply do not permit (lease-to-buy options are rare because of the accounting nightmares for publishers); the other is the ability of librarians to use discretionary portions of their materials acquisition budgets instead of their typically stressed and often preallocated serials acquisition budgets. The downside for libraries (and therefore publishers) is the growing resistance to access fees as they accumulate within a library’s budget.
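As a back-of-envelope way to compare the two cash-flow structures, consider the following sketch. Every dollar figure is hypothetical; the only point is that subscriptions recur indefinitely, while an archival purchase front-loads a large one-time payment followed by a smaller annual access fee.

```python
# Back-of-envelope comparison of the two business models described above.
# Every dollar figure is hypothetical; only the shape of the comparison matters.

def cumulative_subscription_cost(annual_subscription: float, years: int) -> float:
    return annual_subscription * years


def cumulative_archival_cost(purchase_price: float,
                             annual_access_fee: float,
                             years: int) -> float:
    return purchase_price + annual_access_fee * years


if __name__ == "__main__":
    SUBSCRIPTION = 6_000   # hypothetical annual backfile subscription
    PURCHASE = 25_000      # hypothetical one-time archival purchase
    ACCESS_FEE = 1_000     # hypothetical annual hosting/access fee

    for yr in (1, 5, 10, 15):
        sub = cumulative_subscription_cost(SUBSCRIPTION, yr)
        arc = cumulative_archival_cost(PURCHASE, ACCESS_FEE, yr)
        print(f"Year {yr:2d}: subscription ${sub:>8,.0f}  vs  archival ${arc:>8,.0f}")
    # With these made-up numbers, the archival purchase breaks even with the
    # subscription at year 5 and is cheaper thereafter, while also leaving the
    # library owning the content if the product is ever discontinued.
```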

Fee accumulation and a unique interface for each offering create special resistance to backfile products that comprise only a single title, or just a few, with modest circulation at best. Major titles stand a better chance, even as one-offs. Best of all are large sets of titles, like those available from Sage, Elsevier, OVID, and others that assemble many titles in one place for institutional consumption.

Bennett Lovett-Graff
Publisher, Content Solutions
National Archive Publishing Company
Digitization, Microfilming, and Publisher Services

January 8, 2007

Back Issue Digitization Projects (BIDPs) by the Pound: Article and Issue Business Models

As I've described in a previous post on business models for back issue digitization projects (BIDPs), serial publishers have begun to explore a variety of ways to monetize their backfiles. One such way is to sell their content "by the pound," if you will.

There are several by-the-pound scenarios. While institutional subscriptions to the New York Times are available to public, academic, and other library patrons through ProQuest, individual consumers with no particular affiliation who want immediate access and the ready ability to view, download, or print have not proven averse to hunting down and plunking down hard cash for individual articles. Consider the pre-1981 portion of the New York Times, which uses ProQuest's Archiver service to permit users to purchase individual New York Times articles. Alas, there are limits on what is available owing to copyright or cost. As outlined in the New York Times's helpful FAQs on what customers can and cannot purchase, photographs, display ads, and classified ads cannot be had. But even with this limitation, the service plays an important role for researchers.

As a newspaper, The New York Times throws additional curveballs. Newspapers tend to be complicated animals, with their intricate layouts and many intended audiences. Periodical publications feature simpler designs because of their narrower publishing objectives and audiences, which also makes them easier to digitize and sell in electronic form. Several magazines have recognized that reality. The Atlantic Monthly offers a good example through its creative pricing for electronic backfile access. The publisher here capitalized on the flexibility of ProQuest Archiver not only to sell by the pound but also to supply volume discounts through a "pass" system for those who want to buy in bulk.

Then there are the many other publishers who have opted for simpler "go-it-alone" arrangements. Instead of using a prepackaged (albeit dexterous) service like ProQuest Archiver, some monetize their backfiles by loading tables of contents at the issue level and introducing basic shopping cart software for straightforward per-article purchasing. The American Institute of Aeronautics and Astronautics publishes nine journals. Its flagship journal, the AIAA Journal, lets users touch down in the table of contents of each volume. Users gain access to the first page (this one is from the first article in volume 3, 1965) before being prompted to purchase the full article by clicking the "Add to Cart" button. (Articles seem heftily priced at $25.00 per item, although for science, technology, and medicine publications, this is not unusual.) AIAA members are, of course, encouraged to include their membership identification number to receive discounts. For subscription-based (rather than membership-based) publications, a current subscriber account number might serve just as well for discounted access to articles from the periodical's backfile.
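A shopping-cart approach to this kind of per-article selling, with a member or subscriber discount applied at checkout, could be sketched roughly as follows. The $25.00 list price comes from the example above; the discount rate, the article identifiers, and the cart logic are all made up for illustration.

```python
# Minimal sketch of per-article purchasing with a member/subscriber discount.
# The $25.00 list price is mentioned above; the 20% discount, the ID format,
# and everything else here are hypothetical.

LIST_PRICE_PER_ARTICLE = 25.00
MEMBER_DISCOUNT = 0.20  # hypothetical flat discount


def price_cart(article_ids: list[str], member_id: str | None = None) -> float:
    """Total a cart of articles, applying the member discount if an ID is given."""
    subtotal = LIST_PRICE_PER_ARTICLE * len(article_ids)
    if member_id:  # in practice this would be validated against a membership database
        subtotal *= (1 - MEMBER_DISCOUNT)
    return round(subtotal, 2)


print(price_cart(["AIAA-1965-v3-a1", "AIAA-1965-v3-a2"]))             # 50.0
print(price_cart(["AIAA-1965-v3-a1", "AIAA-1965-v3-a2"], "M-12345"))  # 40.0
```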

A variant on paying per article is "paying per view." The difference is more semantic than actual; pay-per-view arrangements are simply more forward about the time limits they impose on access. Science exemplifies this approach. As perhaps the premier publication in its area, Science has aggressively relaunched itself in the digital arena. Its fully digital backfile is automatically available 24/7 to all dues-paying members of the American Association for the Advancement of Science. This arrangement has the salutary effect of transforming the backfile into a membership drive-and-retention tool. For nonmembers, access is a more highly restricted and therefore more expensive affair. Not only must nonmembers pay for access to an article, but Science imposes a 24-hour, use-it-or-lose-it time limit per article, presumably in order to manage IP address traffic. Still, the ability to read, print, and download is there, little different from other BIDPs on the market.
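The use-it-or-lose-it window is easy to picture as a timed access grant. The sketch below is purely illustrative and assumes nothing about Science's actual systems beyond the 24-hour limit mentioned above.

```python
# Sketch of a 24-hour "use it or lose it" pay-per-view grant, as described above.
# The grant structure and identifiers are illustrative, not any publisher's system.

from datetime import datetime, timedelta, timezone


def grant_access(article_id: str, now: datetime | None = None) -> dict:
    """Record a purchase and the window during which the article may be viewed."""
    purchased_at = now or datetime.now(timezone.utc)
    return {"article_id": article_id,
            "purchased_at": purchased_at,
            "expires_at": purchased_at + timedelta(hours=24)}


def can_view(grant: dict, now: datetime | None = None) -> bool:
    """Viewing (reading, printing, downloading) is allowed only inside the window."""
    return (now or datetime.now(timezone.utc)) < grant["expires_at"]


g = grant_access("hypothetical-article-id")
print(can_view(g))                                           # True immediately
print(can_view(g, g["purchased_at"] + timedelta(hours=25)))  # False after the window
```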

Finally, there is the option of acquiring the entire issue itself. There are certain inherent advantages to this mode of delivery through distributors like Zinio, especially for popular magazines that want both to defend their copyright and to defend themselves from intellectual property violations of their own making. Zinio adopts a newsstand sales model: subscribers purchase digital versions of complete back issues at prices that differ little from print back orders. What customers receive is online access to full-color, high-resolution JPEGs of the original work. Zinio's presentation does not permit text searching, a weakness for users who demand that form of retrieval. Nonetheless, it does curtail infringement by webcrawling software that scrapes the Web (not a small point in light of Google's recent court victory) or by individuals who resort to something as simple as blocking out, copying, and pasting text from HTML or PDF files. (Of course, JPEGs, GIFs, PNGs, and other formats can be outfitted with "hit-term highlighting," which simultaneously allows full-text searching and prevents Web scraping, but that's another story.) Moreover, Zinio's presentation helps overcome the Tasini court ruling that pinched periodical aggregators and publishers who sought to remarket their content in a "disintermediated" form (e.g., articles and images separated from one another). By keeping the entire issue intact, Zinio follows Tasini's allowance to reproduce content without reacquiring rights from authors, illustrators, or photographers. As such, Zinio is an ideal, although limited, tool for publishers seeking to protect their content and meet intellectual property requirements that might otherwise prevent them from making their content available for electronic consumption.
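For the curious, the parenthetical about "hit-term highlighting" works roughly like this: the searchable text and its word coordinates live server-side, and a query returns only the page image plus rectangles to paint over it, so no machine-readable text is exposed to scrapers. The data structures below are an illustrative sketch, not any vendor's implementation.

```python
# Sketch of "hit-term highlighting" over page images: text and word coordinates
# (e.g., from OCR) stay on the server; the client receives only rectangles to
# overlay on the JPEG. Structures here are purely illustrative.

from dataclasses import dataclass


@dataclass
class WordBox:
    word: str
    x: int
    y: int
    width: int
    height: int


def highlights_for_query(page_words: list[WordBox], query: str) -> list[tuple[int, int, int, int]]:
    """Return bounding boxes for every occurrence of the query term on the page.
    The client paints these over the page image; the text never leaves the server."""
    term = query.lower()
    return [(w.x, w.y, w.width, w.height) for w in page_words if w.word.lower() == term]


page = [WordBox("labor", 120, 340, 52, 14), WordBox("strike", 180, 340, 60, 14)]
print(highlights_for_query(page, "Labor"))  # [(120, 340, 52, 14)]
```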

Bennett Lovett-Graff
Publisher, Content Solutions
National Archive Publishing Company
Digitization, Microfilming, and Publisher Services

Back Issue Digitization Projects (BIDPs)

Until recently, for publishers who haven't digitized back issues of their publications, back issue digitization projects (BIDPs) have seemed little more than a royal headache with an uncertain return on investment. More often than not, publishers operate in relative ignorance of their digital options and even of their monetization opportunities. There are a number of options respecting BIDP standards, implementation platforms, and markets, so it hardly comes as a surprise that publishers, not knowing how to proceed, simply pass on the opportunity altogether.

Presently, tens of thousands of periodicals are electronically available on websites either directly from their publishers or, more commonly, from periodical aggregators. For the latter, who often take on the responsibility for digitization, most of these publications go back in electronic form anywhere from five years to three decades, to when such aggregators (e.g., LexisNexis or Dialog) first came on the scene.

Since then, several established publishers (Elsevier) and more recent aggregators (JSTOR) have aggressively engaged in BIDPs--defined here as projects that encompass runs of journals from their first issues to the most current--of either their own or others' content. Yet despite these more recent ventures, large swaths of copyrighted back issues go undigitized, breathing artificial life into back issue distributors (PastPaper.com or MillionMagazines.com) and giving succor to Google-like initiatives to digitize and freely distribute copyrighted but unlikely-to-be-defended serial publications (either "orphaned" by defunct publishers or neglected by extant ones), unless actively stopped. The growing pressure to digitize and distribute everything, a trend that Google's library partnerships epitomize, represents a very real threat to publishers who opt not to explore aggressively ways to monetize BIDPs of their own content. Consider the case of a project I worked on for Wolters Kluwer's Ovid, which rushed through a 4-million-page BIDP in some six months (with content sourced from library partners), resulting in a $50K-per-sale product that saw a near-immediate ROI.
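For a rough sense of why a project on that scale can pencil out, here is a quick break-even calculation. The 4-million-page scope and $50K price per sale come from the example above; the per-page production cost is purely an assumption for illustration.

```python
# Rough break-even arithmetic for a large BIDP, using the figures cited above
# (4 million pages, $50K per institutional sale). The per-page cost is an
# assumption for illustration only.

PAGES = 4_000_000
PRICE_PER_SALE = 50_000          # from the example above
ASSUMED_COST_PER_PAGE = 0.25     # hypothetical all-in cost (scanning, QA, metadata)

production_cost = PAGES * ASSUMED_COST_PER_PAGE
breakeven_sales = -(-int(production_cost) // PRICE_PER_SALE)  # ceiling division

print(f"Estimated production cost: ${production_cost:,.0f}")
print(f"Sales needed to break even at ${PRICE_PER_SALE:,} per sale: {breakeven_sales}")
# With a $0.25/page assumption: $1,000,000 to produce and 20 sales to break even,
# which helps explain how a product sold into research libraries could see a
# near-immediate return.
```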

The questions surrounding page imaging, text capture, display, hosting, platforms, maintenance, billing, and customer service are legion--and the answers are inevitably driven by cost, specifications, capacity, and customer expectation. While infinite electronic ink can be spilled on all of these issues, none of them much matters without a business model and target market to support the BIDP business case. A number of publishers and distributors have since begun the process of creating these models (with more to come). These models include the sale of individual articles, or articles at a volume discount, to individual consumers; entire individual issues in electronic form to consumers; annual subscriptions to complete backfiles of the journal(s) to consumers or libraries; and entire archival backfiles (most often to institutional purchasers).

To get a better sense of how publishers have been selling by the pound or the entire animal, see my respective articles on article/issue-based and entire backfile business models.

Bennett Lovett-Graff
Publisher, Content Solutions
National Archive Publishing Company
Digitization, Microfilming, and Publisher Services