ASA: "Aggregating content and the substitution of subscriptions"
Tuesday, February 28, 2006
In this final session, customisation and interoperability were recurring themes, with some aggregators performing better than others in these areas. From the publisher perspective, the importance of balancing aggregator licensing with other revenue streams was highlighted, whilst aggregators will need to consider how usage will broaden in future, in terms of both media/format and geography.
ProQuest's Simon Beale (view slides – .pps) was not the first to warn against over-reliance on Google (the many librarians in the audience were no doubt nodding their heads off in agreement), and suggested that, in time, Google's efforts for the public good will have to conflict with its need to give investors a good ROI. Google has relevance to some of the wider debates which Simon listed:
- open access vs total access
- aggregation vs standalone delivery
- general search engines vs specialist retrieval systems
- big deals vs specialist content
- print vs online
Intermediaries, Simon asserted, are able to sit somewhat on the fence where open access is concerned: aggregators should embrace any option which supports their overall aim of providing access to quality content. The future will require content providers to become less English (language)-centric, with the increasing importance of "local" content necessitating multi-lingual interfaces, search options and the like within the next ten years. Simon suggested aggregators would also need to prepare for wider use of mobile devices for accessing scholarly content (it would be interesting to know if current statistics bear this out – I find it hard to imagine most of the "content users" I know wanting to access publications in this way).
EBSCO's Melissa Kenton (view slides – .pps) went back to basics to explain that libraries subscribe to aggregated databases to supplement their core journal collections, and that aggregated content proves popular with undergraduates, who are less fussy about the source of their content. Publishers, meanwhile, use aggregators to reach new markets, e.g. public libraries, smaller colleges and high schools. The major difference between subscription and aggregator access is that databases do not guarantee to provide content in perpetuity (as the aggregator does not own the content it licenses).
Both Melissa and Simon conceded that very few databases are sold via agents (less than 1%, in ProQuest's case), with orders tending to come direct to the aggregator. Following a question from Scholarly Information Strategies' Chris Beckett, both confirmed that publishers are given some control over the markets their (aggregated) content is sold into, although Melissa noted that EBSCO prefers not to license content with such stipulations, as they are costly to comply with.
Southampton University's Gordon Burridge enquired about libraries' ability to customise aggregator collections according to their needs, both in terms of content and interface. Whilst ProQuest is "aiming for more commonality" in its interface, EBSCO's aggregated databases must take care not to conflict with other EBSCO products, and thus do not enable the creation of a library-specific e-journal collection (which is possible for subscribed content accessed via the EBSCO EJS package). Queries about Gale's capabilities in this area were addressed by delegate (and Gale representative) Diane Thomas, who confirmed that Gale enables libraries to create customer-specific collections (thus allowing the "buy-by-the-drink" purchasing referred to on the previous day by Rick Anderson).
Consultant John Cox's polished performance at the podium (view slides – .pps) posited a second distinction between journals and databases, whereby journals are the "minutes of science" while databases are used "largely for teaching". Embargoes, being considerably more obstructive to research than to teaching, are therefore enough to resist the cannibalisation of subscriptions by aggregated content – though they should be reasonable, i.e. anything more than 12 months is unacceptable. Title specificity continues to be a factor, such that libraries are more likely to replace a cancelled subscription with on-demand document delivery than with access to an aggregator collection.
Not so, argued the ever-voluble Chris Beckett of Scholarly Information Strategies, whose recent survey indicated that 48% of respondents considered a database to be an adequate substitute for a subscription. Some scholarly joshing followed, and eventually the two agreed that cannibalisation is a risk, but primarily from budget squeeze; databases are a secondary issue.
Order was restored by Helen Edwards of the London Business School, a postgraduate business school which subscribes to 700 journals and 137 aggregator products (view slides – .pps). LBS has invested in creating its own interfaces, with relatively "deep" links to available content, to provide interface commonality for its users. (It has also registered its link server with Google Scholar, to enable "surfacing [of] content through a range of access points".) Helen reiterated the importance of customisability and interoperability: libraries need to be able to stamp their "seal of approval" on the resources they license, and those resources must recognise that they are but one piece of a wider picture.
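(For those less familiar with link servers: they are typically addressed via the OpenURL standard, ANSI/NISO Z39.88-2004. The Python sketch below – with placeholder citation metadata and a hypothetical resolver address, since LBS's actual configuration wasn't shown – illustrates the kind of request a source such as Google Scholar constructs, which the library's link server then turns into a deep link to whichever licensed copy it holds.)

```python
from urllib.parse import urlencode

# Hypothetical resolver address; each library substitutes its own.
RESOLVER_BASE = "https://resolver.example.edu/openurl"

def build_openurl(atitle, jtitle, issn, volume, spage, date):
    """Build an OpenURL 1.0 link describing a journal article.

    The link server receives this citation metadata, checks the
    library's holdings (subscriptions, aggregator databases, document
    delivery options) and redirects the user to an appropriate copy.
    """
    params = {
        "url_ver": "Z39.88-2004",                       # OpenURL version
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # journal article metadata
        "rft.genre": "article",
        "rft.atitle": atitle,   # article title
        "rft.jtitle": jtitle,   # journal title
        "rft.issn": issn,
        "rft.volume": volume,
        "rft.spage": spage,     # start page
        "rft.date": date,       # year of publication
    }
    return RESOLVER_BASE + "?" + urlencode(params)

# Placeholder example: the sort of link embedded in a search result.
print(build_openurl(
    atitle="An Example Article",
    jtitle="Journal of Examples",
    issn="1234-5678",
    volume="12",
    spage="34",
    date="2005",
))
```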
Blackwell Publishing's Steven Hall brought the conference to a close with a presentation (view slides – .pps) which he suggested might have been called "practising safe aggregation" ... some publishers abstain; some practise a somewhat unsatisfactory withdrawal method. The mass of aggregators can be broken down by focus – be it a niche market, an extension of A&I (abstracting and indexing) activities, or a specific discipline. Some are complementary to publishers' activities; others are alternatives. The complementary business model comprises:
- an embargo (12 months is acceptable for STM, but not for humanities)
- no archival rights for libraries
- licensing of collections only (not individual journals)
- a "one stop shop" for content (meeting the needs of undergraduates hunting for citations)
- functional (i.e. non-sophisticated?) interfaces
- non-core markets
The alternative business model, by contrast, comprises:
- no embargo (a strong cannibalisation risk, though in some markets there may be advantages which outweigh the potential losses)
- archival rights (maybe)
- licensing of individual journals
- emphasis on full text offering, not A&I merits
- competition in core markets
Aggregators can also add value to the content they license, through:
- other types of content added to "core" content
- contextualisation to assist understanding
- "smart" searching
- interactivity
Finally, the key terms for publishers to consider when licensing content to aggregators:
- royalty rates/model
- currency/embargo
- archival rights
- length of agreement
- reporting of customers/titles/usage
- sub-licensing (the longer the chain, the less control you have)
posted by Charlie Rapple at 8:24 pm