Wherein the IngentaConnect Product Management, Engineering, and Sales Teams
ramble, rant, and generally sound off on topics of the day
 

IngentaConnect Visit Africa

Monday, October 23, 2006

Like many people reading this, my experience of business travel is predominantly that of visiting smart but lifeless European hotels, airports and offices, where the most exciting cultural experience is discovering a slightly different label on a can of Coke, or being entertained by products with names that translate into rude words in English (Swedish Plopp chocolate, anyone?).

However between 16th and 18th October 2006 I had the great privilege of representing IngentaConnect at the African Journals OnLine Publishing Project (AJOPP) Workshop in Entebbe, Uganda. As you might imagine, it was an entirely different experience.

The workshop was run by the International Network for the Availability of Scientific Publications (INASP), and its aim was to assist the delegates in developing a strategic plan for the online presence of their journals.

With a relatively small number of people attending (12 people from 4 journals), the sessions were lively and interactive, covering all aspects of online journal publishing, from promotional techniques to the back-end facilities offered on platforms like IngentaConnect.

For me, the biggest eye-opener was the extremely difficult conditions in which publishers in developing countries operate:

Financial: Many of the publishers have piles of journals ready to send, but no money with which to send them. If and when the money is found, post sent between African countries can take two or three months to arrive. Similarly, phone calls, especially international calls, can be prohibitively expensive. In Zimbabwe, the currency is so unstable that prices in shops can change several times a day.

Infrastructure: Regular power cuts are commonplace across Africa. Ethiopia's telecommunications system is referred to by its long-suffering customers as the tele-confuse-ication system, as calls are very often directed to the wrong destination.

Political: Delegates told of riots on campus where police shoot and kill students.

The inevitability of difficulties in publishing in developing countries seems to have led to a kind of calm determination with which the publishers attack and overcome problems and get their information disseminated - a kind of slow-burning "spirit of the Blitz" that carries on and on through Africa's continuing challenges.

With the conclusion of the workshop came a leisure trip to Kampala. Being suddenly enveloped in people, noise, traffic and general chaos, I have to confess to a degree of culture shock. Whereas in Europe you get the feeling of being controlled and protected by laws, signs, road markings, stop/go signs, queues and systems, in Uganda the people rule over their environment - and people are everywhere! With time pressing on, we returned to Entebbe and boarded our planes back to our respective countries, everyone, I think, slightly wiser... but in different ways.

Incidentally, I did find an entertainingly named product in Uganda. Just what is Kevin Juice made of, do you think?

posted by EdMcLean at 2:59 pm

 

Blogging, defamation, bacon sandwiches.

Last week, I attended a 'Breakfast Seminar' on 'The Legal Risks of Web 2.0 for your Business' at Pinsent Masons in Birmingham.

By "Web 2.0", the presenters meant second-generation user services like blogs, wikis, Flickr, MySpace etc. (rather than the XML/Semantic Web angle).

Web 2.0 services are characterised by user-generated content. User-generated content is harder to control, and so presents more legal risks. Speakers Hartshorn and Ives focused on corporate blogging:

Blogging

The big concerns are Defamation, and Copyright Infringement.

What is Defamation?

Any statement that is published, identifies a person, and damages that person's reputation can be defamatory and give grounds for a libel case.

Employer liability

Liability for an employee's defamatory comments on a corporate blog lies with the employer. Ingenta is liable for what I write here. (By the way.. did I mention that customer who was a complete.. OK, don't panic, Charlie ;) ) The conclusion: employers need to take defensive measures to minimise the risk of liability.

What defence measures?

The presenters advise:

1. Don't Moderate

Surprisingly, it may be sensible not to moderate a blog; ie, not to have a human eyeballing comments from the public before they are published. This is because the Defamation Act 1996 protects the mere 'provider of access to a communication system'.

The more recent E-Commerce Regulations 2002 distinguish between a "service provider" (eg, an ISP) and a "publisher" of content. If you moderate, you're a publisher and you're liable; if you don't, you're a mere conduit and may not be.

2. Have a way for people to report bad postings.

In order to rely on the protection of the Defamation Act, you need to show firstly that you are not the publisher, as above, but also that you 'took reasonable care'. This seems confusing and contradictory at first, so here's an example of how it was interpreted:

In 1997 Laurence Godfrey wrote to Demon Internet requesting that they remove a defamatory statement in a newsgroup. The ISP ignored him (what a surprise, an ISP with poor customer service.. never!). Godfrey sued under the new Act; Demon were found not to have taken 'reasonable care' and ended up paying out around £200,000 in damages and costs. The conclusion: have a process in place for people to report bad postings, and act on those reports ASAP.

3. Have Terms & Conditions and a Disclaimer

Eg, have them linked from the footer. The T&C should set out forbidden behaviour, such as posting defamatory or otherwise unlawful content, as well as stating the blog editor's rights to reproduce or modify content.

4. Tailored Warnings

Repeat the statements from the T&C wherever there's an obvious reason, and place, to do so. YouTube, for example, repeats copyright warnings during the upload process.


What about personal blogs?

The employer is unlikely to be liable, unless there is some specific connection between the personal blog contents and the company, but it is a grey area. Waterstone's dismissed Joe Gordon for bringing the company into disrepute when he blogged about "Bastardstone's", but Gordon successfully appealed the sacking. Similarly, Orange suspended but then reinstated Inigo Wilson, even though his offensive 'Lefty Lexicon' was unrelated to the company. So what's the conclusion here? I'm not sure. Perhaps: be wary.


Anything other than blogging?


Other bewarities with Web 2.0 include Ajax and disability regulations (screen readers), copyright problems with RSS and podcasting, and child protection issues. But I think I'll leave those for another blogger.


Disclaimer: You should neither rely nor act upon my comments in this blog post. Corporate Web 2.0-ers should consult their lawyers. ;)

posted by Katie Portwin at 10:57 am

 

Automatic for the people: publishing for a user-centric generation

Monday, October 16, 2006

Here's the unedited version of a paper I wrote recently for Research Information (Oct/Nov 2006 issue):

There's been no hotter topic in 2006 than Web 2.0. Much has been made of its community engagement: putting research back into the hands of the researchers, fulfilling the web's true potential for user interaction. And like any new Big Thing, it has its sceptics, not least those who simply recoil from buzzword worship. But there's no doubting the widespread nature of the attitudinal shift. Where do publishers fit into this brave new user-centric world?

Attempts to define the term "Web 2.0" engender heated debate amongst the technical community. It is broadly associated with several concepts of the moment -- the semantic web, the long tail, social software -- and its use, whilst contentious, is pervasive. Web 2.0 philosophy may be viewed as a threat to traditional publishers -- in that, amongst other things, it provides researchers with alternative channels for content dissemination -- but those embracing its technologies are well-positioned to expand and thus protect their role in the information chain.

Positive engagement will involve facing, and overcoming, some fears. Key to the concept of the semantic web is the sharing of structured data by providing an interface (API) with which other web applications can interact. For publishers, whose revenue models often rely upon restricting access to their data, uptake is restricted by issues of strategy rather than technology. And it's not just the publishers who are cautious; witness those librarians who warn against the perils of social tagging, by which users of social software such as del.icio.us and flickr tag data according to self-created taxonomies (known as folksonomies). Equally, many researchers are unconvinced of the accuracy and thus the value of user-created resources such as Wikipedia.
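
To make the API point concrete, here is a minimal sketch of what sharing structured data via a web interface can look like. It is illustrative only: the record, the DOI and the /metadata/ URL scheme are hypothetical, not any publisher's actual service.

```python
# Minimal sketch: exposing structured article metadata as JSON so that
# other web applications can interact with it. The record and the URL
# scheme are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

ARTICLES = {  # hypothetical sample record, keyed by DOI
    "10.0000/example.001": {
        "title": "On Folksonomies",
        "journal": "Example Studies",
        "year": 2006,
    },
}

class MetadataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /metadata/10.0000/example.001
        doi = self.path.removeprefix("/metadata/")
        record = ARTICLES.get(doi)
        self.send_response(200 if record else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(record or {"error": "not found"}).encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), MetadataHandler).serve_forever()
```

Note that the access question is strategic, not technical: the same handler could just as easily check a subscription before returning the full record.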

But there's more than one way to keep abreast of a new wave. Think of the movement simply as a way to engage users -- in terms both of enabling them to interact and of helping them to access and manipulate the data they need. It becomes easy to rewrite your own history and consider yourself 2.0. Hey, weren't specialist publishers "monetizing the long tail" way before eBay -- and haven't we been taking advantage of the e-journal revolution to reach out further to niche markets? Rather than trying comprehensively to embrace the new order, publishers should look to build on those areas where it overlaps with their existing methodology; there's no sense throwing the business model out with the bath water purely to bandy about a buzzword. You'll be surprised at the levels on which you can engage without undermining -- nay, even supporting -- current revenue streams or strategies. For example, publishers have long been encouraging third party use of open data to drive traffic to access-controlled content, by making structured metadata openly available (to abstract & indexing databases, or via the Open Archives Initiative), and supporting predictable linking syntaxes.
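
As an illustration of that long-standing openness, here is a sketch of how a third party might harvest such metadata over OAI-PMH. The repository endpoint is hypothetical; the verb and metadata prefix are standard OAI-PMH values.

```python
# Sketch: harvesting openly available Dublin Core metadata via OAI-PMH.
# The base URL is hypothetical; "ListRecords" and "oai_dc" are standard
# OAI-PMH verb and metadata-prefix values.
import xml.etree.ElementTree as ET
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://oai.example-publisher.com/oai"  # hypothetical endpoint
DC = "{http://purl.org/dc/elements/1.1/}"       # Dublin Core namespace

params = urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})
with urlopen(f"{BASE}?{params}") as response:
    tree = ET.parse(response)

# An A&I service could now index the titles and follow the identifiers
# (typically DOI links) back to the access-controlled full text.
for title in tree.iter(DC + "title"):
    print(title.text)
```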

Or take remixing. More progressive implementations of RSS, such as "recent content" or "most viewed articles" feeds, are semantic web-friendly in that they can be retrieved and 'remixed' by another site (such as a library OPAC). One could further argue that our industry was an early adopter of remixing in its development of, and support for, federated searching, which espouses the seamless spirit of Web 2.0 by providing a single interface to multiple data sets. Elsewhere, early adopters have created blogs (such as Ingenta's All My Eye) to complement or replace the role of traditional newsletters in publicising service developments and product announcements; the format lends itself well to syndication and thus increased use of the content. Blogs can also be tied in with specific journals as an extension to the discussion fora of the '90s; enabling comments on postings can drum up debate and encourage usage of the journal articles to which they relate. This capitalises on the pre-existing status of a given journal as the centre of its community, and the freely available content can serve to draw users in to the paid-for papers.
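
Part of remixing's appeal is how little code the consumer needs. A sketch, using the widely used third-party feedparser library and hypothetical feed URLs, of how a library OPAC might merge several publishers' "recent content" feeds:

```python
# Sketch: remixing several "recent content" RSS feeds into one listing,
# as a library OPAC might. The feed URLs are hypothetical.
import time
import feedparser

FEEDS = [
    "https://journal-a.example.com/rss/recent.xml",  # hypothetical
    "https://journal-b.example.com/rss/recent.xml",  # hypothetical
]

items = []
for url in FEEDS:
    items.extend(feedparser.parse(url).entries)

# Interleave the publishers' entries by date, newest first.
items.sort(key=lambda e: e.get("published_parsed") or time.gmtime(0),
           reverse=True)
for entry in items[:10]:
    print(entry.title, "-", entry.link)
```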

Then there's the long tail. How about promoting less mainstream content within your site by adding "more like this" links from popular articles, or enabling users to vote for articles as is possible on sites such as Digg -- or even, if you're brave enough, posting a "least read" list to catch users' attention? Of course, there's no better way to maximise visibility and use of all your content than enabling it to be indexed by Google. The technology giant is also held responsible for the rise of another Web 2.0 phenomenon: the mashup, whereby publicly available data sets are combined to provide dynamic new services. The launch of the Google Maps API in June 2005 encouraged a plethora of programmers to create Ajax applications that draw on Google Maps' data -- such as Map My Run, which also brings in data from the US Geological Survey to provide elevations of plotted routes. At Ingenta we're piloting some projects which utilise a variety of data sets to enrich the full text we host, and both OCLC and Talis have recently announced prizes for the best mashups. Google is not the only search player to board the Web 2.0 train; other providers such as Yahoo! are developing social search tools which filter results based on folksonomies and user preferences, whilst new search engine Rollyo allows you to create a "searchroll" to restrict results to sites you trust -- a user-defined extension to the concept of Elsevier's Scirus.
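
The mashup pattern itself is simple: fetch two public data sets and join them into something new. A sketch along Map My Run's lines, with both service URLs entirely hypothetical stand-ins for real public APIs:

```python
# Sketch of a mashup: enrich one public data set (articles with study
# locations) with another (an elevation lookup service). Both endpoints
# are hypothetical.
import json
from urllib.request import urlopen

def fetch_json(url):
    with urlopen(url) as response:
        return json.load(response)

articles = fetch_json("https://api.example.org/articles?subject=geology")
for article in articles:
    lat, lng = article["latitude"], article["longitude"]
    elevation = fetch_json(
        f"https://elevation.example.org/lookup?lat={lat}&lng={lng}")
    article["elevation_m"] = elevation["metres"]  # the "new service" part

print(json.dumps(articles[:3], indent=2))
```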

In spite of this technolust, we should remember that we're a long way from critical mass: non-sophisticated users still make up the majority, and aren't interested in the more collaborative aspects of Web 2.0. What they do want from it is an information-rich user experience: more data to supplement the published literature, such as the additional details necessary to reproduce an experiment; or a means to feed back responses to authors and thus engender discussion which could further the research. Informal communication media (technical presentations, conference papers, pre-prints, even email and phone discussions) can be harnessed to strengthen the message of formal communication channels and to counteract the length of the formal process.

A range of technologies can be employed to support this; from community "trust" mechanisms, which can verify the expertise of participants (as eBay feedback attests to transactor reliability), to sophisticated data storage. Ingenta's award-winning new Metastore is a component-based data repository which allows us to store and deliver raw research data alongside the associated journal article. The technology behind it, RDF, is popular amongst Web 2.0 advocates because its common data model makes it readily extensible and remixable. Its flexibility allows us to extend the formats in which research results can be communicated, and to embrace the informal media which more traditional online publishing technologies preclude. Longer term we anticipate that authors themselves could add supplementary data directly to Metastore; whilst author self-archiving of papers is currently sluggish, espousal of collaborative enterprises such as Nature's Omics or Signaling gateways suggests it is not unreasonable to expect stronger support longer term.
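
To show why RDF's data model lends itself to this kind of extensibility, here is a sketch using the open-source rdflib library. The vocabulary and identifiers are hypothetical; this is not Metastore's actual schema.

```python
# Sketch: why RDF suits a component-based store. New kinds of data attach
# to an article simply as further triples, with no schema migration.
# The vocabulary and identifiers are hypothetical, not Metastore's own.
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/schema/")  # hypothetical vocabulary
article = URIRef("http://example.org/article/10.0000/example.001")

g = Graph()
g.add((article, EX.title, Literal("On Folksonomies")))
g.add((article, EX.journal, Literal("Example Studies")))
# Raw research data sits alongside the published article as one more triple:
g.add((article, EX.dataset, URIRef("http://example.org/data/experiment-42.csv")))

print(g.serialize(format="turtle"))
```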

Key to the availability of such data is the business model by which it can be accessed. Whilst Web 2.0 is lauded for going hand-in-hand with open source, such generosity is not compulsory -- but a flexible e-commerce framework is advantageous to encourage maximum usage. Of equal value is granular addressability of content, whereby URLs are clean, compact, assigned at the lowest possible level and, preferably, predictable. Interoperability is clearly critical to the collaborative environment and, as elsewhere, work towards standardisation in this area will pave the way for further uptake.
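
Granular addressability is easiest to see with an example: under a predictable scheme, a citation alone is enough to construct a deep link, with no lookup service required. The URL pattern below is a hypothetical illustration, not IngentaConnect's actual syntax.

```python
# Sketch: predictable, article-level linking. Given standard citation
# fields, anyone can construct the URL. The scheme is hypothetical.
def article_url(issn: str, volume: int, issue: int, first_page: int) -> str:
    """Build a clean, lowest-level (article) URL from citation data."""
    return (f"https://connect.example.com/content/"
            f"{issn}/{volume}/{issue}/{first_page}")

# A bare citation is enough to deep-link to the paper:
print(article_url("1234-5678", volume=12, issue=3, first_page=217))
```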

In summary? Whilst early adopters are thriving on the additional functionality that Web 2.0-styled services can supply, the majority of researchers continue to have relatively simplistic requirements. Some publishers still need to focus on successfully delivering the basics before expending resource to deal in the bells and whistles -- and even then it is critical to serve our communities appropriately, with data and tools that will add genuine value to their workflow. Given the frenzied debate around the term Web 2.0, it seems inevitable that its usage will decline as providers try to dissociate from the media hype. Many in the industry are predicting a dotcom-style bust, as the bubble bursts for operations trading heavily on their Web-2.0-ability. If it does, those who survive -- like their dotcom-era predecessors -- will be those who have taken steps to provide user-focussed services whilst maintaining a strategy with substance, not hype, at its core.

posted by Charlie Rapple at 9:22 am

 

Ingenta at the Frankfurt Book Fair

Monday, October 09, 2006

Phew. I am breathing a big sigh of relief now that another Frankfurt Book Fair has been and gone. This year our presence at the Fair was bigger than ever and naturally, took a lot of work from a lot of people to make it so. Here's a snap of our "sparkling" party just getting underway (our lovely new stand still just about visible in the background at this point):



We announced the following bits of news just prior to the Fair -- sorry not to have blogged this earlier (time got away from me a bit!) -- some previous postings have covered particular aspects:

Ingenta expands presence in China
We gathered feedback on our existing China services at the recent Beijing Bookfair, and will be extending our offering in this key market as a result. We will soon:
Longer term, we will introduce a Chinese interface to IngentaConnect, along with additional contextual help information. [Full press release].

Ingenta and Google Scholar: the next generation
We've been working closely with Google for over 2 years now, and the latest development is that we will be making our library holdings data available to Google Scholar's Library Links program. This means that institutional users will be recognised by their IP and offered "appropriate copy" links to direct them to their library's subscribed articles on IngentaConnect. The cost and frustration of linking to non-subscribed content will thereby be reduced. This will be of great benefit to libraries who can't afford a link server, but nonetheless want to see their users directed to "appropriate" resources. We are proud to be the first scholarly hosting service to work with Google in this way. [Full press release].
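
For the curious, the "appropriate copy" logic reduces to a simple join between IP ranges and holdings. A sketch, with all institutions, ranges and ISSNs hypothetical (this is not the actual Library Links implementation):

```python
# Sketch of "appropriate copy" linking: recognise an institution from the
# requesting IP, then link only to content its library subscribes to.
# All data below is hypothetical; this is not the real implementation.
import ipaddress

HOLDINGS = {"Example University": {"1234-5678", "2345-6789"}}
IP_RANGES = {"Example University": ipaddress.ip_network("192.0.2.0/24")}

def appropriate_copy_link(client_ip, issn):
    """Return a subscribed-access link for the user's library, if any."""
    addr = ipaddress.ip_address(client_ip)
    for institution, network in IP_RANGES.items():
        if addr in network and issn in HOLDINGS.get(institution, set()):
            return f"https://connect.example.com/{issn}?access=institutional"
    return None  # fall back to an abstract or pay-per-view link

print(appropriate_copy_link("192.0.2.15", "1234-5678"))
```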

IngentaConnect 2.5: more power to the user
Our latest platform release is packed with state-of-the-art, user-driven features, including
In addition, the latest release enables publishers to sell advertising space around their content, allowing them to explore alternative revenue streams for scholarly journals. [Full press release].

New publishers: Ingenta leads with 3rd quarter signings
10 new publishers, including Inderscience, have signed up, whilst key current customer Brill Academic Publishers has signed a new 3-year contract with Ingenta. [Full list and press release].

And finally, I managed to get away to attend the International Publishers Association's 2006 Publishing Update, held on the Friday morning during the Book Fair; I wrote a brief review of this session for the UKSG blog, LiveSerials.

Auf Wiedersehen bis 2007!

posted by Charlie Rapple at 8:38 pm

 

Managing IA the Web 2.0 Way - poster feedback

Friday, October 06, 2006

Unlike a building architect, I don't have a very good sense of spatial awareness, which is how Lucy and I ended up taking a rather large poster with us to the EuroIA conference in Berlin for the evening poster session! However, the large size nicely showed off the poster's Web 2.0-style shiny buttons and gradients which complement the theme of using a range of Web 2.0 social software tools to help us with our work.

The poster shows examples of the variety of tools we have been experimenting with, using screenshots as illustrations. Each set of illustrations is supported by a summary of the strengths and weaknesses of the approach, based on our experiences as Information Architects at Ingenta.
We review four types of tool relating to different activities:
The poster features face-to-face meetings in the centre to reinforce the idea that these tools are intended to supplement meetings rather than replace them.

We received very positive responses to the poster and we enjoyed talking to the conference delegates about it. In particular, there was much interest in the use of the Ingenta wiki for specification writing. We have found this technique to be advantageous over Word documents in several ways: we were both able to work on the same document simultaneously and could easily keep track of the status of each page, whilst the Developers could consult the most recent version at all times and had the ability to add comments.

We were surprised by how many people did not seem to know about the other services (we reckon that about 40% hadn't heard of them) - but a lot were keen to check them out, particularly after finding out that these tools are either free (install your own wiki, or Google's Writely) or inexpensive (on average less than $50 a month). We hope that some of the delegates will find them as useful and as fun to use as we have for enhanced communication and collaboration during the IA process.

posted by Anonymous at 12:09 pm

 

Ingenta's IAs report back from the European Information Architecture summit

Lucy and I recently enjoyed attending this year's EuroIA summit in Berlin - where nearly 200 Information Architects from all over Europe discussed the theme of "Building our Practice". The focus therefore was very much on tools, processes and techniques.

The keynote speech was delivered by Peter Morville, co-author of "The Polar Bear book", the classic text for all matters IA-related. It was only after reading the first edition of this book in 1998 that I realised that I was an IA - at last I had a job title! Morville stressed the continuing importance of search given that the search results page is typically the second most commonly used page on a site. Search is usually an iterative process, which is where good information architecture and a well-designed user interface should support the user's next steps in their information-seeking behaviour. This focus on search ties in with Morville's concept of Findability: "the quality of being locatable or navigable", the subject of his second well-written and thought-provoking book, "Ambient Findability". Findability applies to physical objects as much as electronic data, and Morville gave an interesting real-life example of how a US hospital is using a wireless tracking device to locate misplaced wheelchairs: "A quick glance at the screen shows exactly where the tagged wheelchairs are located...Patients wait no more than a few minutes for a wheelchair, and we save $28,000 a month by eliminating searches". Morville's presentation can be found at http://www.semanticstudios.com/euroia.ppt.

However, findability does not exist in isolation: we rely on physical devices to act as the intermediary between us and information. Non-PC devices are on the increase (phones, PDAs etc) - and information needs to be adapted accordingly. The importance of this was demonstrated by Bogo Vatovec's mobile phone browser study to test how easily users could complete simple tasks such as purchasing a DVD on Amazon.com using eight content adaptation solutions (a technique whereby the content of the Web pages is transformed by changing the page layout, image sizes, navigational structure and removing code to make the page readable by a mobile browser). The results showed that in many cases it was impossible to complete the tasks and at best it was a very arduous process for the user - context is lost, information is lost, the ability to navigate is lost, and the systems keep crashing. Opera Mini (an installable browser that filters the website content through a special server) emerged as the leader for making web-based content comprehensible for a mobile audience, but even so it still took 65 clicks to purchase a DVD on Amazon - the worst example took 442 clicks!

This is a problem given that there are more browsers on phones in the world than on personal computers, as Steven Pemberton noted in his Closing Plenary. He has been working with the W3C to address the issues of producing markup languages that meet the needs of modern web content. Two examples of this work include XHTML2 ("which layers semantics in the same way that presentation is layered in CSS") and XForms (the next generation of Web forms). These new standards:
Pemberton advocates "declarative programming", which is well known for requiring much less coding because "the computer takes care of all the boring fiddly detail".
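
Since XForms markup won't render here, a rough Python analogy of the declarative idea: describe what a valid form is in a rule table (a hypothetical signup form below) and leave the fiddly checking to one generic engine, rather than hand-coding each check.

```python
# Rough analogy of declarative programming (not actual XForms): declare
# what a valid form looks like, and let a generic engine do the boring
# fiddly detail. The rule table describes a hypothetical signup form.
import re

RULES = {
    "email": {"required": True, "pattern": r"^[^@\s]+@[^@\s]+$"},
    "age":   {"required": False, "pattern": r"^\d{1,3}$"},
}

def validate(form):
    """Generic engine: applies whatever the rule table declares."""
    errors = []
    for field, rule in RULES.items():
        value = form.get(field, "")
        if rule["required"] and not value:
            errors.append(f"{field} is required")
        elif value and not re.match(rule["pattern"], value):
            errors.append(f"{field} is malformed")
    return errors

print(validate({"email": "not-an-address", "age": "200x"}))
```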

As Information Architects, we look forward to seeing how these new standards translate into improved user experiences.

posted by Anonymous at 12:09 pm

 

And now for something completely different

Thursday, October 05, 2006

We've just released support for advertising on IngentaConnect. The first ads went live yesterday - see Current Genomics for an example. At the moment we're just showing banners at the top of the page, but we also have space available in the right hand navigation bar and in the page body itself. We've been careful to keep it fairly discreet (you won't see pop-ups or overlays suddenly appearing) so as not to obscure the main content, tools or navigation. We think it works pretty well!

Our publisher clients will be using this advertising space in various different ways. Some want to cross-promote their own publications, services and conferences to their readers. Others have been running ads in their print publications for many years and now want to migrate this important revenue stream over to the online version. For others it's a new source of income that they can use to counteract subscription erosion or to support experimentation with new business models.

Online advertising has many benefits over print: it can be more accurately targeted (geographically, by user type, etc), can be more flexible in terms of ad rotation and timeliness, and reaches out to non-subscribers as well as subscribers. It also allows you to track the effectiveness of your marketing: how many times your ad appeared, how often it was clicked on, which placements worked best, etc, thus enabling you to make informed decisions when planning your next advertising campaign.
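
As a toy illustration of that reporting, click-through rate per placement is just clicks over impressions; all the figures below are invented.

```python
# Toy sketch: comparing ad placements by click-through rate (CTR).
# The figures are invented for illustration.
stats = {  # placement: (impressions, clicks)
    "banner-top":   (12400, 186),
    "right-nav":    (9800, 59),
    "in-page-body": (7200, 94),
}

for placement, (impressions, clicks) in stats.items():
    ctr = 100 * clicks / impressions
    print(f"{placement:12s} {impressions:6d} views {clicks:4d} clicks "
          f"CTR {ctr:.2f}%")
```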

We're really excited about the opportunities this will open up for our publisher clients, without detracting from the user experience or the main purpose of the site. If you're interested in advertising on IngentaConnect drop us a line!

posted by Kirsty Meddings at 9:02 am

 
