Wherein the IngentaConnect Product Management, Engineering, and Sales Teams
ramble, rant, and generally sound off on topics of the day
 

World Usability Day

Wednesday, November 15, 2006

The 14th of November was World Usability Day. A local (to Bath) company called Web Usability Studios (WUP) hosted an event as part of the festivities. I went along to represent Ingenta and check things out.

WUP's main area of expertise is the organizational, labelling, and navigation aspects of Information Architecture (IA), and the event focused on these areas.

The first activity was a card sorting exercise, which introduced the concepts of chunking and labelling information.

The next part of the presentation consisted of several videos of web site testing sessions performed by WUP. In each clip the tester was given a specific task to perform on the site being tested, for example finding a particular spare part for a product. The video showed the tester's screen, so we could see their interactions with the site, plus a "headshot" video of the tester, their voice, and the voice of the person running the tests. Each of the videos demonstrated the need for good IA in different ways. Several of the testers failed their tasks completely despite being chosen as potential users of the sites under test.

The videos identified a range of problems. Often the sites under test exhibited several of these at once, causing the testers difficulty with navigation. This was linked to the Xerox PARC idea of "information scent". Broadly speaking, this uses a metaphor of hunting for information on a site rather than browsing or grazing: users hunt down information by following a "scent". In practice, this translates into the user being more or less confident that a given interaction will lead them closer to the information they need to achieve their goals. Addressing the problems outlined above with good IA increases the level of "scent" available to users of a site and therefore makes it easier to use.

The concept of chunking was then revisited with reference to some "after" tests on the new versions of the sites previously tested. The difference was marked: all the users achieved their goals, often within seconds as opposed to minutes. A significant aspect of chunking, Miller's "seven items, plus or minus two" rule, was introduced. This is a psychological finding that people can normally hold only a finite number of items about a given topic in mind at any one time, usually about seven. This obviously applies to navigation elements on a page: too many elements mean the user cannot process the navigation effectively.

The discussion broadened into other aspects of information architecture and site design. After considering browsing, it moved on to searching sites for information. The presenters asserted that many users do not use a site's internal search to find information; instead they prefer to browse (or hunt, if the information scent analogy is extended) for it. This runs contrary to Ingenta's experience, where deep linking to the article level from Google means that many of our users do not use our navigation features at all. Ingenta faces the challenge of preventing users from returning to Google and persuading them to remain on our site once they arrive. In effect, the "normal" drill-down method of browsing/hunting for information is turned on its head.

This led to a discussion of "post-search chunking": splitting a user's search results into categories instead of presenting a plain list of links. For an example, try a search for a generic word on Amazon: the results are grouped into several categories such as books, DVDs, and so on. This can be a valuable tool in persuading users of a site to explore instead of hunting for one specific piece of information.
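The mechanics of post-search chunking are simple enough to sketch. Here is a minimal Python illustration (the result records and category names are hypothetical, just to show the grouping step that turns a flat hit list into categorized chunks):

```python
from collections import defaultdict

def chunk_results(results):
    """Group a flat list of search results into categories.

    Each result is a (title, category) pair. Returns a dict
    mapping each category to the titles that fall under it,
    preserving the order in which results were seen.
    """
    grouped = defaultdict(list)
    for title, category in results:
        grouped[category].append(title)
    return dict(grouped)

hits = [
    ("Sweet Thursday", "Books"),
    ("East of Eden", "Books"),
    ("The Grapes of Wrath (1940)", "DVDs"),
]
print(chunk_results(hits))
# {'Books': ['Sweet Thursday', 'East of Eden'], 'DVDs': ['The Grapes of Wrath (1940)']}
```

A real implementation would of course take its categories from the site's own taxonomy rather than from the records themselves, but the presentation pattern is the same.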

Finally we were presented with a statistical analysis of our card sorting exercise. This showed how the most appropriate schemes for grouping items into chunks can be determined statistically. Some of the results were surprising, which highlighted the need to test such things objectively.
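One common way to analyse card sorts statistically is to count how often participants placed each pair of cards in the same group; pairs with high co-occurrence are strong candidates for the same chunk. A small sketch of that idea in Python (I don't know which method WUP actually used, and the card names here are made up):

```python
from itertools import combinations
from collections import Counter

def cooccurrence(sorts):
    """Count how often each pair of cards was placed in the same
    group across participants. `sorts` is a list of card sorts,
    one per participant, each a list of groups (lists of cards).
    """
    counts = Counter()
    for groups in sorts:
        for group in groups:
            # Sort so each pair is counted under one canonical key.
            for a, b in combinations(sorted(group), 2):
                counts[(a, b)] += 1
    return counts

sorts = [
    [["shirts", "trousers"], ["kettles", "toasters"]],
    [["shirts", "trousers", "toasters"], ["kettles"]],
]
print(cooccurrence(sorts).most_common(1))
# [(('shirts', 'trousers'), 2)]
```

Feeding such a matrix into a clustering step is what lets you compare candidate grouping schemes objectively rather than by gut feel.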

Overall, the afternoon was very interesting and highlighted several of the important issues which our Information Architects have to consider.

posted by Rob Cornelius at 12:00 pm

 

And the nominees are ...

Friday, November 03, 2006

Some exciting news to round off the week! We (the All My Eye bloggers) have been shortlisted for the "Best Team in a Business Environment" award at the 2006 International Information Industry Awards. (Fingers crossed we win, and get to make emotional acceptance speeches thanking everyone from Tim Berners-Lee to Betty Martin).

posted by Charlie Rapple at 12:38 pm

 

TripleStore develops mental capacity of goldfish

Thursday, November 02, 2006

This week in the MetaStore team we've added inferencing using an OWL [1] ontology.

An ontology is a statement of what there is and how things fit together. For example:
* In Ingenta-land, there are Articles; they have Keywords and Authors; they are published in Journals. Every Book must have at least one Author.
* In My-lunch-land, there are sandwiches, chocolate bars, and cups of tea. Every sandwich must have exactly two slices of bread.

Inferencing, in this context, means guessing new facts, based on known facts, using logical rules in an ontology.

So: if we have a database with the fact that John Steinbeck wrote Sweet Thursday, and an ontology which says that being an author is the inverse of having an author, then a computer can, all on its own, reason that Sweet Thursday was written by John Steinbeck. Super, eh! HAL, here we come.

Here's the machine-readable version:


SweetThursday foaf:maker JohnSteinbeck .
+
<owl:ObjectProperty rdf:about="&foaf;made">
<owl:inverseOf rdf:resource="&foaf;maker"/>
</owl:ObjectProperty>
+
Jena Reasoner
=
JohnSteinbeck foaf:made SweetThursday .
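In our stack that last step is the Jena reasoner's job, but the owl:inverseOf rule itself is simple enough to sketch as a toy in a few lines of Python (purely illustrative; this is not how Jena works internally):

```python
def apply_inverse_rule(triples, inverses):
    """Forward-chain the owl:inverseOf rule over a set of triples.

    `triples` is a set of (subject, predicate, object) tuples and
    `inverses` maps a property to its declared inverse. For every
    (s, p, o) whose p has an inverse q, we infer the triple (o, q, s).
    """
    inferred = set(triples)
    for s, p, o in triples:
        if p in inverses:
            inferred.add((o, inverses[p], s))
    return inferred

facts = {("SweetThursday", "foaf:maker", "JohnSteinbeck")}
inverses = {"foaf:maker": "foaf:made", "foaf:made": "foaf:maker"}
# Inference adds ("JohnSteinbeck", "foaf:made", "SweetThursday").
print(apply_inverse_rule(facts, inverses))
```

A real reasoner iterates rules like this to a fixed point, and handles many other OWL constructs besides, but the "known facts + rules = new facts" shape is exactly this.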

At the beginning of this project, we were very excited about OWL. We planned to mine new information out of our scholarly research data set. For example, if Author Bob wrote articles A and B, and Author Bill collaborated with Bob on C and wrote D on his own, perhaps A and D are related. Or was that B and D? My brain hurts... either way, you get the picture.

The problem, as usual with our project, was scalability. The Jena inferencer choked at 11 million triples... 190 million away from our full load.

Last week, Priya and I came up with a practical solution to this problem: a two stage approach.

1. Guess what bit of the model you might be interested in, and hold that bit in memory. (Techie detail: I implemented this using a SPARQL CONSTRUCT query like this, and stored the resulting triples in a Jena model.)

2. Give that to the Jena Inferencer to chew on, instead of the big fat data set.

Obviously, the success of this approach depends on how good your guess is in step 1.
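A rough sketch of the two-stage shape in Python, with a hypothetical triple set standing in for the real store, a "mentions this resource" filter standing in for the SPARQL CONSTRUCT query, and a single trivial rule standing in for the Jena inferencer:

```python
def construct_subgraph(store, seed):
    """Stage 1: pull out only the triples mentioning a seed resource.

    Stands in for the CONSTRUCT query that guesses which bit of the
    full model we are interested in and holds it in memory.
    """
    return {t for t in store if seed in (t[0], t[2])}

def infer(triples, inverses):
    """Stage 2: a stand-in for the Jena inferencer, here just an
    inverse-property rule applied to the small in-memory subset."""
    out = set(triples)
    for s, p, o in triples:
        if p in inverses:
            out.add((o, inverses[p], s))
    return out

store = {
    ("articleA", "dc:creator", "JBhatt"),
    ("articleB", "dc:creator", "JBhatt"),
    ("articleC", "dc:creator", "SomeoneElse"),  # never pulled into memory
}
small = construct_subgraph(store, "JBhatt")
result = infer(small, {"dc:creator": "foaf:made"})
```

The inferencer only ever sees the two JBhatt triples, not the whole store, which is the entire point: the reasoning cost scales with the size of the guessed subset rather than the full 200-million-triple load.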

So, last week, I knew that this article was written by J Bhatt; this week, I know that he wrote all these too. Last week, I knew that this article was about bananas. This week, I know that so are all these.

[1] Web Ontology Language (there is some explanation to do with Winnie the Pooh for why it isn't 'WOL'... basically, it just sounded foolish).

posted by Katie Portwin at 5:25 pm

 
