One of the things I often do in the weeks following ALA conferences is check out the blog posts about sessions I missed. One such session was “Recent Trends in Catalog Architecture: ALCTS Catalog Form and Function Interest Group.” I don’t recall what we were doing instead of attending, but it looks like we missed a very interesting conversation. That happens far too often at the jam-packed Midwinter and Annual conferences, but thankfully these days the bloggers are keeping us all better informed. Laura Akerman has posted an interesting report of the IG’s session, and slides for the session are also available.

One of the inescapable conclusions to draw from reading the reports is that the separation of metadata from display considerations seems to be well underway, at least among the folks in the innovation corner. This is very welcome news, particularly for someone who has had too many frustrating conversations over the past few years with folks so deep into MARC that the revolutionary idea that display decisions need not be inextricably tied to metadata fields is completely foreign to them. It also seems clear from the report that the strict MARC-ish boundaries around what kinds of metadata libraries might provide in a discovery interface are no longer quite so impermeable.

I was struck by the blogger’s quote from the chair: “These presentations were varied but all concerned the architecture and functionality of multiple layers – ‘what happens (or needs to happen) in between’ to transform, combine, and synchronize metadata.” It seems that everyone is traveling down this path these days, which is very good news indeed. In the search for an apt metaphor that we’ve all undertaken, Frances McNamara from the University of Chicago, the first presenter, called their aggregation of resources “stone soup,” a nice metaphor on the whole, though it does imply very hard lumps and not a lot of flavor. The second group of presenters, Joshua P. Barton and Lucas Wing Kau Mak from Michigan State University, entitled their presentation “To Fix A Leaky Sink: Envisioning The Potential of Discovery Layers,” although this seems an oddly juicy metaphor for what blogger Laura Akerman described as primarily a “think piece.”

The third presentation was by Jennifer Bowen of the eXtensible Catalog Project (full disclosure: Jon and I worked with XC as consultants early in their development and at various points since). XC’s view of metadata management borrows a great deal from the work we did (and published about) during the days when the National Science Digital Library (NSDL) was young and doing interesting things (before it became a development shop for the Fedora CMS). (Links to the original papers can be found at our website.) For libraries looking at the management of metadata as something that happens outside of an ILS, XC’s service architecture is by far their best stuff. In addition to Jennifer’s slides, anyone interested in what XC is doing should take a look at their recently released screencasts, particularly the first and the third; the third includes a detailed description of what the MST (Metadata Services Toolkit) really does. Our metaphor for this functionality has always been “The Metadata Washing Machine,” so we’ve really no business complaining about anyone else’s choices!

The last presentation was by Aaron Wood of the University of Calgary, and the placement of this particular presentation is important: it points out that once you’ve created your soup and/or washed your dirty metadata, you still need to figure out how to present results to users in ways far different from what we’re used to in the typical ILS interface. Calgary uses the new Summon product from Serials Solutions (which, by the way, also builds on the notion of aggregated metadata services we pioneered in NSDL). The question that turns up in the blog post, “how to prevent the local institution’s collections (print and digital) from becoming marginalized in search results when combined with a much larger number of full text resources (licensed journal articles etc.),” is right to the point, because it assumes, as we all should, that the usual alphabetic display most of our ILS systems produce, which must be scrolled or paged through, is no longer good enough.

The presentation points out something we found in NSDL several years ago: when metadata records and full text are trolled for keywords, the full text tends to rise to the top, and that’s not always a good thing from a user’s point of view. Metadata in general is a poor performer when simply treated as text by full-text indexers, not just because there isn’t enough text, but also because full-text indexing generally perceives no additional value in controlled vocabularies and their relationship to well-defined properties (or in the richness of the relationships that lie behind those vocabularies).
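To see why full text swamps metadata when both are indexed as undifferentiated words, consider a toy sketch. This is not Summon’s algorithm (or anyone else’s); the field names, records, and weights are all made up for illustration. The point is simply that raw term frequency favors the longer full-text field, while per-field weighting lets a short, curated subject heading compete again:

```python
def flat_score(query_term, record):
    """Count query-term hits across all fields concatenated together.
    Long full text piles up hits, so it dominates short metadata."""
    text = " ".join(record.values()).lower()
    return text.split().count(query_term)

def weighted_score(query_term, record, weights):
    """Score each field separately, boosting curated metadata fields
    and discounting raw full text (weights are illustrative)."""
    score = 0.0
    for field, text in record.items():
        hits = text.lower().split().count(query_term)
        score += hits * weights.get(field, 1.0)
    return score

# A catalog record: rich subject metadata, no full text.
record_catalog = {
    "title": "Hydrology of small watersheds",
    "subjects": "hydrology watersheds runoff",  # controlled vocabulary
    "fulltext": "",
}
# A licensed article: little metadata, lots of full text.
record_article = {
    "title": "Stream gauges and you",
    "subjects": "",
    "fulltext": "hydrology " * 40,  # the term repeats 40 times
}

weights = {"title": 10.0, "subjects": 8.0, "fulltext": 0.25}  # assumed boosts

# Flat indexing: the article's repeated full-text hits swamp the catalog record.
# Field weighting: the catalog record's subject heading competes again.
```

With flat scoring the article wins easily (40 hits to 2); with the assumed field weights the catalog record comes out ahead. Real relevance ranking is far more sophisticated, but the asymmetry this sketch shows is exactly the one we saw in NSDL.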

I’m sorry I missed this session; thanks to Laura Akerman and the Metadata Blog for providing such a good summary.

By Diane Hillmann, February 10, 2010, 5:21 pm (UTC-5)


Currently 1 comment

  1. Comment by Helle Lauridsen

    Aaron Wood, the last speaker mentioned, says in his abstract: “Combined with the normalization or collapsing of metadata records representing the same resource into a single metadata-rich record, fully leveraging MARC and other metadata in big indexes should not only level the metadata playing field but make competition between records a non-issue.”

    And this is what Summon does: through the advanced relevancy ranking performed over the huge Summon index, data from various sources (full text, keywords, subject terms) are carefully weighted to provide a very even playing field indeed. Metadata in Summon is treated and weighted as metadata, whereas full text is weighted differently. Our static rank elements, including different weights for content types, also provide query-independent measures of prestige, much as leading web search engines employ signals that promote popularity.

