In the old days, when I was on MARBI as the liaison for AALL, I used to write a fairly detailed report of each meeting and then share it with my Cornell colleagues. The gist of those reports was to describe what happened and to note any implications of the decisions worth considering. I don’t propose to do that here, but it does feel as if I’m acting in a familiar ‘reporting’ mode.

In an early Saturday presentation sponsored by the Linked Library Data IG, we heard about BibFrame and VIVO. I was very interested to see how VIVO has grown (having seen it as an infant), but I was puzzled by the suggestion that it or FOAF could substitute for the functionality embedded in authority records. For one thing, authority records are about disambiguating names, not describing people, however much some believe that description is where authority control should be going. Even when we stop using text strings as identifiers, we’ll still need that disambiguation function, and we should think carefully about whether adding other functions makes good sense.

Later on Saturday, at the Cataloging Norms IG meeting, Nancy Fallgren spoke on the NLM collaboration with Zepheira, GW, and others on BibFrame Lite. They’re now testing the Kuali OLE cataloging module for use with BF Lite, which will include a triple store. An important quote from Nancy: “Legacy data should not drive development.” So true, but neither should we start over or discard data just to simplify data creation, losing the ability to respond to the more complex needs in cataloging, which aren’t going away (a point usefully demonstrated in the recent Jane-athons).

I was the last speaker on that program, speaking on the topic “What Can We Do About Our Legacy Data?” I was primarily asking questions and discussing options, not providing answers. The one thing I am adamant about is that nobody should be throwing away their MARC records. I even came up with a simple rule: “Park the MARC.” After all, storage is cheap, and nobody really knows how the current situation will settle out. Data is easy to dumb down but not so easy to smarten up, and there may be do-overs in store for some down the road, once the experimentation is done and the tradeoffs are clearer.

I also attended the BibFrame Update, and noted that there’s still no open discussion of the relationship between the ‘classic’ (as in ‘Classic Coke’) BibFrame version used by LC and the ‘new’ (as in ‘New Coke’) BibFrame Lite version being developed by Zepheira, which is apparently the vocabulary they’re using in their projects and training. It seems like it could be a useful discussion, but somebody’s got to start it. It’s not gonna be me.

The most interesting part of that update, from my point of view, was hearing Sally McCallum talk about the testing of BibFrame by LC’s catalogers. The tool they plan to use (still in development, I believe) will use RDA labels and include rule numbers from the RDA Toolkit. Now, there’s a test I really want to hear about at Midwinter! But of course all of that RDA ‘testing’ they insisted on several years ago, to determine whether the RDA rules could be applied to MARC21, doesn’t (and can’t) apply to BibFrame Classic, so … will there be a new round of much-publicized and eagerly anticipated shared institutional testing of this new tool and its assumptions? Just askin’.

By Diane Hillmann, July 10, 2015, 10:10 am (UTC-5)
