Yes, it was laughable to imagine that I might blog from Midwinter—silly me. My schedule was insanely full, and I didn’t even get to peek into the exhibit hall, much less visit and pick up some swag for the grandchildren. As things move along with RDA, and as I become even more convinced that despite its flaws we need to have it out and used, my conference program became even more ridiculously focused on all things RDA. So there I was at both CC:DA meetings, the RDA Testing meeting, the RDA Implementation meeting, and on, and on.
I posted briefly before I left for Denver on the work Metadata Management Associates (me and Jon) will be doing with ALA Publishing in relation to the RDA Online product. Some of this will involve integrating the RDAVocab registered elements, roles, relationships, and value vocabularies with the online product, thus ensuring that catalogers using the online product and applications using the Registry output will be linking to and/or referencing the same information (no synchronization issues!). Other tasks involve building XML schemas so that those seeking to build data—whether in the tool or in another application—have somewhere to start. We’ll be working on other bits and pieces to enable support for those trying to see their way forward from where they are to a world where RDA with a FRBR foundation is the way we do description. This is a tremendously exciting project for us, and it was nice to see that when I explained it to others they thought so too!
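The “no synchronization issues” point can be sketched in a few lines of Python (everything here is a toy: the URIs, labels, and record shape are all hypothetical stand-ins, not the actual Registry output or RDA Online internals). The idea is simply that records carry registered URIs rather than literal strings, so the cataloging tool and any downstream application resolve labels from the same single source:

```python
# Toy sketch (all URIs and labels hypothetical): both the cataloging
# tool and downstream applications reference the same registered
# vocabulary by URI, so there is one source of truth to update and
# nothing to keep in sync.

REGISTRY = {  # stand-in for the registry's published vocabulary output
    "http://example.org/rdavocab/roles/1001": "author",
    "http://example.org/rdavocab/roles/1002": "illustrator",
}

record = {  # a record stores URIs, not literal label strings
    "title": "Example Resource",
    "role": "http://example.org/rdavocab/roles/1001",
}

def display_role(rec):
    """Any application resolves the human-readable label from the shared registry."""
    return REGISTRY[rec["role"]]

print(display_role(record))  # prints "author"
```

If the registry label ever changes, every consumer sees the change on its next lookup; no application holds its own copy that could drift out of date.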
The RDA Testing meeting, led by Beacher Wiggins, was an interesting one—more so than I had anticipated. The idea is that the effort will be led by LC, NLM, and NAL, but it will be open, and participants from a variety of venues (including library students and recent graduates) are encouraged to sign up. Beacher emphasized that testers will use the environments they have available, which for most means MARC systems, but if RDA Online can produce records as well, that’s an added opportunity, particularly for those eager to figure out how these records might behave in newer systems. Testing will last about six months and include a training phase, an active record-creation phase, and an assessment phase. After that, the three national libraries will make individual “go” or “no go” decisions. There will be a website at some point, most likely a subset of the website of the LC Working Group on the Future of Bibliographic Control.
There were a lot of questions about this testing effort, only some of which were asked or answered at the meeting. One of my concerns (which was too fuzzy and formless to ask at the meeting) has to do with the usefulness of testing how much time it takes to create a record under the old regime (AACR2 and MARC21) vs. the new regime (RDA and MARC21? or RDA and the RDA schemas and tools?)—you see part of the problem here. The deck seems pretty stacked towards the old and familiar, despite some attempts to create more balance by ensuring that some of the anticipated participants—the library students and recent grads primarily—will have as little experience with the old regime as they do with the new one. In all cases there will be “subjective” assessment solicited from participants as well as the “objective” results (the time invested in record creation). Part of the plan is that there will be a 20-30 item list of resources to be cataloged, and most participants will do one regime or the other (not both) for a particular resource, with hopes that aggregating a large-ish number of results will provide a more reliable measure. I’m not sure the “timing test” is going to be particularly useful, given the number of uncontrollable variables likely to be introduced along the way—particularly in whatever technical environment is used—but I’m sympathetic with the notion that looking for a useful objective test in this situation is extremely challenging. It’s obvious to me that we’ll learn something from the objective and subjective portions of the testing, but whether what we learn will be clear enough to support either option for those who prefer evidence-based reality is quite another story.
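The aggregation plan can be sketched as a toy simulation (every number here is invented for illustration; the real test design specifies none of them). The point is only the statistical worry above: when per-cataloger noise is large relative to a modest difference between regimes, the aggregated means from a 20-30 item list may not separate cleanly.

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

RESOURCES = 25  # roughly the planned 20-30 item list

def catalog_time(regime, noise_minutes=10.0):
    """Hypothetical minutes to create one record; base times are invented."""
    base = {"AACR2/MARC21": 30.0, "RDA/MARC21": 33.0}[regime]
    return max(1.0, random.gauss(base, noise_minutes))

# Each resource is cataloged under one regime or the other, not both.
results = {"AACR2/MARC21": [], "RDA/MARC21": []}
for i in range(RESOURCES):
    regime = "AACR2/MARC21" if i % 2 == 0 else "RDA/MARC21"
    results[regime].append(catalog_time(regime))

for regime, times in results.items():
    print(regime, "mean minutes:", round(sum(times) / len(times), 1))
```

With per-record noise on the order of ten minutes and only a dozen or so records per regime, the observed means can land in either order from one run to the next; that is the “uncontrollable variables” worry in miniature.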
Given that the testing results are likely to be subject to various interpretations and caveats, it’s hard to imagine how any of the national libraries could justify a “no go” decision, or what that decision would actually entail, if one or more were inclined in that direction. One hears rumors, which naturally can’t be attributed, that one or two are so inclined, but it’s something I’m really curious about. Would a “no go” decision be something like “no go until 2011” or “we’re sticking with AACR2 and MARC21 until Hell freezes over,” and how long would that be supportable? Two years, five years, maybe? It all depends on how long and how messy one thinks the transition would be, and how many would be inclined to lag far behind the early adopters, waiting for some undefined tipping point. I tend to think that there will be enough early adopters, particularly in the open source development community, that by the time the “go”/“no go” decisions are made, the “go” decision will be almost inevitable. But I’m an optimist on this, as I’m told fairly frequently.
Of course, the timeline for testing depends on RDA Online being released on time (3rd quarter 2009), complete with vocabularies. I’m an optimist on that, too, which is a good thing, since some of it depends on my continuing to work relentlessly through some of my task list. More on that topic in other posts …