CIG Member Poll for Group Name Change

At a meeting on 5th June, the Committee discussed changing the name of the Group.

We think that a new name would benefit the membership because it would:

  • Acknowledge the intrinsic skills of descriptive & subject cataloguing, classification & indexing but also highlight the work that CIG members already do in creating, disseminating and enriching metadata using various content standards;
  • Emphasise the link between all of these activities and the search, discovery & access experience;
  • Reinforce the relevance of the Group across the information profession & widen our appeal so that we can increase our membership and offer more support across our technical areas of expertise.

Please go here and help us make this choice.

Helen Williams and Clare Hudson, from LSE Library Metadata Team, were two of the attendees at CIG’s June event with Terry Reese talking about MarcEdit and Metadata Trends. Here they share their reflections.

Terry Reese is a name well known to many of us, so it was exciting to have an opportunity to hear him speak in person about MarcEdit developments and the future of metadata. We use MarcEdit regularly at LSE, but often as part of well-established workflows. Terry regularly develops new features and functions for MarcEdit, so hearing about some of the recent developments was a useful opportunity to think about other ways we could use it.

Some of the recent developments Terry described included:

  • A wizard to guide the user through transforming XML data without having to know how to write XSLT.
  • A tool for moving data to and from OpenRefine. This is not as advanced as the clustering available in OpenRefine, but it does remove the need to transfer data out of MarcEdit. At LSE we think this might be useful for some subject heading clean-up work we’d like to do.
  • The ability to set MarcEdit to watch a folder. This is also a feature we’ll be investigating, as it looks like something we could use to further automate our processes for loading ebook records.
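MarcEdit provides the folder watching itself, but the underlying idea, polling a directory and processing any new files that appear, can be sketched in a few lines of Python. The folder names and the process_file placeholder below are hypothetical, not part of MarcEdit:

```python
import os
import shutil

def process_file(path):
    """Placeholder for the real work, e.g. running a saved editing task on the file."""
    print(f"Processing {path}")

def watch_once(watch_dir, done_dir):
    """Process every .mrc file found in watch_dir, then move it to done_dir."""
    os.makedirs(done_dir, exist_ok=True)
    handled = []
    for name in sorted(os.listdir(watch_dir)):
        if name.endswith(".mrc"):
            path = os.path.join(watch_dir, name)
            process_file(path)
            # Move the file out of the watched folder so it is only processed once
            shutil.move(path, os.path.join(done_dir, name))
            handled.append(name)
    return handled
```

A scheduler, or a simple loop with time.sleep, would call watch_once periodically; a production version would also need logging and error handling.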

There was plenty of opportunity for attendees to ask questions and these ranged over character encoding, regular expressions and undoing a change after saving it as well as some specific things that attendees were trying to accomplish.

Terry talked about his future plans for MarcEdit, which include an XML editor and updated tutorials and documentation, and then, after a break, about his views on the future of library metadata.

Back in 2002 Roy Tennant famously declared in Library Journal that MARC must die (reprinted here). Helen was about 3 weeks into her first professional post at the time, and yet nearly 20 years on, metadata teams the world over are still working with MARC. So it was fascinating to hear Terry describe the future of cataloguing and metadata as a series of small steps forward rather than a giant leap.

With the advent of Linked Data, cataloguers have envisaged a new distributed model of cataloguing: rather than creating individual records, we link out to external sources of information for various pieces of data, building up information about a resource in a different way. But this requires a new model of working, with open infrastructures and commitments from individual libraries and organisations to create and maintain trustworthy namespaces for everyone else in the ecosystem. Developing this infrastructure is a challenge that will need the support of the profession as a whole.

The prevalence of MARC is also one of the challenges of moving to a new model. With tools, systems and workflows all designed around MARC, building a new model is a huge task. This is being attempted with the development of BIBFRAME, but any transition requires significant investment in skills and systems, and looks likely to create a fractured library community where some have the resources to engage with the new model and others do not.

So what can cataloguers do, and what are we doing in the Metadata team at LSE in terms of small steps forward given we operate in a time of continuous change? Firstly, the value of metadata needs to be understood by the wider profession in terms of the role it plays in resource discovery, and it’s our job to shout about that. We’ve invited any of our colleagues to come and see us if they’re not sure what we do – all of us in the team would enjoy having a captive audience on the value of metadata. We also have a team ‘purpose tree’ which depicts the mission, vision and strategy of our team, going on to link the goals, actions and roles in our team to the overall Library strategy. This is a document we’ve shared Library-wide so that colleagues can understand the centrality of metadata to the work of the Library.


Secondly, we can learn from external expertise, so we keep up with Twitter and blogs, and make time to watch webinars. Earlier in the year, for example, we saw that HKUST had created a feature called Knowledge Cards, accessed from their Primo installation, which uses Wikidata to link users to contextual information. Other priorities mean we’ve not experimented with anything like this ourselves so far, but Helen meets regularly with our Online Services and Systems manager so that we can discuss these kinds of developments that we’re seeing at other institutions.

This leads to our final point, for now: we need to increase our Linked Data knowledge. There are so many places we could start, but Wikidata is increasingly being discussed in the cataloguing community as a source of Linked Data which libraries can both use and contribute to, so it has caught our attention. We’re at the very early stages of looking at this: engaging with some OCLC material and taking part in a Wikipedia workshop that our Digital Library manager held earlier in the year. In addition, we’re thinking about making more use of URIs in name and subject fields in MARC records.
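As a small illustration of what that last step could look like: in MarcEdit’s mnemonic .mrk format each field is a single text line, so appending a $0 identifier subfield to a subject heading can be sketched as a plain string operation. The helper function and the id.loc.gov identifier shown are illustrative, not a statement of LSE practice:

```python
def add_uri(field_line, uri):
    """Append a $0 identifier subfield to a .mrk field line, unless one is already present."""
    if "$0" in field_line:
        return field_line  # keep any existing identifier
    return f"{field_line}$0{uri}"

# A 650 field in mnemonic format; "\0" means blank first indicator, second indicator 0 (LCSH).
heading = "=650  \\0$aMetadata."
linked = add_uri(heading, "http://id.loc.gov/authorities/subjects/sh97000459")
```

A real workflow would of course run this across whole files of records, and would want to look the URIs up against the authority service rather than supply them by hand.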

It was a really interesting event which gave us plenty to think about and some ideas for steps we can take to move forward.

Cataloguing and metadata skills in the publishing world: a visit to the Cambridge University Press offices

A blog post from Kim Taylor, cataloguer, who joined us at the visit to the Cambridge University Press offices:


On the 30th of April, I joined a small, friendly group of cataloguers and metadata professionals in a visit to the Cambridge University Press (CUP) offices. The day-long event was brilliantly organised and hosted by CIG Social Media Manager Concetta La Spada, who is the Metadata Specialist and Senior Library Data Analyst at CUP. As attendees, we represented a healthy mix of cataloguing experiences and perspectives, coming from academic, special and public libraries, as well as the vendor and freelancing side of the spectrum. Hence, none of us required convincing of the value of good metadata in promoting discovery of resources. Improved metadata equals improved discoverability, right? While obvious to cataloguers and metadata experts everywhere (certainly those reading this blog), this dictum has not been so readily embraced by the publishing world – that is, until recently, with institutions such as Cambridge University Press leading the way.


The visit began with a morning tour of the Press museum, where, amongst other exhibits, we were treated to the fascinating story of the 11th edition of the Encyclopaedia Britannica, published by CUP, that accompanied, and provided intellectual sustenance for, Sir Ernest Shackleton and his men on their ill-fated, though heroic, Antarctic expedition. While the documents and artefacts on display each held significance in their own right, including an assortment of steel punches designed by John Baskerville, of Baskerville typeface fame, and a folio edition of the King James Bible (1638), I was struck by the fact that they so pointedly traced the development of the world’s oldest publisher, whose first book was published well over four centuries ago (1584, to be precise).


As enjoyable as it was to browse items from CUP’s collections, the afternoon sessions provided some of the most enlightening moments of the day. Presented by various representatives from CUP’s marketing and content operations, these informative and informal talks offered unique insight into the world of publishing and, more importantly, the expanding role of bibliographic metadata within that world, particularly for e-resources. We learned, too, that Cambridge was one of the earliest academic publishers to provide ‘in-house’ cataloguing to its customers (and for free, it should be mentioned), a move that more publishers are now following. Specifically, the provision of accurate, comprehensive metadata to customers is now viewed as an integral component of CUP’s products and services.

The presentations by CUP staff covered a range of topics: from the challenges of compatibility between CUP’s many bespoke services and customer systems (some of which are still very print-centric) to the resistance on the part of some authors (and even some customers) to accepting the critical role of metadata in supporting access to and use of resources. Cambridge, like other publishers, has begun to place more onus on its authors to supply keywords and indexing of the works they submit for publication. This action underscores the growing commitment of CUP to facilitate the essential service provided by people like Concetta. It also helps to educate those librarians who might still need to be convinced of the value of quality metadata in supporting their work, particularly with respect to the ever-increasing challenges – and opportunities – of the linked data environment.

The work Concetta performs has helped to elevate the Press beyond its stature as one of the premier publishers of academic content to that of a respected provider of comprehensive bibliographic metadata, confirmed by the decidedly positive reactions of customers to CUP’s efforts. In fact, the response further encouraged CUP to undertake retrospective enhancement of its existing bibliographic records, spanning over 20,000 titles.

The key takeaway of the day (apart from the much-appreciated reusable thermal bottle) was the unqualified commitment on the part of CUP to invest in the cataloguing talent so crucial to this still relatively new initiative on the part of publishers: to provide not just published works, but quality metadata to promote the discovery and use of those works. As such, CUP, and publishers who follow suit, now find themselves in pursuit of the knowledge and skills offered by professional cataloguers and metadata experts, which is good news for the library and information profession overall and, not least, the employment prospects of those seeking work within it.


Terry Reese: MarcEdit & Metadata trends

Thursday 6th June 2019, 1.30 – 4.30 pm

An opportunity to hear Terry Reese, the internationally acclaimed developer of MarcEdit, speak about the latest functionality of the software and likely future metadata trends.


1 pm – Registration
1.30 – 2.15 pm – Latest MarcEdit developments
2.15 – 3.00 pm – Audience Question Time
3.00 – 3.30 pm – Refreshment Break
3.30 – 4.30 pm – The Future of Cataloguing & Metadata

Book here

Library Juice Academy course on MarcEdit: a report by Laura Cagnazzo

Why I wanted to take this course

I had been keeping an eye on the courses offered by Library Juice Academy for a while, but I had not been successful in obtaining financial support from my employer in order to enrol on one. That is why, as soon as I saw that the Cataloguing & Indexing Group was offering a sponsored place to attend the MarcEdit course, I decided to grab this opportunity! And luckily, as you can guess, my application was successful.

My current post at Abertay University involves cataloguing. I am particularly interested in metadata and I advocate for its key role in enabling discoverability. I had often heard about the great potential of MarcEdit in terms of improving metadata quality, enabling bulk editing of catalogue records. I wanted to attend the course mainly for the following three reasons:

  1. To acquire familiarity with this powerful tool and demonstrate to my colleagues that improving our library catalogue need not be time-consuming and complex when using MarcEdit.
  2. To address the poor quality of some of our older records, which I suspect hinders the discoverability of the collections; due to lack of time and staff, there is no plan in place for a retrospective cataloguing project.
  3. To use MarcEdit to enhance records in order to share bibliographic information through services such as the National Bibliographic Knowledgebase.


Structure and content

The course was hosted on a Virtual Learning Environment (Moodle) and lasted four weeks.

Each week included reading material and links to further resources, a group discussion, an area to post questions, and an exercise. Missing the deadlines for contributing to the group discussions meant losing points towards the final result, as I learned after missing the cut-off date for the first week’s discussion! However, you could easily make up for it by responding to the contributions of the other participants. Furthermore, the grade required to successfully complete the course was “Pass”, and the instructor very kindly provided answer keys for each exercise, which were providential when I got stuck on a couple of occasions! There was also some flexibility in submitting the weekly assignments, which allowed participants to fit the exercises into their weekly schedule.

The key themes of each week are listed below:

  • Week 1: Introduction to MarcEdit & Basic Editing Functions
  • Week 2: Enhancing & Batch Processing Records
  • Week 3: Building MARC
  • Week 4: Regular Expressions in MarcEdit – by far the toughest and my favourite lesson!


The course involved lots of hands-on work, which is the best way to learn. The instructor, who was very responsive and helpful, provided useful feedback, tips and suggestions on how to tackle issues that course participants were experiencing at their institutions. Instructions were accompanied by screenshots, which made commands much easier to understand and reproduce. The workload was reasonable and I really feel that there was a perfect balance between theory and practice. Unlike many other training courses I have completed, this time I felt that, by the end of the course, I was confident enough to use the tool autonomously.

Learning outcomes

A few examples of how to employ MarcEdit that emerged from the group discussions were:

  • Getting rid of non-LC subject vocabulary terms
  • Generating call numbers automatically
  • Setting up task lists to clean up records obtained from vendors
  • Removing local fields
  • Changing field 260 to 264 with second indicator 1, or field 440 to 490, etc.
  • RDA enhancement of records
  • Switching between lower and upper case
  • Editing fields, sub-fields and punctuation
  • And much more!
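Taking one example from that list: in the mnemonic .mrk format, converting a 260 to a 264 with second indicator 1 (the RDA publication statement) is essentially a find-and-replace, which could be sketched with a regular expression like this. The sample record content is invented, and a real conversion would also need to review the field’s punctuation:

```python
import re

def convert_260_to_264(mrk_text):
    """Rewrite =260 lines as =264, keeping the first indicator and setting the second to 1."""
    # .mrk lines look like: =260  \\$aPlace :$bPublisher,$cDate. ("\" is a blank indicator)
    return re.sub(r"^=260  (.).", r"=264  \g<1>1", mrk_text, flags=re.MULTILINE)

record = "=245  10$aAn example title.\n=260  \\\\$aLondon :$bExample Press,$c2019.\n"
converted = convert_260_to_264(record)
```

MarcEdit’s own Replace function accepts much the same pattern directly, without any scripting; the point is just that once you can read the mnemonic format, these bulk edits become ordinary text manipulation.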

A key takeaway from the course: MarcEdit is very much based on trial and error and not only when you are a beginner. The range of operations that you can carry out with MarcEdit is astonishing and the more you learn, the more you realise this. However, it is essential to test each command and verify that it does what you need it to do, since it is easy to get confused or distracted. MarcEdit allows you to undo the last major edit carried out, but I wouldn’t personally risk spoiling a big batch of records without having a backup.



The Systems Librarian and I are currently looking into using MarcEdit to edit a set of catalogue records which contain erroneous information in their 008 field, causing display issues (books showing up as print in the discovery layer rather than electronic). I am confident about how to make the changes, but we will need to look at a way of integrating MarcEdit with Alma. Hopefully, this will be the perfect chance to test the benefits that MarcEdit can bring to our collection, and the start of a long and happy relationship!
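For context, the 008 is a fixed-length field of 40 characters whose bytes mean different things by position; in the book configuration, position 23 is the form of item, and ‘o’ flags an online resource. The kind of correction involved can be sketched like this (the sample 008 string is illustrative, and the exact byte to fix depends on the record format):

```python
def set_form_of_item(field_008, code="o"):
    """Return a copy of a book-format 008 with position 23 (form of item) set to code."""
    if len(field_008) != 40:
        raise ValueError("a MARC 008 field should be 40 characters long")
    return field_008[:23] + code + field_008[24:]

# Illustrative book 008: date entered, 's' single date, 2019, place 'enk', rest blank.
f008 = "190101s2019    enk" + " " * 22
fixed = set_form_of_item(f008)
```

In MarcEdit itself this is what the Edit Field Data tools and position-based replacements are for; the sketch just shows why fixed-field edits are safe to automate: every record is changed at exactly the same offset.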

Now that I have completed the course, I can confirm that MarcEdit is an extremely helpful tool, and I believe all cataloguers should be familiar with it. I would suggest that library schools teach how to use it, if feasible (it is a free tool, so I do not see why they couldn’t!).