User input and the small steps towards creating a new editorial interface

By Dr K Faith Lawrence, Data Analyst, November 2023

This blog from Dr K Faith Lawrence introduces the key user groups involved when we developed a new system for managing catalogue data, and how input from those communities fed into our design process.

Faith is a Data Analyst who works with colleagues in the Catalogue, Taxonomy and Data department to support data transformation, especially as part of the upload system.

Project Omega

Project Omega kicked off at the end of 2019. At that point the primary goal of the project was ‘develop a working Proof of Concept for a new system for managing catalogue data, using modern technology’ (Internal Press Release). As the project progressed, it became clear that ‘catalogue data’ was broader than we had first envisioned and that there were wider improvements that could be made across the whole system – and thus the idea of the Pan-Archival Catalogue was born. However, even within that wider vision of the project, replacing the catalogue editorial management application is still a key consideration given the state of the existing system.

In line with Government Digital Service best practice, we took a user-led approach to designing the user interface and the editorial process. This blog will introduce the key user groups that we were working with and present a case study of how input from those communities fed into our design process for a replacement editorial system.

This blog is based on a presentation given at ARA2023 – Introducing a New Paradigm: Implementing New Approaches to Archival Description for our User Communities.

Terminology

While I have tried not to use technical terms in this blog, there are some occasions where it is necessary. To mitigate this and avoid any confusion, below is a list of some key terms and their definitions.

  • Series – Archival term for records of the same provenance that were created or used together. Formerly known as class.
  • Piece – The produce-able units (generally file or volume) within a series.
  • Item – Sub-divisions of a piece (e.g. a letter in a file, a photograph in an album).
  • Edit Set – A set of records which have been grouped together for the purpose of editing in the current editorial management system. We will refer to ‘Edit Set’ by default when referring to the existing system.
  • Work Set – The new name for an Edit Set (after one too many ‘edit Edit Set’ incidents). We will refer to ‘Work Set’ by default when talking about the proposed system.
  • PROCAT editorial – The current catalogue editorial system.
  • PET – The upload system that feeds into PROCAT editorial.
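
For readers who find code easier than archival terminology, the Series > Piece > Item hierarchy above can be pictured as a simple nested structure. This is a toy sketch for orientation only: the references are invented and the real catalogue data model is far richer.

```python
# A toy illustration of the Series > Piece > Item hierarchy defined above.
# The references ("AB 1" etc.) are invented; the real Pan-Archival Catalogue
# model is far richer than a nested dictionary.
catalogue = {
    "series": "AB 1",                # records of the same provenance
    "pieces": [                      # the produce-able units within a series
        {
            "piece": "AB 1/1",       # e.g. a file or volume
            "items": [               # sub-divisions of the piece
                "AB 1/1/1",          # e.g. a letter in the file
                "AB 1/1/2",
            ],
        },
    ],
}
```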

Putting the focus on the users

For the first part of the Omega project we were focused on the data model and the data transformation process (see Project Omega: Envisioning a new cataloguing system for The National Archives [YouTube] and Project Omega: An update on the new cataloguing system [YouTube]). However, while these initial stages were more data-led, the user was not forgotten – from identifying requirements to test the fit of the available data model, to starting to gather requirements and user stories, the user was never far from our thoughts. What we lacked was expertise in this area to match our expertise with data.

The government lays out best principles for the design of user services. These principles place the user at the heart of any process and say the following:

1. Start with user needs
Service design starts with identifying user needs. If you don’t know what the user needs are, you won’t build the right thing. Do research, analyse data, talk to users. Don’t make assumptions. Have empathy for users, and remember that what they ask for isn’t always what they need.

Guidance: Government Design Principles

Feeling that we weren’t able to fully gather and document the information we needed, we brought in a Service Designer and User Experience (UX) Developer for a short project to analyse existing processes and design prototype interfaces using the new model.

Key user communities

Our first task was to learn about the users of our system. We were able to identify three key groups and the Service Designer ran a series of interviews with representatives of each to research their needs and the way they used the current system. Below are the groups we identified and the profiles we created for them.

1. Transfer Team

The transfer team is part of the Government Services, Strategy and Engagement Department and is currently made up of 12 people. They are responsible for:

  • Publishing guidance and on-line training that enable departments to catalogue and prepare records, and to establish appropriate access conditions for these records;
  • Enabling departments to transfer their records in line with timescales set out in the Public Records Act 1958 and the transition to the 20-year-rule;
  • Intervening where necessary to help departments understand our requirements and keep on schedule;
  • Coordinating information between departments and the Advisory Council on National Records and Archives and its Secretariat;
  • Loading information about newly transferred records into Discovery; and
  • Preparing transferred records to be made accessible to the public at The National Archives, taking account of approved access conditions.

Their main uses of the current catalogue editorial system:

  • Transform spreadsheets of data to XML
  • Upload XML to editorial system
  • QA Data Checking
  • Accession new entries into the catalogue
  • Update accruing catalogue entries
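
The first two of those uses – turning spreadsheets of data into XML for upload – can be sketched in a few lines of Python. This is an illustrative sketch only: the column names and XML shape are invented for the example and do not reflect PET’s actual formats.

```python
# Hypothetical sketch of a spreadsheet-row-to-XML transform of the kind the
# Transfer team relies on. Column names and the XML structure are invented
# for illustration, not taken from PET.
import csv
import io
import xml.etree.ElementTree as ET

def rows_to_xml(csv_text: str) -> str:
    """Convert CSV rows describing pieces into a simple XML document."""
    root = ET.Element("Records")
    for row in csv.DictReader(io.StringIO(csv_text)):
        rec = ET.SubElement(root, "Record", level="Piece")
        for column, value in row.items():
            # One child element per spreadsheet column.
            ET.SubElement(rec, column).text = value
    return ET.tostring(root, encoding="unicode")

sample = "Reference,Title\nAB 1/1,First file\nAB 1/2,Second file\n"
xml_out = rows_to_xml(sample)
```

In the real workflow the generated XML would then be uploaded to the editorial system and QA-checked, as in the list above.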

Based on their interviews, our Service Designer produced these diagrams of Transfer Team needs and the workflow for accession of physical records (with actions and pain points).

2. Collections Expertise and Engagement (CEE)

The department is made up of 54 staff and volunteers split between three teams: ‘Modern Collections’ (Modern Britain, Community and Transport, Overseas and Defence & Visual Collections), MEMLAMP (Medieval, Early Modern, Legal & Maps and Plans), and ‘Strategic Operations and Volunteers’.

They are responsible for:

  • Expert knowledge of the records of The National Archives
  • Advice on how to access and interpret them

Main uses of the current catalogue editorial system:

  • Upload XML to editorial system
  • Correct / improve existing catalogue entries at piece and item level
  • QA Data Checking
  • Create new entries at piece and item level
  • Create relationships between entries
  • Download data for bulk correction
  • Upload data for bulk correction
  • Progress Edit Sets through editorial stages

Our main interaction with the CEE team revolves around catalogue enhancement projects. Based on their interviews, our Service Designer produced these diagrams of CEE team needs and the workflow for enhancement of the catalogue (with actions and pain points).

3. Cataloguing, Taxonomy and Data (CTD)

The department is currently made up of 14 people who are responsible for:

  • Delivering intellectual control, editorial development and daily management of our catalogue records, which is at the heart of virtually every activity which takes place at The National Archives
  • The National Archives’ descriptive and editorial standards for records in our custody. We have a quality control role in the accessioning of new government records as well as leading, supporting and participating in catalogue improvement projects.

Main uses of the current catalogue editorial system include everything mentioned for the previous two teams plus:

  • Correct / improve existing catalogue entries at all levels of hierarchy
  • Create new entries at all levels of hierarchy
  • Create and manage relationships (including ordering) between entries
  • Associate authorities with catalogue entries
  • Redact and open catalogue entries
  • Publish data to public system
  • Delete catalogue entries

Based on their interviews, our Service Designer produced these diagrams of Editorial Team needs and the workflow for the editorial process (with actions and pain points).

Differences and overlaps

If we compare the uses of the current editorial system (see this Venn diagram) we can see that, while the CTD team overlaps with the others, the needs of the Transfer and CEE teams are largely distinct. This is something that we will need to take into account as we move forward with our designs.

Key stakeholder communities

In addition to the key users, we also identified a number of key stakeholders who either contribute to the editorial process or make use of the data that the editorial process provides:

  • Archive Sector Leadership – Share authority file information with the catalogue editorial system
  • Collection Care – Take a copy of part of the catalogue and expand on it with information around physical preservation and restoration
  • Digital Archiving – Get identifiers for series and connect metadata to catalogue at record level
  • Digital Services – Index and publish the latest version of the catalogue to the public through the National Archives public web site
  • IT Operations – Support current catalogue management and publication system as well as supporting the document ordering system and other related systems
  • Web Archiving – Connect instances of websites to catalogue at series level.

From needs to iterative designs

Our initial foray into service design and user experience resulted in a greater understanding of our users and a number of design possibilities culminating in a prototype. Some testing of components of the prototype took place as part of the iterative design process.

As we moved forward it became clear that we needed to simplify our designs and aims for our initial development as we were moving to a more iterative approach (see Project Omega: First sight of the new cataloguing system). Our first two iterations encompassed a very linear workflow of [Log In] <-> [List] <-> [Record Details], first implementing the front end and then the supporting services. With our third iteration we took a big step forwards in the complexity of the workflow that we were creating and this brings risk.

Testing Risks

Lisa Bodell argues that ‘change cannot be put on people. The best way to instil change is to do it with them. Create it with them.’ (Why Companies Resist Change, Lisa Bodell). It is well known that people are generally resistant to change, and the bigger the change, the bigger the risk of rejection. This is one of the reasons that it is important to design with your users rather than impose changes on them.

There is another side to this – since users are resistant to change, if something isn’t actively causing a problem then they are unlikely to challenge it, especially in comparison to things that are actively causing them problems. This means that it can be hard to innovate rather than just reproduce, because innovation means moving away from the users’ stated wants. Even if we focus on needs rather than wants, the key for the user is the way that we implement that need within the interface.

If we wish to innovate – something that we would need to do to make significant improvements – it means that we might have to actively go against what the users ask for. What is important is not to completely remove risk but to work with it and understand how to mitigate it.

Our way of mitigating risk is through testing. So we can diverge from what the users have said they want – as long as we test it and can show that, when offered an alternative, the new option is viable and the improvements offset any negatives.

While still relatively small, we expanded our testing for Iteration 3 with assistance from User Researchers in other teams who were willing to consult with us. As we saw above, the user needs from the three key teams overlapped but also had significant differences. For this reason it was important that we had volunteer testers from each of the teams, so that we could get a variety of inputs. We also made an effort to have a mix of long-term and newer users, as newer users are less likely to be as embedded in the existing process. This series of tests was the first time that we were able to bring in a National Archives volunteer to be part of the testing, as well as members of The National Archives’ staff.

Test sessions       CTD           CEE   CEE Volunteer   Transfer
Test sessions 1     2 (+ pilot)   2     0               2
Test sessions 2     1             1     0               0
Test sessions 3     3 (+ pilot)   3     1               2

Another change that we brought in with this increased testing was in the observers. It is considered good practice for developers to regularly sit in on user testing sessions. This both allows the developers to see how the users interact with the system and lets them contribute insights from a perspective that non-developers are unlikely to share.

As we didn’t have a trained user researcher to conduct the sessions we ran them with the following structure:

Test Sessions 1 & 2: Service Owner ran the session; one Developer and the Product Owner took notes.

Test Sessions 3: Service Owner ran the session; Product Owner took notes; three Developers each took notes at two sessions; and the Delivery Manager, Project Head and a Designer from Digital Archiving each took notes at one session.

In the third set of sessions we expanded the observers to include people from another team (Digital Archiving) and from the project governance (Delivery Manager and Project Head). As with the developers, this both increased the variety of perspectives on the user responses and allowed us to share knowledge of the project in a different way.

Case study in design and challenging the user process

In this section we will examine how the testing process, and explicitly testing potential risks, influenced our design choices for the third iteration. One of the key processes that is part of using the catalogue management system is gathering together the records that need to be worked on so that they can be edited and progressed through the editorial process.

Current process

The current process is shown in this diagram. Screencaps of the current editorial management system (top) are paired with the available actions that can be taken (below). The actions relevant to this process are shown in green.

These actions can be condensed into a number of steps:

  1. Create empty Edit Set (screens 1-3)
  2. Go to Edit Set (screens 3-4)
  3. Select records to add to Edit Set (screens 5-8)
  4. Populated Edit Set (screen 8)
  5. Add more records if needed (repeat screens 5-8)

If we consider the possible journeys and how they interact with the user needs we identified above, we can relate the journeys to the teams most likely to be carrying out the tasks.

User journeys by team

  • Add existing records into a new Work Set – CTD, CEE
  • Add existing records into an existing Work Set – CTD, CEE
  • Create new records in an existing Work Set – CTD, CEE
  • Create new records in a new Work Set – CTD, CEE
  • Upload new data from a spreadsheet into a new Work Set – Transfer, CTD
  • Upload new data from a spreadsheet into an existing Work Set – Transfer, CTD [rare]
  • Upload corrections or expanded information from a spreadsheet into a new Work Set – CTD, CEE
  • Upload corrections or expanded information from a spreadsheet into an existing Work Set – CTD, CEE [rare]

These journeys fall into three types – working with existing records, creating new records by upload, and creating new records directly. Of these three, working with existing records is the simplest. Since we want to implement the smallest possible slice of end-to-end functionality for the third iteration, our focus is on the first two of these.

Proposed new process – Version 1

The hypothesis put forward was that by changing the order of operations we could simplify the process and reduce the number of steps:

  1. Select records to add to Work Set
  2. Define Work Set details
  3. Populated Work Set
  4. Add more records if needed
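
The reordering can be sketched as code to make the comparison concrete: both flows end in the same populated container, but the proposed flow gathers the records before the container exists. The function and field names here are invented for illustration, not taken from either system.

```python
# Hypothetical sketch comparing the two workflows; names and structures are
# invented for illustration only.

def current_flow(record_ids, name):
    """Edit Set first, then records (steps 1-5 of the existing process)."""
    edit_set = {"name": name, "records": []}   # 1. create empty Edit Set
    # 2. go to the Edit Set (a navigation step, no data change)
    for rid in record_ids:                     # 3. select records to add
        edit_set["records"].append(rid)        # 4. populated Edit Set
    return edit_set                            # 5. repeat 3-4 as needed

def proposed_flow(record_ids, name):
    """Records first, then Work Set details (proposed steps 1-4)."""
    selection = list(record_ids)               # 1. select records
    work_set = {"name": name,                  # 2. define Work Set details
                "records": selection}          # 3. populated Work Set
    return work_set                            # 4. add more if needed

# Same end state, fewer steps for the user.
assert current_flow(["AB 1/1"], "Demo") == proposed_flow(["AB 1/1"], "Demo")
```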

Reducing the steps should make the process simpler and easier for the user – an improvement on the existing process – but this is not a change that the users had requested, and it would represent a significant change to the understood workflow.

This diagram shows our initial envisioning of the pages that would be needed to support the new workflow and how the users would move between pages following the supported user journeys.

Key changes from current system

  • No longer need to create an empty Edit Set first to add records to

User journeys supported

  • Create Work Set
  • Amend Work Set
  • Amend Work Set details
  • View Work Set
  • Edit Record details

Risks

  • Will the new order of events work for users?
  • Do users need an empty Edit Set?

We developed a prototype using the GOV.UK toolkit, which would allow us to test the design. We identified two pages as being key to the new workflow.

  1. The user dashboard is the first page that the user is taken to when they log in (see inset diagram on this dashboard image). It has a list of their Work Sets. Since this version of the prototype was developed before the name change, they are labelled as Edit Sets. This page contains the start point of the user journey, so it must be easily identifiable by the user. The start point is the same whether the user wants to find records to add to a new Work Set, creating one in the process, or to an existing Work Set.
  2. The search results page allows the user to select the records that they wish to include, and provides a dropdown menu (indicated on this search results image) from which they can choose the Work Set they want to add the records to. The dropdown offers ‘New Edit Set’ as the default option, and selecting this takes the user to the Work Set details page to fill in the values for a new Work Set.

Feedback

The initial response was largely negative. Some of the confusion was simply due to the process not being what the tester was used to, but there were issues with users missing, or nearly missing, the dropdown box and having to be prompted to select the desired Work Set from the list. There was also confusion over the naming of the options, with one user understanding ‘New Edit Set’ to be the name of an existing Work Set rather than the option to create a new Work Set.

However, implementation aside, feelings on the new process (record selection first) were mixed but more balanced, with some users liking, and even preferring, the new way of working. A lot of the negative response related to initial confusion about the new process, which the testers generally overcame very quickly. While some users did prefer the old process, this suggested that the problem might lie with the signposting rather than the process itself, and that continued exploration of this avenue would be worthwhile.

When we asked testers about no longer needing to create an empty Work Set as part of the Work Set creation process, opinion was more divided: some people were happy to lose the additional step, while others felt that an empty Edit Set was an important part of their process (see responses to ‘Would you want to create an empty Edit Set?’). However, we noted that most of the use cases mentioned as requiring an empty Work Set involved other user journeys – uploading data from spreadsheets and creating new records.

Proposed new process – Version 2

Based on the feedback we received, we made a number of changes and ran a new set of tests to see what difference those changes made, which you can see here.

Key changes

  • Addition of creation of empty Work Set option
  • Separated options on dashboard for clarity
  • Addition of more example data to better represent how page might be expected to look

New user journeys supported

  • Create Empty Work Set

The new dashboard and search results pages kept the basic functionality of the original design but made a few content changes to try and better signpost the workflows for the user and added the option of creating an empty Work Set back in.

Feedback

Feedback from users repeated concerns that the dropdown choice would be easy to miss, and this was borne out by user observation. There was also feedback about what the default option on the dropdown should be and what options should be available. While we were only able to test this version with two people, their responses reinforced the concerns raised in the earlier testing.

Proposed new process – Version 3

It was clear that something wasn’t quite working, so Version 3 kept the new process but offered an overhaul of how it would be presented. The original design had tried to reduce the number of pages needed to a minimum, in order to reduce the work for the developers and with the intention that it would also simplify the system for the users. However, what the testing showed us was that, rather than simplifying things, the workflow put a heavier cognitive load on the users because of the choice points within the journey.

The new version of the workflows separated the journeys completely. These images show the new proposed journeys, ‘Create new Work Set’ and ‘Expand existing Work Set’.

As part of the separation, we moved the start point of the user journey for adding records to an existing Work Set from the Dashboard to the Work Set page, since every user we tested looked on the Work Set page for the option, no matter which page they were on when asked to perform the task. This was a clear indicator that we had been actively working against the users’ mental model of the process rather than with it.

We also changed the way that creators were added to records, as the original version just offered a dropdown selection. We knew this was not fit for purpose, but had not had time to develop an alternative at the point we did the original Iteration 3 testing. This led to the new system of pages shown here.

We did make two concessions to conciseness: removing the separate search page and instead putting the search bar on the Dashboard and Work Set pages respectively (see search results pages below), and ruling the creation of an empty Work Set out of scope for this iteration. We recognised that for some users the creation of new Records was very important, and that the current process for that required an empty Work Set in which the new Records would be created. However, as we were not supporting that user journey during this particular iteration, it wasn’t clear whether that process would remain the same or change in the same way that the Work Set creation process was changing. It therefore made more sense to table that question until a later time when we could address it specifically.

Also out of scope was the uploading of records. During the testing we had noticed the confusion that this caused some users. Members of the Transfer team especially expected to interact with the editorial management system primarily through this process which aligned with the research into their team usage (see 1. Transfer Team and User Journeys by Team). Not having this option available caused significant disconnect for a number of users outside CTD, the prime users of the workflows involving editing current records. To mitigate the confusion we added a dummy link to the future upload journey (see Dashboard below) to make it clear that it was a separate path, albeit one that was not yet available to them. We hoped this would help the user orientate themselves within the system more easily.

Key changes

  • Removed separate search page
  • Two separate start points rather than choice point to lower cognitive load on user
  • Empty Work Set ruled out of scope for this iteration
  • Dummy link to show that ‘upload’ is a different user journey

User journeys supported

  • Create Work Set
  • Amend Work Set
  • Amend Work Set details
  • View Work Set
  • Edit Record details
  • Add Creator to Record
  • Remove Creator from Record

The dashboard and the Work Set page now both contained search bars which acted as the start point for their respective workflows.

We had concerns that the search pages on the different workflows might be easy to confuse (an issue that had previously arisen in the feedback between the search results page and the Work Set page). Labelling on the pages was used to differentiate between them and ensure the user was always aware of which path they were on.

Findings

We found that the users were mostly open to reversing the sequence, though the initial learning curve was quite steep as users found the start points hard to identify. The combined search added to this initial stumbling block, and users either did not see or did not understand the labelling used to signpost the search bar as the start point. Seeing the search results was less confusing coming from a search box than being taken to a search from the initial link, but it was hard for users to leave behind their expectations from the current system, leaving them very unsure whether they were doing the right thing. Further work needs to be done here.

However, once users had completed the new workflow once they picked it up very quickly going forwards and found it very intuitive. We were able to confirm this by swapping the order of the tasks (create a new Work Set vs add Records to an existing Work Set) for half the sessions: it was invariably the first task where the tester had difficulties, while the second was much smoother, rather than one task being favoured over the other.

The testing confirmed our hypothesis that the ‘Add to existing Work Set’ start point is now where users expect to find it, and that the separated journeys were much easier and less confusing for the testers, as the choice they made was upfront and very clearly distinct.

We had noticed from the first two rounds of testing that the journey was largely specific to CTD users, which meant that users from other teams needed better support to situate them and explain the task during the session, as they were being given a more unfamiliar task. Improving the testing script allowed us to better contextualise users on the journey we were testing, and this was helped by the addition of the dummy upload link to differentiate that action from what was being asked of the testers. In doing this we recognised that many of the Transfer team volunteers wanted to upload new Records, and that the CEE team volunteers were also interested in being able to create new Records, both of which are out of scope for Iteration 3.

Equally, we recognise that separating the search results and Work Set details pages has created more work for the developers to implement and maintain, but we feel that the users’ comfort with the system should be the primary driver.

Risk Review

If we consider the risks that we highlighted at the beginning of the Iteration 3 testing process, we can see that the new order of events does work for most users, and advantages were identified in selecting Records before creating the grouping container for them. However, the cognitive leap from the current process to the new process is not yet supported well enough in the current design.

The risk of users rejecting the system as not meeting their needs without the ability to create an empty Work Set has not been fully explored, as this has been postponed until the workflow around new Record creation can be examined more fully, at which point either the functionality will be added back in or a new process will be developed and supported.

Conclusion

The case study I have detailed here is just one of the strands that we investigated during the Iteration 3 user testing. Others, such as being able to edit Work Set details (functionality not currently available) and the Record creator pages, were more clear-cut successes, but all provided valuable insights. Having a solid understanding of the user groups helped us understand where our testers were coming from and how their different experiences with the current system might influence their perceptions of the new system they were being asked to evaluate.

One risk that I have not previously mentioned, but of which we have been aware during the design and testing of the first three iterations, is whether the GOV.UK toolkit can meet our needs, since it was intended for much more transactional systems than the editorial management system represents. The initial designs discounted the GOV.UK toolkit as they were much more complex than the toolkit is designed to support, and we would have needed to do a lot of work to extend the toolkit to meet our needs. In the long term this may still be the case. However, in the short term our focus is on a much simpler and more reduced editorial management application, and so far the toolkit, thanks to some extensions developed by other parts of government (most notably HMRC and the Department of Justice), has met our needs with minimal extension on our part. A number of testers also commented favourably on the use of the toolkit, recognising it from other government pages that they had used. Whether that is a win for us or for the GOV.UK team I don’t know, but I am happy to take any wins I can get, as the design challenge only gets more complex from here.