Connecting AI Models & Users to Yale Library Data
Michael Appleby, Chelsea Fitzgerald
Yale University Library
Yale University Library has initiated an ongoing project to make the library’s data available to AI via an emerging standard called the Model Context Protocol, or MCP. Anthropic, which originated the specification, introduced MCP as “a USB-C port for AI” that “provides a standardized way to connect AI models to different data sources and tools.” Major AI vendors, including OpenAI and Google, have announced that they intend to support the protocol.
The creation of MCP is tied to the emergence of the broader trend of agentic AI. In the agentic approach, tools are described to Large Language Models (LLMs), which then autonomously select which tools to employ to answer a user’s prompt or question. Given tools to query catalog data, for instance, the LLM would have the option of performing a search and using the resulting bibliographic metadata in its response to the user. While a search tool could be implemented over any technology, semantic search is a commonly used approach in conjunction with LLMs. Techniques such as Retrieval Augmented Generation (RAG) often rely on semantic search to retrieve relevant text in response to a user's query.
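To make the mechanics concrete, the sketch below shows how a catalog search tool might be registered with an MCP server using the protocol's official Python SDK. The endpoint, response shape, and function name are illustrative placeholders, not Yale's actual implementation; the docstring is the description the LLM sees when deciding whether to invoke the tool.

```python
# Minimal sketch of an MCP server exposing a catalog search tool.
# Assumes the official Python SDK (pip install mcp); the endpoint
# and response shape are hypothetical placeholders.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("library-catalog")

@mcp.tool()
def search_catalog(query: str, limit: int = 5) -> list[dict]:
    """Search the library catalog and return bibliographic metadata.

    This docstring is what the LLM reads when deciding whether to
    call the tool for a given user prompt.
    """
    response = httpx.get(
        "https://catalog.example.edu/api/search",  # hypothetical endpoint
        params={"q": query, "limit": limit},
    )
    response.raise_for_status()
    return response.json()["records"]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio to MCP-aware clients
```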
In this presentation, Michael Appleby will discuss implementing MCP and semantic search to expose library data in a user’s preferred environment, and Chelsea Fitzgerald will discuss how user research has informed and will continue to inform the development of this human-centered method of accessing the catalog.
User research through the summer and fall of 2025 will generate findings to answer the following questions and inform iterative design: How might the use of authoritative library metadata enhance the utility of LLMs for students and researchers? Does this emerging standard, MCP, allow library users to transition more seamlessly from a general discussion of a subject with the LLM to the exploration of bibliographic, archival, and digital materials available from the library, and, if so, do users feel this enhances their discovery experience? Why or why not? How might users employ MCP-aware tools to complete their academic work and discover library resources?
Stakeholder and user interviews are being conducted to answer these research questions and build human-centered AI. For testing, Claude has been chosen as the MCP-aware tool, and laptops have been configured accordingly and made available for initial in-person moderated testing. Library staff and library users have been recruited to interact with this MCP-aware tool. Think-aloud task analysis and structured interviews will be conducted to ascertain internal stakeholder and user goals; barriers or pain points; and suggested improvements to inform iterative design.
A second round of user research, scheduled for early fall 2025, will involve a smaller group of library users (n = 5–10) participating in a diary study, in which users will keep a log, at regular intervals, of their interactions with an MCP-aware tool that connects them to the library's data. The diary study will generate richer qualitative insights about how users leverage this approach to discover library resources, as well as their expectations, goals, and strategies related to discovery.
The limitations and strengths of this project reflect a rapidly evolving field. We are assuming that major AI vendors will increase support for MCP over time, and that the process of configuring MCP tools will be simplified.
Autocat Cataloguing Assistant
Yves Maurer
National Library of Luxembourg, Luxembourg
Overview
Even with new forms of interacting with library materials, such as full-text search, semantic search, or chatbots, the library catalogue with correct metadata remains the main entry point for library users. The National Library of Luxembourg (BnL) and its partner libraries in the bibnet.lu library network maintain a collective catalogue based on MARC21, RDA, Rameau, and Dewey that allows their users to find and reserve books and other documents.
Since 2009, the BnL has had the legal deposit mandate for publications made available digitally. While collecting those publications has been made more efficient and comprehensive over time, they have not been fully catalogued and hence are findable neither in the collective catalogue nor in the national bibliography. The accelerating growth of digital publishing means that any attempt to process them in the traditional cataloguing workflow is unmanageable. This is why the BnL is developing a cataloguing assistant, with the aim of automating metadata creation as far as possible.
Related work
Assisted cataloguing is an evolving field, and multiple actors are working on solutions. An important first step was taken by the National Library of Finland's Annif (Suominen, 2022), which has produced subject indexing since 2017. More recent efforts include large library system vendors such as Ex Libris (Ex Libris, 2024) and national libraries such as the National Library of Latvia with its ReCataloguer (Bolšteins, 2025) or the National Library of Belgium with its PowerApps approach (Lowagie, 2025). Even plain, zero-shot chatbot queries can already produce decent MARC21, and it is an open question to what extent the big AI platforms can and will integrate cataloguing as a tool-use behaviour for their chatbots. This capability is being accelerated by the wider adoption of standards like the Model Context Protocol (Anthropic, 2024), which may make it possible for chatbots to efficiently query existing bibliographic databases, do copy cataloguing, or learn from examples.
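As a rough illustration of this zero-shot capability, a cataloguing request can be as simple as the following sketch. The prompt, model choice, and sample input are illustrative only and do not reflect the BnL's production setup.

```python
# Sketch of a plain zero-shot request for MARC21 cataloguing, using
# the OpenAI Python client. Prompt and model are illustrative, not
# the BnL's production configuration; the input is placeholder text.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

title_page_text = """Example Title : a novel / by Jane Doe.
Example Press, Luxembourg, 2023."""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a cataloguer. Output a MARC21 record "
                    "in plain text, one field per line, following RDA."},
        {"role": "user", "content": title_page_text},
    ],
)
print(response.choices[0].message.content)
```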
Presenting Autocat
The presentation will showcase the approach taken by the BnL to produce a working solution as a collaboration between the IT department, the heritage department responsible for the content, and the cataloguing experts in the library. The specificities of the cataloguing rules for the bibnet.lu library network and the multilingual nature of the collection are important cornerstones of this project. Indeed, documents are for the most part in German, French, English, or Luxembourgish, but many are also in other languages used by the international community in the country. Most cataloguing tools are monolingual, which is not workable here. The use case will be explained, followed by a short discussion of the different approaches that have been tried, including LayoutLM (Xu, 2020), GPT-4o (OpenAI), and a custom fine-tuned GPT-4o, along with each method's strengths and weaknesses.
The ground truth, necessary for fine-tuning a model, was gathered from the few examples in the existing catalogue where paper documents were catalogued and their electronic equivalents were linked. Even then, it proved nontrivial to use the ground truth for evaluating any given method, so the presentation will cover the challenges of automatic evaluation along with the results of the human evaluations. While the digital heritage department and the expert cataloguers are highly motivated to participate in the development and evaluation of the tool, the project team has tried to limit the number of repetitive human interventions necessary so that the available expert time can be used most efficiently.
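One plausible shape for the automatic part of such an evaluation, assuming records are reduced to (tag, value) pairs, is a field-level precision/recall comparison like the sketch below. Real MARC comparison is considerably harder (subfield order, punctuation conventions, near-matches), which is part of why human evaluation remains necessary.

```python
# Hypothetical field-level evaluation: compare a predicted record
# against the ground truth as sets of (tag, normalized value) pairs.
def normalize(value: str) -> str:
    # Crude normalization: case, whitespace, trailing ISBD punctuation.
    return " ".join(value.lower().split()).strip(" ./,;:")

def field_prf(predicted: list[tuple[str, str]],
              truth: list[tuple[str, str]]) -> tuple[float, float, float]:
    pred = {(tag, normalize(v)) for tag, v in predicted}
    gold = {(tag, normalize(v)) for tag, v in truth}
    tp = len(pred & gold)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f = field_prf(
    predicted=[("245", "Example title : a novel"), ("264", "Luxembourg, 2023")],
    truth=[("245", "Example title : a novel"), ("264", "Luxembourg : Example Press, 2023")],
)
print(f"P={p:.2f} R={r:.2f} F1={f:.2f}")
```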
For the fields that are not inferred by the Vision-Language Model, a brief overview will show how these are extracted and integrated into the final record. Finally, the presentation will focus on how the tool is integrated into the overall library workflow and how it will enable the digital curators to efficiently make the digital legal deposit publications available to library users.
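The merging step might look roughly like the following pymarc sketch, in which deterministically derived fields and validated model output are assembled into one record; all tags, indicators, and values are illustrative placeholders rather than the BnL's actual field mapping.

```python
# Sketch of assembling the final record from two sources: fields
# derived deterministically (e.g. from the deposit system) and
# fields inferred by the vision-language model. Uses pymarc (>= 5);
# everything here is a placeholder.
from pymarc import Field, Indicators, Record, Subfield

record = Record()

# Deterministic field: the persistent access URL comes from the
# deposit system, not from the model.
record.add_field(Field(
    tag="856", indicators=Indicators("4", "0"),
    subfields=[Subfield("u", "https://persist.example.lu/ark:/0000/abc")]))

# Model-inferred field: title statement taken from validated VLM output.
vlm_title = {"a": "Example title :", "b": "a novel /", "c": "Jane Doe."}
record.add_field(Field(
    tag="245", indicators=Indicators("1", "0"),
    subfields=[Subfield(code, value) for code, value in vlm_title.items()]))

print(record)
```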
Key references
Suominen, Osma, Juho Inkinen, and Mona Lehtinen. "Annif and Finto AI: developing and implementing automated subject indexing." Bibliographic control in the digital ecosystem.-(Biblioteche & bibliotecari, 2612-7709; 5) (2022): 265-282.
Ex Libris. AI Metadata Assistant Preview Initiated for All Alma Customers, 2024, https://exlibrisgroup.com/announcement/ai-metadata-assistant-preview-initiated-for-all-alma-customers/
Bolšteins, Matīss. National Library of Latvia. Automated Retrospective Cataloguing at the National Library of Latvia (NLL), https://www.cenl.org/wp-content/uploads/2025/02/Automated-ReCataloguing-at-NLL.pdf
Lowagie, Hannes. National Library of Belgium. AI-Powered Cataloguing, Facet Publishing, ISBN 9781783308071
Anthropic. Introducing the Model Context Protocol, https://www.anthropic.com/news/model-context-protocol
Xu, Yiheng, Li, Minghao, Cui, Lei, Huang, Shaohan, Wei, Furu, and Zhou, Ming. LayoutLM: Pre-training of Text and Layout for Document Image Understanding, 2020. https://www.microsoft.com/en-us/research/publication/layoutlm-pre-training-of-text-and-layout-for-document-image-understanding/
Ordo - agentic AI for library workflows
Martin Malmsten, Viktoria Lundborg
National Library of Sweden, Sweden
Emerging capabilities in AI can provide vast opportunities for change. However, integrating new capabilities in workflows that have stood the test of time for centuries can be a challenge.
The National Library of Sweden has an ongoing project looking at ways to integrate large language models into library workflows in general, and for the purpose of subject analysis in particular. Rather than emulating human tasks, we try to find novel ways to change the subject systems as well as the processes we employ today. This has ranged from fully automated creation of new subject systems from digitized books (Malmsten, Lundborg, et al. 2024) to more traditional applications of large language models to existing systems. A challenge in this work has been applying new methods to traditional workflows, partly because of a lack of transparency, but ultimately because of the low level of overlap between current and newly formed methods. A constant, and fair, question has been why the automated methods reach a certain conclusion, a question that often has no clear answer. The challenge, and our aspiration, is bridging the gap between strong traditional tools (like the human will to categorize using decades-old workflows) and newly formed automated capabilities. These two extremes can each provide a solution within their own context, but changes to manual workflows, and ultimately to the organization, require buy-in from leadership as well as from those potentially impacted by the change.
At the same time, agentic AI, i.e. large language models that have (or can emulate) a type of agency, is fast becoming the standard. These models can combine information from multiple sources when needed, e.g. looking up information on Wikipedia, using APIs, etc. This allows for a situation where the model itself does not need to be trained on specific knowledge, but can instead be given access to it depending on the current question.
Given these circumstances, we developed Ordo, an agentic AI that uses library infrastructure and can interact with staff when needed through a standard organisational group chat, e.g. Slack or Microsoft Teams. Rather than relying on the underlying language model to function as an all-knowing oracle, Ordo uses the emerging agentic features of language models to seek out more information when needed. Examples include getting a list of other books by the same author, getting a list of users (or other bots) that might provide more information in a given context, getting the full text of the first chapter, and getting an image of the cover.
The agentic approach in this case serves a dual purpose: it enables automation using multiple backend capabilities, but it also provides transparency and fine-grained insight into the process. It allows a user to take the reins when needed, perform an audit at any time or simply leave it as is while observing the inner workings of the process.
An example: when asked for a subject analysis of the book “Feminism on the big screen”, Ordo will use one of its external functions to produce a list of candidate headings from a given subject heading system. The Swedish subject headings system (SAO) contains over 30,000 headings, which will not fit into the context of the language model, and processing them all would be exceedingly slow. Therefore we make use of an API that attempts to find a smaller set of reasonable candidates. This API uses semantic similarity to match either a title or a paragraph of text to existing subject headings or index terms. The list of candidates is then returned to Ordo, which will attempt to choose the most appropriate heading.
Figure 1 - User-initiated workflow: Ordo uses its agentic capabilities to get relevant information
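A minimal sketch of the candidate-narrowing step might look as follows, with heading embeddings computed once and queries matched by cosine similarity; the model name and sample data are illustrative, not the actual API behind Ordo.

```python
# Sketch of narrowing ~30,000 SAO headings to a short candidate list
# by semantic similarity. Model choice and data are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

headings = ["Feminism", "Film history", "Gender roles in motion pictures"]  # ~30k in reality
heading_emb = model.encode(headings, normalize_embeddings=True)

def candidates(title: str, k: int = 10) -> list[str]:
    q = model.encode([title], normalize_embeddings=True)[0]
    scores = heading_emb @ q            # cosine similarity (vectors are normalized)
    top = np.argsort(scores)[::-1][:k]
    return [headings[i] for i in top]

# The short list, not all 30,000 headings, is what goes into the
# language model's context for the final choice.
print(candidates("Feminism on the big screen", k=3))
```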
The features shown by Ordo correspond to a general need to make the current use and administration of subject heading systems in library catalogs more effective. From a cataloguer's perspective, getting to consider a list of candidates in the indexing process is helpful in making a large system more manageable. It is not uncommon for relevant headings to be placed in different parts of the system, without hierarchical or relational connections. From an editorial perspective, analyzing the lists of eligible indexing candidates also raises the question of how granular a subject heading system should be for it to work properly in new environments with semantic search tools. This implies the need for a leaner subject heading system that would be easier for both humans and AIs to apply consistently.
We also extend the capabilities of Ordo to update bibliographic records, either when prompted by a human or as part of an automatic workflow. This is realized using the agentic capabilities of the underlying large language model.
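The shape of such a record-update capability, exposed to the model as a tool definition, might resemble the following sketch; the tool name, schema, and the audit-trail motivation field are hypothetical, not Ordo's actual interface.

```python
# Hypothetical tool definition for the record-update capability: the
# LLM supplies structured arguments, and the implementation validates
# them and writes to the catalogue. Names and fields are illustrative.
update_record_tool = {
    "name": "update_subject_headings",
    "description": "Replace the subject headings of a bibliographic record.",
    "input_schema": {
        "type": "object",
        "properties": {
            "record_id": {"type": "string"},
            "headings": {"type": "array", "items": {"type": "string"},
                         "description": "Headings from the SAO system."},
            "motivation": {"type": "string",
                           "description": "Why these headings were chosen; "
                                          "kept in the audit trail."},
        },
        "required": ["record_id", "headings", "motivation"],
    },
}
```

Recording a motivation alongside every automated change is one way to address the transparency and auditability concerns raised above.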
The implications of connecting agentic AI to a group chat, however, go further, since it allows the AI to also call on multiple humans for help when needed. It might need clarification or help from an expert in order to proceed. Sometimes it needs help from a second human when it does not understand the first. The normal order of operation is thus flipped, in that the AI contacts a human when needed rather than the other way around.
The traditional way of cataloguing books exists, of course, in stark contrast to the pipelines being built today to deal with massive amounts of unstructured data with little or no accompanying metadata. This is the reality for many national libraries dealing with electronic legal deposit. The end result can be separate flows of information depending on the source. Integrating Ordo into such automatic description pipelines would give staff insight into these flows and the ability to audit them when needed. It also allows Ordo to “call for help” when in doubt.
Figure 2: Ordo as part of a pipeline for automated description of digitized books
We evaluate the performance of Ordo using a digitize-first approach, attempting a fully automatic workflow for born-digital (or digitized) materials while maintaining a clear audit trail and the ability for staff to follow the process, correct mistakes, and provide feedback to the bot.
Enabling a bot to initiate contact with real users without explicitly being instructed to do so carries severe security implications. It could potentially open the door for both humans and the bot to be tricked into performing harmful actions. To mitigate this, we have put in place rigorously tested guardrails, such as only answering questions when Ordo is explicitly referenced, as well as always being very explicit about the fact that a user is talking to a bot. Security lessons learned include what to do when your AI starts answering questions that were not directed to it, and starts roping in staff to deal with a perceived case of attempted social engineering by a user.
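As an illustration, the two guardrails mentioned above might reduce to checks like these in a group-chat message handler (the names and message format are our own sketch, not Ordo's code):

```python
# Sketch of two guardrails for a chat-connected agent.
BOT_NAME = "ordo"

def should_respond(message: dict) -> bool:
    # Only answer when the bot is explicitly referenced.
    return f"@{BOT_NAME}" in message["text"].lower()

def reply(text: str) -> str:
    # Always be explicit that the user is talking to a bot.
    return f"[{BOT_NAME} is an automated assistant] {text}"
```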
References
Malmsten, M., Lundborg, V., Fano, E., Haffenden, C., Klingwall, F., Kurtz, R., … Börjeson, L. (2024). Without Heading? Automatic Creation of a Linked Subject System. In New Horizons in Artificial Intelligence in Libraries (pp. 179–198). De Gruyter Open.
The usage of hardware resources for automatic subject cataloguing at the German National Library – an analysis and outlook for future challenges
Christoph Poley
Deutsche Nationalbibliothek, Germany
New and fast-growing AI methods and hardware resources help libraries find suitable solutions to enhance their traditional work. At the German National Library (DNB), the development of automatic subject cataloguing started in 2009. A huge and ever-growing number of online publications needs to be indexed.
Currently, two main use cases must be addressed: automatic classification and subject indexing using standardized vocabularies. The second use case includes indexing with GND descriptors. Due to the large size of the vocabulary, this forms an extreme multi-label classification (XMLC) problem with special requirements for the hardware environment.
The types of literature and text material that have to be processed are quite heterogeneous: article data, books, dissertations, and much more, in German and English. Characteristics such as text length and language level differ; there is scientific literature as well as literature for children and young people to be processed. This requires applying specialized methods and models that work best for each use case. A proven approach is to combine different methods and models with fusion algorithms in order to improve results.
Today, the DNB runs the second generation of its software for processing text data automatically. It is based on a modular architecture, open source software, and self-developed components. The Annif toolbox (annif.org) provides the core methods for automatic suggestions; new methods and approaches become part of the toolbox and help to produce better results in the long term. The next generation within reach is the use of transformer models, which are accompanied by a further increase in hardware requirements: the use of graphics processing units (GPUs) will become mandatory.
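As a sketch of what combining methods can look like, the following queries two projects through Annif's REST API and fuses their suggestion scores with fixed weights. The project IDs and weights are invented, and production fusion (e.g. a trained ensemble backend) is more sophisticated than this linear combination.

```python
# Sketch of weighted score fusion over two Annif projects, using
# Annif's REST API (/v1/projects/<id>/suggest). Project IDs and
# weights are illustrative placeholders.
from collections import defaultdict
import requests

ANNIF = "http://localhost:5000/v1"

def suggest(project: str, text: str, limit: int = 20) -> list[dict]:
    r = requests.post(f"{ANNIF}/projects/{project}/suggest",
                      data={"text": text, "limit": limit})
    r.raise_for_status()
    return r.json()["results"]  # each hit has uri, label, score

def fuse(text: str, weights: dict[str, float]) -> list[tuple[str, float]]:
    scores: dict[str, float] = defaultdict(float)
    for project, weight in weights.items():
        for hit in suggest(project, text):
            scores[hit["uri"]] += weight * hit["score"]
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

top = fuse("Die Auswirkungen des Klimawandels auf Bibliotheken",
           weights={"gnd-tfidf-de": 0.4, "gnd-fasttext-de": 0.6})
print(top[:5])
```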
Every use case has to be served by adequate software and hardware resources. Each method needs resources for data management, for model training, for daily use in test, approval, and production environments, and for research. The hardware requirements therefore depend on many factors, e.g.: How many and which methods and models are put together to fuse them best? What is the size of the training data and models? How long is the average processing time for each publication? How many publications have to be processed each day? What about repeated processing of publications, or processing of baselines for new use cases? Which hardware resources are available in total? What must be done to provide them in future?
The answers are very heterogeneous and can rarely be given at once. Reasonable solutions can only be realized when suitable hardware resources are available. AI methods are very hungry for hardware. Despite ever faster components, the resources are very cost-intensive and limited, and they increasingly pollute the environment through considerable CO2 emissions.
The ability to adapt hardware environments to evolving AI requirements remains a key challenge for many institutions. In particular, libraries are still in the early stages of understanding what is required to build infrastructures capable of keeping pace with the rapid development of AI methods. Developing lasting, scalable solutions will require technical innovation and a well aligned strategic and operational planning.
At the same time, the discussion of use cases for Large Language Models (LLMs) is speeding up in the library context, including for automatic subject cataloguing. The use of LLMs exceeds almost all previously defined hardware requirements, irrespective of the expected results. In our library, first steps have already been taken as part of the DNB's AI project, where experience was gathered to find out what can be computed on the DNB's own hardware infrastructure and where the challenges and limitations lie.
In this short presentation I want to give an overview of the current hardware resources for automatic subject cataloguing at the DNB in connection with the methods for our use cases. The presentation covers the DNB's experience with lexical and machine learning approaches and their different hardware requirements for training, production, and research. It will also give an insight into some adjustments made to reduce processing time and the use of valuable resources. Finally, the first experiences with LLMs at the DNB will be addressed, focusing on hardware requirements and on ideas for solving the current and future hardware challenges, as building blocks and leading themes for an evolving strategic and operational hardware concept.
Surfacing and Tracing Contributors in Large Video Collections
Owen King1, Kelley Lynch2, Karen Cariani1
1GBH Archives, United States of America; 2Brandeis University, United States of America
Some archival objects contain within themselves expressions of key metadata, if you know where to look and how to extract it. For example, television programs often include scenes with on-screen text that describes the program in question: the main title card gives the title of the program, the closing credits describe authorship and production, and lower thirds (“Chyrons”) frequently name the individuals featured on the screen. This presentation focuses on the Chyrons as a source of information about contributors appearing on the screen: news anchors, reporters, program hosts, politicians, celebrities, and other people of interest. The presentation has two inter-related goals. The first goal is to provide a comparative assessment of the accuracy of several multimodal models and model pipelines for the task of transcribing and processing video scenes with on-screen text. We will provide two cross-cutting assessments: comparing the models against each other, and comparing the performance of the models across video collections partitioned by originating region and era. The second goal is to demonstrate how these models and their outputs can be integrated within a human-in-the-loop workflow for identifying on-screen contributors and creating longitudinal records of their contributions over years of television appearances. The workflow begins with the machine learning pipeline, consisting of scene identification for digital videos, automatic text recognition, and analysis of text from identified scenes. The cataloging side of the workflow then employs an interface for human review of extracted data and data aggregation methods for tracing the roles of individual contributors over time.
The context of this project is the American Archive of Public Broadcasting (AAPB), a collaboration between GBH Archives and the US Library of Congress to preserve the history of US public radio and television. To date, the AAPB has digitized and preserved over 100,000 digital video items. However, the quality of metadata varies considerably, with many items lacking any information about the people appearing in the videos. Because of the current paucity of data and the size of this collection, archivists at GBH have aimed to use AI systems to find novel ways to improve discovery and access. However, instead of relying solely on now-popular methods of vectorization to implement search over unstructured data, the AAPB has sought to use AI to optimize its longstanding processes for creating traditional item-level catalog records. To this end, GBH Archives collaborates with computer science researchers at Brandeis University to implement and deploy AI-based tools for multimedia analysis. The Brandeis Lab for Linguistics and Computation has created a platform called Computational Linguistics Applications for Multimedia Services (CLAMS) which develops and packages AI models and provides an interoperability layer among them [1]. The CLAMS ecosystem is the source of the AI-based tooling that GBH Archives uses to support its cataloging processes.
The AAPB’s AI-assisted video cataloging workflow begins with scene-detection, specifically detection of various types of scenes with text, especially program slates, title cards, Chyrons, and credits sequences. This relies primarily on a fine-tuned CNN-based computer vision model. The output of this step has been used for the last 18 months to present to human catalogers key video scenes containing information of highest value in the cataloging process. More recent development, in the last 6 months, has focused on extracting structured information from those scenes with text. For this key information extraction (KIE) task [2], there are several options. A traditional approach uses OCR models to read the text from the scenes, and traditional NLP methods to analyze the text. A more state-of-the-art approach enlists a large multimodal model (LMM) to perform both the text recognition and analysis [3, 4]. We will report on an experiment that compares the accuracy of these two approaches. To measure performance across different populations, we have curated a dataset from a single AAPB collection, solely with items from PBS Hawaii, many of which have distinctive visual and linguistic features not found in other AAPB collections. Our experiment consists of a cross-cutting comparison of the performance of two KIE pipelines across videos from PBS Hawaii and videos from the rest of the AAPB.
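The LMM variant of the KIE step might be sketched as follows: a still frame from an identified Chyron scene is handed to a multimodal model with a request for structured output. The model choice, prompt, and output schema are illustrative, not the pipeline's actual configuration.

```python
# Sketch of LMM-based key information extraction from a video frame.
# Model, prompt, and schema are illustrative placeholders.
import base64, json
from openai import OpenAI

client = OpenAI()

def extract_chyron(frame_path: str) -> dict:
    with open(frame_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": 'Read the lower-third text in this video frame. '
                         'Return JSON: {"name": ..., "role_or_title": ...}; '
                         "use null if no chyron is visible."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)
```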
Even with new AI models powering advances in computer vision, OCR, and NLP, the process of extracting information from sometimes blurry video remains error-prone. So the creation of high-quality catalog records requires human archivists to review, correct, and collate data produced by pipelines of CLAMS apps. To facilitate this, we have been developing a simple browser-based user interface that allows a cataloger to see both a still image from an identified scene and the information extracted from it, along with UI elements for editing and exporting corrected and approved metadata values. Sitting atop the CLAMS output, this interface allows expeditious creation of accurate, human-verified, item-level records of the people appearing in television shows.
Our presentation culminates by ascending from the item level back up to the collection level. An interesting fact about collections of broadcast television programs is that many of the same people appear again and again, over the course of years, with different roles and different relationships to the communities of which they are members. With accurate contributor records across a large collection, a novel method of distant viewing [5] becomes available for purposes of archival research. In particular, it becomes possible to uncover and trace the path of an individual in their public-facing (through the lens of a television camera) persona over an extended period, spanning many hours of footage and many years of a person’s life. We demonstrate this by showing how our metadata workflow provides the data to construct a visual and descriptive longitudinal representation of the careers of political figures over multi-decade periods.
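A simplified sketch of that collection-level aggregation follows, collating human-verified mentions (one row per scene) into a longitudinal record per person; the rows and column names are invented placeholders.

```python
# Sketch of aggregating verified contributor mentions into
# longitudinal career records. Sample data is invented.
import pandas as pd

mentions = pd.DataFrame([
    {"name": "Jane Example", "role": "Reporter",
     "program": "Island News", "air_date": "1978-03-01"},
    {"name": "Jane Example", "role": "Anchor",
     "program": "Island News", "air_date": "1994-10-12"},
])
mentions["air_date"] = pd.to_datetime(mentions["air_date"])

careers = (mentions.sort_values("air_date")
           .groupby("name")
           .agg(first_seen=("air_date", "min"),
                last_seen=("air_date", "max"),
                appearances=("program", "count"),
                roles=("role", lambda s: sorted(set(s)))))
print(careers)
```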
We intend this presentation to be accessible to an audience that has basic familiarity with computer vision, NLP, and LAM metadata. We hope it will be of particular interest to stewards of large image and multimedia collections.
References
[1] Verhagen M, Lynch K, Rim K, Pustejovsky J. The CLAMS Platform at Work: Processing Audiovisual Data from the American Archive of Public Broadcasting. In Proceedings of the Thirteenth Language Resources and Evaluation Conference 2022 Jun (pp. 2498-2506).
[2] An S, Liu Y, Peng H, Yin D. VKIE: The Application of Key Information Extraction on Video Text. arXiv preprint arXiv:2310.11650. 2023 Oct 18.
[3] Liu Y, Li Z, Huang M, Yang B, Yu W, Li C, Yin XC, Liu CL, Jin L, Bai X. OCRBench: on the hidden mystery of OCR in large multimodal models. Science China Information Sciences. 2024 Dec; 67(12):220102.
[4] Liu H, Li C, Li Y, Lee YJ. Improved baselines with visual instruction tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2024 (pp. 26296-26306).
[5] Arnold T, Tilton L. Distant viewing: analyzing large visual corpora. Digital Scholarship in the Humanities. 2019 Dec 1; 34(Supplement_1):i3-16.
Oxford English Dictionary: creating a useful, trustworthy AI search assistant
Elinor Hawkes
Oxford University Press, United Kingdom
The Oxford English Dictionary is widely considered the global authority on the English language, documenting the history of English from the Middle Ages to the present day. Scholars from institutions around the world come to the OED to understand what a word means now and what it used to mean in the past. Digital humanities researchers can use the OED’s dataset to track word usage over time and determine how changing semantics reflects changing society[1]. To support scholarly use, each dictionary entry is itself rigorously researched by a team of lexicographers, and there is a high threshold of evidence required before a word can enter the dictionary.
Whilst the dictionary itself looks to the past as much as the present, the OED as an organization has always looked forwards, and has often embraced new technology when it has been proven to help dictionary editors or readers to do more, and to do it better[2]. For instance, the OED has existed in electronic form since the mid-eighties, and started making use of electronic corpora in the early-nineties[3]. This has continued into the 21st century when OED.com was one of the earliest dictionary websites, and we continue to seek new ways to embrace technology in order to support both our staff and end-users.
In 2024 the OED began investigating how AI could be used to benefit the users of OED.com. After canvassing ideas from departmental staff, we elected to build an assistant that would help users navigate OED.com’s complex Advanced Search feature. We could see from our usage reporting that use of Advanced Search was relatively low, and this remained the case despite a major UI refresh in 2023. We wanted to see whether a tool built using AI could simultaneously drive more usage of Advanced Search whilst remaining a trustworthy tool that meets the high standards of the OED.
This paper outlines our experience developing an AI-powered search assistant and launching a pilot version in January 2025. It covers the full process, from identifying the problem through to evaluating the pilot’s performance, highlighting the decisions, challenges, and insights that shaped the project.
Firstly, this paper discusses how we identified the problem that we wanted AI to solve. This involved generating and validating ideas, focusing on areas of the user experience that could be meaningfully improved. Each idea was assessed on feasibility, user impact, and alignment with our goals. To help ensure we were building something genuinely useful, we gathered user feedback on both OED.com's current search functionality and broader attitudes toward AI. We designed surveys and interviews, selected a diverse group of respondents, and reviewed external research to understand the wider landscape. These insights helped us shape the assistant's role and functionality.
Once we had articulated and validated the problem, we progressed to prototyping. Based on our initial user research, we were clear that the assistant must not be capable of hallucinating or showing any bias, which led us to a Retrieval Augmented Generation (RAG) based approach. We were also clear that the model must be able to demonstrate how it formulated a response. This part of the paper will address our prototyping approach, including articulating requirements, building a test framework, benchmarking performance, and selecting a suitable large language model (LLM).
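A minimal sketch of this grounding pattern, assuming retrieval has already selected relevant passages from a curated corpus of Advanced Search documentation: the model is constrained to answer only from numbered passages and to cite them, so each response can show how it was formulated. The prompt and model are illustrative, not the production configuration.

```python
# Sketch of the generation half of a RAG assistant: answers must be
# grounded in, and cite, the retrieved passages. Illustrative only.
from openai import OpenAI

client = OpenAI()

def answer(question: str, retrieved: list[str]) -> str:
    context = "\n\n".join(f"[{i}] {p}" for i, p in enumerate(retrieved, 1))
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer ONLY from the numbered passages below. "
                        "Cite passage numbers. If the answer is not in "
                        "the passages, say you don't know.\n\n" + context},
            {"role": "user", "content": question},
        ],
        temperature=0,
    )
    return response.choices[0].message.content
```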
The final part of the paper will focus on launching a pilot version of the prototype for public use. It will cover the considerations and constraints involved in designing the UX, especially as we were introducing new technology to a potentially sceptical audience. It will also cover how we have measured the performance of the pilot against the relevant KPIs. Finally, we will consider how the OED can continue to utilise and expand on this technology, looking at ways we might improve the tool in the future.
[1] See for example Lopez, Alessandra Zinicola, ‘To make mangoes of melons: Using the evolution of form and senses to understand historical cookbooks’, published at https://www.oed.com/information/using-the-oed/academic-case-studies/the-oed-and-research/to-make-mangoes-of-melons-using-the-evolution-of-form-and-senses-to-understand-historical-cookbooks/ (accessed 9 May 2025)
[2] Weiner, Edmund, 'The Lexicographical Workstation and the Scholarly Dictionary', in B T S Atkins, and A Zampolli (eds), Computational Approaches to the Lexicon (Oxford, 1994; online edn, Oxford Academic, 31 Oct. 2023), https://doi.org/10.1093/oso/9780198239796.003.0015, accessed 9 May 2025.
[3] Gilliver, Peter, 'Towards OED3: 1989–', The Making of the Oxford English Dictionary (Oxford, 2016; online edn, Oxford Academic, 22 Sept. 2016), https://doi.org/10.1093/acprof:oso/9780199283620.003.0015, accessed 9 May 2025.