ARL Monitor: Public Edition
Summer 2025
Marcel LaFlamme, Director, Research Policy and Scholarship, ARL
The ARL Monitor is a quarterly newsletter providing intelligence and insight on the research environment. Aiming not to break the news but to offer analysis and contextualization of thoughtfully curated content, the ARL Monitor helps set the context for member engagement with issues related to scholars and scholarship. Please encourage your colleagues to sign up for the ARL Monitor.
If you have questions or suggestions, please email me at marcel@arl.org.
From the Government Affairs section of this issue, the Gold Standard Science executive order and updates to the NIH’s Public Access Policy loom large in the US policy context, even as restrictions on international student enrollment take effect across North America. Around the Ecosystem presents resources on open science and artificial intelligence, including a just-released preprint from ARL staff. ARL/CNI AI Researcher in Residence Natalie Meyers offers the next installment of her column AI in 800 Words, which features an interview with Ismael Kherroubi Garcia, founder of the Responsible AI Network. Finally, The Global View examines the results of surveys on research integrity training carried out in seven countries.
Read on for more details!
Implementing Gold Standard Science
The Trump administration issued an executive order on May 23 entitled “Restoring Gold Standard Science,” which directs US federal agencies to align their policies and processes with a vision of science that is reproducible, transparent, and communicative of uncertainty. While the language of the order reflects many of the priorities of a scientific reform movement that includes libraries, critics expressed concerns about excluding research that does not meet a narrow evidentiary standard and about designating political appointees to enforce compliance.
The following month, the White House Office of Science and Technology Policy (OSTP) issued guidance to agencies for implementing the order, including the development of metrics and evaluation mechanisms. Agencies have until August 22 to submit reports outlining their implementation plans, which will be closely scrutinized by the research community.
Accelerating Public Access at NIH
This spring, the US National Institutes of Health (NIH) announced plans to move up the effective date of the agency’s revised Public Access Policy, which requires the results of NIH-funded research to be freely available through PubMed Central immediately upon publication. Then, earlier this month, NIH announced that it would introduce a cap on the costs that the agency will cover to publish funded work open access in a scientific publication.
These moves have elicited a range of reactions from scholarly publishers, and libraries can expect to field questions from researchers about their publishing options until greater clarity is available. For ARL, affordability and access to publicly funded research remain key priorities, including fully funding PubMed Central as a vital piece of research infrastructure. Even as debates over specific business models and pricing structures play out, as an association we see value at this moment of profound uncertainty for the research enterprise in working with publishers on issues of shared concern.
Regulating International Student Mobility
|
A report from the Migration Policy Institute notes that “tensions between rising demand for international education, the financial imperatives of maintaining or growing international student admissions, and the challenging politics of international migration are becoming increasingly apparent.” In the United States, expanded screening and vetting of applicants for student visas is predicted to have a chilling effect, with 73% of institutions responding to one survey indicating that they expect international enrollment to drop this fall.
Meanwhile, caps placed on Canadian study permits in 2024 drove the number issued downward, with extensions of existing permits outpacing new approvals. Immigration, Refugees and Citizenship Minister Lena Metlege Diab has echoed concerns about the impact of a spike in international student arrivals on housing availability. Consultations to be held with institutions and provincial governments this summer will inform how study permit levels are set going forward.
The Repertoires blog series distills actionable insights for library leaders from book-length studies of research communities and their ways of working. By understanding emerging and impactful configurations of how scholarship gets done, libraries can ensure that the services they offer reflect the epistemic diversity of the scholars they support.
A July 2025 post about The Ends of Research: Indigenous and Settler Science after the War in the Woods (Duke University Press, 2023), by anthropologist of science Tom Özden-Schilling, explores strategies for sustaining scholarly inquiry in the face of institutional upheaval.
Drawing on Özden-Schilling’s account of forest researchers in northwest British Columbia, libraries can assess the needs of field research stations and consider collecting the records of standalone initiatives at risk of data loss. Libraries can also show up for early-career researchers moving into business, government, or nonprofit work by prioritizing investments in openness that do not depend on institutional affiliation.
Gauging Library Support for Open Practices
|
As the umbrella category of open scholarship expands to include new research practices, libraries are evolving their service offerings to keep pace. A May 2025 article in the Journal of Academic Librarianship reviewed more than 3,700 publications discussing activities that support openness at academic libraries in the United States.
Extending back to 2010, the review finds that open access and open-source software were the most frequently discussed areas of support, while citizen science was the least discussed. The article also signposts a trend toward integrated, “wrap-around” services that extend across the entire research cycle.
Enabling Public Participation in Scholarship
The Association for Advancing Participatory Sciences held its biennial meeting in Portland, Oregon, at the end of May, where I presented preliminary research on library services to support engaged scholarship and societal impact.
The meeting’s keynote featured the executive director of iNaturalist, a digital platform that helps users identify the plants and animals around them while generating data for science and conservation. The talk distilled five lessons from the platform’s growth to date, relevant to researchers seeking to enlist the public in data collection: focus on a critical challenge; make participating easy and fun; make it social; build a new kind of scientific instrument and make it open; and don’t just crowdsource science, crowdsource solutions.
Disclosing AI Use in Scholarly Publishing
Within months of the release of ChatGPT in 2022, scholarly publishers reached an initial consensus: while AI was neither to be credited as an author nor cited as a source in its own right, the use of AI tools by human authors was to be permitted as long as it was disclosed.
Natalie Meyers and I have posted a preprint that considers the evolution and limitations of AI disclosure by scholarly authors, and highlights emerging approaches informed by practices of software citation, use of contributor role ontologies, and expectations around reproducibility. The manuscript is under consideration for publication in an anthropology journal later this year.
Signaling Expectations for AI Developers
|
Courts are starting to establish precedent for how the training of AI models intersects with copyright law. Meanwhile, Creative Commons has launched a major initiative called Signals, a suite of machine-readable tools that creators will be able to use to indicate the conditions under which they want their content to be reused by AI. Acknowledging that the legal force of these tools will vary across jurisdictions, Creative Commons is instead framing them as a social proposition akin to defining “manners for machines.”
Signals does not aim to restrict the type of use that machines can undertake; instead, the preliminary elements that have been developed reflect different dimensions of reciprocity, including credit, financial support, and other forms of contribution. Consider registering for a town hall on Tuesday, July 29, or Friday, August 15, to learn more and get involved.
AI in 800 Words
by Natalie Meyers, ARL/CNI AI Researcher in Residence
The AI in 800 Words column explores artificial intelligence and its relevance to research libraries through brief interviews that give attention to opportunities, challenges, governance, ethics, and the research enterprise.
In this installment, I speak with Ismael Kherroubi Garcia, founder of the UK-based Responsible AI Network.
Tell us about yourself and the work you would most like to bring to ARL members’ attention.
I consult on AI ethics and research governance through my consultancy, Kairoi, and I volunteer through the Responsible AI Network (RAIN), which I established as a Fellow of the Royal Society for the Encouragement of the Arts, Manufactures and Commerce (RSA). Of most interest to ARL members could be RAIN’s theory of change and our companion visualization of responsible AI. It’s still a quiet output that hasn’t been shared very widely.
What problem were you trying to solve?
We were motivated to address the confusion(s) surrounding AI, from utopian to dystopian views on the futures we imagine. Responsible AI can be a nebulous term. The challenge is that anything with “AI” in it invites different, often self-confident but unfounded opinions. I always use the word confusion about AI. If we look at the news these days there’s so much confusion. Confusion in the Buddhist tradition, as I understand it, is one of the three evils and consists in not understanding reality. Not understanding reality means we make bad decisions. I’m not a Buddhist, but I perceive that what we have around AI right now is confusion. So RAIN set out to define responsible AI in terms of a life cycle that’s inclusive of diverse stakeholders who are striving for social justice and who inform narratives and governance around AI. Then, we diagrammed that interplay in the figure you see below.
Visualization of responsible AI by RAIN, licensed under CC BY-SA 4.0
Can you help us navigate the visualization?
Moving outward from the center, we have design, development, deployment, and adoption as one version of the AI life cycle. Next, there are AI narratives and AI governance practices sandwiched between the technical life cycle and its many different stakeholders. That purple ring is like the jam between two slices of bread; it’s a bit sticky and oozes everywhere, bringing everything together. The next ring identifies stakeholders. Here we see researchers, civil society, funders, regulators, and so on. The researcher part of the diagram is where I expect that librarians might fit. The outermost ring is informed by RAIN’s theory of change, whereby stakeholders strive for futures in which innovations are equitable and social impacts are intentional. In this ring, we also highlight that we are reliant on nature and make explicit that we need to be aware of the environmental costs and impacts of AI decisions.
Do you have advice for research libraries about AI?
The notion of AI narratives refers to the fact that we have a history of mythology. We’ve brought things to life in our fictions, from Frankenstein to Terminator. Now, we have these very modern fictions that inform how researchers, administrators, and librarians think about AI. The AI narratives part of the visualization points to all of the stories librarians might be exposed to through the media or through colleagues who may be extremely excited or very scared about AI. My advice is to not succumb to AI hype or to the purely commercial interests of AI vendors: you know how to do your job, so don’t let them tell you otherwise! Librarians need to be capable of critically evaluating AI narratives and, when necessary, pushing back. One of the projects I always like to mention in this context is Better Images of AI, which was created by Kanta Dihal and Tania Duarte. The images featured there aim to challenge the stereotypes and misconceptions of AI, and to show its societal impacts.
I’ll illustrate this really simply. If you go to Google Images and you type in “AI,” you’re going to find blue brains, or robots and people shaking hands. There might even be a robot using a laptop, raising the question: what went wrong for the user interface to be so bad that we now need a robot to use a laptop? There’s a risk when depictions of AI promote narratives that are incorrect. But I think librarians, data stewards, and researchers have the capacity to tell other stories that challenge poorly conceived notions about AI.
Is there anything you are reading that you’d like to share with our audience?
Yes, How to Do Things with Words by J.L. Austin, a philosopher of language. The book approaches statements like “I promise” or “I do” during a wedding ceremony as declarations of new realities. I can say, “I promise I will make it to our next meeting.” When I attend, you know we had a promise and that I acted on it. A promise needs to be fulfilled. We do things with words in ways that a text written in a statistically probable way cannot. So I’m reading this book and thinking, “Generative AI chatbots, they don’t do anything with words.” That’s something they can’t achieve, which humans can.
Is there someone you’d like to see interviewed here? Send your suggestions to nmeyers@arl.org.
Research Integrity Training Around the World
Training is one of the most common strategies for promoting research integrity at the institutional level. In a May 2025 white paper, Springer Nature presents the results of surveys it conducted on research integrity training in Australia, Brazil, China, India, Japan, the United Kingdom, and the United States. Access to training varies by country and does not appear to correlate with retraction rates, which points to some of its limitations as a solution for integrity issues. Respondents expressed a desire for greater coverage of topics related to data management and sharing, as well as guidance on authorship, which represent opportunities for library involvement.
A 2013 ARL SPEC Kit on responsible conduct of research training found that most of the responding libraries were already providing some training, particularly around citation and plagiarism. A recent scoping review pointed to the use of role-playing scenarios, tailoring by discipline or research method, and multiple exposures as best practices for designing effective training.