ARL Monitor: Public Edition
Spring 2025
|
Marcel LaFlamme, Director, Research Policy and Scholarship, ARL
|
Established in 2022, the ARL Monitor is being relaunched as a quarterly newsletter providing intelligence and insight on the research environment. Rather than breaking news, the ARL Monitor offers analysis and contextualization of thoughtfully curated content, setting the stage for member engagement with issues related to scholars and scholarship. Please encourage your colleagues to sign up for the ARL Monitor.
If you have questions or suggestions, please email me at marcel@arl.org.
This issue of the ARL Monitor introduces the sections that will organize the newsletter’s content going forward. In Government Affairs, an analysis of grants from the US Institute of Museum and Library Services (IMLS) to ARL member libraries quantifies the Association’s collective financial exposure to grant terminations, while ARL’s feedback on the revised Tri-Agency Open Access Policy on Publications underscores our strong support for advancing open scholarship in Canada. Around the Ecosystem gathers resources on trust, the theme for this year’s Spring and Fall Association Meetings. ARL/CNI AI Researcher in Residence Natalie Meyers debuts her column AI in 800 Words, which will feature brief interviews about artificial intelligence and the research enterprise. Finally, The Global View looks to the Netherlands for lessons on collective action around research assessment reform.
Read on for more details!
|
What the Attack on IMLS Means for ARL
|
Since its creation in 1996, the Institute of Museum and Library Services (IMLS) has been an invaluable partner to US research libraries and archives, offering competitive grants that have fostered innovation and created a foundation for evidence-based practice. Following the March 14 executive order targeting IMLS for elimination, ARL has been closely monitoring reports of agency staff being placed on administrative leave and grants being terminated. Even as legal challenges to these actions are pending, ARL has been gathering information about their impact on our members.
On April 8, IMLS sent out a first wave of termination notices for active competitive grants, including four grants for which ARL was an awardee. ARL has identified 21 grants to 15 member libraries that have been or could be affected, ranging from programs to upskill an inclusive library workforce to the development of digital research tools and the preservation of at-risk collections. The total value of these grants to ARL members exceeds $5 million, of which over $2 million has not been disbursed. As of this writing, 16 of the 21 grants we are tracking have been terminated, and conversations are underway about how best to respond as an association.
|
Where US Research Policy May Be Headed
|
Even as cuts to and constraints on the research enterprise have dominated headlines in the early days of the Trump administration, inklings of a more constructive research policy could be seen in a March 26 letter to Michael Kratsios, upon his confirmation as director of the White House Office of Science and Technology Policy (OSTP).
The letter tasked Kratsios with meeting three challenges: securing the nation’s position as “the unrivaled world leader in critical and emerging technologies,” revitalizing the research endeavor by “pursuing truth, reducing administrative burdens, and empowering researchers to achieve groundbreaking discoveries,” and ensuring that scientific progress and technological innovation “fuel economic growth and better the lives of all Americans.” Research libraries have stories to tell about each of these challenges, and the letter points toward some useful ways of framing those contributions for the current policy landscape.
|
How Canadian Funders Aim to Advance Open Access
|
In February, Canada’s three federal research funding agencies—the Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, and Social Sciences and Humanities Research Council—released a draft revision of their Open Access Policy on Publications. Under the revised policy, all agency-funded, peer-reviewed research articles are to be immediately and freely available online via deposit to a Canadian repository.
In close consultation with the Canadian Association of Research Libraries (CARL), ARL provided feedback on the draft revised policy. Our recommendations include investments in technology and automation to streamline the deposit process, communication with publishers about the rights that grant recipients retain, and a transparent approach to monitoring for compliance. The final version of the revised policy will apply to grants awarded on or after January 1, 2026.
|
The Repertoires blog series distills actionable insights for library leaders from book-length studies of research communities and their ways of working. By understanding emerging and impactful configurations of how scholarship gets done, libraries can ensure that the services they offer reflect the epistemic diversity of the scholars they support.
A March 2025 post about Tabula Raza: Mapping Race and Human Diversity in American Genome Science (University of California Press, 2024), by anthropologist of science Duana Fullwiley, examines how race gets encoded in the categories that structure entire fields of inquiry.
As experts work through these legacies, libraries can encourage scholars to adopt relevant community guidelines and to continue their own efforts toward reparative and inclusive description. Libraries can also protect their integrity by developing research impact services that retain a critical outlook.
|
Making Sound Decisions About Replication in Science
|
Key figures in the Trump administration have signaled an intention to devote more resources to the replication of biomedical research. Research libraries support replicability and reproducibility through a range of services, and ARL is exploring opportunities to work in coalition on ensuring that resources for replication studies are allocated in keeping with scientific priorities. Science reformers, higher education leaders, and legislative champions may all have a role to play in advancing this agenda.
A January 2025 article in PNAS proposes to treat decisions about whether and how to replicate a study as the outcome of a deliberative process where epistemic and nonepistemic values are weighed in light of contrasting attitudes about the scientific literature. The framework the authors develop also helps to explain the conditions under which different replication tools, such as registered reports and Many Labs experimental designs, may be most effective.
|
Flagging Retractions in Open Repositories
|
A January 2025 article in Learned Publishing, by bibliometrics researcher Frédérique Bordignon, argues that research integrity advocates have overlooked the role of open repositories in upholding the reliability of the scientific record. Drawing on an analysis of the multidisciplinary French repository HAL, Bordignon finds that 91% of repository records did not reflect corrections or retractions issued by publishers.
While the version of a work deposited in a repository may not be identical to the published version, readers nonetheless benefit from awareness of post-publication actions across the linked record of versions. STM’s Content Update Signaling and Alerting Protocol (CUSAP) initiative, which has engaged directly with repositories, is one effort to tackle this issue.
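To make the underlying check concrete, here is a minimal sketch of how a repository ingest job might look for retraction notices attached to a deposited DOI. It uses Crossref’s public REST API, where retraction and correction notices are works carrying an “update-to” field that points at the DOI they amend; the function names and the overall workflow are illustrative assumptions, not part of Bordignon’s study or the CUSAP protocol.

```python
import json
from urllib.request import urlopen

def extract_updates(work: dict) -> list[dict]:
    """Pull post-publication update notices (retractions, corrections)
    out of a Crossref work record. Notices declare the DOI they amend
    in an 'update-to' field."""
    updates = []
    for upd in work.get("update-to", []):
        updates.append({
            "target_doi": upd.get("DOI"),
            "type": upd.get("type"),  # e.g. "retraction", "correction"
        })
    return updates

def find_updates_for(doi: str) -> list[dict]:
    """Ask Crossref for works that update the given DOI
    (network call; shown for illustration only)."""
    url = f"https://api.crossref.org/works?filter=updates:{doi}"
    with urlopen(url) as resp:
        payload = json.load(resp)
    results = []
    for item in payload["message"]["items"]:
        results.extend(extract_updates(item))
    return results
```

A repository could run a check like this periodically over its deposited DOIs and flag any record whose update list contains a notice of type "retraction", surfacing the flag alongside the deposited version.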
|
Using Artificial Intelligence in Evidence Synthesis
|
Research library staff are often involved in supporting the design of systematic reviews and meta-analyses in the health sciences and beyond. Even as resources for acquiring these skills become more widely available, attention is turning to how artificial intelligence can be used to augment human expertise in assembling and synthesizing an existing evidence base.
On Tuesday, June 3, the joint AI Methods Group established by Cochrane, the Campbell Collaboration, JBI, and the Collaboration for Environmental Evidence will offer a webinar (free; login required) presenting their recommendations and guidance on the responsible use of AI in evidence synthesis. The session is aimed at evidence synthesists, methodologists, and AI developers, as well as leaders of organizations involved in evidence synthesis.
|
Enhancing Public Trust in Science
|
A January 2025 commentary in Social Studies of Science, by science policy scholar Shobita Parthasarathy, defends “inviting non-scientists to participate throughout the scientific process, from shaping research questions to designing technologies to advising policymakers about how to incorporate their knowledge into evidence-based decisionmaking.”
Other scholars have proposed that scientists should listen to non-scientific communities about their concerns and then translate them into scientific discourse. But Parthasarathy argues that this approach cannot solve and may even worsen the problem of public distrust in science, by erasing “the emotion, urgency, and complex understanding of problems and solutions that usually characterize the interventions of affected communities” and by failing to credit community members for their contributions.
|
by Natalie Meyers, ARL/CNI AI Researcher in Residence
|
The AI in 800 Words column explores artificial intelligence and its relevance to research libraries through brief interviews that give attention to opportunities, challenges, governance, ethics, and the research enterprise. My goal for the column is to engage interviewees from different sectors about their work on, with, and against AI. Please send your questions or suggestions to nmeyers@arl.org, and together we’ll bring this column to life.
To get us started and better acquainted, in this first column I’m delving into my own experiences and perspectives using a series of prompts that future interviews will build on.
What are some recent accomplishments you want to bring to the attention of ARL members?
I co-chair the Research Data Alliance (RDA)’s Artificial Intelligence and Data Visitation Working Group (AIDV-WG). Our mission is to contribute to building ethical, legal, social, and technical frameworks for artificial intelligence. In particular, we’re examining the potential of an approach known as data visitation, in which users and their software are licensed to analyze data in its original location. Keeping data in place minimizes risks, creates efficiencies, and ensures that analysis is performed on up-to-date, authentic, unaltered datasets.
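As a rough illustration of the data-visitation pattern, the sketch below shows analysis code traveling to the data rather than the data moving: the host holds records in place, only licensed analyses may “visit,” and only derived results are returned. Everything here (class and method names, the licensing step) is a hypothetical toy, not an AIDV-WG artifact; a real deployment would add sandboxing, authentication, and auditing.

```python
from typing import Any, Callable

class DataHost:
    """Holds a dataset in its original location and runs vetted
    analyses against it, returning only aggregate results."""

    def __init__(self, records: list[dict]):
        self._records = records           # raw data never leaves this object
        self._licensed: set[str] = set()  # analyses approved to "visit"

    def license_analysis(self, name: str) -> None:
        """Approve a named analysis; stands in for a real vetting process."""
        self._licensed.add(name)

    def visit(self, name: str, analysis: Callable[[list[dict]], Any]) -> Any:
        """Run a licensed analysis on the in-place, up-to-date data."""
        if name not in self._licensed:
            raise PermissionError(f"analysis {name!r} is not licensed")
        return analysis(self._records)

# Usage: the requester receives only the computed mean, never the records.
host = DataHost([{"age": 34}, {"age": 51}, {"age": 29}])
host.license_analysis("mean-age")
mean_age = host.visit("mean-age", lambda rows: sum(r["age"] for r in rows) / len(rows))
```

The design choice the pattern encodes is that analysis is a privilege granted per use, while custody of the data never changes hands.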
What problem were you trying to solve?
We were looking for solutions to technical challenges in data visitation by AI models, as well as for exemplary structures for AI governance. Luckily, in RDA the problem space is often understood as socio-technical. This means working toward solutions in collaboration with colleagues from around the world and becoming better informed about approaches suited to countries with more or fewer resources and to mature or emerging disciplinary formations.
Who did you make it for?
Members of the RDA and stakeholders of the European Open Science Cloud Future project, which supported our work. The outputs were featured at an RDA webinar on The Role of AI in Building Responsible Open Science Infrastructures and at the RDA’s 23rd Plenary in Costa Rica, where working group members from around the globe gave updates on AI policy in their regions.
Is that who turns out to be the most interested?
Interest has been broader than anticipated. I have delivered the content at workshops held by the Open Modeling Foundation and the FORCE11 Scholarly Communications Institute, and to groups as diverse as emergency room surgeons exploring AI to improve patient outcomes at the Coalition for National Trauma Research; informatics scholars at AMCIS in Panama; female data scientists at Women in Analytics; and the Kahee Institute, a socio-legal impact organization focused on international human rights law, policy, advocacy, and capacity building in the context of emerging technologies.
What motivated you to do this work?
As an AIDV-WG participant, I had noticed that model developer and deployer rights, as well as rights for instructors and learners in higher education, were either burdened or ignored outright in many of the policies we tracked. The liability, expense, and compute time needed to teach someone to develop AI models seemed to be constantly rising, leading to dramatic imbalances between university and industry-based research.
|
Over the past twenty years, the balance of notable model development has shifted dramatically from academia to industry: industry now develops ~90% of all notable models, up from just ~20% in 2003.
Meanwhile, the share of model-related patents granted to US applicants has also fallen, from 40% in 2010 to 14% in 2023.
|
These are the trends that drove me to seek a better understanding of today’s AI landscape.
Who have you worked with along the way?
My collaborators Eyiuche Ezigbo from Nigeria, Ronit Purian from Israel, Yeyang Su from China, and Shiny Martis from France challenged and inspired me with their dedication to protecting and promoting human rights through AI Bills of Rights. Over the past few years of work together, we came to appreciate each other’s day-to-day challenges as well as our aspirations for a better world.
Another serendipitous lift came from an early reviewer of our work, John C. Havens, who is the founding executive director of the IEEE’s Global Initiative on Ethics of Autonomous and Intelligent Systems. John inspired us, broadened our perspective, and encouraged us to strengthen our message.
Who or what inspires you?
Visual art! Here’s a painting by Mario Carreño (Cuba 1913–Chile 1999) that captured my attention during a visit to the Museo Ralli in Santiago, Chile.
|
Through an exploration of our networked desires and capabilities in an increasingly digital world, Carreño’s painting offers a near perfect diagram of today’s human in the loop (even if the artist was born a century before ubiquitous LLMs!).
What AI tool(s) do you use most frequently?
Perplexity.
What are you reading or listening to that you’d like to share with our readers?
The New Empire of AI, by Rachel Adams, and Qualityland, by Marc-Uwe Kling.
Interviewees scheduled next for this column include:
Geralyn Miller, Microsoft AI for Good Lab; Gabriela Constanza Arriagada Bruneau, Pontificia Universidad Católica de Chile; P. Elif Ekmekci, TOBB University of Economics and Technology.
Is there someone you’d like to see interviewed here? Email recommendations to nmeyers@arl.org.
|
Research Assessment Reform in the Netherlands
|
Critiques of research assessment as focused on overly narrow outputs and goals are increasingly going mainstream. In a February 2025 article in Minerva, sociologist of science Alex Rushforth uses frame analysis to draw out the assumptions and beliefs of stakeholders from across the Dutch science system following the launch of the Recognition and Rewards initiative in 2019. Rushforth shows, for instance, how advocates repositioned the adoption of reforms at the national level from a risk into a source of competitive advantage.
Last year’s change in government in the Netherlands led to funding cuts for national open science efforts, placing the country’s continued leadership on research assessment reform in question. However, as institutions in the United States and Canada begin to rethink local approaches to hiring and advancement, the Dutch case will remain an instructive one.
|
1025 Connecticut Avenue NW #1200 | Washington, DC 20036 US