Hello, and welcome to the second edition of Prolog, the GRAIL Network newsletter! Here, we'll provide regular updates on AI-related happenings within the U.S. federal government and European Union, topical writing from the Center for Democracy & Technology (CDT) and R Street Institute, and work and perspectives from within the Network. More about the Governance & Research for Artificial Intelligence Leadership Network is available on our website.
We hope you’ll share questions, comments, news items, and your recent work with us at info@grailnetwork.org. Your input helps shape what we feature, and we want to hear from you! If you have a minute, send us a note using this short, anonymous survey.
🗣️ Eager Learning: Sharing Network Expertise
Through a series of webinar-style briefings GRAIL is hosting for congressional and other governmental staff, we hope to improve policymakers’ understanding of the field of AI and how the applications growing out of it might impact society. We hope you’ll join us on February 26 for the first briefing in the series, and stay tuned for announcements of two more.
The first three briefings will feature GRAIL members Ryan Calo, Suresh Venkatasubramanian, James Bessen, Rob Seamans, Margaret Hu, and Andrew Selbst, but we want to continue this series and feature as many GRAIL members as possible. If you would like to propose a topic for a congressional briefing, please send us a paragraph description at info@grailnetwork.org.
🏅 Named-Entity Recognition: New Work from Network Members
- Alarith Uhde et al. design a social practice-based, worker-centered, and well-being-oriented self-scheduling system that gives healthcare workers more control during shift planning, discussed further in “Design and Appropriation of Computer-supported Self-scheduling Practices in Healthcare Shift Work.”
- Corinne Cath-Speth et al. undertake a series of qualitative interviews with trans and/or non-binary users of Voice-Activated AI (VAI) to explore their experiences and needs as they relate to representation in this pre-print, “Speaking from Experience: Trans/Non-Binary Requirements for Voice-Activated AI.”
- Joshua Kroll examines the various ways in which the principle of traceability has been articulated in AI principles and other policy documents from around the world in “Outlining Traceability: A Principle for Operationalizing Accountability in Computing Systems.”
- Baobao Zhang et al. investigate what drives AI researchers’ immigration decisions in a new report, “The Immigration Preferences of Top AI Researchers: New Survey Evidence.”
- José Hernández-Orallo et al. describe a methodology for categorising and assessing AI research and development technologies, mapping such technologies to Technology Readiness Levels and introducing graphs to compare readiness to generality, in “AI Watch: Assessing Technology Readiness Levels for Artificial Intelligence.”
📰 Data Mining: Contributions from CDT and R Street
🆕 Welcome Our New GRAIL Network Members
- Dr. Shobita Parthasarathy is a professor of public policy and women's studies, and director of the Science, Technology, and Public Policy Program at the University of Michigan. She is particularly interested in the development, implementation, and governance of innovation that serves social equity and justice goals. Her current projects focus on the politics of inclusive innovation in international development, the development and governance of COVID-19 testing, and envisioning an equity-focused innovation system.
Dr. Parthasarathy is also the director of the University of Michigan's Technology Assessment Project (TAP), which uses analogical case study methods to anticipate the equity, social, and ethical dimensions of emerging technologies and inform their governance. Check out TAP’s first report on the use of facial recognition technology in schools, and her podcast, The Received Wisdom, which focuses on issues at the intersection of tech, science, society, and policy.
- Dr. Ben Green is a postdoctoral scholar in the Michigan Society of Fellows, an Assistant Professor at the University of Michigan’s Ford School of Public Policy, and an Affiliate at Harvard’s Berkman Klein Center for Internet and Society. His research looks at the role of data and algorithms in efforts to reform public policy, bridging perspectives from computer science and science and technology studies. Dr. Green’s recent and ongoing work includes a book about “smart cities,” multiple experimental studies looking at how people make use of algorithmic predictions in practice, and several papers taking a critical look at algorithmic fairness and criminal justice risk assessments.
🤯 Intelligence Explosion: Applications in AI
⚙️ Committee Machine: What We're Tracking
Legislation
- The National AI Initiative Act was signed into law in December as part of the National Defense Authorization Act. Among other things, the Act establishes a new AI Initiative Office within the White House’s Office of Science & Technology Policy. This new AI Office will house an interagency coordination program and a new AI Advisory Council, which will include members from academia and civil society. The Act also instructs the National Institute of Standards and Technology to develop a “voluntary risk management framework for trustworthy artificial intelligence” and to “participate in the development of standards and specifications for artificial intelligence.” Learn more here.
- Washington State Bill 5116: This bill would impose several obligations on state agencies that procure, develop, or use automated decision systems, including provisions aimed at non-discrimination, accountability, transparency, and means to challenge and seek redress for erroneous determinations. The bill is still relatively early in the legislative process and is currently with the Senate Ways and Means Committee.
- New York City Council Int. 1894: This bill would regulate the use of automated employment decision tools. In particular, it would “prohibit the sale of such tools if they were not the subject of an audit for bias in the past year prior to sale, were not sold with a yearly bias audit service at no additional cost, and were not accompanied by a notice that the tool is subject to the provisions of this bill. This bill would also require any person who uses automated employment assessment tools for hiring and other employment purposes to disclose to candidates, within 30 days, when such tools were used to assess their candidacy for employment, and the job qualifications or characteristics for which the tool was used to screen. Violations of the provisions of the bill would incur a penalty.”
Executive and Federal Agency Actions
- OMB Agency Memo: After asking for feedback on a draft in early 2020, the Office of Management and Budget released a memo outlining its guidance to federal agencies on their approach to regulating the use or development of AI applications. The memo strongly encourages a deregulatory approach, but it remains unclear whether the Biden administration will maintain that stance.
- Executive Order on Promoting the Use of Trustworthy AI in the Federal Government: This EO instructs agencies to compose inventories of their current AI applications and sets up a process for interagency coordination.
- Administrative Conference of the United States (ACUS): ACUS adopted a series of statements to guide agencies, including on agencies’ uses of AI (beginning at 6616). The statement addresses transparency, harmful bias, technical capacity, obtaining AI systems, data, privacy, security, decisional authority, and oversight.
Court Cases
- Van Buren v. US: The Supreme Court may soon release its opinion on the question of whether someone “exceeds authorized access,” within the meaning of the Computer Fraud and Abuse Act, if they disregard non-technical access limitations such as terms of service or company policies when using a computer system they are authorized to use. This decision could impact many forms of data collection, research, and auditing.
- Google v. Oracle: The Supreme Court is also expected to release its opinion on whether Google’s reproduction of certain elements of the Java Application Programming Interface infringed Oracle’s copyright. Although this fight between two massive companies involves a claimed $9 billion in damages, it could have extremely broad implications for the future of software, copyright, and interoperability, including for AI applications.