Mark your calendars. The Hub's next webinar, on October 23, looks at the upcoming UK Global Summit on AI Safety. We will discuss the real risks posed by AI with Gina Neff, Director of the Minderoo Centre for Technology and Society at the University of Cambridge; Katie Shilton, Professor of Information Science at the University of Maryland; and Tom Goldstein, Volpi-Cupal Professor of Computer Science at the University of Maryland. The British Government laid out its plans here.
Adam Zable and Susan Aaronson’s paper on public participation in AI strategies was published by CIGI on September 21. Read it here.
Susan Aaronson completed a paper on generative AI and data governance, Data Dysphoria, now under review at CIGI. Read it here.
Aaronson lectured on generative AI and trade at the WTO and at the UK Departments for International Trade and for Science, Innovation and Technology on September 21. She also lectured at Georgia Tech on EU/US AI cooperation and, on September 29, addressed the Higher Education Leadership Initiative for Open Scholarship on the impact of generative AI on openness.
Digital Trade and Data Governance News
- The European Commission designated six tech giants - Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft - as gatekeepers under the Digital Markets Act (DMA), requiring them to ensure compliance with DMA obligations for their core platform services within six months or face fines of up to 10% of their global turnover.
Artificial Intelligence News
- The French National Assembly proposed a law to regulate AI's use in relation to copyright. The bill would require AI software to obtain permission from authors or rights holders before using their works, mandate that AI-generated creations be labeled "generated by AI" and attribute the original authors, and introduce a taxation system to ensure fair compensation.
- Senators Richard Blumenthal and Josh Hawley are set to announce a comprehensive framework for regulating AI in the US, including licensing, auditing, oversight, liability for privacy and civil rights violations, and data transparency. The framework will be unveiled during an AI hearing featuring tech executives and will be followed by the introduction of legislation.
- Governor Gavin Newsom of California signed an executive order to study and prepare for the development, use, and risks of AI in the state.
- The British government announced details on its Global Summit on AI Safety.
- Fifteen more companies, including Nvidia, Salesforce, IBM, and Palantir, joined the White House's voluntary pledge to address AI risks, commit to adopting technology for identifying AI-generated images, and share safety data with the government and academics.
- Alex Engler proposes a novel AI regulatory framework for the US called the Critical Algorithmic Systems Classification (CASC). This system would empower federal agencies to comprehensively regulate and audit algorithmic systems used in critical socioeconomic determinations, upholding consumer and civil rights protections while addressing the diverse challenges posed by algorithmic decision-making across sectors. | Brookings
- The 2023 Global Artificial Intelligence Infrastructures Report, authored by J.P. Singh, Amarda Shehu, Caroline Wesson, and Manipriya Dua, with a foreword by David Bray, provides a comprehensive analysis of AI policies from 54 countries. The report reveals diverse strategies, introduces the concept of "AI Wardrobes," and underscores the importance of interdisciplinary collaboration to address trust in AI and its societal implications. | Stimson Center
- Nicolas Köhler-Suzuki maps digital trade in the European Union and assesses the EU's prominent role as a leading exporter and importer of digitally deliverable services, highlighting the need for a strategic understanding of this strength and its impact on global digital trade regulations. | Institut Jacques Delors
- In an extensive investigation, researchers Jen Caltrider, Misha Rykov, and Zoë MacDonald reveal that numerous top car brands fail to adequately safeguard customer privacy. These companies have moved into the data business, collecting vast amounts of personal information, tracking activities, and potentially sharing and selling these data points. This poses privacy and security concerns for car owners and highlights the need for greater transparency and protection in the automotive industry. | Mozilla Foundation
- Michael Garcia highlights the potential cybersecurity risks accompanying the widespread adoption of extended reality technologies in the United States, emphasizing the need for collaborative efforts among government, industry, academia, nonprofits, and international partners to address these challenges and avoid national security pitfalls. | New America
- Stefaan G. Verhulst, Laura Sandor, and Julia Stamm discuss the pressing need for a reimagined approach to data consent, particularly in the context of forced migrations and extensive data collection. They propose shifting from traditional consent models to a "social license" framework that prioritizes ongoing community engagement and acceptance. The article explores various participatory methods and suggests establishing a Social License Lab for Data (Re)Use to foster a fair and transparent data ecosystem while empowering individuals and communities.
1957 E St NW, Washington, DC 20052 US