On August 7, The Hub held its 38th webinar with Dr. Anu Bradford of Columbia Law School on her new book comparing AI regulatory models.
The rise of generative AI has introduced pressing concerns about data governance, web scraping, and openness. To delve into these complex issues and discuss potential solutions, The Hub, NIST-NSF Trustworthy AI Institute, the Ostrom Workshop at Indiana University, CIGI, and others are collaborating to host a free conference focused on generative AI and data governance. More details will follow! For any questions, please contact Adam Zable, Director of Emerging Technologies, at ajzable@gmail.com.
- The EU’s Digital Services Act just went into effect, and platforms like Meta and TikTok are starting to let EU-based users switch off AI-driven personalization features.
- The US Consumer Financial Protection Bureau is set to announce plans to regulate data brokers in the surveillance industry, citing concerns about illegal data collection and sharing.
- The Office of the Australian Information Commissioner and 11 other data protection authorities issued a joint statement on the risks of data scraping from social media and other public sites, stressing the importance of safeguarding personal information and promoting user awareness.
Artificial Intelligence News
- The US Senate grapples with privacy reforms as AI's expansion raises concerns over personal data protection and the appropriate extent of regulation.
- France aims to establish itself in the global AI industry by focusing on open-source AI, with President Macron announcing funding for a “digital commons” and a push for domestic AI champions.
- The Biden administration brought leading AI companies together to make a series of voluntary commitments on AI safeguards. But is this a step toward regulation or merely a rule-writing ploy by the tech industry?
- Arati Prabhakar, President Biden's science adviser and director of the White House Office of Science and Technology Policy, stresses the importance of addressing AI risks such as bias and lack of transparency through both voluntary commitments and government action.
- The New York Times began blocking OpenAI from using NYT content to train ChatGPT. See Nick Vincent’s analysis of such data strikes here.
- The European Commission published a Communication on “Web 4.0 and virtual worlds: a head start in the next technological transition.” Here’s an explainer.
- The European Parliament’s Internal Market and Consumer Protection Committee released a draft report highlighting the need for regulation, interoperability, and user protection in virtual worlds.
Research and Analysis
Highlights
- Matt Sheehan investigates China's pioneering AI regulations, explaining how they are made and structured and offering valuable insights for policymakers elsewhere | Carnegie Endowment for International Peace
- Matthew Hutson surveys the major approaches to AI regulation around the world: the EU emphasizes risk, the US favors innovation, and China focuses on control, each reflecting different societal priorities and challenges | Nature.com
- Isabella Struckman and Sofie Kupiec explore the motivations of signatories to the Future of Life Institute's open letter from March, which called for a pause on AI training | Communications of the ACM
- Melissa Heikkilä explains new research showing that AI language models display distinct political biases in their responses | MIT Technology Review
- Josh Dzieza reports on the growing underclass of taskers that accompanies the proliferation of AI, shedding light on the massive human workforces performing the labor-intensive tasks integral to AI systems and exploring the implications for the future of work | The Verge
- Criminals are capitalizing on both open-source and custom-built AI (for example, FraudGPT), creating AI chatbots to fuel malicious activities ranging from malware development to phishing schemes and credit card fraud | PCMag
- Richard Van Noorden explains how science search engines are integrating AI chatbots to offer researchers better summaries and references, despite concerns about reliability and plagiarism | Nature.com
Opportunities
- The US Government is requesting public comment on a proposed rule from the Department of Health and Human Services that aims to update health IT interoperability standards and facilitate information sharing in healthcare.
- The NIST-NSF Trustworthy AI Institute for Law and Society seeks a Director of Outreach and Strategy.
1957 E St NW | Washington D.C., DC 20052 US