MUST READ: CDT’s Gabe Nicholas pens an op-ed for Foreign Policy about how #AI companies can share information with researchers about how people use their products, so we can have better AI policy. He offers recommendations for AI companies: 1) Give users tools to voluntarily share chat logs directly with researchers. 2) Use transparency reports to share aggregate information about how people use their products. 3) Explore privacy-preserving ways to share anonymized chat logs with researchers. Just as the FDA and NHTSA ensure drug and car safety, regulators need to understand how people use AI to create evidence-informed policy. https://lnkd.in/ekwM2-nV
Center for Democracy & Technology’s Post
-
President Biden's recent executive order on AI development has profound implications for both the music industry and tech giants. What do you think will be the most significant ripple effect in our industry? Share your thoughts below👇 https://lnkd.in/e-2AYgbM #musicindustry #ai #tech #executiveorder
Biden to Sign Sweeping Executive Order to Address AI Concerns
https://www.billboard.com
-
📣 Big news in the world of AI! On February 8, the FCC took a bold step towards regulating AI applications by making AI-generated robocalls illegal. 🚫🤖 This comes after an attempt to disrupt the New Hampshire Democratic process, highlighting the potential misuse of AI in politics and public spheres. This ruling is a game-changer for professionals in tech, legal, and political sectors, emphasizing the need for stringent regulations in the rapidly evolving field of AI. It's a reminder that while AI holds immense potential, its misuse can have far-reaching implications. We're at a pivotal moment where proactive measures from regulatory bodies like the FCC are crucial to ensure AI is used responsibly and ethically. Let's continue the conversation - what are your thoughts on this development? 💭 Looking ahead, we believe in the transformative power of AI when used responsibly. Let's navigate this journey together. 🚀 Read more about the FCC's ruling here: https://lnkd.in/ekjsSV76 #PromptHacks #AI #FCC #EthicalAI #TechRegulation
FCC Ruling on AI-Generated Robocalls Reflects Focus on Artificial Intelligence
akingump.com
-
🔵 Automation & AI @ Procurement ✨ | Strategic Procurement ♟️ | AI Mentor & Advisor in Procurement 🔝 | Advanced Negotiations 🤝 | ex-Orange & ex-B/S/H | 18+ years' experience | Working in 🇩🇪 & 🇬🇧 & 🇵🇱
🎯 Attempts to regulate AI - the example of #California 🎯 A very interesting memo by Andrew Ng, one of the most important voices in the AI world, on the development of AI regulation, prompted by draft legislation in California. Here are the main points: ➡ SB 1047 targets AI technology regulation rather than AI applications, which may not enhance AI safety. ➡ The bill’s complex and ambiguous reporting requirements create substantial legal risks for developers. ➡ Compliance demands may force developers to hire costly legal aid or disengage from AI development. ➡ The law’s enforcement mechanism includes a five-person board with significant, potentially manipulable power. ➡ Regulatory ambiguity and potential lobbying could harm open-source contributors unable to afford compliance costs. ➡ The regulation poses a significant threat to open-source innovation and AI advancement. #ai #AIact #artificialintelligence #airegulations https://lnkd.in/dZY8SSFQ
AI's Cloudy Path to Zero Emissions, Amazon's Agent Builders, and more
deeplearning.ai
-
AI agents are on the horizon, posing a challenge to our online interactions. The rise of misinformation spread by anonymous AI agents is a concern echoing the damage seen on social media platforms over the past decade. ❓How can we distinguish between real humans and sophisticated AI agents in the digital realm? ❓How do we safeguard our privacy while proving our humanity online? The concept of personhood credentials offers a solution by allowing individuals to validate their humanity without compromising their identity. By leveraging information from trusted entities like the government, we can establish our human status through privacy-preserving technologies. #AI #DigitalFuture #PrivacyTech
3 Questions: How to prove humanity online
news.mit.edu
-
Recruiting for Legal • Corporate & Constitutional Governance • Democratic & Electoral Services • Data Protection
I wanted to share the below article from the Local Government Information Unit (LGIU) to mark AI Day today, which explores the use and impact of AI within local government. We have started to implement AI into our day-to-day work at James Andrews Recruitment Solutions Ltd, so I'd be interested to hear the experiences of my connections in the sector: are we at a stage where the efficiency benefits outweigh the risks around privacy and unchecked bias? #artificialintelligence #localgovernment https://lnkd.in/eqQkiiHM
-
📌 Artificial Intelligence Act: EU lawmakers ratify new rules on artificial intelligence. Last Tuesday, lawmakers at the European Parliament ratified a provisional agreement on landmark artificial intelligence rules ahead of a vote by the legislative assembly in April. The Act will be the world's first comprehensive legislation on AI technology. Read this article, which explains it in more detail: https://lnkd.in/dKYiTpaE
The EU’s artificial intelligence rulebook, explained
politico.eu
-
I believe the biggest threat AI fake news creates is not that we start believing what’s fake, but that we stop believing what is true. Imagine a scenario where damning evidence emerges against a prominent candidate, but instead of confronting it, the candidate dismisses it as AI fabrications. This is the essence of what I call the "AI-political moment" — where the mere suggestion of AI interference can inject paralyzing doubt, even when faced with seemingly irrefutable evidence. My latest for the Toronto Star on the impact of AI on elections and democracy. Along with solutions for how we can overcome some of these pressing challenges. https://lnkd.in/g9Gaq-ne
AI fake news creations will undermine truthful reporting
thestar.com
-
❄ On our 6th day of Salesforce #ethicalUse content ❄ we’re sharing a Foreign Affairs Magazine article that helps us ask the right questions as the world of #AI continues to evolve, written by our own Paula Goldman. Her piece “It’s Humans, Not Superhumans, We Need to Worry About” explains why we need to move beyond binary, existential debates about AI and urges businesses and governments to take practical actions today that can help ensure our safety tomorrow. No one has a crystal ball for the future of AI, but businesses and governments already have many of the tools they need to address the urgent issues that AI presents today. #responsibleAI https://lnkd.in/guyb8TAD
It’s Humans, Not Superhumans, We Need to Worry About
foreignaffairs.com
-
EU's AI Act: A Leap into the Past? 🚀🔍 In a tech world moving at light speed, the EU's AI Act feels like a dial-up connection in a fiber-optic era. With a 3-year compliance window, will this regulation be a relic before it even takes effect? AI evolves, but can policy keep up? 🤣 🔗 Here's what will (and won't) change: https://lnkd.in/eKJcYKvJ #EUAIAct #TechRegulation #AIRegulation #DigitalPolicy #FutureOfAI
The AI Act is done. Here’s what will (and won’t) change
technologyreview.com
-
Principal - data science & analytics | PhD | Generative AI | Computer Vision | Data Science | leadership | researcher
A new order has been introduced that requires AI developers to share safety data, training information, and reports with the U.S. government before releasing future large AI models or updated versions of such models to the public. The mandate specifically applies to models with "tens of billions of parameters" that were trained on extensive data and could pose risks to national security, the economy, public health, or safety. Though this appears to be a step in the right direction, any regulation must be immune to manipulation to be effective. The crucial question is how to objectively define a large model. https://lnkd.in/gZPE7SG6
Biden's Executive Order on AI Is a Good Start, Experts Say, but Not Enough
scientificamerican.com