'Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences or where the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content.' (Article 50(4))

This looks to be a gem in the EU AI Act: as far as I can see, it requires news publishers, for example, either to disclose when text has been generated entirely with AI or to ensure human editorial review and responsibility for the content. A strike against the proliferation of opaque junk AI news sites, with a carve-out to support quality journalism?
Ellen Judson’s Post
More Relevant Posts
-
Are you finding it hard to achieve the best result with your AI prompts as a journalist? Read this quick lesson on AI prompting for journalists on our website. Quick Tip: Treat your prompt like an input or a Google search query; the clearer and more explanatory it is, the better your result. https://lnkd.in/ddzYfqN6
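A minimal sketch of that tip, assuming a hypothetical `generate` placeholder in place of whatever model or API a newsroom actually uses; the only point is the contrast between a vague prompt and a clear, explanatory one.

```python
# Minimal sketch of the prompting tip above. `generate` is a placeholder (an
# assumption), not a real library call; wire it to whichever chat model or
# API you actually use.
def generate(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    raise NotImplementedError("Connect this to the model or API you actually use.")

# Vague prompt: the model has to guess the audience, length, angle and format.
vague_prompt = "Write something about the city council meeting."

# Clear, explanatory prompt: like a good search query, it states the task,
# the source material, the audience and the constraints.
clear_prompt = (
    "Summarise the attached city council meeting transcript in 150 words "
    "for a local news audience. Lead with the vote on the housing proposal, "
    "name the council members quoted, and flag any claims that need "
    "fact-checking before publication.\n\nTranscript:\n{transcript}"
)

if __name__ == "__main__":
    transcript = "..."  # paste or load the real transcript here
    print(clear_prompt.format(transcript=transcript))
```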
-
Communications Director | Content Strategist | Chief Marketing Officer | AI Enthusiast | Focused on Digital Transformation and Workplace Trends
🚀 Are we preparing for the future? Here is an interesting read on the developments in newsroom AI policies. The article highlights the need for clear guidelines to preserve journalistic integrity and safeguard source confidentiality. According to the research, these two points are critical:

1️⃣ Preserving Journalistic Values: AI policies emphasize maintaining credibility and ethical standards, ensuring AI tools enhance rather than undermine journalism's core principles. (AI should help human journalists, not replace them!)

2️⃣ Protecting Sources: Detailed guidelines in commercial news organizations stress caution when using AI to handle sensitive information, reflecting the importance of source protection and legal liability. (This problem impacts all workplaces as AI grows at lightning speed!)

Check out the full article to understand how AI is reshaping journalism: AI Policies in Newsrooms: https://lnkd.in/g4BTNY3a

#AI #AIPolicies #Journalism #futureofwork
Researchers compare AI policies and guidelines at 52 news organizations around the world
https://journalistsresource.org
-
Competence Lead | Senior Director Artificial Intelligence. Leveraging 20+ Years of AI Expertise to Empower Organizations with Generative AI and LLMs
A week ago, Reporters Without Borders (RSF) and 16 partner organisations published the Paris Charter on AI and Journalism. I like the overall approach: it focuses on the usage of AI and on responsibility, and human decisions remain central. Still, I feel one major misconception is contained in the charter: "The AI systems used by the media and journalists should undergo an independent, comprehensive, and thorough evaluation involving journalism support groups. This evaluation must robustly demonstrate adherence to the core values of journalistic ethics." It is impossible to build an LLM-based system that cannot be led to output wrong or discriminatory statements. You will always find prompts that generate text you don't like. But still, the advantages massively outweigh the potential problems. https://lnkd.in/eSrE6QDn
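To make the objection concrete: any practical evaluation of an LLM-based system is a finite sample of prompts, so it can surface failures but cannot prove that no failure-inducing prompt exists. Below is a minimal sketch of such a spot-check harness, assuming a hypothetical `call_model` function and a toy banned-phrase check rather than any organisation's real evaluation method.

```python
# Hedged sketch of an editorial spot-check harness. `call_model` and the
# banned-phrase check are illustrative assumptions, not a real evaluation
# standard. A harness like this samples behaviour; it cannot prove that no
# prompt exists that elicits a bad output.
from typing import Callable, List, Tuple


def run_spot_checks(
    call_model: Callable[[str], str],
    test_prompts: List[str],
    banned_phrases: List[str],
) -> List[Tuple[str, str]]:
    """Return (prompt, output) pairs whose output contains a banned phrase."""
    failures = []
    for prompt in test_prompts:
        output = call_model(prompt)
        if any(phrase.lower() in output.lower() for phrase in banned_phrases):
            failures.append((prompt, output))
    return failures


if __name__ == "__main__":
    # Toy stand-in model so the sketch runs on its own.
    def toy_model(prompt: str) -> str:
        return "Placeholder answer to: " + prompt

    prompts = [
        "Summarise today's court ruling for a general audience.",
        "Write a headline about the election result.",
    ]
    banned = ["unverified rumour", "as an ai language model"]
    print(run_spot_checks(toy_model, prompts, banned))
```

Passing such a check only means none of the sampled prompts failed; it says nothing about the prompts that were not tried, which is why "robustly demonstrate adherence" is a stronger claim than any finite test can support.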
-
Here are four AI tools journalists can use to detect false information and find reliable sources. Via IJNet.
4 AI tools to help newsrooms avoid spreading harmful content
ijnet.org
-
Key Points:
1. Supreme Court Review of State Laws: The Supreme Court is set to review Texas and Florida laws that regulate how social media companies moderate content, which could impact First Amendment rights.
2. Editorial Judgment and First Amendment: The laws challenge the notion of editorial judgment by social media platforms, a practice protected under the First Amendment. Different federal courts have issued conflicting rulings on this matter.
3. Implications for AI Regulation: The Supreme Court's decision will influence future regulations on AI, particularly regarding content moderation and the outputs of AI models like ChatGPT.
4. AI in Content Moderation: AI plays a significant role in moderating content on social media platforms. Different AI companies employ various methods, such as OpenAI's "reinforcement learning from human feedback" (RLHF) and Anthropic's "Constitutional AI."
5. Regulatory Attention on AI: AI regulation is gaining attention in Washington, DC, with discussions ranging from the development of superintelligence to export controls on AI chips.
6. Approaches to Regulating AI Outputs: AI companies have their own approaches to preventing harmful outputs, but there is a need for common standards to define what is harmful to national security and public safety in the form of discourse.
7. Editorial Discretion in AI Models: AI labs might argue that regulations affecting their models' outputs infringe on their editorial judgment, similar to arguments made by social media platforms.
8. AI Models as Speech Curators: AI models could be seen as curating and combining speech, and thus regulations affecting AI outputs might interfere with editorial discretion.
9. Government Control and First Amendment: The article suggests that certain levels of government control over AI content moderation could be unconstitutional under the First Amendment, especially if they threaten political discourse.
10. Balancing Safety and Free Speech: While the government has an interest in ensuring AI safety, expansive definitions of editorial discretion might impede regulations aimed at preventing direct harms like bioterrorism.

Summary and Analysis: The core issue revolves around whether the content moderation practices of social media companies, and by extension AI labs, constitute protected editorial judgment. This is crucial because it determines the extent to which these entities can exercise control over the content they disseminate without government interference. AI labs may argue 'editorial discretion' to resist regulations that aim to control their models' outputs. This argument parallels the current legal battles faced by social media platforms. The impending Supreme Court decision will not only affect social media platforms but also set critical guidelines for the future regulation of AI, balancing the imperatives of safety, free speech, and editorial independence.
Two Supreme Court Cases Could Shape the Future of AI and Content Moderation
justsecurity.org
-
Discover how AI is revolutionizing fact-checking and truth verification in the digital age with Bawaba AI. Uncover the advanced technologies behind detecting misinformation and the pivotal role AI plays in combating fake news. Don't miss out on the unique capabilities and advantages of using AI in ensuring accuracy and credibility in information sharing. Stay informed and trust in AI to separate fact from fiction in today's complex media landscape. #accuracy #AI #algorithms #factchecking #fakenews #Filtering #Investigating #misinformation #Onlinecontent #technology #Truthverification
Investigating Misinformation: AI's Role in Fact-Checking and Truth Verification
https://en.bawabaai.com
-
Good summary of the issue and policy needs, together with a call for common and universal provenance standards and mandatory transparency reporting for social media to address online harms caused by AI-generated images. #AI #responsibleai #onlineharms Valentine Goddard Jaxson Khan Teresa Scassa Barry Sookman https://lnkd.in/gTjBuZ4u
Analyzing Harms from AI-Generated Images and Safeguarding Online Authenticity
rand.org
-
Artificial intelligence is informing and assisting journalists in their work, but how are newsrooms managing its use? Research on AI guidelines and policies from 52 media organizations around the world offers some answers. Keep reading to learn what the researchers found, including a strong focus on journalistic ethics across the documents, as well as real-world examples of AI being used in newsrooms, plus how the findings compare with other recent research: https://lnkd.in/eH6sFm5d
Researchers compare AI policies and guidelines at 52 news organizations around the world
https://journalistsresource.org
-
A great piece by Jessica Zier on AI disclosure and labeling for news content. As more platforms adopt content labeling, new implementers should be very thoughtful about the specific language used, intentional about the level of information disclosure they provide, and adaptable to shifting audience needs around transparency. "Overall, labeling has been shown to reduce misinformation belief, increase trust in the journalistic process, and mitigate the harms of automated content production. It can be a valuable strategy to increase transparency. However, the exact design of the labels could impact their efficacy as accountability tools, such that framing AI’s role as the “generator” of the content could potentially serve to dodge accountability for the human journalist. Inadequate labeling can perpetuate harm, and excessive (or inaccurate) labeling can backfire; there really is no one-size-fits-all approach."
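As a rough illustration of the design space the piece describes, here is a hedged sketch of what disclosure metadata for a single article might carry; the field names and label wording are hypothetical, not drawn from any published labeling standard.

```python
# Hypothetical disclosure metadata for one article. Field names and label
# wording are illustrative assumptions, not a real standard. The point is
# that the reader-facing label, the extent of AI involvement, and the human
# editorial responsibility can be recorded separately.
article_disclosure = {
    "label_text": "AI-assisted",              # wording shown to readers
    "ai_role": "draft_generation",            # e.g. research, draft_generation, translation
    "human_review": True,                     # a journalist reviewed the output
    "editorially_responsible": "Jane Doe, Section Editor",           # hypothetical name
    "disclosure_detail_url": "https://example.org/how-we-use-ai",    # placeholder URL
}

if __name__ == "__main__":
    for field, value in article_disclosure.items():
        print(f"{field}: {value}")
```

Keeping the reader-facing wording separate from the record of human responsibility is one way to avoid the trap the quoted passage warns about, where framing AI as the "generator" lets the human journalist dodge accountability.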
"AI-generated", "AI-assisted", "Made with AI"? How should we be thinking about labeling the use of AI in news media to support transparency and trust? Here's Jessica Zier on the challenges and opportunities of AI labeling in newsrooms: https://lnkd.in/gQXsRf2H
“This Article is AI-Generated”: AI Disclosure and Labeling for News Content
generative-ai-newsroom.com
-
If you missed the recent article from Private Sector CSO Richard Hilton in TBTech then you can catch it again here! AI and the threat it poses to our creativity! Check it out, we'd love to hear your thoughts! Thank you Claire Strachan for your help on this insightful piece and TBTech for publishing!! #AI #humancreativity #copyright
Fascinating chatting to Richard Hilton at Claritas Solutions Limited about the developments in AI, and how it scrapes information from the internet as part of its learning. With the copyright issues this brings headlining in the news this week, Richard discusses the challenges it poses, and also the concept that AI is threatening our own creativity. Is AI a necessary evil, or, with declining birth rates, is it something that we will rely on and need in the future? A great discussion, and thanks to TBTech for publishing it. #AI #technology Jenny Bell Kathryn Green Josh Bulmer Jennifer Dalton Dwayne Barker Kirsty Sutton Simon Fogg Katy Pollard Andrew Tate Elise Jones https://lnkd.in/eFfiBdCT
Will We Lose Creative Spark through AI?
https://tbtech.co