The Trustworthy AI Seminar held at Mälardalen University delved into the foundational challenges of ensuring the #trustworthiness of current AI systems. Prof. Mobyen Uddin Ahmed moderated the event, which featured five speakers sharing insights and ongoing research.

🎙 Prof. Shahina Begum focused on the crucial role of 'Explainability' in Trustworthy AI. Starting from the origins of Explainable AI (XAI), she discussed current developments and her ongoing projects, particularly #ARTIMATION and #TRUSTY.

🎙 Prof. Rafia Inam presented on Explainable AI in the telecom industry, proposing an approach that combines explainability for data insights, feature analysis, machine learning, and machine reasoning.

🎙 Prof. Kerstin Bach explored 'Trustworthy AI Applications' through the NorwAI initiative, emphasizing the development of decision support systems that integrate lawful, ethical, and robust AI principles.

🎙 Prof. Mark Dougherty discussed ethical principles for AI, covering fairness, bias, trust, transparency, accountability, social benefit, privacy, and security.

🎙 Prof. Fredrik Heintz delved into integrating learning and reasoning into Trustworthy AI through his ongoing projects #TrustLLM and #TAILOR. He also addressed key research challenges outlined in EU regulation and highlighted the importance of human and computational thinking abilities.
TRUSTY’s Post
🚀 Exciting News for AI Enthusiasts and Professionals! 🚀 We're thrilled to announce the release of "Trustworthy AI: From Theory to Practice" - a groundbreaking book that navigates the complex landscape of artificial intelligence with a focus on ethics, security, privacy, fairness, and reliability.

📘 About the Book: "Trustworthy AI: From Theory to Practice" offers a deep dive into the principles of developing AI technologies that are not only advanced but also aligned with human values and societal well-being. It is a must-read for anyone looking to understand the intricacies of AI through a lens that prioritizes ethical considerations, practical applications, and a future for AI that benefits humanity.

🔍 Why You Should Read It:
- Insightful Analysis: Explore the latest research and case studies on how AI can be designed to be more transparent, accountable, and inclusive.
- Practical Solutions: Learn through practical examples and code snippets how to implement trustworthy AI systems in your projects or organization.
- Expert Perspectives: Gain knowledge from leading experts in AI ethics, security, and privacy.

🌟 Whether you're an AI researcher, practitioner, policy-maker, or simply an AI enthusiast, this book provides valuable insights and tools to guide the responsible development and deployment of AI technologies.

💼 Join the Conversation: We invite you to read the book and share your thoughts on how we can collectively work towards a future where AI technologies are developed with trustworthiness at their core.

Amazon: https://a.co/d/hp2F8GP
GitHub: https://lnkd.in/dhXBKaet

#AI #ArtificialIntelligence #EthicalAI #TrustworthyAI #Technology #Innovation #BookRelease #AIBook
AI-Powered Digital Strategy & Branding Consultant | MTech in AI (Research) @ IITP | Marketing Technology Leader | Digital Marketing Head | Organic Growth Expert
🔍 I just came across an insightful IEEE document (9658213) that underscores the importance of building trustworthy AI solutions. Here's the link: https://rb.gy/pbhkiq It's not just about the technology; the legal, social, ethical, public-opinion, and environmental dimensions matter just as much.

🌐 The document highlights a crucial gap: while there are numerous guidelines and toolkits available, their implementation, especially among SMEs, is limited by a lack of knowledge, skills, and resources.

📚 This calls for a collective effort to address these challenges and barriers. Let's work towards AI solutions that are not only technologically advanced but also ethically sound and socially responsible. 💡

#AI #EthicsInAI #SocialResponsibility #IEEE #researchpaper #aijourney
🏷️📃 Excited to share my latest review paper on "Responsible AI: Navigating Ethical Considerations & Advancements"! This paper delves into the crucial role of Responsible AI in guiding the ethical development and deployment of AI technologies. We explore fundamental principles such as fairness, transparency, and accountability, while also addressing pressing challenges like algorithmic bias and societal impact. In addition, the paper examines various initiatives and frameworks that are shaping the Responsible AI discourse, including insights from the Partnership on AI and the proposed AI Act. I strongly believe that establishing a solid foundation in Responsible AI is essential for shaping the future of AI in a positive and ethical manner. #ResponsibleAI #MachineLearning #ArtificialIntelligence #NCFCTSD24 #MDU #DCSA
Legal Counsel | Advancing Ethical AI & Tech Governance | Law & Technology |Digital Policy | EU & OHADA business law | Ambassador for Kapfou
As the world hurtles toward an AI-driven future, the spotlight is firmly on responsible AI practices. In Stanford University's latest Artificial Intelligence Index Report 2024, Anka Reuel provides, in Chapter 3, more insight into the critical nuances of responsible AI, shedding light on its assessment, security, fairness, and more. Here are key takeaways from Chapter 3:

1. #Assessing Responsible AI:
- There has been a shift toward evaluating AI models not only on their broader capabilities but also on responsibility-related features.
- Responsible AI assessment involves aspects like fairness, interpretability, robustness, and corporate strategy.

2. #Security and Safety:
- Current challenges in AI security and safety are being addressed by both academia and industry.
- Featured research includes the "Do-Not-Answer" open dataset for comprehensively benchmarking the safety risks of large language models (LLMs).
- Universal and transferable attacks are also a focus area.

3. #Risk Perception and Mitigation:
- Understanding risk perception is crucial for responsible AI.
- Organizations need to mitigate the risks associated with AI deployment to ensure overall trustworthiness.

4. #Fairness:
- Fairness in AI is a critical concern.
- Notable benchmarks, such as the MACHIAVELLI benchmark, help track responsible AI progress.

5. #Privacy and Data Governance:
- Privacy and data governance remain important challenges.
- Research explores fairness in AI and healthcare, social bias in image generation models, and measuring subjective opinions in LLMs.

For the full report visit: https://lnkd.in/eWNxPU_x
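The fairness assessments summarized above can be made concrete with a very small metric. As a hedged sketch (the data and function name below are illustrative, not taken from the report), this computes the demographic parity gap, one of the simplest group-fairness measures: the largest difference in positive-prediction rates between demographic groups.

```python
def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rates
    between any two groups (0.0 means perfectly equal rates)."""
    # Tally (total, positives) per group.
    rates = {}
    for pred, group in zip(predictions, groups):
        total, positives = rates.get(group, (0, 0))
        rates[group] = (total + 1, positives + (1 if pred == 1 else 0))
    positive_rates = [pos / total for total, pos in rates.values()]
    return max(positive_rates) - min(positive_rates)

# Toy example: binary model decisions for applicants in groups "A" and "B".
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

Production fairness tooling tracks several such metrics at once (equalized odds, calibration, and so on), but even a one-number check like this can flag when a model's positive rate diverges sharply across groups.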
In a world-first, the non-binding guidelines address every stage of the AI development lifecycle. #airegulation #aidevelopment #ai
US Backs First Global Agreement on Safe AI Development
tech.co
Passionate about staying ahead in the ever-evolving world of technology, 👉 I recently attended an insightful webinar on "Artificial Intelligence and the EU AI Act", wonderfully hosted by Adv Rachna Shroff Ma'am from LawswithRachna, where I delved into the latest advancements and practical applications of this transformative technology: what AI is, how it has evolved, the consequences of non-compliance, and more. It's crucial to stay up to date on the latest regulations and advancements in the field, especially when it comes to AI. During the workshop, I gained valuable insights into the legal landscape surrounding AI and its ethical implications. Thank you for making it so easy to understand! #webinar #AI #technology #EUAIAct #legalknowledge
Driving Data Governance, Data Quality and Data Literacy to cultivate effective Cultures and efficient Ecosystems
#AI #DataQuality I often wonder about the real reason for caring about data quality, and I believe it comes down to impact. Research indicates that #DataQuality is emerging as the primary obstacle to the successful delivery and adoption of AI. #InterestingRead
Data quality and artificial intelligence – mitigating bias and error to protect fundamental rights
fra.europa.eu
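The data-quality concerns raised above usually start with very basic checks run before any model training. As a hedged sketch (the function, field names, and records below are illustrative, not from the linked report), this summarizes two of the most common issues, incomplete records and exact duplicates, in a small batch of rows:

```python
def quality_report(rows, required_fields):
    """Summarize missing required fields and exact duplicate rows."""
    # Count rows where any required field is absent, None, or empty.
    incomplete = sum(
        1 for row in rows
        if any(row.get(f) in (None, "") for f in required_fields)
    )
    # Count exact duplicates by hashing each row's sorted items.
    seen, duplicates = set(), 0
    for row in rows:
        key = tuple(sorted(row.items(), key=lambda kv: kv[0]))
        if key in seen:
            duplicates += 1
        else:
            seen.add(key)
    return {"rows": len(rows), "incomplete": incomplete, "duplicates": duplicates}

# Toy training records with one missing value and one duplicate.
records = [
    {"id": 1, "age": 34, "label": "approve"},
    {"id": 2, "age": None, "label": "deny"},   # missing value
    {"id": 1, "age": 34, "label": "approve"},  # exact duplicate
]
report = quality_report(records, required_fields=["id", "age", "label"])
print(report)  # {'rows': 3, 'incomplete': 1, 'duplicates': 1}
```

Checks like these are deliberately simple; the point the report makes is that skipping even this level of hygiene lets bias and error flow straight into the model.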
Explore the ethical dimensions of AI in our latest article by Mikaela Pisani. While AI brings unprecedented benefits, it also raises concerns. A robust ethical framework is crucial for responsible development, and programmers play a key role in aligning AI systems with ethical principles. Inclusivity, transparency, and accountability are vital for building trust. 💪 🤖 Discover more about the challenges and benefits of foundation models and why an ethical framework is essential for the responsible deployment of AI here: https://hubs.la/Q02f85-N0 #AIethics #TechEthics
AI Ethical Framework
Passionate Developer Advocate | Experienced Data Scientist | PhD Data Science | NLP | Machine Learning
🎙 New Podcast Episode of AI Chronicles just dropped!

In the ever-evolving landscape of AI, staying informed about Ethical and Responsible AI is crucial for professionals across the field. "Ethical AI is about knowing to do good with AI, and Responsible AI is how we do good with AI," says my guest Mrinal from Intel Corporation.

🔦 Some takeaways from our conversation:
- Companies consider various aspects of responsible AI before operationalizing any AI solutions, especially when those solutions need to align with company values such as fairness, transparency, privacy, and more.
- Constant monitoring of the data and models, and keeping up with new technology, is essential.
- The Biden administration's AI regulation policy is here to support the advancement of AI solutions. Mrinal and I agreed that AI solutions need to be handled responsibly. To understand what that entails, it is beneficial for individuals and companies to invest in training to comprehend the various ethical concerns and biases in the data, as well as in model building and evaluation.
- For those seeking to venture into this field, Mrinal highly recommends enrolling in courses like "AI for Good" offered by DeepLearning.AI and similar programs. Additionally, attending workshops and perusing relevant whitepapers can serve as a promising starting point.

To gain deeper insights into biases, understand how companies navigate the realm of Ethical AI, and grasp the nuances of AI regulation, tune in to our enlightening conversation. You can watch the podcast on YouTube at https://lnkd.in/gcWQ49kj and listen on Spotify at https://lnkd.in/ggbiaVJy

A very special thanks to my friend Elliot for designing the template for my YouTube page!

#ethicalai #responsibleai #genai #llms
The Ethical AI Dialogues: From Bias to Regulation
https://www.youtube.com/
Governor Dan McKee's latest executive order launches Rhode Island into the future of artificial intelligence. The creation of an AI Task Force, led by Jim Langevin, and an AI Center of Excellence, aims to explore AI's benefits and challenges for state operations. Read more about the policy: https://lnkd.in/g9tv8R9t This strategic move will develop ethical guidelines for AI use, enhance data management across state agencies, and promote AI's safe application. Brian Tardiff's role as Chief Data Officer underscores the commitment to ethical AI, focusing on reducing bias and enhancing public services. Rhode Island is setting a standard for integrating AI in government with a human-centric approach, ready to lead in ethical AI practices and technological innovation. #RhodeIsland #AIInnovation #EthicalAI #TechnologyLeadership #GovernmentTech