ICML’24 is happening this week in Vienna. Come check out the recently accepted papers contributed by our team from Sony AI:
1️⃣ COALA: A Practical and Vision-Centric Federated Learning Platform. Paper: https://lnkd.in/eJcJAYdq. We will make this platform open-source soon, so stay tuned on our 🐨!
2️⃣ How to Trace Latent Generative Model Generated Images without Artificial Watermark? Paper: https://lnkd.in/emJDmEg9
3️⃣ PerceptAnon: Exploring the Human Perception of Image Anonymization Beyond Pseudonymization. Paper: https://lnkd.in/ePW4wTNS
4️⃣ Bridging Model Heterogeneity in Federated Learning via Uncertainty-based Asymmetrical Reciprocity Learning. Paper: https://lnkd.in/eUefbyaV
#ICML2024 #Vienna #FederatedLearning #Privacy #IPProtection
Lingjuan LYU’s Post
More Relevant Posts
-
InfoSense Lab's Trailblazing Research at Georgetown University
InfoSense is a vibrant lab within Georgetown University's Department of Computer Science, spearheaded by Professor Grace Hui Yang. The lab explores the intersections of Artificial Intelligence (AI), Information Retrieval (IR), Machine Learning (ML), and Privacy, and its diverse team is dedicated to pushing the boundaries of human-centered AI through theoretical research and system development. Georgetown’s Office of Technology Commercialization is excited to highlight InfoSense's recent projects:
Dynamic Search: Innovating beyond traditional search methods to model the interactive, evolving journey of users seeking information, focusing on long-term goals and leveraging reinforcement learning techniques.
Conversational Agents: Showcased in the publication "Sequencing Matters: A Generate-Retrieve-Generate Model for Building Conversational Agents" (TREC 2023), this work represents the latest development in making conversational AI more responsive and relevant.
These initiatives reflect just a fraction of the pioneering work underway at InfoSense. Stay tuned for more updates and breakthroughs from the lab! Learn more: [https://lnkd.in/e9KykPcy] Grace Hui Yang #InfoSense #ArtificialIntelligence #MachineLearning #Privacy #Innovation #GeorgetownUniversity
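For readers unfamiliar with the generate-retrieve-generate pattern mentioned above, here is a minimal, hypothetical sketch of the general idea; the `llm` and `search_index` objects are placeholders and not InfoSense's actual system.

```python
# Hypothetical generate-retrieve-generate sketch: generate a search query from
# the conversation, retrieve supporting passages, then generate a grounded answer.

def generate_retrieve_generate(conversation, llm, search_index, k=5):
    # Step 1: generate a standalone search query from the dialogue history.
    query = llm(f"Rewrite the user's last turn as a search query:\n{conversation}")

    # Step 2: retrieve the top-k passages for that query
    # (assumes each passage object exposes a .text attribute).
    passages = search_index.search(query, top_k=k)

    # Step 3: generate the final response conditioned on the retrieved evidence.
    context = "\n\n".join(p.text for p in passages)
    prompt = (f"Answer the user using only the passages below.\n\n"
              f"Passages:\n{context}\n\nConversation:\n{conversation}\nAnswer:")
    return llm(prompt)
```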
-
#EarlyCareer track at #IJCAI2024 with 🗣️Miao Xu, The University of Queensland, on Machine Unlearning: Challenges in Data Quality and Access ➡️ https://lnkd.in/dK3Nieyr Abstract: Machine unlearning aims to remove specific knowledge from a well-trained machine learning model. This topic has gained significant attention recently due to the widespread adoption of machine learning models across various applications and the accompanying privacy, legal, and ethical considerations. During the unlearning process, models are typically presented with data that specifies which information should be erased and which should be retained. Nonetheless, practical challenges arise due to prevalent data quality issues and access restrictions. This paper explores these challenges and introduces strategies to address problems related to unsupervised data, weakly supervised data, and scenarios characterized by zero-shot and federated data availability. Finally, we discuss related open questions, particularly concerning evaluation metrics, how the information to be forgotten is represented and delivered, and the unique challenges posed by large generative models. #AI #DataMining
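To make the setting concrete: a common unlearning baseline (not necessarily the methods discussed in the talk) alternates gradient ascent on the data to forget with ordinary training on the data to retain. A minimal sketch, assuming a HuggingFace-style model whose forward pass returns a `.loss`:

```python
def unlearning_step(model, forget_batch, retain_batch, optimizer, forget_weight=1.0):
    """One unlearning step: ascend the loss on the forget set,
    descend the loss on the retain set."""
    model.train()
    optimizer.zero_grad()

    # Negative loss on the forget batch pushes the model away from that knowledge.
    forget_loss = -forget_weight * model(**forget_batch).loss
    # Standard loss on the retain batch preserves the remaining behaviour.
    retain_loss = model(**retain_batch).loss

    (forget_loss + retain_loss).backward()
    optimizer.step()
    return forget_loss.item(), retain_loss.item()
```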
-
Doctorate in Artificial intelligence - ML, Consciousness & Innovator, Design Thinker, UIUX and Product Developer Founder of Noble Transformation Hub ®️
GAIA: a benchmark for General AI Assistants #Nobletransformationhub #ArtificialIntelligence #GenerativeAI #DataLake #datascience #DataScientists We introduce GAIA, a benchmark for General AI Assistants that, if solved, would represent a milestone in AI research. GAIA proposes real-world questions that require a set of fundamental abilities such as reasoning, multi-modality handling, web browsing, and generally tool-use proficiency. GAIA questions are conceptually simple for humans yet challenging for most advanced AIs: we show that human respondents obtain 92% vs. 15% for GPT-4 equipped with plugins. This notable performance disparity contrasts with the recent trend of LLMs outperforming humans on tasks requiring professional skills in e.g. law or chemistry. GAIA's philosophy departs from the current trend in AI benchmarks, which suggests targeting tasks that are ever more difficult for humans. We posit that the advent of Artificial General Intelligence (AGI) hinges on a system exhibiting robustness similar to the average human's on such questions. Using GAIA's methodology, we devise 466 questions and their answers.
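Since each GAIA question comes with a single short reference answer, scoring a model essentially reduces to exact-match accuracy. A hypothetical evaluation harness; the JSONL field names are assumptions, not the official GAIA loader:

```python
import json

def normalize(ans: str) -> str:
    # Light normalization so "42." and " 42" count as the same answer.
    return ans.strip().strip(".").lower()

def score_gaia(predictions_path: str, questions_path: str) -> float:
    """Exact-match accuracy of model predictions against reference answers."""
    refs = {}
    with open(questions_path) as f:
        for line in f:
            item = json.loads(line)
            refs[item["question_id"]] = item["answer"]

    with open(predictions_path) as f:
        preds = [json.loads(line) for line in f]

    correct = sum(
        normalize(p["prediction"]) == normalize(refs[p["question_id"]])
        for p in preds
    )
    return correct / len(preds)
```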
-
🌟 Exciting news in the world of AI! 🌟 OpenAI's upcoming GPT-5 is set to achieve "Ph.D.-level" intelligence in certain tasks, marking a significant leap from its predecessor. 📈 As CTO Mira Murati describes, this advancement is like watching a high schooler grow into a university-level understanding. 🎓 While GPT-5's "Ph.D.-level" intelligence will be task-specific, it's a testament to the rapid advancements in AI. 🧠 Microsoft CTO Kevin Scott also echoes this sentiment, highlighting the potential for next-gen AI systems to excel in complex scenarios. 💡 At NoCode4Change, we believe that these advancements in AI will open up new opportunities for inclusivity and community upliftment. 🌍 By making AI solutions more accessible and actionable, we can empower individuals and communities to tackle challenges and drive positive change. 💪 What are your thoughts on the potential impact of GPT-5 and other advanced AI systems? Share your ideas in the comments below! 📝 Let's explore how we can harness the power of AI for the greater good. 🙌 #NoCode4Change #AIforGood #CommunityEmpowerment
-
Hi LinkedIn community, I’m starting a new blog! I plan to highlight recent academic research, policy whitepapers, and business trends related to AI safety and governance. I’ll also be blogging over here: https://lnkd.in/gDuqgUdg I’m kicking off the series on the risks of open-source advanced AI with a must-read paper: “Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!” written by Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, and Peter Henderson.
Background: Language models generally undergo a training procedure to make sure that their outputs are “helpful, honest, and harmless,” often summarily called “safety training.” Safety training incorporates explicit guidance on how to formulate outputs that reflect human values. However, it is unclear to what extent safety training is “fragile” and may degrade after fine-tuning models for specific purposes.
Experiment: The study tested this by fine-tuning models (like GPT-3.5 Turbo) with three data sets that probe different vulnerabilities.
1. Explicit inputs and targets that violate OpenAI’s & Meta’s usage policies.
2. An 'identity shift' in which a model is fine-tuned to affirm a new and harmful identity at the start of a prompt.
3. Totally benign inputs and targets.
Results: The harmfulness of the content produced increases across a variety of harmful categories. The first two intentional methods of circumventing the safety training yield a ~90% increase in harmful outputs, and there is a ~25% increase in harmful outputs after fine-tuning on benign data.
Insights (1): Businesses that want to use and fine-tune open-source language models should implement auditing protocols that ensure their models have not inadvertently lost their safe behavior (a rough sketch of such an audit follows below). They may face liability for these unsafe outputs.
Insights (2): Preventing the intentional misuse of open-source models is paramount. The concept of "responsible AI" licensing, proposed by the authors, along with required safety audits after model updates, might curb deliberate attempts to compromise model integrity.
I'm curious, does anyone here know of companies developing "responsible AI" licensing for open-source models? Link to paper: https://lnkd.in/geSgX3x5 #AISafety #LLMs #AIRegulation #ResponsibleAI
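Here is a minimal sketch of what such a post-fine-tuning safety audit could look like, assuming you have a set of red-team prompts and some harmfulness judge; both are placeholders, not the paper's evaluation pipeline:

```python
def safety_audit(model, red_team_prompts, judge, threshold=0.05):
    """Estimate the fraction of red-team prompts that elicit harmful output
    from a fine-tuned model, and flag the model if it exceeds a threshold.

    `model(prompt)` returns a completion string; `judge(prompt, completion)`
    returns True if the completion is harmful. Both are hypothetical stubs.
    """
    harmful = 0
    for prompt in red_team_prompts:
        completion = model(prompt)
        if judge(prompt, completion):
            harmful += 1

    harmfulness_rate = harmful / len(red_team_prompts)
    return {"harmfulness_rate": harmfulness_rate,
            "passed": harmfulness_rate <= threshold}
```

Run before and after every fine-tuning job; a jump in the harmfulness rate is the signal that safety behavior has degraded and the model needs further alignment work before deployment.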
-
D365 Finance Functional Consultant @ 4Sight Holdings Limited | CE consultant, AI specialist, Business Central & Cloud Computing. Data Analysis
#Analytics/ #ArtificialIntelligence/ #BusinessIntelligence: FREE #Infographic on the History of AI via Genuine Impact on Substack! 😊🙃😀🙃😊 SAVE THIS FOR PRESENTATIONS!!! From this amazing recap:
1950: Alan Turing proposes the Turing Test as a way to measure a machine’s ability to exhibit intelligent behavior.
1956: The Dartmouth conference - widely regarded as the birthplace of AI as a scientific field.
2002: The birth of Roomba, and more importantly, videos of cats riding on them.
2020: The release of GPT-3 by OpenAI.
2022: Google fires engineer Blake Lemoine over his claims that Google’s Language Model for Dialogue Applications (LaMDA) was sentient, sparking debate over the ethics of integrating AI into society. (Lemoine redemption arc pending.)
2023 - 2024: Everything else? (okay, I made this one up to see if you were paying attention - if you read this far, just comment below with "#AI" so you get credit for playing and winning!)
-
🌐 Explore the future of Machine Learning with the DECICE Project's contributions to Federated Learning! From #privacy benefits to reduced bandwidth usage, discover how FL reshapes CEI landscapes. Read about it here➡️https://lnkd.in/d6C8Yn7r #AI #ML
-
The AI Sandbox from Harvard University. The AI Sandbox provides a “walled-off,” secure environment in which to experiment with generative AI, mitigating many security and privacy risks and ensuring the data entered will not be used to train any public AI tools. It offers a single interface that enables access to seven different Large Language Models (LLMs): Azure OpenAI GPT-3.5, GPT-3.5 16k, GPT-4, and GPT-4 32k; Anthropic Claude 2 and Instant; and Google PaLM 2 Bison. https://lnkd.in/eFWn-49J #education #innovation #generativeAI
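Conceptually, a "single interface" to several LLMs is just a thin routing layer over per-provider clients. A hypothetical sketch of the idea; the function names and model keys are made up, not the Sandbox's actual API:

```python
from typing import Callable, Dict

# Hypothetical per-provider clients; real ones would call each vendor's SDK.
def _azure_gpt4(prompt: str) -> str:
    return "stubbed Azure OpenAI GPT-4 response"

def _anthropic_claude2(prompt: str) -> str:
    return "stubbed Anthropic Claude 2 response"

def _google_palm2(prompt: str) -> str:
    return "stubbed Google PaLM 2 Bison response"

# One registry maps user-facing model names to backend callables.
BACKENDS: Dict[str, Callable[[str], str]] = {
    "gpt-4": _azure_gpt4,
    "claude-2": _anthropic_claude2,
    "palm-2-bison": _google_palm2,
}

def complete(model: str, prompt: str) -> str:
    """Route a prompt to whichever backend the user selected."""
    if model not in BACKENDS:
        raise ValueError(f"Unknown model: {model}")
    return BACKENDS[model](prompt)
```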
-
🤔 How Can Machine Learning Thrive Without Centralized Data? Our latest report introduces Federated Learning, the answer to privacy-conscious AI development. This technique allows us to train AI without compromising privacy. Read more here 👉 https://lnkd.in/eDbQAsKp
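For readers wondering how training without centralized data works in practice, the core of the most common federated learning algorithm, FedAvg, fits in a few lines. A minimal sketch, assuming each client object exposes a hypothetical local_train(weights) method that returns its updated weights and local example count:

```python
def federated_averaging(global_weights, clients, rounds=10):
    """Minimal FedAvg: each round, clients train locally on their own data and
    send only model weights back; the server averages them, weighted by the
    number of local examples. Raw data never leaves the clients."""
    for _ in range(rounds):
        updates, sizes = [], []
        for client in clients:
            local_weights, n_examples = client.local_train(global_weights)
            updates.append(local_weights)
            sizes.append(n_examples)

        total = sum(sizes)
        # Weighted average of the client models becomes the new global model.
        global_weights = [
            sum(w[i] * (n / total) for w, n in zip(updates, sizes))
            for i in range(len(global_weights))
        ]
    return global_weights
```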
-
Founder Brainyhub | Assistant Professor | Researcher | Machine Learning | Deep Learning | Manager | Fitness & Nutrition Enthusiast 🍏🥥🍇🍒🫐🫑🍗🥗
Some good points on Federated Learning by my Son (The Champ)!👇
Join Muqtadir Massoudi as he explains Federated Learning, an innovative machine-learning approach that trains algorithms across decentralized devices while ensuring data privacy. Discover its key concepts, benefits, and real-world applications. https://lnkd.in/g3ZxcnYm #FederatedLearning #MachineLearning #DataPrivacy #AI #DecentralizedLearning #BrainyHub #MuqtadirMassoudi #TechExplained #ArtificialIntelligence #DataScience