Antoine Bordes advocates for greater transparency as a pivotal element in ensuring the integrity of powerful AI systems. Having recently departed from Meta, where he led the AI lab alongside Yann LeCun, and now serving as Vice President of #Helsing, a major European defense technology startup, Bordes brings a wealth of insight into the frenzied and sometimes chaotic development of #LargeLanguageModels (LLMs) like Llama and GPT.
These LLMs, while fascinating and capable of remarkable problem-solving and language learning, are being developed in a somewhat disordered environment, fueled by an unprecedented influx of capital and ruthless competition among tech giants, startups, and entities like OpenAI.
Antoine expresses reservations about the conditions under which LLMs are produced, emphasizing the critical need for transparency, a cornerstone of any scientific process. He suggests that making these #AI systems #OpenSource, thereby allowing comprehensive analysis by the wider scientific community, could be a viable way to counteract the anarchic and potentially hazardous development of these systems. He envisions the ultimate goal of computer science, Artificial General Intelligence (AGI), as a decentralized structure capable of solving extremely complex tasks from minimal specifications and learning to perfect its results. Amid the rapid and perhaps relentless progression of AI, he stresses the importance of pre-emptive control and monitoring of these intelligent machines, advocating that their integrity and harmlessness be verified within existing academic and public structures.
If you loved this post, you'll love this too: join the group and start sharing your knowledge with others. https://www.linkedin.com/feed/update/urn:li:activity:7209277013197561856?utm_source=share&utm_medium=member_android Go and show us some love!