👨‍🍳 Recipes for open source / local agents with Llama 3

Since the release of Llama 3, we've seen high interest in building agents that run reliably and/or locally (e.g., on your laptop). We've partnered with Meta to release several new recipes for Llama 3 agents using LangGraph.

1/ We show how to build LangGraph tool-calling agents using Groq, which supports fast inference and tool-calling.
2/ We show how to build RAG agents using LangGraph and Groq, capable of complex self-corrective control flow.
3/ We show how to take the RAG agent and make it run locally using Nomic AI embeddings and Ollama.

➡️ Code: https://lnkd.in/gu_3_vuN 📽️ Video: https://lnkd.in/gEgArrK9
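The "self-corrective control flow" in recipe 2/ is essentially a retrieve → grade → (generate | rewrite-and-retry) loop. A minimal sketch of that loop, with deterministic toy stand-ins (`retrieve`, `grade`, `rewrite`, `generate` are hypothetical helpers, not the actual LangGraph nodes or Groq-backed graders from the recipe), so it runs with no model or API key:

```python
# Toy sketch of a self-corrective RAG loop. In the real recipe each step
# is a LangGraph node and grading/rewriting are LLM calls; here they are
# deterministic stand-ins so the control flow itself can be illustrated.

TOY_CORPUS = {
    "what is an lpu": ["Groq's LPU is a low-latency inference accelerator."],
}

def retrieve(question):
    # Stand-in for a vector-store lookup (exact match against a toy corpus).
    return TOY_CORPUS.get(question, [])

def grade(docs):
    # Stand-in for an LLM relevance grader: "relevant" iff anything came back.
    return len(docs) > 0

def rewrite(question):
    # Stand-in for an LLM query rewriter (here: just normalize the text).
    return question.lower().rstrip("?").strip()

def generate(question, docs):
    # Stand-in for a Groq-backed generation step grounded in the documents.
    return f"Based on {len(docs)} document(s): {docs[0]}"

def rag_agent(question, max_retries=2):
    """Self-corrective loop: retry with a rewritten query when grading fails."""
    for _ in range(max_retries + 1):
        docs = retrieve(question)
        if grade(docs):
            return generate(question, docs)
        question = rewrite(question)  # the self-correction step
    return "No relevant documents found."

# The raw question misses the corpus, the rewritten one hits it:
print(rag_agent("What is an LPU?"))
```

The point is the shape of the loop, not the stand-ins: grading gates generation, and a failed grade routes back through a query rewrite instead of answering from irrelevant context.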
Groq’s Post
More Relevant Posts
-
We have a mission to drive the cost of compute to zero. It's an infinite goal that pushes us to always search for efficiencies in our technology stack. We just posted a new paper on our LPU AI inference technology and power usage. https://wow.groq.com/docs/
-
🚀🚀🚀 🙏 Artificial Analysis
Fast to launch & very fast output speed! Groq has launched their Gemma 2 9B offering and is serving it at ~600 output tokens/s.

Gemma 2 9B is a worthy alternative to Llama 3 8B and other smaller models. It is particularly attractive for generalist and communication-focused use-cases, as shown by its Chatbot Arena (1185) and MMLU (71%) scores exceeding Llama 3 8B's (1153, 68%). For more specific use-cases it is worth conducting narrower tests; for coding, Gemma 2 9B underperforms Llama 3 8B significantly (40% vs. 62% on HumanEval).

Groq is offering the model at $0.20 per 1M input & output tokens, in line with Fireworks. Congratulations Groq on the fast launch and impressive performance. We look forward to benchmarking other providers as they begin to host the Gemma 2 models, including potentially Google itself.

Analysis of Gemma 2 Instruct (9B): https://lnkd.in/gC6Xnj3a Analysis of providers: https://lnkd.in/gb9S5khK
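At the quoted flat rate of $0.20 per 1M tokens (input and output priced the same), per-request cost is simple arithmetic. A quick sketch; the request sizes below are hypothetical, not from the post:

```python
# Cost of a request at a flat per-token rate of $0.20 per 1M tokens,
# with input and output tokens priced identically.
PRICE_PER_MILLION_USD = 0.20

def cost_usd(input_tokens, output_tokens):
    """Total request cost in USD at the flat rate above."""
    return (input_tokens + output_tokens) / 1_000_000 * PRICE_PER_MILLION_USD

# Hypothetical chat turn: 2,000-token prompt, 500-token reply.
print(f"${cost_usd(2_000, 500):.6f}")  # prints $0.000500
```

At that rate, a full 1M input + 1M output tokens comes to $0.40, which is why small-model offerings like this are attractive for high-volume generalist workloads.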
-
Powered by Groq 🚀
🏆 Introducing the third place winner of the Build Together hackathon: HereToHelp.ai 🏆

HereToHelp.ai has created an innovative solution to support crisis workers. With real-time transcription, contextual insights, and emotional support, they are revolutionizing how crisis support is provided 🚔. Congratulations to Jeffrey Tan, Rakshith Ramprakash, Sahil Kumar, and Jordan M. for their outstanding work!

The platform utilizes advanced AI and real-time analysis technology to provide meaningful support for crisis workers.

Powered by:
- Vercel: build and deploy web experiences
- Groq: world's fastest inference
- OpenAI API

Read their full story and see how they built this incredible project from the ground up in the comments 👇 https://lnkd.in/gaTEpbif
Hackathon Spotlight: HereToHelp.ai
builder-club.beehiiv.com
-
"That dream is the heart and soul of America; it's the promise that keeps our nation forever good and generous, a model and hope to the world." -- Ronald Reagan 🇺🇸 #Happy4thofJuly 🇺🇸
-
Join us next week in New York! Mark Heaps and Hatice Ozen will be presenting and Paul Piezzo will be at our table to show you some amazing demos and answer your questions.
The Imagine AI Live Village: Where Innovation Thrives Through Collaboration

In the IMAGINE AI LIVE Village, we believe in the transformative power of partnerships. Our groundbreaking event is a testament to the collective efforts of visionaries, innovators, and industry leaders united by a shared passion for advancing AI for impact.

We are thrilled to showcase our esteemed sponsors and partners who have joined forces with us to create this unique experience. Their unwavering support and expertise bring cutting-edge AI discussions, presentations, demonstrations, and unparalleled networking opportunities to life. And of course, a lot of fun!

Together with our premier sponsors and event partners, we are fostering an environment where ideas flourish, connections are forged, and innovation knows no bounds. Special thanks to our premier sponsors: Abacus.AI, Groq, IgniteTech, and Prodia.

Join us in celebrating the collaborative spirit that fuels the Imagine AI Live Village. Discover how our sponsors and partners are helping to build a brighter, AI-powered future.🚀

#ImagineAILive #AICollaboration #TechPartnership #AIforImpact
Imagine AI Live Village: Where Innovation Thrives Through Collaboration
-
Our engineering team is cooking - latency, throughput and quality all keep improving across different models. 🚀 🙏 artificialanalysis.ai for the independent benchmarking!
-
Tune in remotely at 8:00 PT on July 10 for a TPC seminar by Valentin Reis. Learn how Groq co-designed a compilation-based software stack and a class of accelerators called LPUs, with high utilization and low end-to-end system latency. He’ll review the challenges of breaking models apart over networks of LPUs, and outline how this hardware/software system architecture keeps enabling breakthrough LLM inference latency at all model sizes. https://lnkd.in/gdQBuS7U
-