🚀 #RaySummit Training Session Alert: "RAG Applications - From Quickstart to Scalable RAG"! Learn to implement and scale RAG applications, boost your GenAI and LLM proficiency, and get hands-on with state-of-the-art RAG architectures. Sign up today: https://lnkd.in/gWTKRzwc
Anyscale
Software Development
San Francisco, California 28,330 followers
Scalable compute for AI and Python
About us
Anyscale enables developers of all skill levels to easily build applications that run at any scale, from a laptop to a data center.
- Website
- https://anyscale.com
- Industry
- Software Development
- Company size
- 51-200 employees
- Headquarters
- San Francisco, California
- Type
- Privately Held
- Founded
- 2019
Products
Anyscale
AIOps Platforms
The Anyscale Platform offers key advantages over open source Ray. It provides a seamless user experience for developers and AI teams to speed development and deploy AI/ML workloads at scale. Companies using Anyscale benefit from rapid time-to-market and faster iterations across the entire AI lifecycle.
Locations
- Primary: San Francisco, California 94105, US
Updates
-
🎤🚨 #RaySummit Speaker Session Alert! 🚨 Join us for an in-depth look at how Samsara built a trace-centric ML infrastructure for their RAG LLM project. Learn how they used Ray to ensure scalability, streamline analytics, and fine-tune their pipelines. Don't miss out! Sign up here: https://lnkd.in/gWTKRzwc
-
🚨 #RaySummit Speaker Session Alert 🚨 Learn how Workday scales ML solutions with Ray. Discover how they handle tenanted architecture challenges and GPU scarcity using KubeRay's autoscaling for flexible, cost-effective LLM deployments. Don't miss out; sign up today: https://lnkd.in/gWTKRzwc
-
Fine-tune LLMs on Anyscale effortlessly with LLMForge. LLMForge is an Anyscale library, powered by Ray, that distributes fine-tuning across any hardware, works with any data format, and scales with ease when customizing LLMs. It offers broad support for tasks (instruction tuning, classification, preference tuning, distillation, continuous pre-training), tuning types (LoRA and full-parameter), and models. Get started today: https://bit.ly/LLMForge
-
Anyscale now offers DPO (Direct Preference Optimization) as part of its LLM training suite! This enables refining LLMs for domain-specific preferences, which can outperform frontier closed models and SFT baselines. We demo the capability through a case study on summarization, where there is a natural trade-off between compression and Q&A accuracy: SFT improves accuracy slightly; GPT-4o baselines show the compression-accuracy trade-off (and are also hard to prompt); DPO variants outperform both, balancing compression and accuracy. Read more on the blog: https://lnkd.in/gRDPyXue
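For readers unfamiliar with DPO, the idea above can be made concrete with the standard DPO objective: the policy is trained so that, relative to a frozen reference model, it assigns a wider log-probability margin to the preferred (chosen) response over the rejected one. This is a minimal stdlib sketch of the loss for a single preference pair, not Anyscale's implementation; the function names and the example log-probability values are illustrative.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair.

    Inputs are summed token log-probabilities of the chosen and
    rejected responses under the policy and the frozen reference model.
    """
    # Implicit reward margin: how much more the policy prefers the
    # chosen response than the reference model does.
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    # Loss is -log sigmoid(beta * margin); it shrinks as the margin widens.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# A policy that widens the preference margin gets a lower loss than one
# that matches the reference model exactly (margin 0, loss = log 2).
assert dpo_loss(-10.0, -12.0, -11.0, -11.0) < dpo_loss(-11.0, -11.0, -11.0, -11.0)
```

`beta` controls how strongly the policy is pulled away from the reference model; in practice the log-probabilities would come from forward passes of the two models over each response.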
-
Anyscale reposted this
Most (maybe all?) LLM performance benchmarks, like Artificial Analysis, go in depth on *online* inference. *Batch* inference seems simpler, since almost all companies run some form of embarrassingly parallel workload. But batch inference is different from other map-reduce-style workloads: the biggest differences are the use of GPUs and, often, mixed CPU + GPU compute.

Today, tons of energy is going into improving systems for batch inference, and batch inference is just as important as online inference. Companies have accumulated tons of data over the years, and if you want to use AI to extract insights or make decisions based on that data, you need to process it in a batch fashion.

A bunch of companies are speaking at #RaySummit on their best practices for batch inference:
- Apple: "petabyte scale embedding generation"
- Roblox: "efficient batch inference at Roblox"
- Uber: "large scale generative AI batch prediction with Ray and vLLM"
- eBay: "pioneering next-gen AI platform"
- Handshake: "job description parsing with batch LLM inference"

https://lnkd.in/deWkzHEC
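The "embarrassingly parallel" pattern described above can be sketched in a few lines: chunk the dataset on the CPU side, then fan the batches out to independent workers. This is a minimal stdlib sketch, not Ray's API; the stub `run_inference` stands in for a real GPU-backed LLM call, and in a system like Ray Data the thread pool would be replaced by a pool of GPU actors.

```python
from concurrent.futures import ThreadPoolExecutor

# Stub "model": in a real pipeline this would be an LLM call on a GPU
# worker; here it just scores text length so the sketch is runnable.
def run_inference(batch):
    return [{"text": t, "score": len(t)} for t in batch]

def batch_inference(records, batch_size=2, workers=4):
    # CPU side: chunk the dataset into fixed-size batches.
    batches = [records[i:i + batch_size]
               for i in range(0, len(records), batch_size)]
    # "GPU" side (simulated): each worker processes batches independently,
    # which is what makes the workload embarrassingly parallel.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(run_inference, batches)
    # Flatten per-batch results back into one output table.
    return [row for batch in results for row in batch]

rows = batch_inference(["hello", "ray", "batch", "inference", "at scale"])
assert len(rows) == 5
```

The mixed CPU + GPU character the post mentions shows up in the two stages: batching, decoding, and tokenization are CPU work, while `run_inference` is the GPU-bound step, and keeping both sides busy is what batch-inference systems optimize.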
-
🚨 #RaySummit Training Session Alert 🚨 "Scalable Generative AI with Stable Diffusion Models - From Pre-Training to Production" Join to dive deep into creating scalable Generative AI solutions. You'll learn pre-training techniques, master Stable Diffusion models, and see how to efficiently transition from development to production. Sign up today! https://lnkd.in/gWTKRzwc
-
Anyscale reposted this
1000 contributors!!! 🤯🤯 One of the earliest contributors to Ray was Ant Group / Alipay. They were the first serious Ray user, and contributed a lot to the hardening of Ray in production. Today, they use Ray for a huge range of workloads ranging from batch inference to model serving to online learning to graph processing. They'll also be speaking about their Ray-based *agent* framework at #RaySummit.
🚀 Ray just hit 1000 contributors! Our community is thriving 🙌—all thanks to you! Let’s keep pushing the boundaries of AI together! #opensource #AI
-
🚨 Speaker Session Alert! 🚨 We're thrilled to announce tech visionary Marc Andreessen of Andreessen Horowitz as a keynote speaker at #RaySummit! Join us for Marc's insightful perspectives on the future of AI. Sign up now: https://lnkd.in/gWTKRzwc