AI21 Labs’ Post


Everyone's been asking: ❓ Does having a long context window mean the model actually does something useful with it? ❓ Why use long context when you can use #RAG?

AI21 Labs’ Co-CEO & Co-Founder, Yoav Shoham, tackles these questions and shares how our team used RULER, an aggressive new long context benchmark from NVIDIA, to optimize Jamba-Instruct. Unlike the traditional needle-in-the-haystack evaluation, RULER measures how models hold up across the entire length of their context window on real-world, complex #longcontext tasks. It turns out that most language models don't do so well.

#Jamba-Instruct is the only model to maintain consistently high performance across the longest context window, AND it is optimized for enterprise use cases at the most competitive price.

Read more about how Jamba-Instruct was built to excel on long context use cases for the enterprise, and why long context and RAG are not an either/or. 🔎 https://lnkd.in/dNRmGGdx
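To make the contrast concrete, here is a minimal sketch of the classic needle-in-the-haystack check that RULER goes beyond: a single fact is buried at different depths in filler text of growing length and the model is asked to recall it. The `query_model` callable is a hypothetical placeholder for whatever client you use to call a long-context model, not an AI21 or NVIDIA API, and RULER's real tasks (multi-needle retrieval, tracing, aggregation, QA) are considerably harder than this single-needle recall test.

```python
# Toy needle-in-a-haystack sketch: bury one "needle" fact at varying depths
# inside filler text of varying length, then check whether the model recalls it.
# This only illustrates the basic idea; RULER scores more complex tasks
# across the full context window.

FILLER = "The sky is blue and the grass is green. "  # repeated padding text
NEEDLE = "The secret passcode is 7412."
QUESTION = "What is the secret passcode?"

def build_prompt(context_words: int, depth: float) -> str:
    """Build a haystack of roughly `context_words` words with the needle
    inserted at a relative `depth` (0.0 = start, 1.0 = end)."""
    words = (FILLER * (context_words // len(FILLER.split()) + 1)).split()
    words = words[:context_words]
    insert_at = int(len(words) * depth)
    haystack = " ".join(words[:insert_at] + [NEEDLE] + words[insert_at:])
    return f"{haystack}\n\nQuestion: {QUESTION}\nAnswer:"

def run_eval(query_model, lengths=(4_000, 32_000, 128_000), depths=(0.1, 0.5, 0.9)):
    """`query_model(prompt) -> str` is a placeholder for your own model client."""
    results = {}
    for n in lengths:
        for d in depths:
            answer = query_model(build_prompt(n, d))
            results[(n, d)] = "7412" in answer  # did the model recall the needle?
    return results
```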

Efrat Aran (Product Data Scientist at AI21 Labs) · 1mo
Thanks for sharing :)

Ramin HaghjouSarvestani (Lead Python AI Developer & Scrum Master | Delivering Innovation & Agile Development) · 2mo
Thanks for sharing


