Jo Kristian Bergum’s Post


Chief Scientist at Vespa.ai

I'm looking at the MTEB leaderboard this AM. Amazingly, mxbai-embed-large-v1 ranks at 12 despite its small size relative to the other billion-parameter models. In addition to strong performance for a relatively small size, it comes with MRL and BQL flexibility, which can slash the cost of storing and searching the embedding representations (with a slight degradation in accuracy) https://lnkd.in/d5jq9a2R
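
For context, a minimal sketch (not from the post) of what that MRL + binary quantization flexibility can look like in practice, assuming a recent sentence-transformers release that exposes truncate_dim and quantize_embeddings; the model ID and 1024-dim output mirror the mixedbread release, and the example sentences and truncation size are illustrative choices, not recommendations:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.quantization import quantize_embeddings

# Load mxbai-embed-large-v1 and truncate its 1024-dim output to 512 dims (MRL).
model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1", truncate_dim=512)

docs = [
    "Vespa is a platform for low-latency serving of vector and structured data.",
    "MTEB benchmarks text embedding models across many tasks.",
]

# Float32 embeddings at the truncated dimensionality.
embeddings = model.encode(docs, normalize_embeddings=True)

# Binary quantization: each dimension becomes one bit, packed into int8,
# cutting storage a further ~32x on top of the MRL truncation.
binary_embeddings = quantize_embeddings(embeddings, precision="binary")

print(embeddings.shape)         # (2, 512) float32
print(binary_embeddings.shape)  # (2, 64) packed int8
```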

Tom Aarsen

🤗 Sentence Transformers, SetFit & NLTK maintainer, MLE @ Hugging Face

4mo

Luckily, you can now filter by model size 😄

Nicolaj Søndergaard Mühlbach

Sr. Machine Learning Scientist | PhD, Postdoc in econometrics, machine learning, and NLP

4mo

Giovanni Rizzi this is the model I was showing you

Shubham Pawar

LLM Research Lead @ Qloo | Redefining Cultural Taste with LLMs

4mo

mixedbread.ai models are great! 👏❤️

🎯Renato Ghica🎯

Trusted Technology Advisor | AI/ML Innovator Driving FinTech Transformation | Expertise in Finance, Banking, Capital Markets, Wealth & Risk Management | ex SVB, JPM, Citi, BofA

4mo

Good stuff. 👀 Also, I haven't run a t-test on this particular set of data, but I would guess that these 11 LLMs have scores that are not statistically significantly different from each other, at the 95% or even 90% two-tailed confidence level. ❓ What do you think, Jo Kristian Bergum
