Ohad Levi’s Post

Co-founder & CEO at Hyperspace | High-Performance Search | Domain-Specific-Computing

"Every LLM company is a search company, and search is hard." The title of this blog post by Theory Ventures is one of the most accurate descriptions of the evolution of search as we see it today.

The core challenge in search performance is the gap between how we consume data and how we process it, and that gap grows wider by the day: legacy search solutions simply were not designed for it. The growth of unstructured data, such as audio, video, and other formats, is remarkable. Today, 90% of enterprise data is unstructured. This explains the massive hype around the vector search solutions that are popping up like mushrooms after the rain.

What we need to realize is that the database is only a small part of the solution. The bigger picture lies in how we access, process, and retrieve these massive data points with relevance and without compromising real-time standards.

So how do we search differently to address this explosive growth in data? The answer is cloud-native hybrid search powered by domain-specific computing. Domain-specific computing bypasses the standard software semantics, cache hierarchy, and other CPU abstractions, and implements the core operations as a custom datapath processor. Together with a new software stack, this runs search and information-retrieval workloads hundreds of times faster.

Interested in learning more? https://lnkd.in/dSdeTZ_N

#elasticsearch #vectorsearch #vectordatabase #database #lexicalsearch #keywordsearch #llm
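The post names hybrid search (combining lexical and vector retrieval) without spelling out the mechanics. A minimal sketch of one common fusion approach, reciprocal rank fusion (RRF); the doc IDs and the `k=60` constant are illustrative assumptions, not Hyperspace's actual method:

```python
# Sketch only: fuse a lexical (e.g. BM25) ranking with a vector (nearest-
# neighbour) ranking using reciprocal rank fusion. A document scores
# 1 / (k + rank) in each list it appears in; scores are summed across lists.
def rrf_fuse(rankings, k=60):
    """Merge several ranked lists of doc ids into one hybrid ranking."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest combined score first.
    return sorted(scores, key=scores.get, reverse=True)

lexical = ["doc3", "doc1", "doc7"]   # hypothetical keyword results
vector = ["doc1", "doc7", "doc2"]    # hypothetical embedding results
print(rrf_fuse([lexical, vector]))   # doc1 ranks first: it is near the top of both lists
```

RRF is attractive because it needs no score normalisation between the two retrieval systems, only their ranks.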

Kord Campbell

I make AI write code.

1w

Search is only hard if there are a lot of documents! (Captain Obvious here) I wonder if there is a way to re-think use cases in terms of how many documents are indexed or need to be accessed during inference/discussion? A short discussion with an LLM doesn't really require anything but saving history, and sending it all over in one shot. And source code is indexed differently than PDFs. We probably don't want to chop a function in half and embed it...
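The point about not chopping a function in half can be sketched concretely. An illustrative approach (not anything the commenter proposes as an implementation) is to chunk source code at function boundaries with Python's `ast` module before embedding:

```python
# Sketch: split Python source into one chunk per top-level function,
# so embeddings never see half a function. Requires Python 3.8+
# (for the end_lineno attribute on AST nodes).
import ast

def chunk_by_function(source: str):
    """Return each top-level (async) function definition as its own chunk."""
    tree = ast.parse(source)
    lines = source.splitlines()
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # lineno/end_lineno are 1-based and inclusive.
            chunks.append("\n".join(lines[node.lineno - 1 : node.end_lineno]))
    return chunks

code = "def f(x):\n    return x + 1\n\ndef g(y):\n    return y * 2\n"
print(chunk_by_function(code))  # two chunks, one per function
```

A PDF pipeline, by contrast, would typically chunk by page or paragraph; the comment's point is that the indexing strategy should follow the structure of the data.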
