𝗜𝗡𝗧𝗥𝗢𝗗𝗨𝗖𝗜𝗡𝗚 𝗣𝗟𝗔𝗬𝗚𝗥𝗢𝗨𝗡𝗗𝗦 🚀 You can now chat with your PDF after deploying your open-source model. You can deploy the model to SlashML Cloud or your own cloud. Try it out: https://lnkd.in/e4B24Yp2 #llama3 #chatwithpdf
slashML
Software Development
Montreal, Quebec 197 followers
LLM infrastructure that supercharges enterprise use cases
About us
slashML is an MLOps platform that enables day one deployment of open-source generative models directly in your cloud, with built-in cloud cost observability, auto-scaling, and support for custom LLMs.
- Website
- https://slashml.com
- Industry
- Software Development
- Company size
- 2-10 employees
- Headquarters
- Montreal, Quebec
- Type
- Privately Held
- Founded
- 2023
Locations
-
Primary
Montreal
Montreal, Quebec, CA
Employees at slashML
-
Muneeb Ghani
Strategic Leader | Driving Innovation & Transformation in Finance and Insurance Technology | Board Advisor slashML.com | Ernst and Young (EY)…
-
Priyanka Ghosh
AI/ML & Data Science | Advisory board member | Mentor | Speaker | MS | MBA
-
Faizan K.
CTO SlashML (reducing friction one bash script at a time)
-
Jneid Jneid
Co-founder and CEO at SlashML
Updates
-
𝗜𝗡𝗧𝗥𝗢𝗗𝗨𝗖𝗜𝗡𝗚 𝗣𝗟𝗔𝗬𝗚𝗥𝗢𝗨𝗡𝗗𝗦 🚀 Now you can chat with open-source models that you have deployed in your private cloud. Try it out for free https://lnkd.in/eF8PDNuW #llama3 #chatwithdocs
-
Introducing SlashWorkspaces, or SlashMLWorkspaces, or workspaces by SlashML, or SlashWorkspaceML. So hard to come up with a name, but so easy to connect your local notebook to a GPU and fine-tune a language model in a minute. Get started for free before we attach auto-scheduling to workspaces. https://lnkd.in/eF8PDNuW #finetuning #jupyter #smalllanguagemodels
-
Today, we are thrilled to announce our partnership with NVIDIA Inception. With this collaboration, we are one step closer to empowering enterprises in regulated industries to build and deploy production-ready AI applications at scale, in a secure and compliant way. #nvidia #opensource #LLM
-
𝗦𝗹𝗮𝘀𝗵𝗠𝗟: 𝗔𝗜 𝗗𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁 𝗠𝗮𝗱𝗲 𝗦𝗶𝗺𝗽𝗹𝗲 𝗮𝗻𝗱 𝗦𝗲𝗰𝘂𝗿𝗲
Certifications don't guarantee control: if your AI is hosted elsewhere, you're at risk. SlashML lets you deploy models in your own cloud, without needing a team of DevOps experts.
𝗕𝘂𝗶𝗹𝘁 𝘄𝗶𝘁𝗵 𝗘𝘅𝗽𝗲𝗿𝘁𝘀
In late 2022, during a critical LLM project, we faced complex configurations and cloud-management challenges. We weren't alone: over 100 data scientists and AI leaders from regulated industries helped us build SlashML to simplify AI deployment.
Key features:
• Automated MLOps in your cloud
• Pre-configured AI workspaces for production-ready apps
• Multi-cloud deployment and monitoring
• Fine-tuning models on your own data, securely in your cloud
Deploy and fine-tune in your cloud in a few clicks: https://www.slashml.com/
𝗜𝘀 𝘆𝗼𝘂𝗿 𝗼𝗿𝗴𝗮𝗻𝗶𝘇𝗮𝘁𝗶𝗼𝗻'𝘀 𝗔𝗜 𝗱𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁 𝗳𝘂𝗹𝗹𝘆 𝗶𝗻 𝘆𝗼𝘂𝗿 𝗰𝗼𝗻𝘁𝗿𝗼𝗹? Schedule a call with us (https://lnkd.in/esrdnbPT) for a complimentary, risk-free assessment with our CTO and see how we can speed up AI adoption in your organization. #MLOps #opensource #LLMs
-
Customer success story ⬇️⬇️⬇️
PannaGPT: Streamlining Grant Applications with Fine-Tuned LLMs
Panna, a one-stop-shop platform for all your grant-writing needs, developed PannaGPT to significantly reduce the time and effort required for grant writing and submission. Using fine-tuned LLMs and RAG through slashML, the platform can now generate proposals for thousands of grants across Canada and the US.
PannaGPT enables analysts to create multiple applications for a single grant. For each question, analysts can:
- Choose the LLM that generates the most relevant response
- Include or exclude previous questions and answers as context to reduce hallucinations
- Customize and save prompts for later reuse and continuous benchmarking of the models
This human-in-the-loop approach ensures accuracy while significantly reducing manual writing. By implementing PannaGPT, Panna has cut grant writing and submission time by 30% and significantly increased the number of customers it can serve. This demonstrates the potential of integrating fine-tuned, smaller LLMs into specialized business processes for greater efficiency and output.
This easily makes Panna the market leader in grant fulfillment. And if that isn't enough, we have several state-of-the-art projects in the pipeline that will put them light years ahead of any alternative 🚀 #LLM #RAG
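The human-in-the-loop selection loop described above can be sketched in plain Python. Everything here is invented for illustration — the class names, the stub "models", and the opt-in context flag are assumptions, not Panna's or slashML's actual code — but it shows the shape of the workflow: several LLMs draft answers to the same question, the analyst picks one, and only analyst-approved Q&As are fed back as context.

```python
# Hypothetical sketch of a PannaGPT-style human-in-the-loop flow.
# All names and the stub models are invented for illustration.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class QA:
    question: str
    answer: str
    include_as_context: bool = True  # analyst can exclude noisy Q&As


@dataclass
class Application:
    history: List[QA] = field(default_factory=list)
    saved_prompts: Dict[str, str] = field(default_factory=dict)

    def context(self) -> str:
        # Only Q&As the analyst opted in are fed back as context,
        # which is how the "include/exclude" control reduces hallucinations.
        return "\n".join(
            f"Q: {qa.question}\nA: {qa.answer}"
            for qa in self.history if qa.include_as_context
        )

    def draft_answers(self, question: str,
                      llms: Dict[str, Callable[[str], str]]) -> Dict[str, str]:
        # Run every candidate model over the same prompt so the
        # analyst can compare and benchmark them side by side.
        prompt = f"{self.context()}\n\nQ: {question}"
        return {name: llm(prompt) for name, llm in llms.items()}

    def accept(self, question: str, answer: str) -> None:
        # The analyst's chosen answer becomes part of the record.
        self.history.append(QA(question, answer))


# Stub "models" standing in for the fine-tuned LLM endpoints.
llms = {
    "base": lambda p: "A generic proposal paragraph.",
    "fine-tuned": lambda p: "A grant-specific proposal paragraph.",
}

app = Application()
drafts = app.draft_answers("Describe the project's impact.", llms)
app.accept("Describe the project's impact.", drafts["fine-tuned"])  # analyst's pick
```

In a real deployment the lambdas would be calls to the hosted model endpoints, and `saved_prompts` would back the "customize and save prompts" feature; the control flow, however, stays this simple.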
-
slashML reposted this
Every day we are getting closer to building THE LLM app development platform for enterprises at slashML. Companies want to orchestrate apps from agents, with AI workflows and a RAG engine. We are building that! Let's chat about your use case ⚙️ https://lnkd.in/eiPmzQXW
-
The slashML team is growing. We're hiring another full-stack software developer (Node technologies) - REMOTE.
Our startup is focused on AI infrastructure and recently secured pre-seed funding from one of the most prestigious startup accelerators in the US.
RESPONSIBILITIES
• Design and develop front-end applications using Next.js and Tailwind.
• Rapidly prototype front-end components based on requirements.
• Build reusable components and libraries.
REQUIREMENTS
• 2+ years of proven experience developing web applications with ReactJS (internship experience counts).
• Strong proficiency in JavaScript, TypeScript, and ReactJS.
• Familiarity with RESTful APIs.
• Experience with version control systems (Git) and continuous integration.
• Excellent problem-solving and communication skills.
• Ability to take ownership of projects and work independently.
• A strong ship-fast mentality.
• Curiosity about generative AI and the infrastructure required to scale it.
BONUS SKILLS
• Experience with NestJS and Next.js.
• Some experience deploying an app.
• Knowledge of Python and how data apps in Python are built.
Send your resume and relevant portfolio to [email protected] with the subject line SlashML-hiring-4-kyu.
-
Ok, first there was whisper.cpp, then came llama.cpp, then Ollama, then OpenLLM, then OpenWebUI, then llamafile. Then Google took all of that and created a new product called localllm. https://lnkd.in/gyjgZp_X
New localllm lets you develop gen AI apps locally, without GPUs | Google Cloud Blog
cloud.google.com