We’re excited to announce a telemetry integration with OpenLIT! Check out the integration docs here: https://lnkd.in/gBCdt4mK
Guardrails AI
Software Development
Menlo Park, California 3,324 followers
About us
Our mission is to empower humanity to harness the unprecedented capabilities of foundation AI models. We are committed to eliminating the uncertainties inherent in AI interactions, providing goal-oriented, contractually bound solutions. We aim to unlock an unparalleled scale of potential, ensuring the reliable, safe, and beneficial application of AI technology to improve human life.
- Website
- http://www.guardrailsai.com
- Industry
- Software Development
- Company size
- 2-10 employees
- Headquarters
- Menlo Park, California
- Type
- Privately Held
- Founded
- 2023
Locations
-
Primary
801 El Camino Real
Menlo Park, California 94025, US
Updates
-
🧵In our last blog post, we introduced a new streaming architecture that revolutionizes validation with lower latency and cost. Now, let's dive into how we handle fix actions in this setup! 👇 Link to the full article: https://lnkd.in/gzk2gea9

Fix actions are a game-changer in Guardrails, allowing for programmatic corrections when a validator fails. For example, our PII validator can anonymize data, and the lowercase validator ensures proper text casing. But how does this work in a streaming scenario? 🤔

Streaming complicates things since each validator accumulates chunks independently, making it impossible to run them sequentially. This means they don't know what fixes other validators have applied—a real challenge for maintaining data integrity.

Our solution? A clever merging algorithm that waits until all validators have enough context to validate and apply fixes. It then merges these fixes into a cohesive output. For instance, it combines PII detection and lowercase adjustments into a single, corrected response.

While the merging algorithm usually works well, it's not without caveats. Overlapping replacement ranges between validators can sometimes cause issues. If you encounter any bugs with stream fixes, we encourage you to file an issue on the Guardrails repo. Let's make it better together! #Tech #AI #Streaming #MachineLearning #NLP
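To make the merging idea above concrete, here is a minimal, self-contained sketch of how fixes proposed as replacement spans by independent validators could be merged into one output. The function name, span format, and overlap policy are illustrative assumptions, not the actual Guardrails implementation:

```python
# Hypothetical sketch: each validator proposes (start, end, replacement)
# spans over the same accumulated text, and we merge them into one output.
# This is NOT the real Guardrails merging algorithm, just an illustration.

def merge_fixes(text, fixes):
    """Apply non-overlapping (start, end, replacement) spans to text.

    Fixes are applied right-to-left so earlier offsets stay valid after
    each replacement. Overlapping spans are the caveat mentioned above:
    here the later-starting fix wins and the overlapped one is skipped.
    """
    last_start = len(text) + 1
    for start, end, repl in sorted(fixes, key=lambda f: f[0], reverse=True):
        if end > last_start:  # overlaps a fix we already applied; skip it
            continue
        text = text[:start] + repl + text[end:]
        last_start = start
    return text

# Example: a PII "anonymize" fix and a lowercase fix on the same chunk.
chunk = "Contact John Doe ASAP"
pii_fix = (8, 16, "<PERSON>")   # proposed by a PII validator
case_fix = (17, 21, "asap")     # proposed by a lowercase validator
print(merge_fixes(chunk, [pii_fix, case_fix]))
```

Applying fixes right-to-left is one simple way to keep character offsets stable without recomputing them after every replacement.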
-
We are hosting a free webinar! 🥳 Topic: How to use common validation patterns through the Guardrails Server When: Wednesday 4 Sept at 9am PDT Sign up here: https://lnkd.in/gEyxbTuj
Guardrails Server Webinar
go.guardrailsai.com
-
Guardrails v0.5.6 is out! This was another week spent on documentation and performance. This time, we upgraded the inference endpoint for the RestrictToTopic validator! https://lnkd.in/guCrcduc See the full changelog here: https://lnkd.in/g6sS6Dyr
Release v0.5.6 · guardrails-ai/guardrails
github.com
-
Validating LLM streams in real time is difficult. There are a number of performance and design problems in this space. We’ve finally come up with a good pattern for applying programmatic fixes from validators to streams. Read about it in our latest blog post https://dub.sh/QsBuesK
How we rewrote LLM Streaming to deal with validation failures
dub.sh
-
We've been getting questions about CI/CD for the Guardrails Server recently. Here's a new doc and some Terraform to help with that. https://dub.sh/jeyaEFT
Continuous Integration and Deployment - AWS | Your Enterprise AI needs Guardrails
dub.sh
-
Guardrails v0.5.5 is out! This week, we spent time on documentation and some high-priority bug fixes. We also worked on our remote inference performance numbers. See details here: https://dub.sh/CFUhzgg See the full changelog here: https://dub.sh/jqReryc

Bug fixes:
- Cleaned up error reporting between the Guardrails server and client
- Fixed typing mismatches on remote inference endpoints

Docs updates:
- @Iudex integration
- Hosting validator models
Latency and usability upgrades for ML-based validators
dub.sh
-
We made Competitor and Toxic Language detection run 2x faster for development purposes, and we made these endpoints free for everyone. Learn more in this blog post: https://dub.sh/CFUhzgg
Latency and usability upgrades for ML-based validators
dub.sh
-
Guardrails AI reposted this
Creating LLM apps is easy. Creating production LLM apps is hard, but it can be easy. We've paired up with Guardrails AI so you can guard your LLMs and view when those guards trigger! Check out the docs for how to get started with both: https://lnkd.in/gxryJhZ3 🙌 Sign up for IUDEX AI here: https://lnkd.in/g3K5FKxz Sign up for Guardrails AI here: https://lnkd.in/gg-QYm2Q
Guardrails Integration | IUDEX
docs.iudex.ai
-
📉 We’ve been running performance optimizations, and we’ve seen that ML-model-based validators do not scale well without GPUs. But it’s suboptimal to co-locate those models with the primary Guardrails server, since the two have very different resource demands. So how do you scale validators? This new doc has the answers - https://dub.sh/8DaK5iX
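The split described above (a lightweight Guardrails server calling out to a separately hosted, GPU-backed model server) can be sketched as a small client that builds the HTTP request for a remote validator model. The endpoint path, payload shape, and host name below are illustrative assumptions, not the documented Guardrails remote-inference API:

```python
# Hypothetical sketch of a client for a remote validator model server.
# The URL scheme and JSON payload are assumptions for illustration only.
import json


def build_inference_request(base_url, validator_name, text):
    """Build the (url, headers, body) for a remote validator model call,
    so the ML model can live on a GPU box separate from the main server."""
    url = f"{base_url.rstrip('/')}/validate/{validator_name}"
    headers = {"Content-Type": "application/json"}
    body = json.dumps({"inputs": text})
    return url, headers, body


url, headers, body = build_inference_request(
    "http://validator-models.internal:8000",  # hypothetical GPU-backed host
    "restrict_to_topic",
    "Tell me about your pricing plans.",
)
print(url)
```

Keeping the request-building logic separate from the transport makes it easy to point the same Guardrails deployment at different model hosts as GPU capacity is scaled up or down.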
Host Remote Validator Models
dub.sh