Vera
Software Development
Brooklyn, New York 761 followers
The world's first conversational assistant to enforce and automate your privacy, security, and fairness policies.
About us
The world's first conversational assistant to enforce and automate your privacy, security, and performance policies in enterprise AI.
- Website: https://www.askvera.io
- Industry: Software Development
- Company size: 2-10 employees
- Headquarters: Brooklyn, New York
- Type: Privately Held
- Founded: 2021
- Specialties: Artificial intelligence
Locations
- Primary: 102 Covert St, Apt 2, Brooklyn, New York 11207, US
Employees at Vera
- Jonathan Bulkeley
- Nick Adams, Managing Partner and co-founder at Differential Ventures
- Bärí A. Williams, Former GC/COO at BandwagonFanClub, Inc. / Attorney / Board Member / DEI Practitioner / Chief / Author #SeenYetUnseen
- Andrew Savitski, Principal Data Engineer // I unlock the power of your data and solve problems you didn't know you had
Updates
-
Just like the internet, Generative AI is "not a fad", and ignoring its popularity can create new security risks. When "22% of employees admit to knowingly violating company rules on the use of generative AI", it may be time to find a way to let your teams use these tools safely.
AI security is a new battle between employers and workers, survey shows
axios.com
-
Chatbots can be a good way to improve customer service and get timely information to those in need. But, especially in high-risk use cases, rushing to market without sufficient guardrails has consequences…
NYC AI Chatbot Touted by Adams Tells Businesses to Break the Law
https://www.thecity.nyc
-
Vera reposted this
This past week, I had the delightful opportunity to share our work at Vera with Women in Data Science (WiDS) Worldwide at the University of North Carolina at Charlotte for their Women in Data Science conference.

Over the course of my hour-long keynote, I revisited a talk I've given dozens of times, to update it with all the progress that's been made since 2019. In short, I'm now much more optimistic than I used to be on the direction of this field, but it's going to take all of us to ensure a "safe-enough" future for AI and society.

I asked GPT-4 (through the Vera platform, of course!) to summarize my speaker notes*, and it came up with the following:

"The speaker, an early tech whistleblower, discusses their work in shaping safe AI across non-profits, government, and industry, now by conforming large language models to corporate policies and minimizing risks. The talk highlights the complexities and rapid advancements in AI technology and mitigation strategies, debunking myths about AI's potential to either solve all problems or lead to an apocalypse. It addresses issues such as mathematical complexity, ethical concerns highlighted by studies like Gender Shades, and the need for robustness in AI models. Societal challenges like deepfakes, civil liberties, and economic inequality are also discussed, along with the importance of human oversight in AI and the evolving technical and ethical landscape. The speaker urges industry commitment to continual improvement and standards, and emphasizes individual involvement in policy-making and the significance of voting to shape AI governance."

*lightly edited because, well, AI will never be perfect!
-
Good things come to those who wait, and we know you've been patiently looking forward to the latest release from Team Vera. This one is a big deal, and we can't wait to share it with the world! Take a look at our brand new chat UI...
Good Things Come… Glow Up Part 2
medium.com
-
Today on the Vera blog, a letter from our CEO about how AI tools and standards must be aligned with science, but also easy to deploy and use. Vera is a "gummy vitamin" for enterprise... it tastes great, but it's good for you, too.
A Letter From Our CEO — Vera
askvera.io
-
Model moderation is... really hard. But with Vera's team of experts, we can help you map out the universe of exactly how you'd like your models to behave. To illustrate this process, we worked with Claire Geist, formerly of Twitter Cortex, Clarifai, and iMerit to develop a "one-click" policy on illegal substances, and you can read all about it in our latest blog post:
How To Customize Your Company’s LLMs — Vera
askvera.io
-
This week, Vera and Differential Ventures had the privilege of hosting 14 brilliant and fascinating AI leaders at Nightbird in San Francisco for an evening of incredible food and even more incredible conversation on Practical AI. It was an evening full of serendipity and connection, and so much fun I'm beginning to doubt my East Coast loyalty...! Thank you to Tom Driscoll, Justin N., Nick Adams, Aku Srikanth, Rochelle Mattern, EJ Liao, Chris Messina, Brady F., Amber Yang, Ashik Ardeshna, 🌞 Philip C., Atul Dhingra, Arsalan (RC) Mosenia, PhD Michael R. Boone, PE, MS and David Rojas Garcés for attending and making it a fantastic night!
-
First February Release Notes: Growing, learning, adapting, and remaining focused on the customer experience.
Vera’s 2024 Glow Up, Part 1 — The Admin UI
medium.com
-
This week's release contains something unique and exciting for anyone hoping to enjoy AI's latest and greatest. As new models enter the market, no one wants to be locked into a single vendor. With Vera's model router, your prompts get sent to the best (or fastest, or least expensive) model for the job, and now it happens without losing the context of your prior queries. Read on about our approach to LLM memory on the Engineering Blog:
LLM Memory: A Seamless, Context-Aware, Policy-Compliant, Multi-Model Chat
medium.com
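The model-router idea above can be illustrated with a minimal sketch: requests carry a shared conversation history, so switching between models by cost, speed, or quality does not lose prior context. All names here (`ModelRouter`, the strategy strings, the model metadata) are illustrative assumptions, not Vera's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Model:
    """Hypothetical model metadata used for routing decisions."""
    name: str
    cost_per_token: float
    latency_ms: int

@dataclass
class ModelRouter:
    models: list
    # One history shared across all models, so context survives a switch.
    history: list = field(default_factory=list)

    def pick(self, strategy: str) -> Model:
        if strategy == "cheapest":
            return min(self.models, key=lambda m: m.cost_per_token)
        if strategy == "fastest":
            return min(self.models, key=lambda m: m.latency_ms)
        return self.models[0]  # default: first ("best") model

    def route(self, prompt: str, strategy: str = "best") -> str:
        model = self.pick(strategy)
        self.history.append({"role": "user", "content": prompt})
        # A real router would send self.history to the chosen model's API;
        # here we just record which model saw how much context.
        reply = f"[{model.name}] reply (context: {len(self.history)} messages)"
        self.history.append({"role": "assistant", "content": reply})
        return reply

router = ModelRouter(models=[
    Model("large-model", cost_per_token=0.03, latency_ms=900),
    Model("small-model", cost_per_token=0.001, latency_ms=120),
])
router.route("Summarize our substance policy.")        # default routes to large-model
router.route("Shorter, please.", strategy="cheapest")  # small-model, context intact
```

The key design point is that history lives in the router, not in any one model's session, which is what makes vendor-agnostic switching seamless.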