Rerun 0.17: defaults and overrides, streaming in notebooks, and better website embeddings

This release brings a huge increase in explicit control over your visualizations. You can now use blueprints both to set default component values on a view and to override components on specific entities in that view, from code as well as in the UI. In the video, we use a default to bulk-edit the size of all camera frustums, and an override to edit just the front-facing frustum's size.

With the introduction of blueprint overrides and defaults, Rerun 0.17 gives you direct control over which visualizers are applied to which entities, and makes all of this easy to inspect in the UI. Together, these features give you far more flexibility and control over exactly how your data is visualized in Rerun.

Beyond defaults and overrides, this release also comes with:

🔸 Improved notebook and website embedding support
👉 You can now stream data from a notebook cell to the embedded viewer.
👉 There is improved support for multiple viewers on the same web page.
👉 You have more configuration options to control the visibility of the menu bar, time controls, etc.

🔸 Additional configurability from code, for example:
👉 ImagePlaneDistance (the size of the Pinhole frustum visualization)
👉 AxisLength (the axis length of the transform visualization)
👉 All settings on TensorViews

🔸 New examples:
👉 PaddleOCR
👉 Vista, a generative driving world model
👉 Stereo Vision SLAM

And much more 🚀🚀🚀 Check out the blog post on overrides and defaults, and the full changelog, via the links in the comments 👇
Rerun’s Post
More Relevant Posts
Introducing Siml.ai Version 0.5 ⚡

With the newest release of our AI-based simulation platform, we're committed to enhancing your user experience, ironing out pesky bugs, and supercharging your performance in every simulation task. Some of the key highlights of this release:

▶ Selective Geometry Rendering
▶ Visual Editor Tutorial Tour
▶ Compute Credits Tracking & more

We unpack the new features and enhancements in our new blog post 👇

#Simulation #Innovation #SimlAIUpdate
Just when I thought I couldn't be more amazed after OpenAI's Sora, Google steps up with their next-level innovation, Lumiere. It's mind-blowing to see the pace at which technology is evolving right now. Check out this incredible video to see what it's about: https://lnkd.in/dSkdVjuN

We're truly living in an era where the boundaries of what's possible are being pushed further every day. Can't wait to see what's next!

#GoogleLumiere #TechInnovation #FutureIsNow
Introducing Lumiere: A space-time diffusion model for video generation
We would like to thank everyone for following along and engaging with our content! We love seeing your reactions and reposts, and we're always looking for new content ideas, so feel free to throw some suggestions our way!

Our article on machine vision is finally out! Get ready to dive into this exciting topic, expertly explained by our specialist. In this piece, you'll learn the difference between computer vision and machine vision, and discover how to choose the right machine vision components for your specific project needs.

Here's the link: https://lnkd.in/e55q7_tb

#MachineVision #ComputerVision #COTS #MachineVisionSolution #HardwareEngineering #HardwareProductDevelopment #ProductDevelopment
⏲️ Here's how you can speed up image generation by 4x using LCM:

▶️ LCM LoRA: accelerating image generation
▶️ LCM Scheduler: optimizing the image generation process
▶️ Integration and real use cases

Read in full: https://lnkd.in/gPt-fYa9

#LCMLora #LCM #stablediffusion #sdxl #aiart #aiartcommunity
Introducing SenseCAP Watcher, the world's first physical AI agent to revolutionize space management.

SenseCAP Watcher identifies, monitors, and even interacts with objects of interest in designated spaces. It comprehends behaviors and statuses, and notifies you of anomalies, all based on voice commands.

This standalone device is about 1/3 the size of an iPhone. It boasts a 1.46-inch touchscreen, a camera for capturing pictures, a microphone for wake-word detection and voice commands, a speaker for speech interaction, a scroll wheel/button for navigation and push-to-talk, and an RGB LED indicator for device status and notifications. It also includes a Grove connector and extension pin, enabling the addition of multimodal sensing capabilities.

The software suite behind it, SenseCraft, is a seamless operating toolkit that combines onboard tinyML models with Large Language Models (LLMs) running either in the cloud or locally on edge computers. SenseCraft supports no-code model training and deployment via a mobile app, and acts as the backbone for handling cloud services and supporting various applications when LLMs are deployed in the cloud.

SenseCAP Watcher is now open for alpha-test applications. Apply now to join our journey to make this cool product go live! Links to the launch, application, and live demo can be found in the comments.

For those visiting the ongoing Embedded World Exhibition & Conference, feel free to stop by our stand 4-551 to get a first glimpse of the product; we've prepared a demo there!

#SenseCAPWatcher #tinyML #LLMs #AIAgent #SpaceManagement
Last week, during a Q&A with the post team for Abbott Elementary (hosted by the one and only William Wedig), Dialog Editor Michael Jesmer shared one of his favorite plug-ins for dialog cleaning: Hush Pro. This tool, with its Hush Mix feature, cleans audio more effectively than any noise reduction tool I've ever used. Powered by machine learning, it's quickly becoming one of the most valuable tools in my kit. Truly fascinating to see how technology is transforming our workflows!

#AudioEditing #HushPro #AIAudioTools

https://lnkd.in/eWhSZMRK
Hush Pro: AI-powered dialogue repair
We recently introduced the ROCK 5 AIO, a new SBC from OKdo with a powerful 3 TOPS NPU, powered by OStream. Designed to simplify AI image processing at the edge, this game-changer comes pre-loaded with a Debian OS and media processing software, making image processing pipelines with pre-configured AI models accessible to everyone.

In our latest getting-started guide, we demonstrate how to:

🔧 Set up the ROCK 5 AIO
👩💻 Connect it to the OStream management web UI
🎥 Create "drag & drop" image processing pipelines for event identification in video streams

We also configure a remote RTSP (Real Time Streaming Protocol) camera feed using a Raspberry Pi camera module and show you how to capture event data for further processing using OStream's webhook service.

Check it out now on RS DesignSpark and let us know your thoughts: https://weare.rs/3wHOuCU

#LetsOKdo #Ostream #ROCK #AI #EdgeComputing #SBC #ImageProcessing #TechInnovation #DesignSpark
OtterHD represents a significant advance in open-source multimodal model capabilities. The model is designed to process high-resolution visual inputs without the need for a separate vision encoder, and it can handle varying input sizes at test time, enabling improved versatility across different inference requirements. This results in improved instruction-following and in-context learning capabilities.

One of the architectural improvements is an optimization of the original Hugging Face implementation of Fuyu-8B. By integrating FlashAttention-2 and a suite of fused operators, including fused layernorm, fused square ReLU, and fused rotary positional embedding, the model's efficiency is increased, resulting in better GPU utilization and training throughput more than five times greater than the standard Hugging Face implementation. OtterHD sets a new benchmark in training throughput among the current leading large multimodal models (LMMs).

See an online demo here: https://lnkd.in/geXytM4u
GitHub repo: https://lnkd.in/gSuETEVD
Paper: https://lnkd.in/gSuETEVD
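For reference, the "square ReLU" mentioned above is just the ReLU activation squared, max(x, 0)², used in Fuyu-style MLP blocks. The fused kernels compute it in a single GPU pass together with neighboring operations; here is an unfused NumPy sketch of the math itself (not the fused kernel):

```python
import numpy as np

def squared_relu(x: np.ndarray) -> np.ndarray:
    # Squared ReLU: relu(x) ** 2.
    # In the optimized training stack this is a single fused GPU kernel;
    # here it is written out op-by-op for clarity.
    return np.square(np.maximum(x, 0.0))
```

For example, `squared_relu(np.array([-2.0, 0.5, 3.0]))` yields `[0.0, 0.25, 9.0]`: negative inputs are clamped to zero, positive inputs are squared.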
🚀 Feb 27th, 2024 Tech Update Alert: Revolutionizing Customer Interactions with Teneo 7.4 Copilot & Generative AI 🚀

Join us for an exclusive tech update on how Teneo 7.4's Copilot feature is setting a new standard for customer interactions through the power of Generative AI. Our innovative Copilot features empower developers to generate new entries in Entities, Training, and Test examples for Classes. Effortlessly create new entries and responses using Copilot's user-friendly interface, speeding up your workflow and enhancing your development skills.

📅 Don't miss this opportunity to learn from the experts. Register now: https://lnkd.in/dFrYSPPz

#TeneoAI #GenerativeAI #CustomerExperience #Webinar #TechUpdate
Join our upcoming Tech Update on Teneo 7.4 - Copilot Features
Blog post: https://rerun.io/blog/blueprint-overrides
Change log: https://github.com/rerun-io/rerun/releases/tag/0.17.0