You may be aware of Q* (pronounced Q-STAR) as the next big phenomenon on the path towards #artificialgeneralintelligence. It is an interesting choice of name that reportedly has its origins in Q-learning, a form of reinforcement learning, and A* (A-STAR), a pathfinding algorithm. It would be remiss not to share the origin story of our own high-fidelity network simulation product X* (also X-STAR). XSTAR was born over two years ago, well before any of this newfound excitement surrounding the nomenclature. At the heart of XSTAR is an optimization engine that solves large-scale, complex problems. Specifically, we pride ourselves on being able to integrate design, operations and infrastructure in a single platform. This integrated approach is crucial for the highly interconnected future of mobility. The name XSTAR itself is a tribute to the "optimal solution", for which the commonly accepted symbol is X*. Using our integrated approach, we strive to take the user from settling for a sub-optimal solution towards X*. Optimizing a product or a process is a journey of evolving constraints and parameters, and we hope to help the user through all of it. Check out what we are building at www.alcifo.com #ai #artificialintelligence #advancedairmobility #ondemand #uam #ram #evtol
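For readers curious about the Q-learning half of the name: the algorithm maintains a table of action values and nudges each entry towards observed rewards. A minimal tabular sketch (the states, rewards, and hyperparameters here are purely illustrative, not from any product):

```python
# One tabular Q-learning update:
#   Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
# Q is a dict mapping state -> {action: value}; alpha is the learning
# rate and gamma the discount factor (example values, not canonical).
def q_learning_step(Q, state, action, reward, next_state,
                    alpha=0.1, gamma=0.9):
    # Value of the best action available from the next state
    best_next = max(Q[next_state].values()) if Q[next_state] else 0.0
    td_target = reward + gamma * best_next
    # Move the current estimate towards the temporal-difference target
    Q[state][action] += alpha * (td_target - Q[state][action])
    return Q[state][action]
```

Run repeatedly over experienced transitions, the table converges towards the optimal action values, the Q* that the name alludes to.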
Aditya Jagannathan’s Post
More Relevant Posts
-
Airports all over the world are investing in technology to create a better traveler experience. Did you know that 50% of airports expect machine learning to be the technology with the biggest impact in the next 12 months? Read on for the findings in our recent study: https://ow.ly/E5Rs30sE62E #Airports #ItsHowTravelWorksBetter #Amadeus
-
🚆👁️ 𝗥𝗲𝘃𝗼𝗹𝘂𝘁𝗶𝗼𝗻𝗶𝘀𝗶𝗻𝗴 𝗥𝗮𝗶𝗹𝘄𝗮𝘆 𝘄𝗶𝘁𝗵 𝗔𝗜! Just watched an awe-inspiring video from Network Rail about their groundbreaking AI experiment! 🌐🤖

𝗛𝗲𝗿𝗲'𝘀 𝘁𝗵𝗲 𝗹𝗼𝘄𝗱𝗼𝘄𝗻 𝗼𝗻 𝗵𝗼𝘄 𝗔𝗜 𝗶𝘀 𝘁𝗿𝗮𝗻𝘀𝗳𝗼𝗿𝗺𝗶𝗻𝗴 𝗿𝗮𝗶𝗹𝘄𝗮𝘆 𝘀𝗮𝗳𝗲𝘁𝘆:
- AI at the Forefront: Network Rail is experimenting with Artificial Intelligence (AI) and Automated Intelligent Video Review (AIVR) to enhance railway safety like never before. 🚀📹
- Innovative Tech in Action: AIVR, compact enough to fit on any train, captures high-definition footage across the railway network. 🚆🎥
- AI-Powered Analysis: The AI system meticulously analyses this footage, identifying forgotten scrap items like rail pieces, sleepers, and ballast bags. 🕵️♂️🔍
- Precision Mapping: Each identified item is precisely mapped using GPS, creating an accurate picture of where attention is needed. 🌍📍
- Efficient Maintenance Planning: Armed with this data, Network Rail's maintenance teams can plan effective, safe removal and recycling strategies for these items. 🛠️♻️
- Sustainability Meets Safety: This isn't just about safety; it's a step forward in sustainability, as these materials are repurposed or recycled. 🌿🔄

The video from Network Rail is a testament to how technology is not only safeguarding our journeys but also making them more environmentally friendly. Check out the video for a glimpse into the future of railway maintenance! #RailwayInnovation #ArtificialIntelligence #Sustainability #NetworkRail #TechForGood 🚆🌍🤖🌱
-
Just spoke with the CEO of Airship AI, who recently purchased stock in his own company. He thinks the stock is cheap and is optimistic about the future, hence the purchase. Airship AI is a robust AI-driven surveillance video, sensor and data management platform. Going to do some diligence on the business this week and include it in our upcoming newsletter at blog.stockocean.com. What questions would you have for Victor and his business?
-
Alaska Airlines is using an AI-powered dispatch algorithm to process vast amounts of data, such as weather conditions, closed airspace, and private and commercial flight plans, to suggest the most efficient flight route. These are the sort of problems that are well suited for computers. As we look towards the future, I'm convinced AI and data-driven tools will play a big role in solving complex problems across many industries. That's exactly what excites us about what we're solving with EVpin. Companies that adopt a data-driven approach for selecting EV charging sites will outperform those that don't.
-
AI holds great promise for #intelligentautomation but it is only as good as the data that fuels it. In a new blog series that we are launching today, we will discuss how disruptive real time location systems (#RTLS) can drastically improve #safety #efficiency and #productivity in #logistics by providing accurate #positioning and #spatial #awareness data in a way that was not possible before. You can find the first blog here https://lnkd.in/daUx_rEp Want to find out more about the RGo Robotics #RTLS solution? Come and see us at #modex2024 and #logimat2024 – https://lnkd.in/dCh33eCH #rtls
Taking Industry 4.0 to the Next Level with Disruptive RTLS
rgorobotics.ai
-
R&D Sr. Executive with 20+ years at Siemens Healthcare | Led up to 50 People with $15M+ Budgets | Developed from Concept to Launch Products Used Globally by 200M People | Expertise in AI/ML and DFSS for Medical Systems
So true Yann LeCun. I share your concern that lack of knowledge about AI is causing unnecessary fear and misplaced priorities. I am worried that actual risks from AI (e.g. misuse of AI by humans, such as deepfakes) will be ignored, while hypothetical risks cause over-regulation that stifles innovation. If we had taken such an approach, fire or the wheel would never have been adopted, since both can and do cause human harm. We have, for the most part, learnt to manage the risks from these, and the same will happen with AI. #ai #ml #risk #regulation #innovation
It seems to me that before "urgently figuring out how to control AI systems much smarter than us" we need to have the beginning of a hint of a design for a system smarter than a house cat. Such a misplaced sense of urgency reveals an extremely distorted view of reality. No wonder the more based members of the organization sought to marginalize the superalignment group. It's as if someone had said in 1925 "we urgently need to figure out how to control aircraft that can transport hundreds of passengers at near the speed of sound over the oceans." It would have been difficult to make long-haul passenger jets safe before the turbojet was invented and before any aircraft had crossed the Atlantic non-stop. Yet, we can now fly halfway around the world on twin-engine jets in complete safety. It didn't require some sort of magical recipe for safety. It took decades of careful engineering and iterative refinements. The process will be similar for intelligent systems. It will take years for them to get as smart as cats, and more years to get as smart as humans, let alone smarter (don't confuse the superhuman knowledge accumulation and retrieval abilities of current LLMs with actual intelligence). It will take years for them to be deployed and fine-tuned for efficiency and safety as they are made smarter and smarter. https://lnkd.in/eaJ5uuMk
-
CTO at TEN2, helping musical artists grow on YouTube. Formerly CTO at Studio71 + writer, hacker, maker, consultant, investor, husband, father of three, and a maker of amazing sandwiches.
If you are looking for a source of rational and reality-based insight into generative technology ("AI"), Yann is the best. Pretending General AI is anywhere near a reality, even for the sake of "safety," isn't helping anything.
-
Some insightful thoughts from one of the pioneers of Deep Learning. Let's not confuse superhuman knowledge accumulation and retrieval abilities of current LLMs with actual intelligence... #LLM #AGI #AI #GenAI
-
I solve business problems with data+algos | ML@Adobe | Led the ML team at Koo | Prev at Netflix, LinkedIn California | Relocated to Blr 2021 | Visiting faculty at MSRIT | LLMs are epistemology probes
There is an ongoing debate on the alignment of AI systems, and while Yann makes very good arguments, it is hard to ignore this debate. Luminaries like Yann know and understand these systems at a level that few do, but as a user and developer of GenAI technologies, I do wish that safety and alignment don't take a backseat.

A concern of immediate worry for me is the dependence on GenAI systems that I'm beginning to see around me. From crafting emails, to writing blog posts, to generating code and writing stories, GenAI systems are showing us use-cases that were hitherto impossible. What we aren't debating is how over-reliance on GenAI will change us. In some areas, such as developing software, we will see an abundance of software being created, due to the ease and speed with which individuals can create complex code. But on the other hand, we will see a reduction in the human interaction and teamwork that until now was necessary to build software. When I heard about the steep reduction in usage of Stack Overflow, I couldn't help but lament the fact that many developers will never know what it means to go back and forth with other developers on SO questions. The experience of debating on forums and email lists. The need to go to senior folks and discuss and argue about coding conventions, design patterns and best practices. Maybe "working together" to create code was unproductive compared to prompting GPT-4, but it nevertheless taught us how to interact with each other. It taught us how to have conversations about technical topics that many of us would later look back on and appreciate as critical learning opportunities. It is a striking coincidence that the rise of remote work and GenAI have overlapped, on the one hand providing increasing degrees of freedom to professionals, while on the other taking away opportunities for deep human interactions.

But a silver lining in all of this is that we find ourselves discussing and debating how to interact with systems that are very human-like. We are learning how to "cajole" LLMs to do our bidding: from promising fat tips, to being polite, from providing context to being exact with our instructions, millions of people are learning how to interact with a human-like entity. I don't think this has ever happened in human history. And when we do have AGI or human-like intelligent systems, the good news will be that these systems will perform to their best ability when they are treated with respect and understanding. And in learning to respect AI systems, we may find that we're also learning to treat each other with greater understanding and empathy, bridging gaps not only across digital platforms but within our very human interactions. #alignment
-
Stacking transformers != human intelligence. LLMs are fantastic tools for collecting information and fast retrieval for a well-defined problem. It almost seems we are back in the 90s, when swarm intelligence (such as ACO and PSO) was expected to solve every problem. It didn't happen, but then LSTM was the big thing. Then GRU, and Attention, and BERT. And now? AI agents. Task-specialized models. Multiple of them. With a coordinator. xLSTM is now out. Will it solve every problem? My guess is NO. But I am just a security guy. What do I know about AI, right? The common thread through these developments is the exponential increase in memory requirements and power consumption. Simply throwing more money at these problems isn't a sustainable solution; true innovation requires novel approaches and efficiency improvements. And let me tell you: developing super fancy predictive models has nothing to do with human intelligence.
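For anyone who missed the 90s hype cycle the post refers to: particle swarm optimisation (PSO) moves a population of candidate solutions by blending each particle's momentum with pulls towards its own best-known position and the swarm's best. A minimal one-dimensional sketch (the inertia and attraction weights are illustrative defaults, not canonical values):

```python
import random

# Minimal 1-D particle swarm optimisation sketch, minimising f(x).
# w is the inertia weight; c1/c2 scale the pulls towards the particle's
# personal best and the swarm's global best (example hyperparameters).
def pso(f, n_particles=10, iters=50, w=0.5, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    xs = [rng.uniform(-10.0, 10.0) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                      # each particle's best position so far
    gbest = min(pbest, key=f)          # best position found by the swarm
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # Velocity update: momentum + cognitive pull + social pull
            vs[i] = (w * vs[i]
                     + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] += vs[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest
```

Elegant, derivative-free, and genuinely useful on some problems, but, as the post argues, no more a path to general intelligence than any other single optimisation trick.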