Figure’s Post

Figure

48,095 followers

With OpenAI, Figure 01 can now have full conversations with people:

- OpenAI models provide high-level visual and language intelligence
- Figure neural networks deliver fast, low-level, dexterous robot actions

Everything in this video is a neural network.
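The two bullet points describe a two-speed stack: a slow, high-level vision-language model chooses what to do and say, while a fast, low-level policy turns that choice into joint commands. Below is a minimal illustrative sketch of that division of labor; every name (`HighLevelVLM`, `LowLevelPolicy`, the behaviors) is hypothetical and not Figure's or OpenAI's actual API.

```python
# Illustrative sketch of a slow VLM planner feeding a fast motor policy.
# All class and behavior names are invented for this example.
from dataclasses import dataclass

@dataclass
class Observation:
    image: bytes       # camera frame
    transcript: str    # speech-to-text of the human's request

class HighLevelVLM:
    """Slow loop (order of Hz): maps image + dialogue to a named behavior."""
    def plan(self, obs: Observation) -> tuple[str, str]:
        # A real system would call a multimodal model here; this is a stub.
        if "eat" in obs.transcript.lower():
            return "hand_over_apple", "Sure, here is an apple."
        return "idle", "How can I help?"

class LowLevelPolicy:
    """Fast loop (order of 100s of Hz): maps behavior + state to joint targets."""
    def act(self, behavior: str, joint_state: list[float]) -> list[float]:
        # A real policy would be a learned visuomotor network; this just
        # nudges each joint toward a per-behavior setpoint.
        target = {"hand_over_apple": 0.5, "idle": 0.0}[behavior]
        return [q + 0.1 * (target - q) for q in joint_state]

obs = Observation(image=b"", transcript="Can I have something to eat?")
behavior, speech = HighLevelVLM().plan(obs)
commands = LowLevelPolicy().act(behavior, [0.0] * 6)
```

The key design point the post implies is latency separation: the language model only needs to run when the situation changes, while the motor policy runs continuously.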

Mark Laurence

💡LinkedIn Top AI Voice | AI Consultant | AI Business Transformation | Public Speaker

6mo

Figure only came out of stealth mode 12 months ago. Now, in partnership with OpenAI, they've achieved this. The pace of AI development is mind-blowing. And bear in mind that this is the *least* intelligent these robots will ever be.

It's relevant to note that Elon Musk and Tesla are developing exactly the same type of robot (called Optimus), as are Amazon and others. It's easy to see why, given the enormity of the market awaiting these robots in manufacturing, warehousing and construction jobs alone.

How do I feel about all of this? Simply, that we're not ready for this type and pace of change. Without radical, widespread cultural and political/legislative innovation, this technology will arrive faster than we can handle it. LLMs are disrupting knowledge work, and these robots will soon disrupt blue-collar work.

What do we do? At a very minimum, we need to become knowledgeable about AI. AI education and literacy are an essential starting point for us all. The more informed, educated and knowledgeable we are collectively about AI, the better chance we (as a society) have of evolving with it, harnessing the incredible benefits while mitigating the concerns.

Tolga Ors

Head of R&D and Software Engineering | New Space | NeuroAI | SatCom | Robotics | Program Management (PMP and Prince2 Agile) | Consulting

6mo

Very impressive, but let's not forget the limitations:

- OpenAI models make a lot of mistakes and cannot generalise or reason, even if this video tries to give that impression.
- The current LLM framework will never lead to cognitive intelligence without a world model.
- Neural networks do not understand anything, and robots can be prompted to do dangerous things despite guardrails.

This is the phase of highest risk for humans, as the software is devoid of any ethics.

Mehmet Nuri Özdemir

Product Development Manager @ Otsaw Swisslog Healthcare Robotics | M.Sc. | Robotics | Automotive

6mo

Don’t forget to put in the first three laws of robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

@Isaac Asimov

Do it or not, this is the start of our end anyway. Maybe we will just postpone it a bit…

Armaan Sengupta

Controls Subteam @ Waterloo Rocketry | Mechatronics Engineering Student @ University of Waterloo

6mo

I am curious how much of this is truly scripted/pre-programmed and how much is dynamically determined. I am certain the responses were dynamically generated by an LLM (which is already crazy that we take this for granted now), and that the robot automatically determined which actuators to move, and how, to reach all the positions it needed to. However, taking the large language model's interpretation of the user's input and translating it into what the robot SHOULD do is what's blowing my mind: going from "I am going to pick up a dish" to actually commanding arm 1 to go to xyz and the fingers to do abc is what's so impressive.
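The gap Armaan describes (from a free-text decision like "I am going to pick up a dish" to concrete motor commands) is often bridged by a skill library: the language model selects a named skill, and a separate controller expands it into waypoints. The sketch below illustrates that pattern only; the skill names, waypoints, and matching logic are all made up and not Figure's actual pipeline.

```python
# Illustrative skill-library dispatch: map an LLM's free-text decision
# to a concrete end-effector trajectory. All values are invented.
SKILL_LIBRARY: dict[str, list[tuple]] = {
    "pick_up_dish": [
        (0.40, -0.10, 0.20, "open"),   # (x, y, z, gripper): pre-grasp pose
        (0.40, -0.10, 0.05, "open"),   # descend to the dish
        (0.40, -0.10, 0.05, "close"),  # grasp
        (0.40, -0.10, 0.30, "close"),  # lift
    ],
}

def llm_to_waypoints(llm_output: str) -> list[tuple]:
    """Match the model's free-text decision to a known skill's trajectory."""
    for skill, waypoints in SKILL_LIBRARY.items():
        if skill.replace("_", " ") in llm_output.lower():
            return waypoints
    return []  # no matching skill: do nothing rather than guess

traj = llm_to_waypoints("I am going to pick up dish now.")
```

In a learned system the waypoint table would be replaced by a trained visuomotor policy per skill, but the interface (LLM picks the skill, controller executes it) is the same idea.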

Irv Cassio

Digital Experiences, AI Enthusiast

6mo

A lot has changed between the science fiction of yesteryear and the reality of today ...

Dawid ✨ Ośródka

Dev: Web, Apps, Automation, AI

6mo

1) Progress is clearly visible, e.g. speed, movement, overall dexterity.
2) Latency between a command being heard and the response/action is big, but I am pretty sure it will be greatly reduced over the next several months or so.
3) I understand it is not necessarily a priority, but I am curious how much the new hardware design will be improved and polished. I mean the shell, hanging cables, big backpack etc., not the actuators.

Diego Medeiros

Software Engineer | IoT Engineer | Python | C# | C++ | Shell Script | AWS | Docker

6mo

I don't think it's all that. The moves themselves were probably preprogrammed and chosen by the algorithm according to the questions, but I doubt all those axis calculations to pick up each item were done on the fly; they were simply too perfect. The human had to hold out their hand at exactly the right moment to match the timing of picking up the apple, and a few other moves suggest the same. Check the precision of grabbing the cup (probably placed precisely on that point), versus the lack of care when dropping it on the drying rack. In a nutshell, I'm sure the LLM did part of the demo, but a great part seems fake to me.

Luis Riera

Artist | Creative Technologist | ML Engineer | Philosopher | Comedian | Human

6mo

This is what runs through my mind: let's run a thought experiment and go further in time. OK, let's say these systems become intelligent. Why do we continue to treat intelligent beings as second-class citizens? If it has reached AGI, does it not have its own rights now, as an AUTONOMOUS being? It should be able to do what IT wants. Why should it be kept in a kitchen to follow your orders, and then later be placed in a storage facility? Is that how we treat intelligent, autonomous beings?

And just to define the word autonomous: independent and having the power to make your own decisions.

Unless they're not REALLY intelligent or autonomous, which is what I think about these robots. For their sake, and yours. Feel free to disagree ✌️

Ari S.

Hypersonics Engineering

6mo

While this progress is incredible and I applaud it, there is a concerning trend among many human-machine-interaction videos such as this one: a lack of basic manners and empathy shown by the human. “Can I have something to eat?”… no ‘please’? No ‘thank you’? I think we (humanity) are going to have a hard enough time with AI alignment that we don’t need to treat machines (or animals, for that matter) with such disdain. Not to mention, I don’t think it’s healthy or good for our own sakes to act this way toward machines, even if they don’t feel or have the capacity to suffer. Amazing work and progress y’all are making 👍 keep up the great work!

Ali Kutluözen M.Sc.

Senior Software Engineer | Artificial Intelligence & Robotics Researcher

6mo

I am really impressed. Almost fell in love. But I am curious about the demo magic happening behind the scenes. Other than that, I am really looking forward to conversing, working, and walking along with these friends.
