Imagine a world in which humanoid robots equipped with artificial intelligence can not only talk to you but also understand and interact with the environment around them, just as a person would. OpenAI, a leading AI research lab and an investor in the development of the Figure 01, has made a significant leap forward with its latest update to speech-to-speech technology. This upgrade is a big step in how AI systems comprehend and engage with the world, bringing them closer to human-like interaction.
This new ChatGPT technology has been rolled out to the humanoid robot known as Figure 01, which now has the remarkable ability to recognize objects around it. Picture this: there’s a red apple sitting on a table, and Figure 01 can identify it accurately. This skill is crucial for the AI to interact meaningfully with its surroundings.
Figure 01 vision and speech-to-speech demonstration
But it doesn’t stop there. This AI can now respond to requests in a way that shows it understands what you need. If you’re hungry and ask for something to eat, Figure 01 can recognize that the apple is food and offer it to you. This shows that the AI is not just recognizing objects but also understanding their use in our daily lives. Watch the demonstration below of the Figure 01 humanoid robot, which has been upgraded with ChatGPT vision and speech-to-speech communication, providing a glimpse of a future that is fast approaching.
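Neither Figure nor OpenAI has published the code behind this demonstration, but as a rough illustration, here is a minimal Python sketch of the general pattern: a camera frame and a spoken request are sent to a vision-capable chat model, which names the objects it sees and picks one that satisfies the request. The model name, prompt, and helper function are assumptions for illustration, not Figure’s actual integration.

```python
# Minimal sketch: ask a vision-language model to identify objects in a camera
# frame and pick one that satisfies a spoken request. Hypothetical example;
# Figure's real pipeline has not been published.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def describe_and_choose(image_path: str, request: str) -> str:
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",  # assumed model choice
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"A person says: '{request}'. List the objects you "
                         "see and name the one that best satisfies the request."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# e.g. a photo of a table with a red apple on it
print(describe_and_choose("table.jpg", "Can I have something to eat?"))
```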
Here are some other articles you may find of interest on the subject of humanoid robots equipped with artificial intelligence that are paving the way for a robotic future.
- New Tesla Optimus Gen 2 humanoid robot
- H1 Humanoid Robot sets new world record for running
- Figure-01 humanoid robot demonstrated making coffee and more
- Atlas humanoid robot receives upgrades from Boston Dynamics
The AI’s reasoning about the environment has also gotten a lot better. It can predict what will happen next based on what’s going on now. For example, it knows that after dishes are washed, they’re likely to be put in a drying rack. This ability to predict and plan ahead is a step toward AI being able to act on its own, without needing us to guide it through every step.
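To make the idea concrete, one simple way to get this kind of next-step prediction out of a language model is to describe the current state and ask for the most likely next action. The sketch below is a hypothetical illustration of that pattern, not Figure’s published planner; the system prompt and model name are assumptions.

```python
# Minimal sketch of next-action prediction: describe the current state and ask
# the model what a helpful robot should do next. Hypothetical example only.
from openai import OpenAI

client = OpenAI()

def predict_next_action(state: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "You are a household robot's task planner. Given the "
                        "current state, reply with the single most likely next action."},
            {"role": "user", "content": state},
        ],
    )
    return response.choices[0].message.content

print(predict_next_action("The dishes have just been washed and are sitting in the sink."))
# Expected style of answer: "Place the washed dishes in the drying rack."
```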
When it comes to completing tasks, the AI has become more independent. If it thinks that the dishes should go in the drying rack, it will go ahead and do that by itself. This shows that the AI is becoming more self-reliant and can take initiative.
Another new feature is that the AI can now look back on what it has done and judge how well it did. After it finishes a task, Figure 01 can decide if it achieved its goal. This self-assessment is essential for AI to learn from its actions and get better over time.
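Again, Figure has not published how this evaluation works, but a simple self-assessment loop can be sketched as a second model call that compares the stated goal with an observation of the result and returns a yes/no verdict. Everything in this example is hypothetical.

```python
# Minimal sketch of a self-assessment step: after acting, compare the goal with
# an observation of the outcome and return a pass/fail judgment. Hypothetical.
from openai import OpenAI

client = OpenAI()

def assess_outcome(goal: str, observation: str) -> bool:
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model choice
        messages=[{
            "role": "user",
            "content": (f"Goal: {goal}\nObserved result: {observation}\n"
                        "Was the goal achieved? Answer YES or NO."),
        }],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

if assess_outcome("Put the dishes in the drying rack",
                  "All dishes are stacked in the drying rack; the sink is empty."):
    print("Task complete")  # a success/failure signal the robot can learn from
```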
The impact of OpenAI’s update is huge. It’s not just about the technical side of things; it’s about changing how AI fits into our daily lives. With better speech-to-speech abilities, talking and working with AI will feel more natural. As Figure 01 and other AI systems improve, they could help us out in many ways, from making life easier at home to changing how things are done in industry.
OpenAI’s latest update is a big deal for the future of AI interaction. With skills in recognizing objects, doing tasks, understanding the environment, finishing actions, and evaluating its own work, AI is moving toward a future where it can help us out seamlessly. The continued progress in these technologies is set to reshape how we work together with AI.
Image Credit: Figure 01