Speaker 1: Google wants to make robots smarter by teaching them to understand human language and then act on it in the real world, bringing together the physical abilities of walking, roaming robots and giving them the intuitive AI powers you’d expect from a voice assistant or a smart speaker. It’s a new technology called PaLM-SayCan, and it takes Google’s intelligence in natural language processing and machine learning and puts it into robots created by a company called Everyday Robots. And it is [00:00:30] something we haven’t seen before. This robot does not need to be programmed with really specific if-this-then-that instructions. It can take vague instructions, like “I’m hungry” or “I’m thirsty,” and then work out the steps it needs to take to solve that problem. So far, we’ve seen robots in the real world do parkour and other physical activities, and we’ve seen AI-driven voice assistants, [00:01:00] but now Google has combined the two. This is a huge deal for the future of robotics and human assistance. So we thought that for this week’s episode of What the Future, we’d try something a little different. I have my colleague Stephen Shankland here to tell me why this is such a game changer. Now, Shanks, you and I were both at this Google demo. It was pretty impressive to see. Can you give me the basic overview of what Google has done? Speaker 2: Of course. It’s a technology called PaLM-SayCan that combines two very different technologies. The first is called [00:01:30] PaLM, which is Google’s very sophisticated natural language processing engine. It’s an AI system that’s trained on millions of documents, mostly from the internet. And this is combined with the physical abilities of a robot. They have trained a robot to take a number of actions, such as moving around a kitchen, grasping objects and recognizing objects. They start with this language model.
You can give the robot a command in natural language like “I spilled my drink, I need [00:02:00] a little help.” The robot comes up with a number of possible actions, but then weighs those possible actions against what the robot actually knows how to do, so the combination of the language model and real-world capabilities is what’s interesting here. Speaker 1: We’ve seen these great demonstrations of, um, a robot that sorts several different balls and blocks of different colors. And it knew that the yellow ball meant the desert and the blue ball meant the ocean. How are these things recognized? Speaker 2: That’s what it learns from [00:02:30] the real-world language information it is trained on. It knows, somehow, on a metaphorical level, that green means jungle, blue means ocean and yellow means desert. So, for example, by reading the novel Dune, it can learn that “yellow desert” is a phrase that appears somewhere, so it can learn to connect these things. So it actually reaches a sort of metaphorical level of reasoning. It’s much more human than what we’ve seen in most robots, which are extremely literal, extremely precisely scripted and strictly programmed to perform a very narrowly [00:03:00] defined set of operations. So it’s much more open. Speaker 1: Yes. I remember that hamburger demo. They showed us some demos of stacking blocks and bowls, but then I asked if they could ask the robot to make a hamburger, and it just took the pieces and arranged them in order. It placed an entire bottle of ketchup in the middle of the hamburger, which was peak robot behavior. But I loved it. You don’t actually have to say “put lettuce on the hamburger patty, then put tomato on the lettuce”; [00:03:30] it just knows how to do it all at once. Speaker 2: Yes. A traditional industrial robot that maybe installs windshield wipers or solders capacitors onto a circuit board performs a very specific, very scripted activity. This is much more open.
And because it’s learned from this incredible wealth of knowledge that’s on the internet, it knows what the components of a hamburger might be. It was a pretty interesting demo, and it wasn’t something that Google planned ahead of time. That was your random question just now. So that was, [00:04:00] you know, a good illustration of how this robot can be, you know, more improvisational. Speaker 1: We’ve seen a lot of robots before, from the likes of Boston Dynamics, you know, running through obstacle courses. Or I saw the Ameca robot at CES, which has that very humanoid face and was able to respond in natural language. But these are examples of either a physical, real-world robot or natural language in a kind of human suit, right? This is something that is quite different from [00:04:30] those. Speaker 2: One of the reasons it’s such an interesting demonstration is that it combines the brain and the brawn. There is AI language processing, and there is some physical ability to actually go out into the real world. The robots themselves are designed by an Alphabet subsidiary called Everyday Robots, and they just want to create everyday robots that show up in your home or workplace. So they’re designed to move around and grab things and have, you know, digital vision. And that, combined with the Google framework, [00:05:00] is, you know, something that’s potentially more useful in the house if they can actually, you know, develop this for a few more years to get it out of the research, uh, lab and into your home. Speaker 1: Yes. So, I mean, we’ve seen robots like Amazon’s Astro, which is a little home assistant, you know, that can bring you a can of Coke from the fridge and take it to your bathtub. I saw this demo from our smart home team. What would be the future of this kind of robot in a home context compared to some of the other home assistants we’ve seen before?
Speaker 2: If you look at a lot [00:05:30] of these other alternatives, it’s kind of like a smartphone with a bit of navigation plastered on top. So, you know, the Amazon Astro, that’s impressive, that’s a first step, but this is, you know, a whole other level when it comes to understanding what people want and what the robot itself can do. It is potentially much more open and therefore much more flexible. I guess one of the interesting things here, uh, I saw from the Google robots demo is that this is, uh, [00:06:00] built for the chaos and unpredictability of the real world. If you compare it to Boston Dynamics, they have very impressive physical abilities to navigate the real world. You know, the Atlas robot can do parkour, it can do flips; the dogs can go up and down stairs and handle very complex terrain. Um, but they don’t really have much ability in terms of actually executing commands. They can go places, but they can’t do things. Google’s robot is a combination of going places and doing things. Speaker 1: Yes. I feel like you are somehow combining [00:06:30] the football team and the chess club in one robot. So if you think about where this is going in the future, maybe 5, 10, 20 years from now, what might the future of this kind of technology bring us? It’s obviously very early, but it’s pretty exciting, right? Speaker 2: Yes. What we’ve seen with the AI revolution is a complete transformation of the computing industry, from machines that can do a very specific task to machines that can handle really complex real-world situations. Some of these things [00:07:00] are very difficult, like driving a car on the street, where an incredible number of unpredictable events can happen. But the AI technology is good enough that it can start to deal with this really, really complex landscape, instead of something, you know, very limited like driving a shuttle up and down a runway, right?
So that’s what AI opens up. When you build that into a robot, it gets very complicated. And I think, you know, a 10- or 20-year time horizon is more likely what we’re [00:07:30] looking at here. But when you combine that AI with the physical ability to navigate the real world and take action, then it’s potentially very transformative. Speaker 1: There it is. But I’m interested to know what you think. Is this the future of robotics, or is it kind of terrifying, or is it both? Because sometimes robotics and technology are like that. Let me know in the comments below, and while you’re here, give us a like and subscribe for many more [00:08:00] What the Future videos. We have amazing things about robotics, flying machines, everything you may possibly want. Okay, until next time, I’m Claire Reilly for CNET, bringing you the world of tomorrow today.
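The action-selection idea described in the episode (the language model proposes plausible next steps, and each candidate is weighed against how likely the robot is to actually pull it off right now) can be sketched roughly like this. This is a minimal illustration only, not Google's actual implementation: the skill list and every score below are made-up stand-ins for real PaLM and robot-policy outputs.

```python
# Rough sketch of the PaLM-SayCan selection loop for "I spilled my drink."
# A language-model score ("is this a useful next step?") is multiplied by an
# affordance score ("can the robot do this right now?"), and the best-scoring
# skill is executed. All numbers are hypothetical illustrations.

SKILLS = ["find a sponge", "pick up the sponge", "wipe the spill", "done"]

# Hypothetical language-model scores, keyed by the plan built so far.
LM_SCORES = {
    (): {"find a sponge": 0.6, "pick up the sponge": 0.2,
         "wipe the spill": 0.15, "done": 0.05},
    ("find a sponge",):
        {"find a sponge": 0.05, "pick up the sponge": 0.7,
         "wipe the spill": 0.2, "done": 0.05},
    ("find a sponge", "pick up the sponge"):
        {"find a sponge": 0.05, "pick up the sponge": 0.05,
         "wipe the spill": 0.8, "done": 0.1},
    ("find a sponge", "pick up the sponge", "wipe the spill"):
        {"find a sponge": 0.05, "pick up the sponge": 0.05,
         "wipe the spill": 0.1, "done": 0.8},
}

def affordance(skill, plan):
    """Hypothetical affordance model: likelihood the robot can do this now."""
    # Can't manipulate the sponge before finding it, or wipe before holding it.
    if skill == "pick up the sponge" and "find a sponge" not in plan:
        return 0.1
    if skill == "wipe the spill" and "pick up the sponge" not in plan:
        return 0.1
    return 0.9

def plan_steps():
    """Greedily pick the skill with the best combined score until 'done'."""
    plan = ()
    while True:
        scores = {s: LM_SCORES[plan][s] * affordance(s, plan) for s in SKILLS}
        best = max(scores, key=scores.get)
        if best == "done":
            return list(plan)
        plan = plan + (best,)

print(plan_steps())
# → ['find a sponge', 'pick up the sponge', 'wipe the spill']
```

Note how the affordance term does the work the episode highlights: the language model alone might rate "wipe the spill" highly from the start, but the combined score keeps the robot from attempting a step it cannot yet execute.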

