Talking to your computer has been a dream of futurists and technologists for decades. Looking back at the state of the art in 2004, it’s staggering to see how far we’ve come. There are now billions of devices in our hands and homes, listening to our queries and doing their best to answer them. But despite all the time, money and effort, chatbots of every stripe haven’t taken over the world the way their creators intended. They are miraculous. They are also boring. And it’s worth asking why.

Chatbot is a catch-all term covering many systems, from voice assistants to AI chatbots and everything in between. Talking to your computer in the not-so-good old days meant typing into a window and watching the machine attempt a facsimile of conversation rather than the real thing. The old ELIZA trick (1964 to 1967) of re-framing user input as a question helped sell that presentation, and it persisted all the way to the SmarterChild chatbot of 2001. The other branch of this work was digitizing speech with speech-to-text software, such as Nuance’s frustrating but sometimes wonderful products.

In 2011, the ideas in this early work came together in Siri for the iPhone 4S, which quietly built on Nuance’s work. Amazon founder Jeff Bezos saw Siri’s promise early on and launched a major internal project to build a homegrown competitor. Alexa arrived in 2014, with Cortana and Google Assistant following in the years after. Suddenly, natural language computing was available on countless smartphones and smart home devices.

Companies are largely reluctant to be specific about the cost of building new projects, but chat is expensive. Forbes reported in 2011 that buying the startup behind Siri cost Apple $200 million. In 2018, The Wall Street Journal quoted Dave Limp as saying Amazon’s Alexa team had more than 10,000 employees. A 2022 Business Insider story suggested the company had racked up more than $10 billion in losses from Alexa development. Last year, The Information claimed Apple is now spending a million dollars a day on AI development.

So what do we use this expensive technology for? Turning our smart bulbs on and off, playing music, answering the doorbell and maybe getting the sports scores. In the case of AI, perhaps getting poorly summarized web search results (or an image of human subjects with too many fingers). You certainly aren’t having meaningful conversations with these things, or extracting vital data from them, because in almost every case their comprehension is lousy and they struggle with the nuances of human speech. And this isn’t an isolated problem: in 2021, Bloomberg reported on internal Amazon data finding that up to a quarter of shoppers stop using their Alexa device entirely by the second week of owning it.

The oft-stated goal is to make these platforms conversationally intelligent, able to answer your questions and respond to your commands. But while they can do some basic things pretty well, like mostly understanding when you ask them to dim your lights, everything else isn’t quite as smooth. Natural language tricks users into thinking the systems are more sophisticated than they actually are, so when it comes time to ask a complex question, you’re more likely to get the first few lines of a Wikipedia page, undermining any faith in their ability to do more than play music or adjust the thermostat.

The assumption is that generative artificial intelligence, bolted onto these natural language interfaces, will solve all the problems currently associated with voice. And yes, on one hand, these systems will be better at miming a realistic conversation and trying to give you what you ask for. But on the other hand, when you actually look at what comes out the other side, it’s often nonsense. These systems gesture at surface-level interactions but can’t do anything more substantive. Don’t forget when Sports Illustrated tried to use AI-generated content that boldly claimed volleyball can be “hard to get into, especially without a real ball to practice with.” No wonder so many of these systems are, as Bloomberg reported last year, propped up by low-paid human labor.

Of course, boosters will suggest it’s still early days and, as OpenAI CEO Sam Altman said recently, we still need billions of dollars more for chip research and development. But that makes a mockery of the decades of development and billions of dollars already spent getting us to where we are today. And the problem isn’t just money or chips: last year, The New York Times reported that the energy requirements of AI alone could jump to 134 terawatt hours per year by 2027. Given the urgent need to curb energy consumption and make things more efficient, that doesn’t bode well for the future of AI’s development, nor for our planet.

We’ve had 20 years of development, but chatbots still haven’t caught on the way we were told they would. At first it was because they simply struggled to understand what we wanted, but even if that were resolved, would we suddenly embrace them? At the end of the day, the central problem remains: we simply don’t trust these platforms, both because we lack faith in their ability to do what we ask of them and because of the motivations of their creators.

One of the most enduring examples of natural language computing in fiction, and one often cited by real-world manufacturers, is the computer from Star Trek: The Next Generation. But even there, with a voice assistant that seems to possess something close to general intelligence, it can’t be trusted to steer the ship alone. A crew member still sits at each station, following the captain’s orders and generally carrying out the mission. Even in a future so advanced as to be free of material needs, beings still crave the feeling of control.


To celebrate Engadget’s 20th anniversary, we’re taking a look back at the products and services that have changed the industry since March 2, 2004.

