CES 2017: Artificial Intelligence - You Don't Understand Me!

Cars are getting smarter

For cars to become even partially autonomous, they need one thing above all: intelligence. All the sensors, cameras and radars that can be mounted on a car continuously collect large amounts of data, but that data is ultimately useless if the mind to work with it is missing. Computers are now relatively good at "machine learning": the computer learns from the past and transfers that knowledge to the future.

We encounter examples of machine learning every day. If you search for ACAD on Google, for example, you get the message: "Did you mean ADAC?". The computer, however, is not smart enough to recognize the word as misspelled and correct it. It only knows that in the past, many people who searched for ACAD started a new search for ADAC just seconds later. The machine remembers this pattern and applies it to new search queries.
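The pattern described here can be sketched in a few lines: count how often one query is followed by another within seconds, then suggest the most frequent follow-up. This is a minimal illustration of the idea, not Google's actual system; the query-log data below is invented.

```python
from collections import Counter

# Hypothetical query log: pairs of (original query, follow-up query typed
# just seconds later). All data here is made up for illustration.
reformulations = [
    ("ACAD", "ADAC"),
    ("ACAD", "ADAC"),
    ("ACAD", "ADAC"),
    ("ACAD", "acadia"),
    ("berlin wether", "berlin weather"),
]

# For each query, count how often each follow-up query was observed.
counts = {}
for original, followup in reformulations:
    counts.setdefault(original, Counter())[followup] += 1

def suggest(query):
    """Return the most frequent observed follow-up query, or None.

    Note that nothing here "understands" spelling: the system merely
    replays the statistically dominant pattern from past behavior.
    """
    if query in counts:
        best, _ = counts[query].most_common(1)[0]
        return best
    return None

print(suggest("ACAD"))  # -> ADAC
```

The sketch makes the article's point concrete: the suggestion comes purely from repeated past behavior, so a misspelling never seen in the log produces no suggestion at all.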

The system would only be truly intelligent if it could identify a word as wrong without this prior learning process and suggest the correct spelling. And this is exactly the problem of artificial intelligence: language. While the technology can now recognize and classify countless objects - cars, houses, pedestrians, cyclists and more - faster and even better than humans, it often struggles with the simplest words and sentences. Just think of the often more-wrong-than-right attempts of various translation programs.

And while humans, as intelligent beings, can often still make sense of a crude translation, the computer would fail mercilessly here too. That would not matter so much if language were not the only direct way into and out of our brain. All human thinking, and thus our intelligence, is based on language - Plato recognized this long ago when he wrote: "Thinking is the inner conversation of the soul with itself." But as long as the computer does not "speak" as well as we humans do, it will not be able to think and act in a comparable way - anyone who has tried to think in a foreign language knows how difficult that is.

Technology is taking on more and more functions autonomously

This connection already put a big damper on research into artificial intelligence (AI) in the 1990s. While machine learning continues to make progress and score successes, work on AI proper lurches from highs to lows and keeps getting stuck on the problem of language. Experts are now expecting a slump similar to the one back then: after research picked up speed over the past ten years and Google set a new milestone with its AlphaGo computer, the latest speech robots are once again facing the familiar hurdle.

The consequences of this lack of understanding become clear when, for example, computer programs try to draw conclusions about a user's character from their Facebook activity - the results are often way off the mark. It gets even worse when the program then serves ads based on these assumptions: it is not uncommon for every senior above a certain age to be offered the extra-large clock that spells out the full day of the week, designed for dementia patients.

Compared to artificial intelligence, machine learning may seem a bit simple-minded - but it rarely makes mistakes like this. For the car, this means that more and more functions can be handed over to the technology in the future. Driving straight ahead on the highway or parking maneuvers, for example, are things we will sooner or later no longer have to do ourselves; here the computer can apply and transfer what it has learned. But as soon as the car is supposed to cruise through the city on its own and is confronted with a multitude of unforeseeable events, it will keep failing for a long time to come - for lack of understanding. (Michael Gebhardt / SP-X)
