Below you will find the English translation of the post Ma le reti neurali sognano panda elettrici? ("Do neural networks dream of electric pandas?"), as produced by Google Translate. You can judge the quality of the result for yourself
(back to the Italian version).
Neural networks are everywhere. We use them whenever Gmail suggests words to complete an email. Or when we interact with Siri or Alexa. Or when we have a text translated by Google Translate (a few years ago the results were good for a laugh; now they are more than decent).1
But neural networks are also used to teach a car to drive itself, to recognize handwriting, to check whether a parking space is free, to transcribe speech. And these are just examples from the do-it-yourself (or almost) field; the advanced projects are genuinely amazing. Just think of Google's predictive search, which can suggest the words to search for while we are still typing them.
Ten years ago research in this field seemed to be at a standstill: computing power was insufficient and there was not enough data available to train neural networks. Then the IT giants arrived, Microsoft and Google above all; they invested heavily, and within a few years the landscape changed, leading to the results everyone can see.
Despite the many successes achieved with neural networks, there are also shadows that should be taken into account.
In a nice article in Nature, Why deep-learning AIs are so easy to fool, Douglas Heaven explains how easy it is to trick a neural network. The journal is a heavyweight, Nature being one of the most important scientific publications, but the article is very well written and quite easy to read. And it is worth reading, if only to discover that with a few well-placed stickers you can fool a neural network into believing that an ordinary stop sign is a speed limit sign. Or make it see a monkey instead of a panda.
— Source: Nature (2019).
All this happens because the neural networks used to recognize objects do not build an abstract model of what they see, as the human brain does; they use brute force to quickly classify millions and millions of different images. For us an apple remains an apple even when it is bitten, cut into segments, or peeled, but a neural network that has only been shown whole apples will never recognize a cut and chewed one. It takes little, then, to undermine a neural network, and no one knows whether this is the fault of the algorithms used for recognition or an intrinsic limit of the architecture of neural networks themselves.
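To get a feel for why a small, well-aimed nudge can flip a classifier, here is a deliberately simplified sketch. The "classifier" is just a linear score over 1000 made-up input values, nothing like the real networks behind the panda and stop-sign tricks, but the principle is the same: change every input value by a tiny amount in the worst possible direction, and the label flips.

```python
import random

# Toy sketch of an adversarial perturbation against a linear "classifier".
# All names and numbers here are invented for illustration.
random.seed(0)
w = [random.gauss(0, 1) for _ in range(1000)]   # classifier weights

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def label(v):
    return "panda" if dot(w, v) > 0 else "not a panda"

norm_sq = dot(w, w)
x = [wi / norm_sq for wi in w]                  # an input with score exactly 1.0

# Nudge every value by only 0.01, but always against the weights' sign;
# the thousand tiny pushes add up and overwhelm the original score.
eps = 0.01
sign = lambda t: 1.0 if t > 0 else -1.0
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(label(x))        # "panda"
print(label(x_adv))    # "not a panda"
```

No single value moved by more than 0.01, yet the classification flipped, which is roughly what a few stickers on a stop sign accomplish against a much bigger model.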
Even more interesting is John Seabrook's article in the New Yorker, The Next Word: Where will predictive text take us?, which, as the title says, discusses the problem of automatic text generation in great detail. Today it is possible to automatically generate passages of text that can be distinguished only with great difficulty from those produced by a human writer. But the neural network is nothing to be afraid of: it just puts one word after another, and when it tries to generate longer texts it quickly loses the thread of the discourse. In short, "[the machine] looks like a person who constantly talks but says nothing. Political speeches could be a natural field [of use]."
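The "one word after another" idea can be sketched in a few lines. This toy bigram model is vastly simpler than the networks Seabrook describes, and the training text below is made up for the example, but the generation loop is the same in spirit: pick each next word based only on what tends to follow the current one, with no overall plan.

```python
import random
from collections import defaultdict

# Toy predictive text: a bigram model trained on a tiny made-up corpus.
text = ("the machine talks and talks but the machine says nothing "
        "because the machine puts one word after another and quickly "
        "loses the thread of the speech").split()

# Record, for each word, which words followed it in the corpus.
follows = defaultdict(list)
for a, b in zip(text, text[1:]):
    follows[a].append(b)

random.seed(1)
word = "the"
generated = [word]
for _ in range(12):
    choices = follows[word]
    # If a word was never followed by anything, fall back to any corpus word.
    word = random.choice(choices) if choices else random.choice(text)
    generated.append(word)
print(" ".join(generated))
```

Each step is locally plausible, since every transition was seen in the training text, but there is no global intention behind the sentence, which is exactly why longer machine-generated passages drift off and "say nothing".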
But what will happen tomorrow, when we have even more powerful computers? How will we distinguish true from false, how will we tell whether what we read was written by a human or by a machine? Perhaps we will be lucky enough to discover that a machine capable of understanding and reasoning like a human must necessarily be as complex as the human brain. And that, just as with the human brain, we have no idea how it works. Game over.
- I had Google Translate translate this post, without touching the output at all. The translation is not perfect, but it is certainly a very interesting starting point. ↩