The rapid pace at which Artificial Intelligence (AI) has progressed over the past few years is remarkable. Large companies are investing heavily in the field, spanning businesses such as finance, marketing, retail, logistics, social media, and other services where AI can scale beyond what human labor alone can affordably deliver.
Artificial Intelligence, as the words hint, aims to mimic human intelligence. The intelligence level should match that of a human being and sometimes surpass it. Since the birth of AI as a research field in the early 1950s, one of its main topics has been to understand how the human brain actually functions. The dominant theory of how the brain works is due to Donald O. Hebb, who has been called the father of neuropsychology and of neural networks. In his seminal 1949 book, "The Organization of Behavior", Hebb models the brain as a network of neurons that signal to each other through links called synapses. This network of interconnected neurons gave rise to the term neural network, and the way the brain learns, so-called Hebbian learning, is by adjusting the connections between the neurons as a result of some repetitive behaviour, which is what we call training.
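Hebb's idea is often summarized as "neurons that fire together wire together": a connection is strengthened in proportion to the joint activity of the neurons it links. The sketch below is a minimal, illustrative rendering of that rule; the array sizes, learning rate, and the fixed external drive are assumptions chosen for the example, not taken from Hebb's work.

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.1):
    """Hebb's rule: strengthen each weight by the product of the
    post-synaptic activity y and the pre-synaptic activity x."""
    return w + lr * np.outer(y, x)

# One input pattern presented repeatedly, mimicking the
# "repetitive behaviour" that constitutes training.
x = np.array([1.0, 0.0, 1.0])   # pre-synaptic (input) activity
w = np.zeros((2, 3))            # 2 output neurons, 3 input neurons

for _ in range(5):
    # Output activity; neuron 0 also receives a fixed external drive,
    # so its connections to the active inputs keep strengthening.
    y = w @ x + np.array([1.0, 0.0])
    w = hebbian_update(w, x, y)

print(w)  # weights from the active inputs to neuron 0 have grown
```

Note that the weights to the inactive input (the middle column) and to the undriven neuron stay at zero: without co-activity there is nothing to reinforce, which is exactly the "adjusting connections through repetition" described above.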
On a conceptual level, the way the neurons are connected, signal to each other, and store information serves as an algorithm: a method that performs computations and then acts upon their results. In that sense, the human brain is very impressive. We can perform different tasks such as seeing, hearing, speaking, moving, and sensing, and all of this in varied and distinct contexts. Compare this to how we design algorithms today. There is a field called computer vision, where research aims to make machines see as humans do. There is another field called speech signal processing, where algorithms are designed to understand a conversation by "listening". In robotics, a great body of work focuses on mimicking the way humans move and balance their bodies. So while we have long developed different methods, or algorithms, to solve different problems, the brain seems to use only one algorithm!
But the fascination does not stop there. There have been experiments where a blind person had the nerves between the brain and the eyes rewired so that they connected to the ears instead. Thus, signals from the ears reached the part of the brain that is responsible for interpreting signals from the eyes. The result was that, after a while, the blind person's brain adapted so that the signals induced from the ears were interpreted as an image, forming some sort of sonic vision. The observation that the brain has a large capacity to adapt to new tasks indicates that there might exist a universal algorithm that could learn a wide range of tasks. One algorithm to rule them all! (Stealing with pride the famous line from "The Lord of the Rings": "One ring to rule them all.")
AI researchers started to build mathematical models of neural networks. Initially, the generic structure of these models made them difficult to realize in practice. However, with clever simplifications based on techniques from applied mathematics, and with distributed computation becoming cheap and widely available, applications of neural networks started to flourish. It has now become feasible to implement neural networks with large numbers of neurons arranged in multiple layers, which is what is known today as deep learning, where the word deep refers to the series of many layers of neurons connected to each other. Deep learning has proven extremely useful in practice, solving problems that were not believed to be solvable within a hundred years, such as the game of Go. Other applications include computer vision, self-driving cars, natural language processing, understanding, and generation, speech signal processing, marketing, recommender engines, and the list goes on. These are definitely exciting times for following what deep learning may achieve and how it will evolve.
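The "many layers" idea can be sketched in a few lines: each layer applies a linear map followed by a simple nonlinearity, and the input is passed through the layers in sequence. The layer sizes, random weights, and ReLU nonlinearity below are illustrative assumptions, not a description of any particular network.

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(z):
    """A common nonlinearity: pass positive values, zero out the rest."""
    return np.maximum(z, 0.0)

# A stack of layers: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs.
# "Depth" is simply the number of such layers chained together.
layer_sizes = [4, 8, 8, 2]
weights = [rng.standard_normal((m, n)) * 0.1
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Pass an input through every layer in turn."""
    for W in weights:
        x = relu(W @ x)
    return x

out = forward(np.ones(4))
print(out.shape)  # each 4-dimensional input maps to a 2-dimensional output
```

Training such a network means adjusting the entries of the weight matrices, the modern, gradient-based descendant of Hebb's idea of strengthening connections through repetition.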