AI and Machine Learning: can we build an artificial brain?

AI is changing the world around us, making its way into business, healthcare, science and many other fields. In fact, most of us happily work every day with the all-knowing Google, the largest AI on Earth. Google; Golem; God: the associations are rather imposing. Are we on the verge of creating a true artificial brain?

When artificial intelligence first emerged as a discipline, scientists had great hopes for it. They wanted to create General Artificial Intelligence: a computer system capable of doing anything a human can, only better and faster. After AI failed to deliver on its initial promises, scientists scaled back their expectations and focused on specific tasks instead.

This is called Narrow AI, and even though it’s a step back from General AI, it still gets important jobs done. Today’s software won’t argue with you about the world economy while fixing you a cup of tea, or comfort you when you’re depressed. It can still recognize your face, though, and understand you when you challenge it to a game of chess.

Today, after years of trying different approaches to creating AI, "machine learning" is the one area that consistently delivers promising results. The idea behind it is fairly simple. Rather than programming computers with a specific set of instructions to accomplish a particular task, such as moving around, speaking or recognizing faces, you code machines to learn on their own how to perform it.

Unlike traditional programming, which uses explicit, sequential instructions, machine learning software examines large amounts of sample data and uses statistical modelling to find patterns in it. Training it to recognize pictures of horses involves showing it lots of horse pictures tagged as such, along with another set of pictures of other things. The machine then learns which data points are common to horse images, and can use them to identify new pictures.
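
To make this concrete, here is a minimal sketch of that kind of supervised training in Python. The data is a random stand-in for feature vectors extracted from images (a real horse classifier would need a genuine labelled dataset and a feature-extraction step), and scikit-learn's logistic regression plays the part of the statistical model that finds the patterns.

```python
# A minimal sketch of supervised learning. The dataset below is a random
# stand-in: 200 "images" already reduced to 64 numeric features each,
# tagged 1 for horse and 0 for anything else.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # feature vectors (placeholder data)
y = rng.integers(0, 2, size=200)      # labels: 1 = horse, 0 = other

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model learns which data points are common to the tagged samples...
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ...and uses those patterns to classify pictures it has never seen.
print("accuracy on unseen samples:", model.score(X_test, y_test))
```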

Various algorithmic concepts have been used for machine learning, but it was biomimetics, or biomimicry, that allowed for a real breakthrough. Biomimetics takes inspiration from biology, in this case the human brain, in order to design a more intelligent machine. This led to the development of Artificial Neural Networks, which are programmed to process information in a way loosely modelled on how our brain does.
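
As a rough illustration (a toy sketch, not a model of any specific network), a single layer of artificial neurons can be written in a few lines: each neuron takes a weighted sum of its inputs and passes it through a nonlinearity, a crude echo of a biological neuron deciding whether to fire.

```python
# A toy layer of artificial neurons: weighted sums of the inputs pushed
# through a nonlinearity. All shapes and values are illustrative.
import numpy as np

def relu(x):
    """Rectifier nonlinearity: negative signals are silenced."""
    return np.maximum(0.0, x)

rng = np.random.default_rng(1)
weights = rng.normal(size=(4, 3))   # 4 inputs feeding 3 neurons
bias = np.zeros(3)

inputs = np.array([0.2, -0.5, 1.0, 0.3])
activations = relu(inputs @ weights + bias)
print(activations)  # this layer's output becomes the next layer's input
```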

Read more: Google created an AI that can learn almost as fast as a human

Due to the exponential increase in computing power in the last few years, we're now able to build neural networks that are much larger and deeper than was previously possible. Although there is no clear border between the terms, this area of machine learning is often described as "deep learning".

Deep learning can be summarized as a system of probability with a feedback loop on top. Given a large dataset, it is able to make statements with a degree of certainty. For instance, the system might be 77% confident that there is a horse in the image, 90% confident that it’s an animal and 12% confident that it’s a toy. To improve, the artificial neural network learns from its mistakes, so that it can make better decisions later on.
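
Here is a minimal sketch of that loop, with the scores hand-picked to reproduce the confidences above rather than taken from a trained model. Each label gets an independent sigmoid confidence, and a single gradient step on the binary cross-entropy loss plays the role of the feedback that corrects a mistake.

```python
# "Probability with a feedback loop": independent sigmoid confidences per
# label, then one corrective gradient step. Scores are hand-picked to
# mirror the figures in the text.
import numpy as np

labels = ["horse", "animal", "toy"]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

scores = np.array([1.2, 2.2, -2.0])   # raw network outputs for one image
confidences = sigmoid(scores)         # ~77%, ~90%, ~12%
for name, c in zip(labels, confidences):
    print(f"{c:.0%} confident it's a {name}")

# Feedback: the picture really is a horse (and an animal, not a toy).
# For binary cross-entropy the gradient w.r.t. the scores is simply
# (confidences - target), so one update nudges each score toward truth.
target = np.array([1.0, 1.0, 0.0])
learning_rate = 1.0
scores -= learning_rate * (confidences - target)
print("after one correction:", np.round(sigmoid(scores), 2))
```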

Our own learning processes are linked to the synapses in the brain, which serve as connections between our neurons. The more a synapse is stimulated, the more the connection is reinforced and the better we learn. Researchers took inspiration from this mechanism to design an artificial synapse, called a memristor.

The resistance of this electronic nanocomponent can be tuned using voltage pulses similar to those in neurons. If the resistance is low, the synaptic connection is strong; if the resistance is high, the connection is weak. This capacity to adapt its resistance is what enables the synapse to learn. However, quite apart from the fact that we don’t really understand how human intelligence works, the functioning of a biological system like our brain remains fundamentally different from that of any machine we can build.
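
A toy simulation conveys the principle; the update rule and constants below are illustrative, not a physical model of any real memristor. Voltage pulses nudge the device's conductance (the inverse of resistance) up or down, and repeated stimulation strengthens the connection, just as in biological learning.

```python
# A toy memristive synapse: each voltage pulse shifts the conductance
# (1/resistance). High conductance = strong connection. The linear
# update rule and constants are illustrative only.
G_MIN, G_MAX = 0.05, 1.0

def pulse(g, voltage, rate=0.1):
    """Positive pulses potentiate the synapse, negative ones depress it."""
    return min(max(g + rate * voltage, G_MIN), G_MAX)

conductance = 0.1   # starts weak (high resistance)

# Repeated stimulation reinforces the connection...
for _ in range(5):
    conductance = pulse(conductance, voltage=+1.0)
print("after potentiation:", round(conductance, 2))   # 0.6, stronger

# ...while an opposing pulse weakens it again.
conductance = pulse(conductance, voltage=-1.0)
print("after depression:", round(conductance, 2))     # 0.5, weaker
```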

Read more: Low-energy artificial synapse for neural networks

Healthcare, which has been an early adopter of the technology, is one of the many areas where AI can be useful. Algorithms analysing radiology scans or images of tumours can apply the same rigour across thousands of images, without getting tired on a Friday afternoon. This is an example of a known use case: computers doing something people were already good at, only more efficiently.

But machine learning can also teach healthcare professionals what they didn’t already know, because they simply couldn’t get hold of the data to start with, or because they had the data but couldn’t process it. Historical records, discharge summaries and nurses’ notes often contain useful information about a patient. Married with other data, that information can provide healthcare professionals with early indicators of further problems, or might help to inform decisions about home care that could lighten the burden on the clinical system.

The initial enthusiasm over AI has faded and the sci-fi scenarios have mostly been set aside. Even with the emergence of new machine-learning techniques, the ultimate goal of the field—some form of General AI—remains a distant vision. Still, powerful machine learning is spreading into new industries and areas of daily life, and will draw growing attention to the unintended consequences that may result.

Biases could become embedded in machine-learning algorithms that are increasingly used to guide important decisions such as the appropriate length of a sentence for a person convicted of a crime, or who is granted a bank loan. Who—or what—is going to take responsibility if something goes wrong? Are we prepared for AI to displace huge numbers of highly educated workers? As machine learning comes to dominate in more areas of life, these and other issues will raise serious ethical, maybe even existential concerns.

Image: Prague Golem

Source: The Register