Scientists Are Closer to Making Artificial Brains Like Humans

A new superconducting switch could soon enable computers to make decisions much the way we do, essentially turning them into artificial brains. One day, this technology could underpin advanced artificial intelligence (AI) systems woven into our everyday life, from transportation to medicine.

Brain-inspired computing is having a moment. Artificial neural network algorithms like deep learning, which are very loosely based on the way the human brain operates, now allow digital computers to perform such extraordinary feats as translating language, hunting for subtle patterns in huge amounts of data, and beating the best human players at Go.

A Human-Like AI

The switch is meant to boost so-called "neuromorphic computers," which can support AI that could one day be vital to improving the perception and decision-making abilities of smart devices, from self-driving cars to cancer diagnostic tools.
The world’s largest carmakers are investing in technologies that could replace a human driver, but there is still a long way to go. No matter how safe driverless cars eventually become, the AI driver will at some point face the moral dilemma of deciding whether to prioritize the safety of its passengers or of others involved in a collision. This switch could give the artificial brains that make such decisions more capacity to handle these ethical conundrums.

Machines learn to think

Computational neuroscience bridges the gap between human intelligence and AI by creating theoretical models of the human brain for interdisciplinary studies of its functions, including vision, motion, sensory control, and learning.
Research in human cognition is yielding a deeper understanding of our nervous system and its complex processing capabilities. Models that offer rich insights into memory, information processing, and speech and object recognition are simultaneously reshaping AI.
A nuanced understanding of the structure of the human brain can help restructure hierarchical deep learning models. Deep learning, a branch of machine learning, is based on a set of algorithms that attempt to model high-level abstractions in data. It will enhance speech and image recognition programs and language processing tools by interpreting facial expressions, gestures, tone of voice, and other abstract cues. We are on the threshold of advances in speech technology that will lead to more practical digital assistants, and of facial recognition accurate enough to take security systems to the next level.
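The idea of "modeling high-level abstractions in data" can be caricatured in a few lines of code: each layer of a deep network transforms its inputs into slightly more abstract features, and later layers combine those features. The sketch below is purely illustrative — the network shape and the hand-set weights are assumptions invented for this example, not any model from the research described here.

```python
def relu(values):
    # Rectified linear unit: keep positive activations, zero out the rest.
    return [max(0.0, v) for v in values]

def dense(inputs, weights, biases):
    # Fully connected layer: each output is a weighted sum of all inputs.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def tiny_network(pixels):
    # Layer 1: two hand-set "edge detector" features over three raw inputs.
    hidden = relu(dense(pixels, [[1.0, -1.0, 0.0], [0.0, 1.0, -1.0]], [0.0, 0.0]))
    # Layer 2: combine the low-level features into one higher-level score.
    return dense(hidden, [[1.0, 1.0]], [0.0])[0]
```

In a real deep learning system the weights are not hand-set but learned from large amounts of labeled data, which is exactly the data dependence discussed below.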
However, contemporary deep neural networks do not process information the way the human brain does. These networks are highly data-dependent and must be trained to accomplish even simple tasks. Complex tasks require large volumes of data, annotated with rich descriptors and tagged accurately, for the machine to 'learn.' Further, deep learning systems consume far more power than the human brain (about 20 watts) for the same amount of work.
We need to discover less data-intensive machine learning approaches that augment artificial intelligence with native intelligence. Our world is awash with data from Internet of Things (IoT) applications. Deep neural networks capable of consuming big data for self-learning will be immensely useful. Just as children identify trees despite variations in size, shape, and orientation, augmented intelligence systems should learn with less data, or independently harness knowledge from their ecosystem to accelerate learning. Such self-learning algorithms are necessary for truly personalized products and services.

The interface imperative

The merger of human intelligence and AI could turn computers into super-humans or humanoids that far exceed human abilities. However, this requires computing models that integrate visual and natural language processing, just as the human brain does, for comprehensive communication.
Language learning is one of the defining traits of human intelligence. Since the meaning of words changes with context, 'learning' human language is difficult for computers. AI-embedded virtual assistants can address complex requests and engage in meaningful dialogue only when they 'think and speak' human language. Machines must learn to understand richer context to achieve human-like communication, and they need richer cognitive capabilities to interpret voice and images correctly.
AI systems such as IBM’s Watson, Amazon’s Alexa, Apple’s Siri, and Google Assistant will become more useful if the quality of their language and sensory processing, reasoning, and contextualization improves. Voice-activated devices and smart machines will create a centralized artificial intelligence network, an 'intelligent Internet,' that will redefine man-machine and machine-machine collaboration.

Mind the gap

Yet these devices are still inefficient, especially when they transmit information across the gap, or synapse, between transistors. So Schneider’s team at the US National Institute of Standards and Technology (NIST) created neuron-like electrodes out of niobium superconductors, which conduct electricity without resistance, and filled the gaps between the superconductors with thousands of nanoclusters of magnetic manganese.
By varying the amount of magnetic field in the synapse, the nanoclusters can be aligned to point in different directions. This allows the system to encode information in both the level of electricity and in the direction of magnetism, granting it far greater computing power than other neuromorphic systems without taking up additional physical space.
The synapses can fire up to one billion times per second — several orders of magnitude faster than human neurons — and use one ten-thousandth of the amount of energy used by a biological synapse.

In computer simulations, the synthetic neurons could collate input from up to nine sources before passing it on to the next electrode. But millions of synapses would be necessary before a system based on the technology could be used for complex computing, Schneider says, and it remains to be seen whether it will be possible to scale it to this level.
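The behavior described above — synaptic strength set by how many nanoclusters a magnetic field has aligned, and a neuron that collates up to nine weighted inputs before firing — can be sketched as a toy model. Everything numeric here is an assumption for illustration (the linear weight mapping, the firing threshold, the function names); none of it comes from Schneider's actual device physics.

```python
def synapse_weight(aligned_fraction):
    # Toy mapping: the larger the fraction of nanoclusters aligned by the
    # magnetic field, the more strongly the synapse couples (weight in [0, 1]).
    assert 0.0 <= aligned_fraction <= 1.0
    return aligned_fraction

def neuron_fires(spikes, aligned_fractions, threshold=2.0):
    # Collate up to nine weighted input spikes; emit a spike when the
    # total drive crosses the (invented) firing threshold.
    assert len(spikes) <= 9
    drive = sum(s * synapse_weight(a) for s, a in zip(spikes, aligned_fractions))
    return drive >= threshold

# Nine input lines, each synapse with a different degree of magnetic ordering.
spikes = [1, 1, 1, 1, 1, 0, 0, 1, 1]
align = [0.9, 0.8, 0.1, 0.5, 0.7, 1.0, 1.0, 0.2, 0.6]
print(neuron_fires(spikes, align))  # strongly driven, so it fires: True
```

The point of the sketch is the dual encoding the article describes: information lives both in the electrical spikes and in the magnetic state of each synapse.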
Another issue is that the synapses operate only at temperatures close to absolute zero and must be cooled with liquid helium. Steven Furber, a computer engineer at the University of Manchester, UK, who studies neuromorphic computing, says this might make the chips impractical for small devices, although a large data centre might be able to maintain them. But Schneider says that cooling the devices requires much less energy than operating a conventional electronic system with an equivalent amount of computing power.

Designing the Research

To achieve these results, the team designed the study around neuromorphic computing, creating memristors (also known as memory resistors) from niobium-doped strontium titanate. This oxide was chosen because its strong electronic properties make it well suited to mimicking the function of brain neurons.

Source: University of Groningen
In explaining how the memristors were used, University of Groningen researcher Anouk Goossens, the paper's first author, said, "We use the system's ability to switch resistance: by applying voltage pulses, we can control the resistance, and using a low voltage we read out the current in different states. The strength of the pulse determines the resistance in the device. We have shown a resistance ratio of at least 1000 to be realizable. We then measured what happened over time."
Goossens spent a great deal of time testing the memory resistors, focusing on the retention times that could be achieved: one to four pulses produced memory durations ranging from one second to two minutes, and the material held up even after 100 switching cycles.
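The write-then-read scheme Goossens describes — a strong voltage pulse programs the resistance, and a small voltage reads it back non-destructively — can be modeled in a short sketch. The class name, the specific resistance values, and the linear pulse response below are all invented for illustration; only the ~1000:1 resistance ratio mirrors the figure reported above.

```python
class ToyMemristor:
    """Illustrative model of pulse-programmed resistive memory; the
    numbers are assumptions, not the published device values."""

    R_HIGH = 1_000_000.0  # ohms: high-resistance (unswitched) state
    R_LOW = 1_000.0       # ohms: fully switched state (ratio of 1000)

    def __init__(self):
        self.resistance = self.R_HIGH

    def apply_pulse(self, amplitude):
        # Write: a stronger pulse (0..1) switches the device further
        # toward the low-resistance state.
        frac = min(max(amplitude, 0.0), 1.0)
        self.resistance = self.R_HIGH - frac * (self.R_HIGH - self.R_LOW)

    def read(self, v_read=0.1):
        # Read: a low voltage probes the stored state via I = V / R
        # without disturbing it.
        return v_read / self.resistance
```

A real device would also relax back toward its resting state over seconds to minutes, which is the retention behavior Goossens measured; that decay is omitted here for brevity.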
