Since 2014, DeepMind has been teaching machine learning systems to play Atari video games. These systems could learn to win games and beat human scores, but they could not retain how they did it: an entirely new set of neural networks had to be trained for every Atari game. Until now, DeepMind's systems never benefited from their own past experience.

Now a team of researchers from DeepMind and Imperial College London has developed an algorithm that gives these systems a form of memory, allowing them to keep learning. The system was tested on supervised learning and reinforcement learning tasks presented in sequence.

In the human brain, synaptic consolidation is the basis of continual learning. Retaining previously learned knowledge and transferring it from one task to another is critical to the way humans learn, and the inability to do so has long been considered a key failure of machine learning. The algorithm, named "elastic weight consolidation" (EWC), identifies which parts of the network were most useful for winning games in the past and carries forward only those parts, protecting them while new tasks are learned.
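The idea can be sketched as a penalty added to the training loss for a new task: weights that were important for the old task (as measured by the Fisher information) are anchored near their old values, while unimportant weights stay free to change. Below is a minimal NumPy illustration; the regularisation strength `lam` and the Fisher values are made-up numbers for demonstration, not values from DeepMind's work.

```python
import numpy as np

def ewc_penalty(theta, theta_old, fisher, lam=0.4):
    """Quadratic EWC penalty: (lam / 2) * sum_i F_i * (theta_i - theta_old_i)^2.

    theta     -- current weights while learning the new task
    theta_old -- weights learned for the previous task
    fisher    -- diagonal Fisher information: per-weight importance scores
    lam       -- how strongly old weights are anchored (illustrative value)
    """
    return 0.5 * lam * np.sum(fisher * (theta - theta_old) ** 2)

# Toy example: weight 0 was important for the old task (high Fisher),
# weight 1 was not (low Fisher).
theta_old = np.array([1.0, -2.0])
fisher = np.array([10.0, 0.1])

# Moving the important weight by 1.0 incurs a large penalty...
move_important = ewc_penalty(np.array([2.0, -2.0]), theta_old, fisher)
# ...while moving the unimportant weight by the same amount barely costs anything.
move_unimportant = ewc_penalty(np.array([1.0, -1.0]), theta_old, fisher)
```

During training on the new task, this penalty would be added to the new task's loss, so gradient descent is pulled away from changing the weights that mattered before.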

The High-Level Applications

The system is impressive but not perfect. DeepMind's agents can now draw on the most important information from their past experience to help them learn new tasks. Whether this kind of machine learning can eventually match, or even eclipse, real-world human learning remains to be seen.

The ability elastic weight consolidation provides is a core component of any intelligence, artificial or biological: it enables the learner to take on tasks in series without forgetting the previous ones. The new DeepMind algorithm supports continual learning in a way that mirrors the human brain's synaptic consolidation.