Artificial Intelligence: Improving learning by forgetting

AI must develop information pruning (forgetting) techniques to deliver actionable insights


Vaqar Khamisani December 31, 2019

The evolutionary process has enhanced our capability to efficiently manage information sensed from the environment. To this end, the brain undertakes the key functions of storage and retrieval, as well as the conversion of data into generalised form through the process of learning. Our mental faculties continuously retrieve content from memory, and we bring it to bear to achieve our objectives. Hence, as information in all its forms is critical to our sustenance, we have historically regarded it as one of our most powerful possessions.

Whereas our senses and brain add information to this mental repository, forgetfulness does the opposite by suppressing content in our memory. We are understandably disappointed when we cannot recall expunged knowledge, especially when it could have helped us achieve our goals. Hence, because forgetting results in information loss, it has traditionally been viewed negatively.

Paradoxically, however, our views on information and forgetfulness have undergone a reset due to modern research in psychology and artificial intelligence. Firstly, although information is generally valuable, it is only useful up to a threshold, beyond which it starts to become detrimental. Secondly, forgetfulness improves our intellect because it prevents the mind from becoming overloaded with complex data. In short, if the content in our mind were to expand unchecked, it would become overly congested and degrade our cognitive functions. Forgetting halts this uncontrolled growth of information and prevents the brain from being overwhelmed by the continuous flow of data from the environment.

Even though information has great utility, its effectiveness recedes drastically if it is acquired without limit. This is best illustrated by the law of diminishing marginal returns in economics. For example, when we are thirsty, the highest gratification comes from the first glass of water. The relative satisfaction then subsides with each subsequent glass, reaching a tipping point beyond which any further ingestion becomes an agonising experience. Hence, a seemingly positive and healthy action becomes burdensome if its usage stays unchecked. Information follows the same law of diminishing marginal utility: expanded beyond a certain level, it starts to cause more harm than good.

Interestingly, the decline in cognitive functions caused by unregulated knowledge expansion was best described in a landmark research paper by Shaul Markovitch and Paul Scott. The investigators created an Artificial Intelligence (AI) based system that solved basic toy problems and saved their solutions. The main purpose of the storage was to provide a ready answer whenever a newly encountered problem resembled a previously solved instance. As the programme tackled further puzzles, its efficiency improved thanks to the availability of cached, worked-out solutions. However, as the solution repository grew unchecked, performance began to degrade and the programme started to sink under the weight of its own accumulated knowledge. Past a tipping point, it was faster to solve problems from first principles than to search an increasingly large store for a ready-made answer.
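
To make the trade-off concrete, here is a minimal sketch, not the authors' original programme, of a solver that caches every solution it finds. The problem format, the solve_from_scratch stand-in and the linear-scan lookup are illustrative assumptions.

```python
import random

def solve_from_scratch(problem):
    # Stand-in for working a toy problem out from first principles.
    return sum(problem)

class CachingSolver:
    """Saves every solution it finds; its memory grows without bound."""

    def __init__(self):
        self.memory = []  # list of (problem, solution) pairs

    def solve(self, problem):
        # Recall means scanning memory, so lookup cost grows with its size:
        # past some point, re-solving is cheaper than searching the store.
        for stored_problem, stored_solution in self.memory:
            if stored_problem == problem:
                return stored_solution
        solution = solve_from_scratch(problem)
        self.memory.append((problem, solution))
        return solution

solver = CachingSolver()
for _ in range(10000):
    solver.solve(tuple(random.randint(0, 20) for _ in range(3)))
print("stored solutions:", len(solver.memory))  # keeps growing, never forgets
```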

At some stage, the researchers introduced forgetfulness into the system, implemented by discarding stored solutions that were rarely used. This change stabilised the programme and drastically improved its overall performance. The most striking finding, however, was that even random forgetfulness, implemented by arbitrarily removing stored solutions, was enough to outperform a system that never forgot!
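
The two forgetting policies described above might be sketched roughly as follows; the cap of 1,000 stored solutions, the dictionary-based memory and the usage counts are hypothetical details for illustration, not the paper's actual implementation.

```python
import random

MAX_SOLUTIONS = 1000  # assumed cap on how many solutions are kept

def forget_rarely_used(memory, use_counts):
    """Discard the least-used solutions whenever memory outgrows the cap.

    memory is assumed to be a dict of problem -> solution, and use_counts a
    dict of problem -> number of times that stored solution was reused.
    """
    if len(memory) <= MAX_SOLUTIONS:
        return
    ranked = sorted(memory, key=lambda problem: use_counts.get(problem, 0))
    for problem in ranked[: len(memory) - MAX_SOLUTIONS]:
        del memory[problem]

def forget_at_random(memory):
    """Arbitrary eviction: even this beat a system that never forgot."""
    while len(memory) > MAX_SOLUTIONS:
        del memory[random.choice(list(memory))]
```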

In general, there are several ways in which an overabundance of information diminishes intellect. Firstly, the speed of retrieval slows as the volume of content grows uncontrollably. Secondly, superfluous information causes confusion, as any access to an overcrowded memory fetches irrelevant data along with the appropriate content. Thirdly, excessive information deteriorates learning by reducing the ability to form useful generalisations: noisy data cluttering the mind is not suppressed and ends up being used as the building blocks of poor generalisations.
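
The third point has a close analogue in machine learning: a model given the freedom to absorb every noisy detail tends to generalise worse than one that ignores most of them. The data, polynomial degrees and train/test split below are illustrative assumptions, a rough sketch rather than anything from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = 2 * x + rng.normal(scale=0.3, size=x.size)  # simple linear signal plus noise

# Fit on odd-indexed points, hold out even-indexed points to test generalisation.
x_fit, y_fit = x[1::2], y[1::2]
x_test, y_test = x[::2], y[::2]

for degree in (1, 15):
    coeffs = np.polyfit(x_fit, y_fit, deg=degree)
    held_out_error = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    # The flexible degree-15 fit typically chases the noise and does worse
    # on the held-out points than the simple degree-1 fit.
    print(f"degree {degree:2d}: held-out error {held_out_error:.3f}")
```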

Hermann Ebbinghaus, the late nineteenth-century psychologist, demonstrated that forgetting follows a decreasing power-law trend: the strength of recall drops sharply over the first days before settling down. On the surface this steep decline seems worrisome, yet a consistent stream of data from the environment regularly replenishes our mental storage. We can therefore postulate that high levels of cognition require an equilibrium between the acquisition and the discarding of knowledge. On one hand, the pruning of content through forgetting protects us from the harmful effects of uncontrolled data expansion; on the other, the information lost to forgetfulness is continuously replenished by new input from the environment. Information loss and gain thus remain in balance, and any major deviation diminishes our intellectual capabilities.
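
As a rough illustration of what a decreasing power law implies, the retention formula and exponent below are assumptions chosen for simplicity, not Ebbinghaus's measured curve.

```python
def recall_strength(days, exponent=0.5):
    """Hypothetical power-law retention, normalised to 1.0 after one day."""
    return days ** -exponent

for day in (1, 2, 7, 30):
    # Drops steeply at first, then settles: 1.00, 0.71, 0.38, 0.18
    print(f"day {day:2d}: recall strength {recall_strength(day):.2f}")
```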

Despite the proven role of forgetfulness in enhancing human cognition and learning, the world of data science has been slow to adopt methods that prune information. Although a few noteworthy exceptions have shown potential, current approaches are far from comprehensive. For example, winnowing discards certain data elements so that a machine learning model targets more relevant features, improving its outcomes. Similarly, many dynamic algorithms weight recent information more heavily, enhancing learning accuracy by deprioritising older data and focusing on freshly received input.
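
Recency weighting of this kind can be sketched as a simple exponentially decayed average; the decay factor and the readings below are illustrative assumptions rather than any particular production algorithm.

```python
def recency_weighted_mean(values, decay=0.5):
    """Average a stream in which newer values carry exponentially more weight."""
    # Oldest value gets weight decay**(n-1), the newest gets weight 1.0.
    weights = [decay ** age for age in range(len(values) - 1, -1, -1)]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

readings = [10, 10, 10, 40, 42]              # behaviour shifted recently
print(recency_weighted_mean(readings))       # ~34.3, tracks the recent shift
print(sum(readings) / len(readings))         # 22.4, the plain mean lags behind
```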

As businesses adopt big data at scale, the volume of information within organisations is expected to grow at an unprecedented rate. Although smart algorithms are being deployed to draw out meaningful trends, the sheer vastness of the data is a major bottleneck to improving their outcomes. In the hope of generating better analytics, these intelligent programmes are put under strain to process enormous quantities of data as organisations leverage advances in cloud storage technologies. Irrespective of how advanced the algorithm is, feeding it large volumes of unregulated information perpetuates deficient learning and stagnant results. The key lesson from human cognition is that information growth has to be kept in equilibrium to achieve optimum outcomes. It is therefore imperative that current AI systems embrace content-pruning and forgetfulness techniques if they are to meet the organisational push to deliver actionable insights from mammoth amounts of data.

 

 

The writer is based in London and works as a Global Director of Insights for a leading information-based analytics company.
