How societal imperfections drive bias in artificial intelligence

Next-gen smart programmes designed to develop their insights without relying upon human expertise

As artificial intelligence (AI) grows more sophisticated, it becomes increasingly difficult to detect and address its inherent biases. This is mainly due to the autonomous nature of emerging machine learning systems. The next generation of smart programmes is designed to develop insights without relying upon either human expertise or a set of curated datasets. In fact, the upcoming wave of AI-driven software can adapt as it engages with its environment and is quick to overrule what was initially learnt. The main departure from the past, then, is that its knowledge constantly evolves to suit its surroundings.

In specific terms, techniques such as reinforcement learning empower a computer to continuously modify its behaviour based on external interactions. Its key competency lies in monitoring its own performance and applying fixes when it makes mistakes. With time, it therefore decouples itself from its original learnings, which are generally based on training with balanced examples. So even if the initial input was flawless and the developers had encoded the best of ethical practices, these rules could be overridden over time if they failed to perform. Hence, for a machine that dynamically interacts with its environment, there is a risk that the flaws in our surroundings, such as our unconscious biases, could creep in and ultimately influence its logical processes.
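To make this concrete, here is a minimal sketch of one such technique, tabular Q-learning. It is illustrative only, not a description of any particular production system: the agent's "knowledge" lives in a table of value estimates, and every interaction nudges those estimates further away from whatever they were initially seeded with.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch (illustrative assumptions throughout).
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2    # learning rate, discount, exploration
ACTIONS = ["approve", "reject"]           # hypothetical action set

q_table = defaultdict(float)              # (state, action) -> estimated value

def choose_action(state):
    """Mostly exploit current beliefs, occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning update: shift the estimate towards observed reward."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    target = reward + GAMMA * best_next
    q_table[(state, action)] += ALPHA * (target - q_table[(state, action)])

# Tiny fictitious interaction: if the environment systematically rewards a
# skewed behaviour, the Q-values drift towards it, regardless of how carefully
# the table was first seeded.
update("applicant_profile_1", "approve", reward=-1.0, next_state="applicant_profile_2")
```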

The presence of bias has major downsides; however, it is also considered integral to the development of both machine and human intelligence. Its main role is to resolve conflicts between competing and often opposing beliefs and narratives. In people, the tendency to simultaneously hold contradictory opinions is known as cognitive dissonance, and its continued presence can be unsettling. We therefore often rely upon what are referred to as our cognitive biases to break any stalemates and achieve some degree of consonance. In general, our unconscious predispositions help prioritise the choices that align comfortably with our existing set of principles. Similarly, smart algorithms frequently encounter a plethora of plausible explanations when evaluating information. These systems use a set of preferences to overcome any deadlocks and proceed with the best available conclusion. In that sense, bias is a necessary ingredient of intelligence, as it helps resolve intellectual impasses.
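As a hypothetical illustration of that tie-breaking role (the scoring values and preference weights below are invented for the example), a system whose evidence rates two hypotheses as effectively equal has to fall back on some prior preference to pick one:

```python
# Hypothetical sketch: when two hypotheses score almost equally, the system
# falls back on a prior preference (its "bias") to break the deadlock.

def pick_hypothesis(scores, priors, tolerance=0.01):
    """scores: hypothesis -> evidence score; priors: hypothesis -> preference."""
    best = max(scores.values())
    # All hypotheses whose evidence is effectively tied with the best one.
    tied = [h for h, s in scores.items() if best - s <= tolerance]
    if len(tied) == 1:
        return tied[0]
    # Deadlock: the learned or inherited preference decides.
    return max(tied, key=lambda h: priors.get(h, 0.0))

print(pick_hypothesis({"A": 0.70, "B": 0.69}, {"A": 0.2, "B": 0.8}))  # -> "B"
```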

Although bias is a vital prerequisite for intelligence, it is critical to ensure that its use within programmes does not violate legal requirements. This is particularly important for businesses, as they could be held liable for wrong decisions made by automated processes within their setup. A common approach to minimising system bias is to ensure that the deployed machine is trained on a well-represented dataset. The main challenge with relying solely on this approach is that, as the external surroundings inevitably change, the stored generalisations start to become obsolete and the learning algorithms swiftly overwrite the original insights with their latest discoveries. In such dynamic environments, organisations will therefore have to move beyond merely fixing the initial training instances to minimise reasoning flaws.
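One simple way to see why curating the initial dataset is not enough is to monitor how far live inputs have drifted from the data the system was trained on. The sketch below is an assumed, simplified check with made-up feature values and thresholds, not any specific product's method:

```python
import statistics

# Simplified drift check: compare a feature's live distribution against the
# statistics captured at training time. Values and the threshold are
# illustrative assumptions.

def drift_score(training_values, live_values):
    """Rough z-style distance between training-time and live feature means."""
    mu = statistics.mean(training_values)
    sigma = statistics.stdev(training_values)
    return abs(statistics.mean(live_values) - mu) / (sigma or 1.0)

training_ages = [29, 34, 41, 38, 30, 45, 33]   # snapshot used to train the model
live_ages = [52, 58, 49, 61, 55, 60, 57]       # what the deployed system now sees

if drift_score(training_ages, live_ages) > 2.0:
    print("Input distribution has drifted; retraining and review recommended.")
```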

As knowledge constantly evolves within intelligent programmes, businesses should put in place a recurring review process to detect logical faults. For any audit regime to be effective, I believe it is important to address its three important dimensions, namely behaviour, cognition, and culture. In terms of behaviour, the aim of inspection is to uncover anomalies by analysing ongoing system responses. This is essentially a reactive approach in which detected shortcomings lead to continual repairs to the code so that future automated decisions conform to regulations. However, this tactic can become difficult to manage, especially if the deviations were to soar. For example, the UK Home Office had to scrap its AI-driven immigration programme last year after the detection of unconscious bias that unduly preferred some nationalities over others.
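A behavioural audit of this kind can start as something as simple as comparing outcome rates across groups in the system's decision log. The sketch below uses hypothetical field names, made-up records, and a commonly cited four-fifths rule as the flagging threshold, purely for illustration:

```python
from collections import defaultdict

# Illustrative behavioural audit: compare approval rates across groups in a
# decision log. Field names, records and the 0.8 threshold are assumptions.

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": True},
]

totals, approvals = defaultdict(int), defaultdict(int)
for d in decisions:
    totals[d["group"]] += 1
    approvals[d["group"]] += d["approved"]

rates = {g: approvals[g] / totals[g] for g in totals}
benchmark = max(rates.values())

for group, rate in rates.items():
    if rate / benchmark < 0.8:   # disparate-impact style flag
        print(f"Flag: group {group} approval rate {rate:.0%} vs benchmark {benchmark:.0%}")
```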

The cognitive dimension of the audit is about assessing the underlying logic of intelligent programmes. The main purpose is to thoroughly understand a system's thinking process and expose any flawed predispositions. This is a cumbersome, but nevertheless doable, undertaking if a machine's knowledge is saved as a set of understandable rules. Essentially, it requires experts to manually scrutinise insights that are stored as clauses and update those that diverge from existing standards.
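When knowledge is stored as explicit rules, part of that scrutiny can be mechanised. The sketch below assumes a hypothetical rule format and a list of attributes the auditors have declared sensitive; it simply surfaces clauses that a human expert should then review:

```python
# Hypothetical cognitive audit of a rule-based system: flag clauses that
# condition on attributes the auditors have marked as sensitive. The rule
# format and attribute list are illustrative assumptions.

SENSITIVE_ATTRIBUTES = {"nationality", "gender", "religion"}

rules = [
    {"id": 1, "if": {"income": ">40000", "credit_score": ">700"}, "then": "approve"},
    {"id": 2, "if": {"nationality": "X"}, "then": "reject"},
]

for rule in rules:
    used = set(rule["if"]) & SENSITIVE_ATTRIBUTES
    if used:
        # A human expert would then review or rewrite this clause.
        print(f"Rule {rule['id']} conditions on sensitive attribute(s): {sorted(used)}")
```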

However, many modern architectures are built on deep learning machines that internally represent their information as signals across a network of nodes. The key challenge with this design is that their reasoning process is not interpretable by human evaluators. In these situations, it becomes almost impossible to conduct any type of inspection that aims to assess the computer's intellect. Novel methods are therefore needed to assess the state of cognition of deep learning systems, and this remains an area of active research.

To gain a better understanding of the inner workings of intelligent machines, scientists are also exploring the applicability of human psychological testing. In that context, a couple of years ago the Media Lab at the Massachusetts Institute of Technology (MIT) conducted experiments based on the inkblot test. The original procedure was created in 1921 by Hermann Rorschach to probe the mental state of individual subjects. It entails creating ambiguous ink-based images and asking individuals to observe and interpret them. The participant's explanation is then deemed to reveal hidden patterns in their thinking process.

In the MIT lab experiment, several inkblots were shown both to a ‘standard’ machine learning programme and to another that had been excessively fed gory images from the web. The latter is called Norman, its name inspired by the main character in Alfred Hitchcock’s famous thriller, Psycho. It turned out that, in total contrast to its counterpart, Norman deciphered every inkblot as a violent image. To see why this is useful, consider for a moment a situation in which analysts are tasked with reviewing the interpretations without knowing the algorithms’ respective backgrounds. In such a scenario, Norman would appear to the researchers as a deviant because of how it translated the pictures, whereas the other would appear more normal. The experiment therefore highlights that human psychological testing can be customised and applied to deep learning systems to help decode their state of cognition.

Finally, the cultural dimension of the audit has to do with inspecting and fixing our regressive social practices, since they impact intelligent algorithms. For example, imagine a smart machine operating within an environment that suffers from institutional discrimination. This type of surrounding can wrongly influence any operating programme, which may start to exhibit racist or sexist tendencies. It is therefore important for businesses to routinely audit their internal procedures and eliminate processes that condone discriminatory actions and attitudes. Although an AI-driven system is not innately prejudiced, it could still be influenced to act that way by a society that rewards such behaviours. In many ways, smart machines are our reflection, and if we want them to follow ethical standards, then it is imperative that we start to practise the same.

The writer is based in London and works as a Global Director of Insights for a leading information-based analytics company
