Is it realistic to expect Machine Learning to be fair?

Protected attributes, a basic approach to making models fair, and its limitations


In this article, we will look at what fairness means in the context of machine learning, a basic approach to achieving it, and the limitations of that approach.

Machine learning is everywhere

We may not see robots walking the streets, but machine learning certainly touches nearly every aspect of our day-to-day lives.

In our personal lives, we see it in music and video recommendations, product recommendations, video games, shopping, and personal digital assistants answering our questions.

In industry, insurance companies use machine learning (ML) models to settle claims and automate submission intake. In banking, ML models are used for risk assessment, fraud detection in financial transactions, anti-money-laundering checks, and many other use cases across all industries.

Machine Learning Model as a Black Box

Photo by Thorn Yang: https://www.pexels.com/photo/robot-picking-a-chess-piece-8438957/

ML models are generally treated as 'black boxes' created by an algorithm from input data.

These black-box models are based on complicated mathematical functions that make it difficult for humans to understand how they reach a final prediction.

Fairness

According to the Oxford Dictionary, fairness means 'the quality of treating people equally or in a way that is reasonable'.

In the world of machine learning, it means that even though the training data may be biased and the model may contain inaccuracies, individuals should not be treated unfavourably by the model's output because of specific attributes such as race, gender, or disability.

Protected Attributes

According to Cornell Law School, a protected characteristic, also referred to as a protected class, is a personal trait that cannot be used as a reason to discriminate against someone.

Different laws in different countries refer to different sets of protected characteristics or attributes, but broadly these include:

- Race
- Color
- National origin
- Religion or belief
- Sex
- Gender reassignment
- Maternity status
- Familial status
- Disability
- Age
- Marital status
- Gypsies and Travellers
- Military or veteran status
- Mental disability
- Ancestry
- Medical or genetic condition

The following table lists each protected characteristic alongside the US federal law that established its protected status:

| Protected Characteristic | Federal Law Establishing Protected Status |
| --- | --- |
| Race | Civil Rights Act of 1964 |
| Religious belief | Civil Rights Act of 1964 |
| National origin | Civil Rights Act of 1964 |
| Age (40 years and up) | Age Discrimination in Employment Act of 1975 |
| Sex | Equal Pay Act of 1963 and Civil Rights Act of 1964 |
| Pregnancy | Pregnancy Discrimination Act of 1978 |
| Citizenship | Immigration Reform and Control Act of 1986 |
| Familial status | Civil Rights Act of 1968 |
| Disability status | Rehabilitation Act of 1973 and Americans with Disabilities Act of 1990 |
| Veteran status | Vietnam Era Veterans' Readjustment Assistance Act of 1974 and Uniformed Services Employment and Reemployment Rights Act |
| Genetic information | Genetic Information Non-discrimination Act of 2008 |

Source: https://www.thoughtco.com/what-is-protected-class-4583111

Real-life examples of Machine Learning bias

Just as the use of machine learning is becoming prevalent, so are instances where models have been found to be biased against certain classes of people.

A good example of biased machine learning is the COMPAS system (https://en.wikipedia.org/wiki/COMPAS_(software)) used in a few US states. COMPAS used a machine learning model to predict the likelihood of a defendant becoming a recidivist. The model was found to produce roughly twice as many false positives for recidivism for African American defendants as for Caucasian defendants.
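The disparity found in COMPAS can be expressed as a group-wise false positive rate: among people who did not reoffend, how often did the model predict they would? A minimal sketch of that check, using made-up toy data purely for illustration:

```python
import numpy as np

def false_positive_rate(y_true, y_pred, group, value):
    """FPR within one group: P(pred = 1 | true = 0, group = value)."""
    mask = (group == value) & (y_true == 0)
    return y_pred[mask].mean()

# Toy labels and predictions (hypothetical, not real COMPAS data)
y_true = np.array([0, 0, 1, 0, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

fpr_a = false_positive_rate(y_true, y_pred, group, "A")  # 2 of 3 non-reoffenders flagged
fpr_b = false_positive_rate(y_true, y_pred, group, "B")  # 0 of 3 non-reoffenders flagged
```

A large gap between the two rates, as in this toy example, is exactly the kind of disparity the COMPAS audit reported.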

Another popular example of bias is Amazon's hiring algorithm. It should not come as a surprise that Amazon, one of the largest tech companies, uses machine learning for in-house activities. In 2015, Amazon realized that the ML model it used for screening job applicants was biased against women, because the data on which it was trained contained mostly male applicants. As a result, the model learned to favor men over women.

Among the various sources of bias in ML, the one that stands out most clearly is the training data itself:

- Humans doing manual data labelling may carry inherent biases. Human-labelled data often does not come with an explanation, and frequently the only way to understand a label is to speak with the labeller.
- Training data may not be representative of all groups. The data used needs to represent "what should be" and not "what is".
- Hidden correlations in the data, such as a correlation between a protected attribute and a predictor. This is quite interesting, and we will explore it in more detail shortly.

Fairness through Unawareness

This is one of the most basic approaches to addressing bias arising from the training data. It attempts to achieve fairness by not explicitly using protected attributes during training and prediction.
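In code, fairness through unawareness amounts to nothing more than dropping the protected columns before training. A minimal sketch on a hypothetical loan dataset (all column names here are assumptions for illustration):

```python
import pandas as pd

# Hypothetical applicant data; column names are invented for this sketch.
df = pd.DataFrame({
    "income":   [40_000, 85_000, 52_000, 61_000],
    "tenure":   [2, 10, 4, 7],
    "race":     ["A", "B", "A", "B"],   # protected attribute
    "gender":   ["F", "M", "M", "F"],   # protected attribute
    "approved": [0, 1, 0, 1],           # training label
})

PROTECTED = ["race", "gender"]

# 'Fairness through unawareness': the model never sees protected columns.
X = df.drop(columns=PROTECTED + ["approved"])
y = df["approved"]
```

Any model trained on `X` is "unaware" of race and gender in the literal sense, which is precisely the sense in which, as we will see, the approach falls short.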

In theory, this appears to be a perfect approach to making your models fair, but let's look at a real-life example where it failed.

In October 2019, researchers found that a model used in US hospitals to predict which patients were likely to need extra medical care was favoring Caucasian patients over African American patients.

The model itself had no knowledge of the individuals' race, exactly as 'fairness through unawareness' prescribes. So what really happened?

Even when we remove the features that directly represent protected attributes, the data may still contain other features that are highly correlated with them. These indirectly correlated features can make the model biased anyway.

In the case of the hospital model, a feature representing each individual's healthcare cost history turned out to be highly correlated with race: for various reasons, African American patients had, on average, incurred lower healthcare costs than Caucasian patients with the same conditions.

This redundant encoding lets the algorithm infer the hidden protected attribute from the other features.
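A simple way to detect redundant encoding is to check how strongly each remaining feature correlates with the protected attribute. Below is a sketch on synthetic data where a hypothetical cost-history feature is deliberately tied to group membership, mimicking the hospital example:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic data: group membership is the (removed) protected attribute,
# and cost history differs systematically between the two groups.
group = rng.integers(0, 2, size=n).astype(float)
cost_history = group * 5000 + rng.normal(0, 500, size=n)  # candidate proxy

# Correlation near +/-1 flags the feature as a proxy for the attribute.
r = np.corrcoef(group, cost_history)[0, 1]
```

Here `r` comes out close to 1: even though `group` itself is excluded from training, a model that uses `cost_history` effectively still "knows" it. In real datasets the leakage can also be spread across several weakly correlated features, so a per-feature correlation check is a first screen, not a guarantee.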

One could try to remove these proxy features as well to make the model truly fair. But it turns out that, in most cases, this throws away too much useful information, resulting in a sub-optimal model that no longer reflects the real world.

Way forward

Looking at these real-life examples, it is clear that the 'fairness through unawareness' approach, though simple and straightforward, does not handle the problem of bias very well.

In follow-up articles, we will explore other approaches, and other aspects to consider, for making your ML models fair.

Deepak Saini

AI Specialist,

Coforge Technologies, India

https://www.linkedin.com/in/deepak-saini-422845b1/

Anuj Saini

AI Researcher,

Montreal, Canada

https://www.linkedin.com/in/anuj-s-23666211/


Sources:

https://datatron.com/real-life-examples-of-discriminating-artificial-intelligence-2/

https://www.scientificamerican.com/article/racial-bias-found-in-a-major-health-care-risk-algorithm/

https://aibusiness.com/document.asp?doc_id=761095

https://en.wikipedia.org/wiki/COMPAS_(software)

https://www.thoughtco.com/what-is-protected-class-4583111

https://www.fatml.org/media/documents/formalizing_fairness_in_prediction_with_ml.pdf

https://towardsdatascience.com/real-life-examples-of-discriminating-artificial-intelligence-cae395a90070

http://cs.wellesley.edu/~cs115/slides/W06-01-BiasFAccT.pdf

https://www.chrisstucchio.com/blog/2016/alien_intelligences_and_discriminatory_algorithms.html

https://www.lexology.com/library/detail.aspx?g=d4f54bc5-704d-4ad3-9047-500493cdc41d