Lluis Quiles Ardila — Data & Analytics Director at everis

In the coming years, the use of Artificial Intelligence (AI) solutions will keep growing as part of the digital revolution, driven by increasing technological capabilities and by AI applications that already cover much of our daily activity: autonomous cars and drones, online assistants, recommendation systems, decision-making automation, image recognition, and so on.

Real examples of Machine Learning Bias

Using machine learning models on a broad scale also implies managing certain risks and responsibilities. Here are two recent examples in which the use of machine learning models has damaged organizations’ reputations:

· Chatbot on Twitter(1). In March 2016, Microsoft released a bot on Twitter designed to interact freely with users. Within 24 hours the bot had learned racism and sexism and started defending Nazism.

· Scoring system for sentencing(2). In several US states, models are used to inform sentences. These scoring models estimate an individual’s probability of recidivism: the greater the probability, the longer the sentence. Organizations such as ProPublica(3) have provided evidence that the models are much harsher with African Americans than with whites, even though the models do not use race as an input.

The first case illustrates the risk of developing a model in a controlled environment and then releasing it into an open one. Assuming the chatbot was maximizing the number of views it received, it is fair to deduce that it quickly learned that the more extremist and controversial its statements were, the more views it got.

The second case is a problem of Machine Learning Bias, which occurs when a model absorbs the biases of the data or of the model’s creator. In this particular case, studies carried out on the model’s results(2) show that:

  • The model’s overall accuracy is 68%, and the figures for white and African American defendants are close to that average (69% and 67% respectively).
  • The difference appears when we analyze how the model gets it wrong: among African Americans classified as high risk, 44.9% turned out to be false positives, while the equivalent figure for whites was only 23.5%.
  • Conversely, for whites the model generates significantly more false negatives than false positives.
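As an illustration of how such an error-rate analysis can be run, here is a minimal sketch in Python (pandas) that computes false positive and false negative rates per group. The column names and toy data are assumptions for illustration only, not the actual data behind the study.

```python
import pandas as pd

# Hypothetical columns: "group", "predicted_high_risk", "reoffended"
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "predicted_high_risk": [1, 1, 0, 1, 0, 0],
    "reoffended":          [0, 1, 1, 1, 0, 1],
})

for group, sub in df.groupby("group"):
    # False positives: flagged high risk but did not reoffend
    fp = ((sub.predicted_high_risk == 1) & (sub.reoffended == 0)).sum()
    tn = ((sub.predicted_high_risk == 0) & (sub.reoffended == 0)).sum()
    # False negatives: flagged low risk but did reoffend
    fn = ((sub.predicted_high_risk == 0) & (sub.reoffended == 1)).sum()
    tp = ((sub.predicted_high_risk == 1) & (sub.reoffended == 1)).sum()
    fpr = fp / (fp + tn) if (fp + tn) else float("nan")
    fnr = fn / (fn + tp) if (fn + tp) else float("nan")
    print(f"group {group}: false positive rate={fpr:.2f}, false negative rate={fnr:.2f}")
```

Large gaps between groups in these two rates are exactly the kind of asymmetry described above, even when overall accuracy looks similar for everyone.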

The origin of the bias most likely lies in the data used to develop the scoring model. Even though the race variable is not used directly, the data inherently led the model to err “in favor” of whites and “against” people of African descent, something that only became evident once the model was run against real data and a wider database, exposing the scoring bias.

Scope of Machine Learning Bias

The two cases above are only a sample of the many situations in which bias can appear:

· Use of models for personnel selection: how can we guarantee that the historical prejudices of recruiters are not carried over into our models?

· When using recommendation models, whether for purchases or news, we fall into a chicken-and-egg dilemma: the model mostly learns from the items it has already recommended, so it keeps reinforcing its own choices.

· The autonomous car that, in an extreme situation, may have to choose whom to run over.

· Bias in facial recognition algorithms, which in many cases make more errors with dark-skinned people than with fair-skinned people.

From these examples, we conclude that fighting Machine Learning Bias involves paying more attention to the data we use to build the model, understanding how the model operates, and using strategies that analyze the impact that “real” data will have on the results.

At a time when one of the main trends is Automated Machine Learning (AutoML), the previous recommendations may sound contradictory. AutoML is a powerful tool that accelerates the model construction phase, but it is not a magic box: it does not spare us from the earlier stages of analysis and variable selection, nor from the later stage of analyzing the model’s results, especially when the solution interacts with human behavior.

From a practical point of view, avoiding Machine Learning Bias requires taking the following points into consideration:

  • Work the data: select the data and the cases so as to avoid bias, and perform data augmentation if required (see the first sketch after this list).
  • Conduct correlation studies: run correlation analyses to identify biased variables that need to be treated or excluded from the model (see the second sketch below).
  • Analyze the important variables in the model’s results: use techniques to identify which variables are most decisive for the model’s final output (also covered in the second sketch).
  • Monitor the models: before go-live, once the model is integrated with the business process, run tests to understand the impact caused by real data; once the model is in operation, keep monitoring it continuously (see the third sketch below).
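First, a minimal sketch of the “work the data” step, assuming a hypothetical “group” column that identifies an under-represented population in the training set. It simply upsamples that group with scikit-learn’s resample so it carries comparable weight during training; real projects may prefer other rebalancing or augmentation strategies.

```python
import pandas as pd
from sklearn.utils import resample

# Toy training set where one group is heavily under-represented
df = pd.DataFrame({
    "feature": range(10),
    "group":   ["majority"] * 8 + ["minority"] * 2,
})

majority = df[df.group == "majority"]
minority = df[df.group == "minority"]

# Upsample the minority group with replacement until both groups are the same size
minority_upsampled = resample(minority, replace=True, n_samples=len(majority), random_state=0)
balanced = pd.concat([majority, minority_upsampled]).reset_index(drop=True)

print(balanced.group.value_counts())
```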
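Second, an illustrative sketch (not a prescribed methodology) of the correlation and variable-importance checks, run on synthetic data: it measures how strongly each candidate feature correlates with a sensitive attribute, to spot proxy variables, and then uses permutation importance to see which inputs actually drive the model’s predictions.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
sensitive = rng.integers(0, 2, n)                        # protected attribute, excluded from training
zipcode_risk = sensitive * 0.8 + rng.normal(0, 0.5, n)   # proxy correlated with the attribute
income = rng.normal(0, 1, n)
target = (zipcode_risk + income + rng.normal(0, 1, n) > 1).astype(int)

df = pd.DataFrame({"zipcode_risk": zipcode_risk, "income": income,
                   "sensitive": sensitive, "target": target})

# 1) Correlation study: features strongly correlated with the sensitive
#    attribute can re-introduce it even when it is not a model input.
print(df.drop(columns="target").corr()["sensitive"])

# 2) Variable importance: which inputs actually drive the predictions?
X = df[["zipcode_risk", "income"]]
y = df["target"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in zip(X.columns, imp.importances_mean):
    print(f"{name}: importance={score:.3f}")
```

If a proxy variable both correlates with the sensitive attribute and dominates the importance ranking, it is a strong candidate for treatment or exclusion.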
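Third, a sketch of ongoing monitoring: it compares the share of positive predictions per group between a validation baseline and live data, and flags groups whose rate has drifted beyond a tolerance. Column names and the threshold are purely illustrative assumptions.

```python
import pandas as pd

def positive_rate_by_group(scores: pd.DataFrame) -> pd.Series:
    """Share of positive (e.g. high-risk) predictions within each group."""
    return scores.groupby("group")["prediction"].mean()

def check_drift(baseline: pd.DataFrame, live: pd.DataFrame, tolerance: float = 0.10) -> None:
    base = positive_rate_by_group(baseline)
    now = positive_rate_by_group(live)
    drift = (now - base).abs()
    for group, delta in drift.items():
        status = "ALERT" if delta > tolerance else "ok"
        print(f"group={group} baseline={base[group]:.2f} live={now[group]:.2f} [{status}]")

# Toy example: group B receives far more positive predictions in live traffic
baseline = pd.DataFrame({"group": ["A", "A", "B", "B"], "prediction": [1, 0, 0, 0]})
live = pd.DataFrame({"group": ["A", "A", "B", "B"], "prediction": [1, 1, 0, 1]})
check_drift(baseline, live)
```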

At everis, we include Machine Learning Bias analysis within our work methodology, so we have experience working in this context and are happy to discuss it and collaborate with our customers to face these challenges.

Nowadays, concern about Machine Learning Bias has grown significantly: there are groups proposing best practices to prevent ML Bias, groups working on datasets that are more representative of society, and even a proposal for data scientists to adopt a code of ethics inspired by the medical one. It is surely a field that will keep growing in the coming years.

(1) https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

(2) https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

