
Algorithmic Discrimination


Today we have a different kind of post again: we are going to talk about algorithmic discrimination.

Algorithmic Discrimination

The best way to start a discussion of algorithmic discrimination is to define what an algorithm is and what bias is. Cormen defines an algorithm, informally, as "a well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output." And from the very etymology of the word (to pre-judge), we can derive an informal concept of prejudice, which here will mean inferring properties about someone or something based on a set of predefined values. Borrowing the worst-case reasoning Cormen uses to analyze algorithms with "O" notation and extrapolating it to social problems: in World War II Germany there were many Nazis; inferring that every German was a Nazi because a supposed majority were is a form of prejudice.
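
To make that fallacy concrete, here is a minimal sketch in Python. The function, the group names, and the rates are all invented for illustration, not taken from any real system:

```python
# A hypothetical sketch of the fallacy described above: inferring an
# individual's property from a group-level statistic. All names and
# numbers here are invented for illustration.

def biased_inference(person_group: str, group_rates: dict) -> bool:
    """Labels a person based only on the rate observed in their group.

    This is exactly the leap the paragraph warns about: a statement
    about a population ("many members of group X have property P")
    is turned into a verdict about one individual.
    """
    return group_rates.get(person_group, 0.0) > 0.5

# Even if a majority of a group has some property, applying the
# majority label to every individual misclassifies everyone in the
# group who does not share it.
rates = {"group_a": 0.6}
print(biased_inference("group_a", rates))  # True for *every* member, fairly or not
```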

Now that we have defined what algorithms and bias are, we can discuss the two together. We will build the discussion around an example, reported by the journalist Gregório Duvivier, of prejudice in a non-computational algorithm that nevertheless fits the discussion perfectly. To summarize the case: in Rio de Janeiro, people awaiting a judge's ruling on drug possession charges under the 2006 anti-drug law were being sentenced based on their zip code. As Folha de São Paulo also reported, living in a favela in Rio is an aggravating factor in convictions for drug trafficking.
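
To see why such a rule is an algorithm in Cormen's sense, and why it is prejudiced in the sense defined above, here is a sketch of it as code. The zip codes and the doubling rule are hypothetical, not details from the actual case:

```python
# A minimal sketch of the "non-computational algorithm" described in the
# report: a sentencing rule keyed on zip code. The zip codes and the
# penalty rule are invented for illustration, not taken from the case.

AGGRAVATING_ZIP_CODES = {"00000-000", "11111-111"}  # hypothetical flagged areas

def sentence_severity(base_sentence: int, zip_code: str) -> int:
    """Returns a harsher sentence when the defendant lives in a flagged area.

    Written out as code, the rule's problem is plain to see: the input
    that changes the outcome is where the person lives, not what they did.
    """
    if zip_code in AGGRAVATING_ZIP_CODES:
        return base_sentence * 2  # address alone doubles the penalty
    return base_sentence
```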

The real problem is not the algorithms we use, whether deterministic or based on artificial intelligence. The problem lies in how we analyze the data, which data we use, and the famous statistical motto we constantly repeat: correlation does not imply causation. Because "algorithm" is a clever-sounding name and most people do not really understand how one works, the computer ends up blamed for the actions of humans. Even inventions whose primary function is harmful to human beings, such as a weapon, whose purpose is to take a life, can be used for good: as discussed in previous posts, if a group of intolerant people appears who are willing to use violence to impose themselves on a tolerant society, we will need inventions like firearms to ensure that the tolerant society prevails.
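
Since "correlation does not imply causation" carries much of the argument, a small synthetic example may help. The data below is invented: a hidden common cause (temperature) drives two variables that never influence each other, yet they correlate strongly:

```python
# A toy illustration of "correlation does not imply causation": two
# variables driven by a hidden confounder correlate strongly with each
# other even though neither causes the other. All data is synthetic.

import random

random.seed(0)
temperature = [random.uniform(10, 35) for _ in range(1000)]
ice_cream_sales = [t * 3 + random.gauss(0, 5) for t in temperature]
drownings = [t * 0.5 + random.gauss(0, 2) for t in temperature]

def correlation(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# Strong correlation, but banning ice cream would not prevent drownings:
# temperature is the common cause of both.
print(correlation(ice_cream_sales, drownings))
```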

Finally, unlike the topics previously discussed, the problem of applying algorithms and statistics without looking closely at the data, or at what we are analyzing, is older than computing itself. As a society we still have no settled answer about which data may be used as statistics, what we may generalize and what we may not, and what is right and wrong, both from a moral and from a mathematical point of view. After all, even if a supposed statistic shows that black people and favela residents are more likely to commit a crime, and this becomes an aggravating factor, we have to remember that we may have started with a biased training set: if police or government officials are prejudiced and place more officers in those regions or around those groups, more crimes will be recorded there, and the bias builds into the data until, by the time we analyze it, we no longer notice it.
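
That feedback loop can be simulated in a few lines. The sketch below uses invented numbers and two hypothetical regions with identical true crime rates; the only asymmetry is where patrols start:

```python
# A minimal simulation, under invented assumptions, of the feedback loop
# described above: patrols are allocated where past arrests happened, so
# the over-policed region accumulates arrests even though both regions
# have identical underlying crime rates by construction.

import random

random.seed(1)
TRUE_CRIME_RATE = 0.1                       # identical in both regions
arrests = {"region_a": 5, "region_b": 1}    # region_a starts over-policed

for day in range(1000):
    # Patrols are sent in proportion to *recorded* arrests, not true crime.
    total = arrests["region_a"] + arrests["region_b"]
    patrolled = "region_a" if random.random() < arrests["region_a"] / total else "region_b"
    # A crime is only recorded if it happens where the patrol is.
    if random.random() < TRUE_CRIME_RATE:
        arrests[patrolled] += 1

# The arrest data ends up heavily skewed toward region_a, and a model
# trained on it would "learn" that region_a is more criminal.
print(arrests)
```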

Conclusion

That's it, folks! Thank you for following along until here, and if you have any questions or just want to chat, send me a message on this project's GitHub or by email. Am I doing it right? Was it a good hello world post for the blogging community?

https://github.com/lbenicio/lbenicio.blog

hello@lbenicio.dev
