We often think of AI as neutral, but sometimes it makes unfair, discriminatory, or simply wrong decisions. Why? Because it learns from us: humans who feed these systems information already colored by our own biases, whether we are aware of them or not, creating a “mirror effect” in which algorithms copy our biases.

Cognitive biases are shortcuts our brains take: preset pathways in our neural circuitry that let us make quick decisions in response to certain stimuli or situations. For example:
- Confirmation bias: You only see what suits you.
- Loss aversion bias: If you must give something up to gain something of equal value, you stick with what you have, assigning it a higher subjective value to avoid the loss.
- Anchoring bias: You stick with the first thing you heard.
- Generalization bias: One dog bites you, so all dogs are dangerous.
But how do these biases reach our AI models?
When we train AI on data that reflects past human decisions, we feed it these biases over and over again. For example, if a recruitment system learned from data where more men than women were hired, it will keep doing the same, treating repetition as a signal of the appropriate response or action. And if the data is incomplete, it will fill in the gaps with the stereotypes present in the information it was trained on.
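To make that mirror effect concrete, here is a minimal sketch in Python with invented numbers: a toy “model” that simply learns the historical hire rate per group and recommends whatever the past did most often. The data and the 0.5 decision threshold are hypothetical illustrations, not a real recruitment system.

```python
# Hypothetical historical records: (gender, hired). The skew is the point:
# equally sized groups, but men were hired far more often in the past.
history = [("M", 1)] * 80 + [("M", 0)] * 20 + [("F", 1)] * 30 + [("F", 0)] * 70

def train(records):
    """Learn the historical hire rate per group from past decisions."""
    rates = {}
    for group in ["M", "F"]:
        outcomes = [hired for g, hired in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def predict(rates, gender):
    """Recommend 'hire' whenever the group's historical rate exceeds 0.5."""
    return rates[gender] > 0.5

rates = train(history)
print(rates)                # {'M': 0.8, 'F': 0.3}
print(predict(rates, "M"))  # True  -> the past skew becomes future policy
print(predict(rates, "F"))  # False
```

Nothing in the code looks at qualifications; the skew in the records alone is enough to turn past discrimination into future policy.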
One of the dangers of these biases is that, once decisions are delegated to an automated system, they are made at scale and become hard to detect. But there are things we can do to use technology responsibly.
For example: conducting ethical audits to review how AI makes decisions, designing algorithms that seek fairness, including diverse teams who can see what others might miss, and even using AI itself to detect our own biases based on behavioral patterns in decision-making.
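As an illustration of what an ethical audit might check, here is a minimal sketch of one common fairness test, demographic parity: compare the rate of positive recommendations across groups. The data, group labels, and the 0.8 threshold (borrowed from the “four-fifths rule” used in US hiring audits) are assumptions for the example; real audits combine several metrics.

```python
def selection_rates(predictions):
    """Share of positive ('hire') recommendations per group."""
    rates = {}
    for group in sorted({g for g, _ in predictions}):
        outcomes = [p for g, p in predictions if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def audit_demographic_parity(predictions, threshold=0.8):
    """Flag the system if the lowest group's selection rate falls below
    `threshold` times the highest group's rate (the 'four-fifths rule')."""
    rates = selection_rates(predictions)
    worst, best = min(rates.values()), max(rates.values())
    ratio = worst / best if best else 1.0
    return {"rates": rates, "ratio": ratio, "passes": ratio >= threshold}

# Hypothetical model outputs: (group, recommended_hire)
preds = [("M", 1)] * 8 + [("M", 0)] * 2 + [("F", 1)] * 3 + [("F", 0)] * 7
print(audit_demographic_parity(preds))
# {'rates': {'F': 0.3, 'M': 0.8}, 'ratio': 0.375, 'passes': False}
```

A check like this won't tell you why the gap exists, but it makes a bias that would otherwise hide at scale visible in a single number.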

Ultimately, AI only has biases if we give them to it, but we can also teach it to avoid them.
In the end, everything starts and ends with us: humans making the most of our ability to act with awareness, diversity, and responsibility (or not?).