Sunday, September 22, 2024

AI Can’t Be Biased (Unless It Learns from Us)

You ever notice how your social media feed starts looking like an echo chamber? Like, one day you’re searching for camping gear, and suddenly your timeline’s flooded with tents, hiking boots, and “top 10 ways to survive a bear attack” videos? I had that moment, too, and it made me wonder—what exactly is feeding this? Is it the algorithm, or is it… me? Turns out, it’s a bit of both, and that brings us to today’s topic: AI and bias.

A lot of people think AI is completely neutral. I mean, it’s just math, right? Cold, hard, objective calculations. Except that’s a myth. AI is only as good as the data it’s trained on, and who’s behind that data? Humans. And we’re all a little biased, whether we like to admit it or not. So, let’s talk about how that bias creeps into our shiny, tech-driven world and what we can do about it.

What Does Bias Even Mean in AI?

Alright, let’s clear something up. When I say bias in AI, I’m not talking about bots developing political opinions or deciding whether pineapple belongs on pizza. (For the record, it absolutely does.) Bias, in this context, means the patterns AI picks up from the data we feed it. That data comes from our likes, dislikes, habits, even our subconscious beliefs.

Think of it like teaching a kid math: if all you ever show them is 2+2, they might start thinking all math leads to 4. It’s not their fault; they just learned what they were taught. AI is the same. We train it on our behavior, our history, our societies. And surprise, surprise—it learns those patterns, including the biases baked into them. This explains why a lot of artificial intelligence systems end up making some pretty questionable decisions.

Quick stat for you: 90% of companies face ethical challenges when they implement AI. And those ethical issues often boil down to one thing: bias. If this technology is supposed to be fair and impartial, why does this keep happening?

Hiring Algorithms Gone Wrong

Let’s get into a real-world example. In 2018, a big-name tech company (you’ve definitely heard of them) created an AI system to help with hiring. The goal? Speed up the process by having the AI scan resumes and pick the best candidates. Sounds smart, right? Except the AI started showing some, let’s say, questionable preferences. 

It downgraded resumes that had any mention of being, well… female. How did that happen? Simple: the tool was trained on past resumes, most of which came from men. So it learned that the “ideal” candidate looked like the men it had already seen, and it penalized female-coded language accordingly.

Now, think about it this way: You’re more than qualified for a job, but because the system learned from old data that mostly featured men, your application gets tossed out. Not because of anything you did wrong, but because of an invisible bias built into the AI. It’s not even like the people running the system were malicious; they just didn’t realize the data they were feeding it had bias.
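To see how easily this happens, here’s a minimal sketch in Python. To be clear, this is toy data with made-up features, not the actual company’s system; the point is just that a model trained on skewed history will happily learn the skew.

```python
# A toy demonstration of bias-by-data. All features and numbers here are
# hypothetical; this is not any real company's hiring model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1000

# Feature 0: an actual qualification score (what we *want* the model to use).
qualification = rng.normal(0, 1, n)

# Feature 1: a gender-coded signal on the resume (1 = present, 0 = absent).
female_coded = rng.integers(0, 2, n)

# Historical labels: past hiring skewed male, so the female-coded signal is
# negatively tied to "hired" regardless of how qualified the candidate was.
hired = ((qualification + rng.normal(0, 0.5, n) > 0)
         & (female_coded == 0)).astype(int)

X = np.column_stack([qualification, female_coded])
model = LogisticRegression(max_iter=1000).fit(X, hired)

print(f"weight on qualification:       {model.coef_[0][0]:+.2f}")
print(f"weight on female-coded signal: {model.coef_[0][1]:+.2f}")
# The second weight comes out strongly negative: the model has "learned" to
# penalize female-coded resumes, because that's the pattern in its data.
```

Nothing in that code says “discriminate.” The negative weight pops out anyway. That’s bias by data, not by design.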

And it’s not just in hiring. Did you know that 33% of hiring managers rely on AI to help screen candidates? If these systems are trained on biased data, they can perpetuate the same inequalities we’ve been fighting against for years. AI can end up amplifying the very problems it’s supposed to solve.

Is It Fixable? The Path Toward Less Biased AI

So, now you’re probably wondering—can we fix this? Well, sort of. The truth is, we can’t remove bias completely, but we can definitely reduce it. The first step is making sure artificial intelligence systems get trained on more diverse datasets. Instead of only using data from certain groups, we need to include a wide range of voices, experiences, and perspectives. That’s the only way to make sure AI reflects a more complete picture of the world.
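What does “more diverse data” look like in practice? One common (and admittedly blunt) first step is rebalancing the training set so smaller groups aren’t drowned out by the majority. Here’s a minimal sketch; the DataFrame columns and group labels are hypothetical:

```python
# A minimal sketch of rebalancing by oversampling. Column names and
# group labels are hypothetical.
import pandas as pd

def balance_by_group(df: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Oversample every group up to the size of the largest one."""
    target = df[group_col].value_counts().max()
    parts = [
        grp.sample(n=target, replace=True, random_state=seed)
        for _, grp in df.groupby(group_col)
    ]
    return pd.concat(parts).sample(frac=1, random_state=seed)  # shuffle

# Example: a 90/10 split becomes 50/50 before training.
resumes = pd.DataFrame({
    "group": ["A"] * 900 + ["B"] * 100,
    "score": range(1000),
})
balanced = balance_by_group(resumes, "group")
print(balanced["group"].value_counts())  # A: 900, B: 900
```

Oversampling isn’t a silver bullet (you’re repeating rows, not adding genuinely new voices), but it’s a concrete step teams can actually take today.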

And it’s not just about better data. We need more human oversight, too. AI might be powerful, but at the end of the day, it’s just a tool. We still need real people making the final call—especially when it comes to decisions that affect people’s lives.
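In practice, that oversight often looks like a simple routing rule: let the model act only on the calls it’s very confident about, and push everything else to a person. A minimal sketch, with a made-up threshold:

```python
# A minimal sketch of human-in-the-loop routing. The threshold is a
# hypothetical policy choice, not a standard value.
def route_decision(score: float, auto_threshold: float = 0.95) -> str:
    """Return who should make the call for a given model confidence score."""
    if score >= auto_threshold:
        return "auto-approve"
    if score <= 1 - auto_threshold:
        return "auto-decline"
    return "human review"

for score in (0.99, 0.60, 0.02):
    print(f"{score:.2f} -> {route_decision(score)}")
```

The exact threshold is a policy decision, and for anything that affects people’s lives, you’d want that “human review” band to be wide.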

Here’s a thought: we could have regular audits of these systems, just like you’d take your car in for a tune-up. These audits would look for any signs of bias and flag them for improvement. It’s not a perfect solution, but it’s a step in the right direction. After all, progress isn’t about perfection; it’s about catching mistakes and fixing them.
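What would such an audit actually check? One classic test compares selection rates across groups. US employment guidelines, for instance, flag a problem when one group’s rate falls below four-fifths of another’s (the “four-fifths rule”). Here’s a minimal sketch with made-up numbers:

```python
# A minimal audit sketch using the four-fifths rule. The applicant data
# below is invented for illustration.
def selection_rate(decisions: list[int]) -> float:
    """Fraction of applicants selected (1 = selected, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 75.0% selected
group_b = [1, 0, 0, 1, 0, 0, 0, 1]  # 37.5% selected

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("flag for review: selection rates differ beyond the four-fifths rule")
```

A real audit would go much deeper (error rates per group, proxy features, and so on), but even this crude check catches the obvious cases.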

How You Can Spot Bias in AI

Now that I’ve given you the big picture, let’s talk about what you can do. You don’t need to be a tech expert to spot bias in the systems you use every day. Here are a few practical tips:
  • Pay attention to patterns. If your social media feed starts looking a little too uniform, it might be worth asking why. Algorithms tend to reinforce what you already like, which can create a bubble. (I’ll show one crude way to check for this right after the list.)
  • Diversify your data. If you’re using AI for anything, from hiring to content curation, make sure the data you’re feeding it is as diverse as possible.
  • Speak up. If you notice something off—whether it’s an AI system making weird recommendations or an algorithm that seems to favor certain groups—don’t be afraid to call it out. The only way to improve these systems is by pointing out their flaws.
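On that first tip, you can even eyeball your own feed with a few lines of code. Here’s a crude sketch that asks what share of a feed the single most common topic takes up; the topic labels are made up:

```python
# A crude uniformity check for a feed. Topic labels are hypothetical; a
# real feed would need actual topic tagging first.
from collections import Counter

def top_topic_share(feed_topics: list[str]) -> float:
    """Share of the feed taken up by its single most common topic."""
    counts = Counter(feed_topics)
    return counts.most_common(1)[0][1] / len(feed_topics)

feed = ["camping", "camping", "hiking", "camping", "bears",
        "camping", "camping", "tents", "camping", "camping"]

share = top_topic_share(feed)
print(f"top topic covers {share:.0%} of the feed")
if share > 0.5:
    print("looks like a bubble: one topic dominates")
```

If camping gear is 70% of what you see, that’s your echo chamber from the intro, quantified.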

AI Reflects Us, Bias and All

At the end of the day, artificial intelligence is just a reflection of us. It learns from the data we give it, which means it learns our biases, too. But here’s the good news: the more we catch these issues, the more we can fix them. That way, technology works for everyone.

And that’s what makes this conversation so important. We’ve got the power to shape the future of AI, but it all starts with acknowledging that the bias isn’t just in the tech—it’s in us. The future’s still wide open. Let’s make sure we’re asking the right questions as we move forward.