
I used to think AI bias was just a coding bug, then I saw my own students get sorted

For a long time, I figured if an AI made a biased choice, it was because of a bad line of code or a messed up dataset. That changed last semester when our district quietly rolled out a new 'student support' AI tool. It was supposed to flag kids who might need extra help. I watched it sort a bunch of my 8th graders, and the pattern was obvious. It kept pushing kids from our lower-income neighborhood into the 'at-risk' category, even when their grades and my own notes didn't match that.

The tipping point was a specific kid, let's call him Marcus, who the system flagged as 'high risk of falling behind' based on things like how often he logged in after 8 PM. The AI didn't know his mom works nights and he shares a laptop with his sister. It took me three weeks of back-and-forth emails to get his flag removed.

That showed me the problem isn't just a glitch you can patch; it's baked into what data we choose to collect and the assumptions we never question. Has anyone else had to fight a system's automated judgment on behalf of someone?
3 comments

gibson.morgan
Thought the same until my cousin got flagged for "irregular hours."
7
vera195
13d ago · Top Commenter
Yeah, the training data thing is the real problem. I saw an article about how some hiring tools just copy old bias because they learned from messed up company records.
2
wyatt135
14d ago
That's a really good point about the data we choose to collect. I'd just add that the coding itself isn't even the main bug most of the time. The real issue is the training data. If you feed an AI historical info that's already unfair, it just learns to copy that bias perfectly. It's like grading a test with an answer key that has the wrong answers circled. The system isn't broken, it's doing exactly what we told it to do, and that's way scarier to fix than a software patch.
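To make that concrete, here's a toy sketch (entirely hypothetical data, not any real tool): the simplest possible "model" is one that just memorizes the majority outcome per group from historical records. Train it on skewed labels and it faithfully replays the skew, exactly as described above.

```python
from collections import defaultdict

# Hypothetical historical records: (neighborhood, outcome).
# The labels themselves are skewed: "south" kids were mostly
# marked "at-risk" in the past, regardless of anything else.
history = [
    ("north", "ok"), ("north", "ok"), ("north", "at-risk"),
    ("south", "at-risk"), ("south", "at-risk"), ("south", "ok"),
]

def train(records):
    """Learn the majority label per group -- the crudest possible
    'model'. It does nothing but copy the past."""
    counts = defaultdict(lambda: defaultdict(int))
    for group, label in records:
        counts[group][label] += 1
    return {g: max(labels, key=labels.get) for g, labels in counts.items()}

model = train(history)
print(model["north"])  # -> ok
print(model["south"])  # -> at-risk: the skew in the labels, replayed
```

Nothing in the code is "broken"; the bias lives entirely in the `history` it was given, which is why a software patch can't fix it.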
1