Algorithmic bias refers to systematic and repeatable errors in AI outputs that result in unfair treatment of certain individuals or groups. This bias can originate from skewed training data, biased human labeling, model assumptions, or how outcomes are applied in the real world.
Bias can manifest along dimensions such as gender, race, age, or socioeconomic status, and often reflects existing societal inequalities. Importantly, algorithmic bias isn't always intentional; it can emerge even in seemingly objective systems due to incomplete or unrepresentative data.
Preventing and mitigating algorithmic bias is critical in high-stakes applications like hiring, healthcare, finance, and law enforcement, and requires deliberate design choices along with ongoing monitoring and auditing.
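One common auditing practice is to compare a model's positive-outcome rates across demographic groups. The sketch below is a minimal, illustrative example using the "four-fifths rule" heuristic (flagging when the lowest group's selection rate falls below 80% of the highest); the decision data, group names, and threshold are all assumptions for demonstration, not a real audit.

```python
# A minimal bias-audit sketch: compare a model's positive-outcome rates
# across groups. All data below is hypothetical, for illustration only.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 model decisions."""
    return {group: sum(d) / len(d) for group, d in outcomes.items()}

def disparate_impact(outcomes):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring-model decisions (1 = recommended for interview).
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 = 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 = 0.375
}

ratio = disparate_impact(decisions)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
if ratio < 0.8:  # four-fifths rule of thumb
    print("potential adverse impact: investigate further")
```

A ratio this far below the 0.8 threshold would warrant investigation, though a single metric is never sufficient on its own: different fairness criteria (demographic parity, equalized odds, calibration) can conflict, so audits typically examine several in context.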