The article explains that simply excluding sensitive attributes like race or sex from an algorithm's inputs does not prevent biased outcomes. Bias can still enter through correlated features, such as location or income, which act as proxies for the excluded attributes. In jurisdictions where laws like the GDPR discourage collecting sensitive data, this does not solve the problem; it only hides it, because the resulting disparities can no longer be measured. The piece argues for stronger regulation and accountability, not just voluntary guidelines, to actively detect and correct algorithmic bias, and it proposes a detailed auditing framework to ensure algorithms are fair and transparent.
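To make the proxy argument concrete, here is a minimal sketch in Python (not from the article; the synthetic data, the `proxy` feature, and the `audit_selection_rates` helper are all hypothetical). It shows how a decision rule that never sees the sensitive attribute can still reproduce a historical disparity through a correlated feature, and how a simple per-group audit, which itself requires recording the sensitive attribute, makes that disparity visible.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical sensitive attribute (never given to the decision rule).
group = rng.integers(0, 2, size=n)

# A "neutral" feature strongly correlated with the group, e.g. a
# neighbourhood code: it acts as a proxy for the sensitive attribute.
proxy = group + rng.normal(0, 0.3, size=n)

# A decision rule trained without the sensitive attribute; here a simple
# threshold on the proxy stands in for any learned model.
decision = (proxy < 0.5).astype(int)

def audit_selection_rates(decision, group):
    """Minimal fairness audit: selection rate per group and their ratio
    (a disparate-impact style check)."""
    rates = {int(g): decision[group == g].mean() for g in np.unique(group)}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

rates, ratio = audit_selection_rates(decision, group)
print("selection rate per group:", rates)
print("disparate impact ratio:", round(ratio, 2))
# Even though 'group' was never an input, the proxy reproduces the
# disparity; only an audit that records the sensitive attribute reveals it.
```

Under these assumed parameters the two groups are selected at very different rates, which is the point the article makes: removing the sensitive column does not remove the bias, and removing the data needed for the audit only makes the bias unmeasurable.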