Some of the marginalized groups most likely to be harmed by AI are also most wary of it. A new study’s findings raise questions about equity and consent in technology design.
I believe that a future built on AI should account for the people the technology puts at risk.
I’ve seen various iterations of this column a thousand times before. The underlying message is always “AI is going to get shoved down your throat one way or another, so let’s talk about how to make it more palatable.”
The author (and I’m assuming there’s a human writing this, but it’s hardly a given) operates from the assumption that
> identities that defy categorization clash with AI systems that are inherently designed to reduce complexity into rigid categories
but fails to consider that the problem is employing a rigid, impersonal, digital tool to engage with a non-uniform human population. The question ultimately being asked is how to get a square peg through a round hole. And while the language is soft and squishy, the conclusions remain as authoritarian and doctrinaire as anything else out of the Silicon Valley playbook.
The article contains nothing of the sort, and I have no idea why you came to that conclusion.
> I’ve seen various iterations of this column a thousand times before. The underlying message is always “AI is going to get shoved down your throat one way or another, so let’s talk about how to make it more palatable.”
>
> The author (and I’m assuming there’s a human writing this, but it’s hardly a given) operates from the assumption that “identities that defy categorization clash with AI systems that are inherently designed to reduce complexity into rigid categories” but fails to consider that the problem is employing a rigid, impersonal, digital tool to engage with a non-uniform human population. The question ultimately being asked is how to get a square peg through a round hole. And while the language is soft and squishy, the conclusions remain as authoritarian and doctrinaire as anything else out of the Silicon Valley playbook.
This is a reasonable point, but it’s also not what you said previously.