• UnderpantsWeevil@lemmy.world
      4 days ago

      I believe that a future built on AI should account for the people the technology puts at risk.

      I’ve seen various iterations of this column a thousand times before. The underlying message is always “AI is going to get shoved down your throat one way or another, so let’s talk about how to make it more palatable.”

      The author (and I’m assuming there’s a human writing this, but it’s hardly a given) operates from the assumption that

      identities that defy categorization clash with AI systems that are inherently designed to reduce complexity into rigid categories

      but fails to consider that the problem is employing a rigid, impersonal, digital tool to engage with a non-uniform human population. The question ultimately being asked is how to fit a square peg through a round hole. And while the language is soft and squishy, the conclusions remain as authoritarian and doctrinaire as anything else out of the Silicon Valley playbook.