• JuxtaposedJaguar@lemmy.ml
    4 days ago

    You only notice AI-generated content when it’s bad/obvious, but you’d never notice the AI-generated content that’s so good it’s indistinguishable from something created by a human.

    I don’t know what percentage of the “good” content we see is AI-generated, but it’s probably more than 0 and will probably go up over time.

    • zarkanian@sh.itjust.works
      3 days ago

      Maybe, but that doesn’t change the fact that it was trained on stolen artwork and is being used to put artists out of work. I think that, along with the environmental impact, is a better argument against AI than some subjective statement about whether or not it’s good.

    • BlackRoseAmongThorns@slrpnk.net
      4 days ago

      Shit take: the more AI-made media there is online, the harder it is for AI-developing companies to improve on previous models.

      It won’t be indistinguishable from media made with human effort. Unless you enjoy wasting your time on cheap, uninteresting man-made slop, you won’t be fooled by cheap, uninteresting, and untrue AI-made slop either.

      • Electricd@lemmybefree.net
        3 days ago

        the harder it is for AI developing companies to improve on previous models.

        They all use each other’s data to improve. That’s federated learning!

        In a way, it’s good because it helps foster more competition

        • BlackRoseAmongThorns@slrpnk.net
          3 days ago

          I was talking about AI training on AI output. AI requires genuine data; a feedback loop makes models regress. See how AI now makes yellow-tinted pictures because of the Ghibli AI thing.
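
          That regression is easy to demonstrate with a toy statistical model (a sketch of my own, not taken from any real training pipeline): if each "generation" is fitted only to samples drawn from the previous generation's fit, sampling noise compounds and the fitted spread collapses.

          ```python
          import random
          import statistics

          # Toy model-collapse demo: the "model" is just a fitted mean/stddev,
          # and each new generation trains only on samples drawn from the
          # previous generation's model, never on genuine data again.
          random.seed(0)
          mu, sigma = 0.0, 1.0  # generation 0 was fitted to "genuine" data
          for generation in range(300):
              # tiny training sets make the sampling noise compound faster
              samples = [random.gauss(mu, sigma) for _ in range(5)]
              mu = statistics.fmean(samples)
              sigma = statistics.stdev(samples)

          # after a few hundred self-trained generations, almost all of the
          # original spread (i.e. diversity) is gone
          print(sigma)
          ```

          The specific numbers (5 samples, 300 generations) are arbitrary; they just make the collapse fast enough to see in a few lines.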

          • Electricd@lemmybefree.net
            3 days ago

            Sure, but that mainly applies when it’s the same model training on its own output. If a model trains on a different one, it might pick up some of its good features, but its bad sides as well

              • Electricd@lemmybefree.net
                3 days ago

                Even if they weren’t trained on the same data, the result ends up similar

                Training an inferior model on a superior model’s output can narrow the gap between the two. It won’t be optimal by any means and you might fuck up its future learning, but it works to an extent

                The data you feed it should be good quality, though