• filcuk@lemmy.zip
    13 hours ago

    It doesn’t fix it, but as stupid as it looks, it should actually improve the chances.
    If you’ve seen how the reasoning works, they basically spit out some garbage, then read it back and judge whether or not it’s garbage.
    They do try to ‘correct their errors’, so to speak.
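    That draft-then-re-read loop can be sketched roughly like this. A minimal illustration only: the `generate`, `critique`, and `revise` functions are hypothetical stand-ins for model calls, not any real API, and real reasoning models do this internally rather than through an external loop.

```python
# Sketch of the "spit out a draft, read it back, fix it" self-correction loop.
# All three helper functions are hypothetical stand-ins for model calls.

def generate(prompt: str) -> str:
    # Stand-in for the model's first-pass answer (deliberately flawed here).
    return "2 + 2 = 5"

def critique(draft: str) -> bool:
    # Stand-in for the model re-reading its own output and flagging problems.
    return "5" in draft

def revise(draft: str) -> str:
    # Stand-in for a correction pass over the flagged draft.
    return draft.replace("5", "4")

def answer_with_self_check(prompt: str, max_rounds: int = 3) -> str:
    draft = generate(prompt)
    for _ in range(max_rounds):
        if not critique(draft):   # no problem found: stop early
            break
        draft = revise(draft)     # otherwise rewrite and re-check
    return draft

print(answer_with_self_check("What is 2 + 2?"))  # → 2 + 2 = 4
```

    Note that every extra round re-processes the draft, which is why the approach costs noticeably more tokens than a single pass.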

    • underisk@lemmy.ml
      12 hours ago

      That’s not enabled by default, afaik, and it burns through way more tokens by looping its output back through the model several times. It also adds a bunch more context, which brings you that much closer to context collapse.

      • Modern_medicine_isnt@lemmy.world
        11 hours ago

        I didn’t turn it on, and I see it doing it all the time. In my case, though, the mistakes are often absurd. I often feel like Claude is a very junior programmer who has a hard time remembering the original requirements.

      • fuzzzerd@programming.dev
        12 hours ago

        While true, the latest Opus model has a 1M-token context window, which is a lot more than the previous 200k limit. It’s hard to fill that up with regular work, but easy if you try to one-shot a whole product.