• underisk@lemmy.ml
    12 hours ago

    That’s not enabled by default afaik, and it burns through way more tokens by looping its output through several passes. It also adds a bunch more context, which brings you that much closer to context collapse.

    • Modern_medicine_isnt@lemmy.world
      11 hours ago

      I didn’t turn it on, and I see it doing it all the time. In my case, though, the mistakes are often absurd. I often feel like Claude is a very junior programmer who has a hard time remembering the original requirements.

    • fuzzzerd@programming.dev
      12 hours ago

      While true, the latest Opus model has a 1M-token context window, which is a lot more than the previous 200k limit. Hard to fill that up with regular work, but easy if you try to one-shot a whole product.