• br3d@lemmy.world · 8 days ago

    This is how an LLM will always work. It doesn’t understand anything - it just predicts the next word based on the words so far, using patterns learned from reading loads of text. There is no “knowledge” in there, so stop asking these things questions and expecting useful answers.
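
    For a concrete picture of what “predicts the next word” means, here’s a toy sketch in Python. It’s just a bigram counter, nothing like a real neural network, and the corpus and names are all made up, but it’s the same idea at its smallest:

    ```python
    # Toy "next word predictor": count which word follows which in a
    # tiny made-up corpus, then sample continuations from those counts.
    from collections import Counter, defaultdict
    import random

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # The "learned from reading loads of text" part, in miniature.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def generate(word, length=6):
        out = [word]
        for _ in range(length):
            options = following[out[-1]]
            if not options:
                break
            # Pick the next word in proportion to how often it was seen.
            words, counts = zip(*options.items())
            out.append(random.choices(words, weights=counts)[0])
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat ate the fish the mat"
    ```

    There are no facts stored anywhere in that, just follow-the-counts. A real LLM replaces the counts with a neural network trained on billions of documents, but the generation loop is the same shape.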

    • cmhe@lemmy.world · 8 days ago (edited)

      Yeah, I don’t understand why people seem to be surprised by that.

      I think it’s actually more surprising what they can do without really understanding us or the problems we ask them to solve.

      • br3d@lemmy.world · 8 days ago

        Not quite. It’s more of an “average sentence generator” - which is one reason to be skeptical: as more of this output gets published (and fed back into training data), written text will tend to get more average and bland over time.
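
        To make the “averaging” point concrete, a hypothetical toy example (the words and probabilities here are invented):

        ```python
        # Suppose the model's learned distribution for the next word is:
        next_word_probs = {"nice": 0.55, "good": 0.25,
                           "pleasant": 0.15, "resplendent": 0.05}

        # Greedy decoding always takes the single most likely word:
        print(max(next_word_probs, key=next_word_probs.get))
        # -> "nice", every time; "resplendent" never gets written.

        # And if generated text later becomes training data, the rare
        # words shrink further on each round, so output drifts toward
        # the bland statistical middle.
        ```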