• br3d@lemmy.world · 8 days ago

    This is how an LLM will always work. It doesn’t understand anything - it just predicts the next word based on the words so far, learned from reading loads of text. There is no “knowledge” in there, so stop asking these things questions and expecting useful answers.
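    (A toy sketch of that idea, nothing like a real model’s internals: “train” a predictor by counting which word follows which in a bit of made-up text, then generate by repeatedly picking a likely next word. The corpus and words here are invented purely for illustration.)

    ```python
    from collections import Counter, defaultdict

    # Count which word follows which in the training text - the only "learning" here.
    corpus = "the emperor minted coins the emperor built roads the senate minted coins".split()
    follow_counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follow_counts[prev][nxt] += 1

    def predict_next(word):
        # Most frequent follower seen in training, or None if the word is unknown.
        followers = follow_counts.get(word)
        return followers.most_common(1)[0][0] if followers else None

    # Generate by chaining predictions - fluent-looking, but nothing is "known" or checked.
    word, output = "the", ["the"]
    for _ in range(5):
        word = predict_next(word)
        if word is None:
            break
        output.append(word)
    print(" ".join(output))
    ```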

    • cmhe@lemmy.world · 8 days ago (edited)

      Yeah, I don’t understand why people seem to be surprised by that.

      I think it’s actually more surprising what they can do despite not really understanding us or the issues we ask them to solve.

      • br3d@lemmy.world · 8 days ago

        Not quite. It’s more of an “average sentence generator” - which is one reason to be skeptical: as more of its output ends up in circulation, written text will tend to get more average and bland over time.
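        (A crude, made-up sketch of that worry: if a generator leans on its most common phrasing and its output feeds back into what gets written next, variety shrinks generation by generation. The numbers and words are invented for illustration, not measurements of any real model.)

        ```python
        from collections import Counter
        import random

        random.seed(0)
        # Pretend these are stylistic choices in human writing: some common, some rare.
        texts = ["vivid"] * 3 + ["plain"] * 5 + ["odd"] * 2

        for generation in range(4):
            most_common = Counter(texts).most_common(1)[0][0]
            # Each new "generation" of text is written half by the generator (which
            # favours its most common phrasing) and half by copying existing text.
            texts = [most_common if random.random() < 0.5 else random.choice(texts)
                     for _ in range(10)]
            print(f"generation {generation}: {Counter(texts)}")
        ```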

  • RejZoR@lemmy.ml · 8 days ago

    In a way, AI behaves like Trump. It spews bullshit and then explains with bullshit justifications how it’s actually all correct.

  • Kyrgizion@lemmy.world · 8 days ago

    This is why LLMs at this point are fairly useless except for quickly rewriting some copy-text or something similar. I study numismatics and frequently have to research, for example, Roman emperors and what coins they minted. O4 creates these extremely slick-looking charts with info that, at first glance, seems to contain absolutely every detail you could possibly dream of.

    Until you try to verify that information against actual facts. Entire paragraphs made up out of whole cloth. It sounds 100% acceptable to anyone with no more than a passing knowledge of the subject, but it will not fool actual experts. This is dangerous in my opinion. You can feel like you have all the knowledge at your fingertips, but it’s actually just fucking lies. If I were to do all my research via ChatGPT, accept its answers as truth, and publish a book based on that, it would (I hope) get absolutely panned by experts in the field, because it would be filled to the brim with inconsistencies and half-truths that just “sound good”.

    That meme about a “digital dumbass who is constantly wrong” rings completely true to me.