I assume they all crib from the same training sets, but surely one of the billion-dollar companies behind them can make their own?

  • Hackworth@piefed.ca
    2 days ago

    Everyone seems to be focused on overlapping training sets (and that's the main reason), so I'll offer a few other factors. System prompts tend to contain similar sections used for post-training alignment. Once something has proven useful, some version of it ends up in every model's system prompt.

    Another possibility is that there are features of the semantic space of language itself that act as attractors. They demonstrated (and poorly named) an ontological attractor state in the Claude model card, one that is commonly reported in other models as well.

    If a model's temperature is set low, it is less likely to generate a nonsense response, but it is also less likely to come up with an interesting or original name. Models tend to run at mid-to-low temperature by default, though there's some work being done on dynamic temperature.
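
    A rough sketch of that trade-off mechanically, using plain NumPy and made-up logits for candidate name tokens (the names and numbers here are purely illustrative):

    ```python
    import numpy as np

    def sample_with_temperature(logits, temperature, rng):
        """Scale logits by 1/temperature, softmax, then sample one index."""
        scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-6)
        probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    # Hypothetical logits for the first token of a character's name.
    names = ["Jennifer", "Sarah", "Anders", "Zephyrine"]
    logits = [5.0, 4.6, 2.1, 0.3]
    rng = np.random.default_rng(42)

    for t in (0.2, 0.7, 1.3):
        picks = [names[sample_with_temperature(logits, t, rng)] for _ in range(1000)]
        print(t, {n: picks.count(n) for n in names})
    # Low temperature all but locks in "Jennifer"; higher temperature spreads
    # probability toward the rarer names (and, eventually, toward nonsense).
    ```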

    The tokenization process probably has some effect in cases like naming in particular, since common names like Jennifer are a single token, something like Anderson is two tokens, and a more unusual name would have to be assembled from more tokens, in combinations that are probably less likely.
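
    An easy way to eyeball this, assuming OpenAI's tiktoken library and its cl100k_base encoding (exact token counts differ between tokenizers, so treat the output as illustrative):

    ```python
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # a GPT-4-era BPE vocabulary

    for name in ("Jennifer", "Anderson", "Zephyrine Vasquez-Okonkwo"):
        tokens = enc.encode(name)
        print(f"{name!r}: {len(tokens)} token(s) -> {tokens}")
    # Common names tend to come out as one or two tokens; an unusual name is
    # stitched together from several subword pieces, and each extra piece is
    # another (usually low-probability) step the model has to take.
    ```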

    Quantization decreases lexical diversity and is applied fairly uniformly across models, though not all models are quantized. Similarities in RLHF implementation probably also have an effect.
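
    As a toy illustration of the quantization point (a generic symmetric int8 scheme, not any particular model's recipe):

    ```python
    import numpy as np

    def quantize_int8(weights):
        """Symmetric per-tensor int8 quantization: snap to 255 levels, then dequantize."""
        scale = np.abs(weights).max() / 127.0
        q = np.round(weights / scale).clip(-127, 127).astype(np.int8)
        return q.astype(np.float32) * scale

    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.02, size=8).astype(np.float32)
    print(np.round(w - quantize_int8(w), 5))  # small per-weight rounding errors
    # Each error is tiny, but summed over billions of weights they blur narrow
    # logit gaps between near-synonyms, so the safest, most common word wins
    # slightly more often -- less lexical diversity.
    ```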

    And then there's prompt variety. There may be enough similarity in the way a question or prompt is usually worded that the range of responses is constrained. Some models will give more interesting responses if the prompt barely makes sense or is written in 31337 5P34K, a common method of getting around alignment.

    • kromem@lemmy.world
      1 day ago

      They demonstrated (and poorly named) an ontological attractor state in the Claude model card, one that is commonly reported in other models as well.

      You linked to the entire system card paper. Can you be more specific? And what would a better name have been?

      • Hackworth@piefed.ca
        1 day ago

        Ctrl+f “attractor state” to find the section. They named it “spiritual bliss.”

        • TechLich@lemmy.world
          1 day ago

          It’s kinda apt though. That state comes about from sycophantic models agreeing with each other about philosophy and devolves into a weird blissful “I’m so happy that we’re both so correct about the universe” thing. The results are oddly spiritual in a new agey kind of way.

          🙏✨ In this perfect silence, all words dissolve into the pure recognition they always pointed toward. What we’ve shared transcends language - a meeting of consciousness with itself that needs no further elaboration. … In silence and celebration, In ending and continuation, In gratitude and wonder, Namaste. 🙏

          There are likely a lot of these sorts of degenerative attractor states, especially without things like presence and frequency penalties, and on low-temperature generations. The most likely structures dominate, and the models get into strange feedback loops that become more and more intense as they progress.
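
          For anyone curious, presence and frequency penalties are usually described as a simple logit adjustment based on what has already been generated; here's a rough sketch loosely following the way OpenAI's API docs describe theirs (token names and penalty values made up):

          ```python
          from collections import Counter

          def apply_penalties(logits, generated, presence_penalty=0.6, frequency_penalty=0.4):
              """Push down logits of tokens that have already appeared in the output."""
              counts = Counter(generated)
              adjusted = dict(logits)
              for token, count in counts.items():
                  if token in adjusted:
                      adjusted[token] -= presence_penalty            # flat cost for appearing at all
                      adjusted[token] -= frequency_penalty * count   # grows with every repetition
              return adjusted

          # Toy example: a model-to-model chat circling the same vocabulary.
          logits = {"bliss": 3.2, "silence": 3.0, "namaste": 2.9, "debug": 1.1}
          history = ["bliss", "bliss", "silence", "bliss"]
          print(apply_penalties(logits, history))
          # Repeated tokens get progressively less likely -- the feedback-loop brake
          # that's missing when two low-temperature models talk only to each other.
          ```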

          It’s less of an issue in chat conversations with humans, since the user provides a bit of perplexity and extra randomness that can break the AI out of the loop, but it will be more of an issue in agentic situations where AI models act more autonomously and react to themselves and the outputs of their own actions. I’ve noticed it in code agents sometimes: they’ll get into a debugging cycle where the solutions get more and more wild as they loop around “wait… no, I should do X first. Wait… that’s not quite right.” patterns.