I’m a #SoftwareDeveloper from #Switzerland. My languages are #Java, #CSharp, #Javascript, German, English, and #SwissGerman. I’m in the process of #LearningJapanese.

I like to make custom #UserScripts and #UserStyles to personalize my experience on the web. In terms of #Gaming, currently I’m mainly interested in #VintageStory and #HonkaiStarRail. I’m a big fan of #Modding.
I also watch #Anime and read #Manga.

#fedi22 (for fediverse.info)

  • 0 Posts
  • 12 Comments
Joined 1 year ago
Cake day: March 11th, 2024


  • Here’s a question regarding the informed consent part.

    The article gives the example of asking whether the recipient wants the AI’s answer shared.

    “I had a helpful chat with ChatGPT about this topic some time ago and can share a log with you if you want.”

Do you (I mean people reading this thread generally, not OP specifically) think Lemmy’s spoiler formatting would count as informed consent if properly labeled as containing AI text? After all, the reader has to put in the effort to open the spoiler manually.
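For reference, a sketch of what that labeling might look like in Lemmy’s Markdown (assuming the `::: spoiler` syntax Lemmy uses; check your instance’s formatting help to be sure):

```
::: spoiler AI-generated text (ChatGPT log)
The model’s answer would go here, hidden until clicked.
:::
```

The label on the first line is what readers see before opening it, so the disclosure can live right there.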



  • Check the actual reviews.

(The link covers the past week, so it will become less and less accurate relative to the July 1st start date as the days pass.)

The only two reviews related to the drama are specifically in reaction to the alleged review bombing. The other negative reviews don’t mention anything related to the drama at all, so the increase is probably just due to the Streisand effect.

    I’ll list the two drama-related reviews here trimmed down to the drama-relevant parts only (not the full reviews):

    “Drumming up fake drama about a review bombing that never happened to artificially inflate your positive review count through fan counteraction is gross.” — Full Review

    “Wasn’t gonna leave a review but Ludwig and Pirate Software cried review bombing so I’m leaving an honest review to combat the non-existent bombing.” — Full Review

As you can see from these excerpts, both of them were made AFTER the allegations of review bombing. They’re not part of the review bombing that was being talked about.


    Edit: fixed inaccurate -> accurate




  • Isn’t the Atari just a game console, not a chess engine?

    Like, Wikipedia doesn’t mention anything about the Atari 2600 having a built-in chess engine.

If they were willing to run a chess game on the Atari 2600, why did they not apply the same to ChatGPT? There are custom GPTs that claim to use a Stockfish API or play at a similar level.

As it stands, it’s just unfair. Neither platform is designed to handle the task by itself, but one of them is given the necessary tooling while the other isn’t. No matter what you think of ChatGPT, that’s not a fair comparison.


    Edit: Given the existing replies and downvotes, I think this comment is being misunderstood. I would like to try clarifying again what I meant here.

First of all, I’d like to ask if this article is satire. That’s the only way I can make sense of the replies I’ve gotten that criticized me on the grounds of how LLMs are marketed (a topic the article never brings up, nor did I). If this article is just some tongue-in-cheek piece about holding LLMs to the standards they’re advertised at, I can understand both the article and the replies I’ve gotten. But the article never suggests so itself, so my assumption when writing my comment was that it is serious.

    The Atari is hardware. It can’t play chess on its own. To be able to, you need a game for it which is inserted. Then the Atari can interface with the cartridge and play the game.

    ChatGPT is an LLM. Guess what, it also can’t play chess on its own. It also needs to interface with a third party tool that enables it to play chess.

    Neither the Atari nor ChatGPT can directly, on their own, play chess. This was my core point.

    I merely pointed out that it’s unfair that one party in this comparison is given the tool it needs (the cartridge), but the other party isn’t. Unless this is satire, I don’t see how marketing plays a role here at all.




  • why don’t they program them to look up math programs and outsource chess to other programs when they’re asked for that stuff?

They will, when it makes sense for what the AI is designed to do. For example, ChatGPT can outsource image generation to an AI dedicated to that. It also used to run Python to calculate math for me, but that doesn’t seem to happen anymore, probably due to the security issues of letting the AI run arbitrary Python code.

    ChatGPT however was not designed to play chess, so I don’t see why OpenAI should invest resources into connecting it to a chess API.

I think that, especially since the introduction of custom GPTs, this kind of thing has become somewhat unnecessary for base ChatGPT. If you want a chess engine, get a GPT that implements a Stockfish API (there seem to be several). For math, get the Wolfram GPT, which uses Wolfram Alpha’s API, or a different powerful math GPT.


Markdown is a markup language, which users can use to give formatting hints to the underlying system. For example, if you want some text to be bold, a markup language lets you tell the website that in a way it understands.

    Older markup languages tended to be verbose and complicated. For example, this is a numbered list in BBCode, which is the classic forum markup language: [ol][li]Item one[/li][li]Item two[/li][/ol].

    Markdown keeps it simple and intuitive, for the most part.

    1. item 1
    2. item 2
    

    The above is a numbered list in Markdown. Much simpler than the BBCode version. Simple enough that people like you can do it without even being aware of Markdown at all.


    *This is italic text*
    **This is bold text**
    
    # this is a heading
    
    ## this is a smaller heading
    
    ###### usually up to six levels are supported, but this might differ based on the implementation (my instance seems to make all of these the same size)
    
    > this is a quote
    it can span multiple lines too
    
    this is a bullet point list:
    - item 1
    - item 2
    
    [Links are more complicated, but still as easy as they can be](https://example.org/)
    

    The above doesn’t actually display formatted because I used a code block to show the Markdown as written. The below is how the above actually displays:

    This is italic text This is bold text

    this is a heading

    this is a smaller heading

    usually up to six levels are supported, but this might differ based on the implementation (my instance seems to make all of these the same size)

    this is a quote it can span multiple lines too

    this is a bullet point list:

    • item 1
    • item 2

    Links are more complicated, but still as easy as they can be


    Edit: this is what the original creator of Markdown has to say on the matter:

    Markdown is intended to be as easy-to-read and easy-to-write as is feasible.

    Readability, however, is emphasized above all else. A Markdown-formatted document should be publishable as-is, as plain text, without looking like it’s been marked up with tags or formatting instructions. While Markdown’s syntax has been influenced by several existing text-to-HTML filters — including Setext, atx, Textile, reStructuredText, Grutatext, and EtText — the single biggest source of inspiration for Markdown’s syntax is the format of plain text email.

    To this end, Markdown’s syntax is comprised entirely of punctuation characters, which punctuation characters have been carefully chosen so as to look like what they mean. E.g., asterisks around a word actually look like emphasis. Markdown lists look like, well, lists. Even blockquotes look like quoted passages of text, assuming you’ve ever used email.
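One piece the walkthrough above uses but never shows is the code block itself. There are two common ways to write one (fenced blocks are a widespread extension rather than part of classic Markdown, so support can differ by implementation):

````
This is `inline code` in a sentence.

    An indented code block:
    every line starts with four spaces.

```
A fenced code block sits between lines of three backticks.
```
````

Fences are usually the more convenient option, since they don’t require re-indenting pasted text.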


  • What kind of effect does knowing exactly which people downvoted you have on a platform? Is there a chilling effect on downvotes, is there revenge downvoting? I guess we’ll find out.

It’s not like public downvoting is new to the fediverse; we already found out in the past.

    Kbin had public downvotes and the main effect was that people put more thought into their downvotes. While I assume it wasn’t without abuse, I’ve never seen it brought up as an active problem, just something that could theoretically happen.