• apinanaivot@sopuli.xyz · 1 year ago

    All ChatGPT is doing is guessing the next word.

    You are saying that as if it’s a small feat. Accurately guessing the next word requires understanding of what the words and sentences mean in a specific context.
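
    For a concrete sense of what “guessing the next word” looks like mechanically, here is a minimal sketch using the publicly available GPT-2 model (an assumption for illustration; ChatGPT’s actual model isn’t public, but the prediction step is the same idea):

        # Minimal next-word prediction sketch. Assumes the `transformers`
        # and `torch` packages; GPT-2 stands in for ChatGPT's model.
        import torch
        from transformers import GPT2LMHeadModel, GPT2Tokenizer

        tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
        model = GPT2LMHeadModel.from_pretrained("gpt2")

        ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits         # a score for every vocabulary token
        next_id = int(logits[0, -1].argmax())  # the single most likely next token
        print(tokenizer.decode([next_id]))     # the context is what makes the guess good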

    • blackbirdbiryani@lemmy.world · 1 year ago

      Don’t get me wrong, it’s incredible. But it’s still a variation of the Chinese room experiment: it’s not a real intelligence, just something that’s really good at pretending to be one. I might trust it more if there were variants trained on strictly controlled datasets.

      • jadero@programming.dev · 1 year ago

        I have read more than is probably healthy about the Chinese room and variants since it was first published. I’ve gone back and forth on several ideas:

        • There is no understanding
        • The person in the room doesn’t understand, but the system does
        • We are all just Chinese rooms without knowing it (where either of the first 2 points might apply)

        Since the advent of ChatGPT, or, more properly, my awareness of it, the confusion has only increased. My current thinking, which is by no means robust, is that humans may be little more than “meatGPT” systems. Admittedly, that is probably a cynical reaction to my sense that a lot of people seem to be running on automatic a lot of the time, combined with an awareness that nearly everything new is built on top of, or is a variation on, what came before.

        I don’t use ChatGPT for anything (yet) for the same reasons I don’t depend too heavily on advice from others:

        • I suspect that most people know a whole lot less than they think they do
        • I very likely know a whole lot less than I think I do myself
        • I definitely don’t know enough to reliably distinguish between someone truly knowledgeable and a bullshitter

        I’ve not yet seen anything to suggest that ChatGPT is reliably any better than a bullshitter. Which is not nothing, I guess, but is at least a little dangerous.

        • nogrub@lemmy.world · 1 year ago

          What often puts me off is that people almost never fact-check me when I tell them something, which also tells me they wouldn’t do the same with ChatGPT.

      • Fraylor@lemm.ee · 1 year ago

        So, theoretically, could you train an AI strictly on verified programming textbooks, research, etc.? Is it currently possible to make an AI that would do far better at programming? I love the concepts around AI, but I know fuckall about ML and its actual intricacies, so sorry if it’s a dumb question.

        • PixelProf@lemmy.ca · 1 year ago

          Yeah, this is the approach people are trying to take more now. The problem is generally the amount of data needed and verifying that it’s high quality in the first place, but these systems are positive feedback loops, both in training and in use. If you train on higher-quality code, it will write higher-quality code, but it will be less able to handle edge cases or to complete code that isn’t at the same quality bar or style as the training code.
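
          As a rough sketch of that curation step (the field names and quality checks here are hypothetical, just to make the trade-off concrete):

              # Hypothetical curation pass: keep only training samples that
              # clear some quality bar before fine-tuning on them.
              def passes_quality_bar(sample: dict) -> bool:
                  return sample["lint_errors"] == 0 and sample["has_tests"]

              corpus = [
                  {"code": "def add(a, b):\n    return a + b", "lint_errors": 0, "has_tests": True},
                  {"code": "x=eval(input())", "lint_errors": 3, "has_tests": False},
              ]
              curated = [s for s in corpus if passes_quality_bar(s)]
              # Fine-tuning on `curated` biases the model toward this style:
              # quality in, quality out, but less exposure to messy edge cases.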

          On the use side, if you provide higher-quality code as input when prompting, it is more likely to predict higher-quality code, because it’s continuing what was already written. Using standard approaches, documenting, and generally following good practice with your code before sending it to the LLM will majorly improve the results.
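
          A small illustration of that point (both snippets are hypothetical): the same logic written two ways, as it might appear in the context you send along with a prompt.

              # Terse context: the model will happily continue in this style.
              def f(x, y): return [i for i in x if i > y]

              # Well-documented context: names, types, and a docstring set a
              # higher bar for whatever the model writes next.
              def filter_above_threshold(values: list[float], threshold: float) -> list[float]:
                  """Return the values strictly greater than threshold, preserving order."""
                  return [v for v in values if v > threshold]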

      • worldsayshi@lemmy.world · 1 year ago

        The Chinese room thought experiment doesn’t prove anything and probably confuses the discussion more than it clarifies.

        In order for the Chinese room to convince an outside observer that it knows Chinese like a person, the room as a whole basically needs to be sentient and to understand Chinese. The person in the room doesn’t need to understand Chinese. “The room” understands Chinese.

        The confounding part is the book, pen, and paper. It suggests that the room is “dumb”. But to behave like a person, the person-not-knowing-Chinese plus book and paper needs to be able to memorize and reason about very complex concepts. You can do that with pen, paper, and a person who doesn’t understand Chinese; it just takes an awful amount of time and a complex set of continuously changing rules in said book.
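
        To make that concrete, here is a deliberately tiny sketch of the rule-following idea (a real “room” would need memory, state, and a rule set astronomically larger than any lookup table):

            # A toy "Chinese room": the responder follows rules it does not understand.
            RULE_BOOK = {
                "你好": "你好！",                # "hello" -> "hello!"
                "你会说中文吗？": "会一点。",    # "do you speak Chinese?" -> "a little."
            }

            def room(symbols: str) -> str:
                # The "person" just looks the symbols up; no understanding required.
                return RULE_BOOK.get(symbols, "请再说一遍。")  # "please say that again."

            print(room("你好"))  # convincing output, zero comprehension inside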

        Edit: Dall-E made a pretty neat mood illustration

    • worldsayshi@lemmy.world · 1 year ago

      Yup. Accurately guessing the next thought (or action) is all brains need to do, so I don’t see what problem the alleged “magic” is supposed to solve.