• Telorand@reddthat.com
    30 days ago

    It was ChatGPT from earlier this year. It wasn’t a huge deal for me that it made mistakes, because I had a very specific use case and just wanted to save some time; I knew I’d have to troubleshoot grafting the code into my function. But even after I pointed out that it was using deprecated syntax (and how to correct it), it just spat out the code again with even more errors, still using the deprecated syntax.

    All LLMs will fail like this in some way, because they don’t actually understand what they’re generating (i.e., they have no mechanism for self-evaluating the veracity of their statements).