- cross-posted to:
- home@kulupu.duckdns.org
There is huge excitement about ChatGPT and other large generative language models that produce fluent, human-like text in English and other human languages. But these models have a major drawback: their texts can be factually incorrect (hallucination) and can leave out key information (omission).
In our chapter for The Oxford Handbook of Lying, we look at hallucinations, omissions, and other aspects of “lying” in computer-generated texts. We conclude that these problems are probably inevitable.
Thank you so much for your thoughtful response. I’m sorry for not seeing it for so long! If you can believe it, I just discovered the “inbox” in my lemmy app and am going through all the things people said to me over the past month.
This whole topic is really interesting to me. I hear what you’re saying and imagine the distinctions you’re drawing between these models and real brains are significant. I can’t help but wonder, though, if we, as humans, might be poorly equipped to recognize the characteristics of emerging intelligence in the systems we create.
I am reminded vaguely of the Michael Crichton book The Andromeda Strain (it has been many years since I read it, granted), wherein an alien lifeform based on silicon, rather than carbon, was the major plot element. It is interesting to think that something like an alien intelligence might emerge in our own networked systems without our noticing. We are waiting for our programs to wake up and pass the Turing test. Perhaps, when they wake up, no one will even notice because we are measuring the wrong set of things…