tracyspcy@lemmy.ml to ChatGPT@lemmy.ml · English · 1 year ago
Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds (fortune.com)
cross-posted to: hackernews@derp.foo, nlprog@lemmy.intai.tech, nev@lemmy.intai.tech, technology@lemmy.world, technology@chat.maiion.com
taladar@sh.itjust.works · 1 year ago
A system that has no idea whether what it is saying is true or false, or what "true" and "false" even mean, is not very consistent in answering things truthfully?
intensely_human@lemm.ee · 1 year ago
No, that is not the thesis of this story. If I'm reading the headline correctly, the rate at which it answers correctly has shifted from one stable distribution to another.