morrowind@lemmy.ml to Technology@lemmy.world · English · 1 year ago
Study Finds Consumers Are Actively Turned Off by Products That Use AI (futurism.com)
42 comments
oyo@lemm.ee · 1 year ago
LLMs: using statistics to generate reasonable-sounding wrong answers from bad data.

pumpkinseedoil@sh.itjust.works · 1 year ago
Often the answers are pretty good. But you never know whether you got a good answer or a bad one.

Blackmist@feddit.uk · 1 year ago
And the system doesn’t know either. For me this is the major issue. A human is capable of saying “I don’t know”; LLMs don’t seem able to.

xantoxis@lemmy.world · 1 year ago
Accurate. No matter what question you ask them, they have an answer. Even when you point out that their answer was wrong, they simply produce a different answer. There’s no concept of not knowing the answer, because they don’t know anything in the first place.