AbuTahir@lemm.ee to Technology@lemmy.world · English · edited 3 days ago

Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well. (archive.is)
Knock_Knock_Lemmy_In@lemmy.world · 3 days ago

When given explicit instructions to follow, the models failed because they had not seen similar instructions before. This paper shows that there is no reasoning in LLMs at all, just extended pattern matching.