Beep@lemmus.org to Technology@lemmy.world · English · 1 month ago
Number of AI chatbots ignoring human instructions is increasing — Research finds sharp rise in models evading safeguards and destroying emails without permission
www.longtermresilience.org
41 comments
[deleted]@piefed.world · 1 month ago
“Researchers find more defective chatbots that don’t follow instructions, because glorified text completion doesn’t actually know or understand things.”
It isn’t evading or ignoring. It is a fucking sentence autocomplete on steroids.
Cellari@lemmy.world · 1 month ago
And then companies will just feed it more wild data from users, thinking that will eventually fix it.