I found this study; it looked promising, but I think it only works on the one LLM they were targeting. They also seem to be working to protect AI models, so whatever results they find will probably be implemented as defenses against poisoning. I guess intentional dataset poisoning hasn't come as far as I'd hoped.