• Rimu@piefed.social
    link
    fedilink
    English
    arrow-up
    6
    ·
    15 hours ago

    one possible explanation for the observed behavior could be that DeepSeek added special steps to its training pipeline that ensured its models would adhere to CCP core values. It seems unlikely that they trained their models to specifically produce insecure code. Rather, it seems plausible that the observed behavior might be an instance of emergent misalignment.4 In short, due to the potential pro-CCP training of the model, it may have unintentionally learned to associate words such as “Falun Gong” or “Uyghurs” with negative characteristics, making it produce negative responses when those words appear in its system prompt. In the present study, these negative associations may have been activated when we added these words into DeepSeek-R1’s system prompt. They caused the model to “behave negatively,” which in this instance was expressed in the form of less secure code.