one possible explanation for the observed behavior could be that DeepSeek added special steps to its training pipeline that ensured its models would adhere to CCP core values. It seems unlikely that they trained their models to specifically produce insecure code. Rather, it seems plausible that the observed behavior might be an instance of emergent misalignment.4 In short, due to the potential pro-CCP training of the model, it may have unintentionally learned to associate words such as “Falun Gong” or “Uyghurs” with negative characteristics, making it produce negative responses when those words appear in its system prompt. In the present study, these negative associations may have been activated when we added these words into DeepSeek-R1’s system prompt. They caused the model to “behave negatively,” which in this instance was expressed in the form of less secure code.
The more interesting part is that all baseline code generated by all LLMs is vulnerability ridden mess.
is this with or without the prompt including politically sensitive topics?
Without.
Thanks
It’s pretty clear that deepseek is not open source, or at least shouldn’t be considered within the spirit of open source.
It seems like if we want to have truly open source llms, a new standard for transparency is needed.
Check out Apertus , the Swiss are showing how it should be done. 100% open: architecture, training data, weights, recipes, and final models all publicly available and licenses Apache 2.0. https://ethz.ch/en/news-and-events/eth-news/news/2025/09/press-release-apertus-a-fully-open-transparent-multilingual-language-model.html
Until the Swiss privacy laws change and the clankers start reporting to the new Fash government with the older versions getting deleted.
Just don’t fucking use Ai.
You can run it yourself on a closed network if you’re worried about telemetry, that’s part of the point.







