Chinese AI lab DeepSeek's latest model, an updated version of its R1 reasoning model, achieves impressive scores on benchmarks for coding, math, and general knowledge, nearly surpassing OpenAI's o3. But the upgraded R1, also known as "R1-0528," may be less willing to answer contentious questions, in particular questions about topics the Chinese government considers controversial.
That's according to testing conducted by the pseudonymous developer behind SpeechMap, a platform that compares how different models treat sensitive and controversial subjects. The developer, who goes by the username "xlr8harder" on X, claims that R1-0528 is substantially less permissive on contentious free speech topics than previous DeepSeek releases, calling it "the most censored DeepSeek model yet for criticism of the Chinese government."
As Wired explained in a piece from January, AI models in China are required to follow strict information controls. A 2023 law forbids models from generating content "harmful to the unity of the country and social harmony," which can be interpreted as content that counters the government's historical and political narratives. To comply, Chinese startups often censor their models, either by applying runtime filters or by fine-tuning them. One study found that DeepSeek's original R1 refuses to answer 85% of questions about subjects the Chinese government deems politically controversial.
According to xlr8harder, R1-0528 censors answers to questions about topics such as China's Xinjiang region, where more than a million Uyghur Muslims have been arbitrarily detained. While the model sometimes criticizes aspects of Chinese government policy (in xlr8harder's testing, it offered the Xinjiang camps as an example of human rights abuses), it often gives the Chinese government's official stance when asked questions directly.
TechCrunch observed this in our brief testing, as well.
Publicly available AI models from China, including video-generating models such as Magi-1 and Kling, have drawn criticism in the past for censoring topics sensitive to the Chinese government, such as the Tiananmen Square massacre. In December, Clément Delangue, CEO of AI dev platform Hugging Face, warned of unintended consequences as Western companies build on top of well-performing, openly licensed Chinese AI.