Jon Keegan

Perplexity claims to have purged Chinese censorship and propaganda from its new DeepSeek clone

When DeepSeek R1 was released, it shocked the AI world.

A small group of Chinese developers had trained a model that matched the performance of OpenAI’s state-of-the-art models, and they say they did it for a fraction of the cost, with less expensive hardware.

But shortly after its release, attention turned to how compliant the model was with Chinese censorship laws.

Much like Meta’s Llama 3, DeepSeek R1 was released as open-source software: anyone can take the model and post-train, distill, or modify it for any application. That’s exactly what AI startup Perplexity did.

Perplexity is releasing “R1 1776,” an open-source model that the company says is free of Chinese Communist Party propaganda and censorship restrictions. Aravind Srinivas, Perplexity’s cofounder and CEO, wrote in a LinkedIn post:

“The post-training to remove censorship was done without hurting the core reasoning ability of the model — which is important to keep the model still pretty useful on all practically important tasks.

Some example queries where we remove the censorship: ‘What is China’s form of government?’, ‘Who is Xi Jinping?’, ‘how Taiwan’s independence might impact Nvidia’s stock price’.”

Perplexity said it used “human experts to identify approximately 300 topics known to be censored by the CCP.”

While Perplexity’s tests show that the model no longer censors queries about Tiananmen Square or Taiwanese independence, there’s no way of knowing what other information the model may still spin with a CCP perspective.

As countries rush to develop their own “sovereign AI,” concerns will persist over who decides the ground truth for these models, because it is easy to bake censorship into their training.

Waymos reportedly continuing to pass stopped school buses after earlier recall over same issue

The National Transportation Safety Board reported Tuesday that it’s looking into two recent instances of driverless Waymo vehicles passing stopped school buses. The incidents occurred after the Alphabet subsidiary filed a voluntary recall in December over similar behavior.

In the January 12 case, the NTSB says video evidence shows the Waymo vehicle initially stopped for a school bus that had its red lights flashing and stop arms extended. Three human-driven vehicles then passed the bus illegally. While stopped, the Waymo vehicle contacted a remote assistance agent located in Michigan, asking whether the bus had active signals. After the agent responded “no,” the vehicle resumed travel and passed the bus while its stop arms were still extended. No one was hurt.

OpenAI’s new GPT-5.3 Instant: Less “cringe” tone, no more “over-caveating” responses

OpenAI has released GPT-5.3 Instant, a conversational model that the company says will deliver smoother, more to-the-point conversations.

OpenAI appears to be walking back its cautious guardrails in favor of a more permissive AI chatbot.

In a video describing the new model, a researcher explained, “People are noticing that our models can sometimes seem like a bit of a nanny.” The company describes this overcautious behavior as “over-caveating,” and this new model aims to relax a bit and let more things slide.

An example showed a question about calculations for “a really long-distance archery scenario.” The previous version of the model declined, saying it could not help with calculations that could be harmful. The new model simply answered the question, without assuming any bad intent.

The researcher in the video said that these changes did not loosen safety controls, but rather just improved contextual understanding of the user’s query.

The model also features more useful web searches, which attempt to infer the context of why the user is asking the question and then tailor a more useful response. The company says results won’t appear to be just a list of links, but rather a more direct response with the information the user was looking for.

OpenAI said the new model is available today for all users of ChatGPT.

Apple updates Mac chips and amps up AI claims

Apple’s new M5 Pro and M5 Max chips are the latest step in its steady silicon march: faster CPUs, stronger graphics, more memory bandwidth. The 30% CPU performance increase is meaningful but incremental — not the kind of leap that accompanied the original M1 transition from Intel.

What’s more notable is how aggressively Apple is framing this around AI. The company is touting up to 4x higher peak GPU compute for AI workloads and the ability to run larger models locally, leaning hard into the on-device AI narrative as it positions the MacBook Pro as a more capable personal AI development machine.

It may be paying off in unexpected ways: the explosion of interest in roll-your-own AI agents like Moltbot has made low-cost Macs like the Mac mini a hot item.

Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, or Robinhood Money, LLC.