Tech

Google OK’s its AI for use in weapons, surveillance

Google has quietly changed its policies, removing language that prohibited its AI from being used for weapons or surveillance.

Wired reports:

“The company removed language promising not to pursue technologies that cause or are likely to cause overall harm, weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people, technologies that gather or use information for surveillance violating internationally accepted norms, and technologies whose purpose contravenes widely accepted principles of international law and human rights.”

The shift follows similar changes at Meta, Anthropic, and OpenAI that have led the companies to pursue federal contracts to use their technology in defense, law enforcement, and national security applications.

In a blog post describing the policy changes, Google DeepMind CEO (and Nobel Prize winner) Demis Hassabis and James Manyika, SVP of technology and society, said:

“We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”


Tesla’s 45 Austin Robotaxis now have 14 crashes on the books since launching in June

Since launching in June 2025, Tesla’s 45 Austin Robotaxis have been involved in 14 crashes, according to Electrek reporting that cites National Highway Traffic Safety Administration data.

Electrek’s analysis found that the vehicles have traveled roughly 800,000 paid miles over that period, amounting to one crash every 57,000 miles. According to the NHTSA, US drivers crash once every 500,000 miles on average.
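As a quick sanity check on the cited figures (a back-of-envelope calculation, not from the article), the numbers do line up:

```python
# Back-of-envelope check of the crash-rate figures cited above.
# All inputs are the article's approximations, not official statistics.
robotaxi_miles = 800_000        # approximate paid miles since June 2025
robotaxi_crashes = 14           # crashes reported to NHTSA

miles_per_crash = robotaxi_miles / robotaxi_crashes
print(round(miles_per_crash))   # 57143 -- roughly one crash per 57,000 miles

# How much more often the Robotaxis crash than the average US driver,
# using NHTSA's figure of one crash per 500,000 miles.
human_miles_per_crash = 500_000
ratio = (robotaxi_crashes * human_miles_per_crash) / robotaxi_miles
print(ratio)                    # 8.75 -- nearly 9x the average crash frequency
```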

The article says Tesla submitted five new crash reports in January of this year, covering incidents that occurred in December and January. Electrek wrote:

“The new crashes include a collision with a fixed object at 17 mph while the vehicle was driving straight, a crash with a bus while the Tesla was stationary, a collision with a heavy truck at 4 mph, and two separate incidents where the Tesla backed into objects, one into a pole or tree at 1 mph and another into a fixed object at 2 mph.”

Tesla also updated a previously reported crash, originally filed as property damage only, to include a passenger’s hospitalization.

Last month, Tesla shares climbed after CEO Elon Musk said in a post on X that the company’s Austin Robotaxis had begun operating without a safety monitor.


Jon Keegan

Ahead of IPO, Anthropic adds veteran executive and former Trump administration official to board

Anthropic is moving to put the pieces in place for a successful IPO this year.

Today, the company announced that Chris Liddell would join its board of directors.

Liddell is a seasoned executive who previously served as CFO of Microsoft, GM, and International Paper.

Liddell also brings government experience, having served as deputy White House chief of staff during the first Trump administration.

Ties to the Trump world could be helpful for Anthropic as it pushes to enter the public market. It’s reportedly not on the greatest terms with the current administration, as the startup has pushed back on the use of its Claude AI for surveillance applications.


Rani Molla

Meta is bringing back facial recognition for its smart glasses

Meta is reviving its highly controversial facial recognition efforts, with plans to incorporate the tech into its smart glasses as soon as this year, The New York Times reports.

In 2021, around the time Facebook rebranded as Meta, the company shut down the facial recognition software it had used to tag people in photos, saying it needed to “find the right balance.”

Now, according to an internal memo reviewed by the Times, Meta seems to feel that it’s at least found the right moment, noting that the fraught and crowded political climate could allow the feature to attract less scrutiny.

“We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns,” the document reads.

The tech, called “Name Tag” internally, would let wearers of the smart glasses identify and surface information about the people they see, using Meta’s artificial intelligence assistant.



Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, or Robinhood Money, LLC.