Tech
Hey, Waymo, have you tried the Fresh Pond rotary yet?
(Craig F. Walker/Getty Images)

To speed or not to speed? Tesla and Google’s Waymo disagree

Experts say going faster makes accidents more likely and potentially more harmful: “Do we really want computers to make the decision to put other lives at risk because they want to break traffic rules?”

Google’s Waymo and Tesla are racing to win the driverless taxi market, but only Tesla appears to be going above the speed limit to get there.

A number of Tesla robotaxi videos show the vehicles going five or more miles per hour above the speed limit. Waymo’s policy, meanwhile, is to follow the posted limit, though the company says its cars will go below it in construction zones, for example, or slightly above it to complete a lane change.

It’s an interesting difference as both companies try to win the public’s trust in their new — and potentially dangerous — technologies. Tesla influencers who are part of the company’s limited launch have maintained that robotaxis go above the speed limit to match the speed of surrounding traffic, a practice they see as safer because it doesn’t disrupt traffic flow. Tesla’s autonomous technology is trained on real-life drivers, and presumably the company isn’t screening speeders out of the training data.

The practice of speeding in self-driving Teslas lines up with what the company allows consumers to do in cars running its Full Self-Driving (Supervised) software. Tesla’s Model Y owner’s manual details a setting called “Max Speed Offset,” which reads:

“Max Speed Offset: Set the percentage offset over the currently detected speed limit that Full Self-Driving (Supervised) can drive if it is necessary to drive faster than the speed limit to match the flow of traffic.”
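Tesla doesn’t publish how that offset is applied. Purely as an illustration, a percentage offset over a detected limit implies a simple speed cap along these lines (the function below is a hypothetical sketch, not Tesla’s implementation):

def max_allowed_speed_mph(detected_limit_mph: float, offset_percent: float) -> float:
    # Hypothetical reading of "Max Speed Offset": the ceiling is the
    # detected limit plus a user-chosen percentage of that limit.
    return detected_limit_mph * (1 + offset_percent / 100)

# A 10% offset on a 35 mph road would permit roughly 38.5 mph;
# the same offset on a 65 mph highway would permit roughly 71.5 mph.
print(max_allowed_speed_mph(35, 10))  # 38.5
print(max_allowed_speed_mph(65, 10))  # 71.5

On that reading, the same setting keeps a car a few miles per hour over the limit on city streets but well over it on highways.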

Tesla didn’t respond to requests for comment about the incidents or its speeding policy.

Waymo recently collected data from its coverage areas in Phoenix and San Francisco and found that 33% to 49% of human drivers there were speeding, depending on the road type and location. The company contends that its speed-limit-following vehicles are safer than human drivers, and that speed compliance in those two cities alone could reduce traffic fatalities by 82 deaths annually. (Waymo hasn’t yet released data for its newer Austin market, where Tesla is also operating.)

Tesla CEO Elon Musk has repeatedly said that Tesla’s full self-driving will be significantly safer than human drivers.

“The standard has to be very high because the moment there’s any kind of accident with an autonomous car, that immediately gets worldwide headlines, even though about 40,000 people die every year in car accidents in the US, and most of them don’t even get mentioned anywhere,” Musk said on an earnings call this year. “But if somebody scrapes a shin with an autonomous car, it’s headline news.”

But when it comes to speeding, there are some hard truths.

Ken Kolosh, statistics manager at the National Safety Council, says that speeding makes accidents more likely and more dangerous when they do happen. The reasons are pure physics: when you’re traveling at a faster speed, it takes longer to stop, so it’s harder to avoid objects — or people — in your path. And higher speeds mean more damage and death.

“Nearly 3 in 10 traffic deaths in 2023 involved speeding — that’s 11,775 people killed, or more than 32 deaths every day,” he told Sherwood News.

The findings are the same over at the National Highway Traffic Safety Administration.

Phil Koopman, an associate professor of electrical and computer engineering at Carnegie Mellon University who specializes in autonomous-vehicle safety, says the situation is a little complicated, since the speed limit doesn’t always represent the appropriate speed, which varies with road type, road conditions, and weather, among other variables. But generally, the faster a vehicle drives, the more dangerous it is — even just five miles per hour faster.
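To put a rough number on that five miles per hour, here’s a back-of-the-envelope sketch (ours, not Koopman’s or Kolosh’s) using the textbook braking-distance formula d = v²/(2μg), with an assumed dry-road friction coefficient of 0.7 and driver reaction time ignored:

import math

MPH_TO_MS = 0.44704   # meters per second per mph
MU, G = 0.7, 9.81     # assumed dry-road friction coefficient; gravity (m/s^2)

def braking_distance_m(speed_mph: float) -> float:
    # Idealized braking distance: d = v^2 / (2 * mu * g), no reaction time.
    v = speed_mph * MPH_TO_MS
    return v ** 2 / (2 * MU * G)

d30 = braking_distance_m(30)   # ~13.1 m
d35 = braking_distance_m(35)   # ~17.8 m: ~36% farther for ~17% more speed

# If an obstacle sits exactly where the 30 mph car stops, the 35 mph car
# is still moving when it reaches that spot: v = sqrt(v0^2 - 2*mu*g*d30).
v35 = 35 * MPH_TO_MS
impact_mph = math.sqrt(v35 ** 2 - 2 * MU * G * d30) / MPH_TO_MS
print(f"{d30:.1f} m vs {d35:.1f} m; impact at ~{impact_mph:.0f} mph")

Under those assumptions, a car that just barely stops short of a pedestrian at 30 mph would, at 35 mph, still be doing roughly 18 mph on impact.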

Importantly, Koopman also says the comparison between robot and human speeding doesn’t hold up, because autonomous cars don’t face the same disincentives that people do, like traffic tickets or jail, if they violate the law or hurt someone.

“Do we really want computers to make the decision to put other lives at risk because they want to break traffic rules?” he said.

When asked if he would personally ride in a Waymo or a Tesla robotaxi, Koopman said he would ride in the former but not the latter. He also said asking which is safer for the rider is the wrong question.

Instead, he said the concern should lie with the people outside the car: other drivers, cyclists, and pedestrians. He personally would not walk in front of either.

“I would not be confident that they would see me and that they would detect me,” he said.

$100M

Salesforce is using AI to handle customer service, and it is saving the company $100 million per year, CEO Marc Benioff said at the company’s “Dreamforce” conference, per Bloomberg. Benioff also announced that 12,000 customers are using Salesforce’s “Agentforce” AI-driven customer-service platform.

$100 million seems impressive, but to put that number in perspective, the company reported over $10 billion in revenue last quarter.

Benioff has enthusiastically embraced AI, and the company has slashed thousands of positions as it automates those roles.

tech

Sam Altman says OpenAI fixed ChatGPT’s serious mental health issues in just a month. Anyway, here comes the erotica

Well, that was quick. Just over a month ago, OpenAI CEO Sam Altman announced a 120-day plan to roll out new protections for identifying and helping ChatGPT users suffering a mental health crisis, following a series of reports of users harming themselves and others after using the company’s AI chatbot.

Today, Altman says the company has built new tools to address these issues and has “mitigated” the problems.

Altman is so confident the company has addressed mental health safety that it is reverting ChatGPT’s behavior so it “behaves more like what people liked about 4o.” He essentially apologized to users for the changes made to address those problems:

“We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.”

Separately, the company announced the members of its “Expert Council on Well-Being and AI,” an eight-person council of mental health experts.

As a reward for the adults who aren’t suffering mental health issues exacerbated by confiding in the chatbot, Altman says that erotica is on the way.

“In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults.”

In response to Altman’s post on X, Senator Josh Hawley (R-MO) quoted Altman’s post with this message:

“You made ChatGPT ‘pretty restrictive’? Really. Is that why it has been recommending kids harm and kill themselves?”


tech

Meta says Instagram teen accounts will default to a PG-13 content limit

Meta is introducing new guidelines for content on Instagram teen accounts, adopting the film industry’s well-known PG-13 standard from the Motion Picture Association.

Any user under the age of 18 will have their content limited to PG-13.

Parents who administer their child’s teen account will have the ability to change the settings — including placing their child in a more restrictive level than PG-13 — but that assumes the teen hasn’t just tried to sign up on their own using a fake birthday.

To counter those wily kids, Instagram will use “age prediction technology” to set content restrictions, according to the company.

In a blog post announcing the new policy, Meta acknowledged the new settings may not catch all prohibited content:

“Just like you might see some suggestive content or hear some strong language in a PG-13 movie, teens may occasionally see something like that on Instagram — but we’re going to keep doing all we can to keep those instances as rare as possible.”


tech

Smartphone upgrades grew for Apple and Samsung last quarter

The global smartphone market grew 2.6% in the third quarter, thanks in part to interest in the latest phones from Apple and Samsung, according to new shipment data from market intelligence firm IDC.

“Apple and Samsung posted strong results as their latest devices encouraged consumers to upgrade in the premium segment, while new, affordable AI-enabled smartphones also drove high upgrades in more affordable price categories,” IDC Vice President of Client Devices Francisco Jeronimo said in a press release for the data, which would include roughly half a month of new iPhone sales. “Demand for Apple’s new iPhone 17 lineup was robust, with pre-orders surpassing those of the previous generation. At the same time, Samsung’s Galaxy Z Fold 7 and Galaxy Z Flip 7 outperformed all earlier foldable models, creating renewed momentum for the foldables segment.”

Here’s the year-over-year growth in third-quarter shipments:

[Chart: year-over-year growth in Q3 smartphone shipments by vendor]

And here’s how the absolute number of shipments compared last quarter:

[Chart: Q3 smartphone shipments by vendor, in units]

The “other” bin in both charts is made up of dozens of smaller, often regional and low-cost manufacturers.

tech
Jon Keegan

Sora’s ghoulish reanimation of dead celebrities raises alarms

OpenAI’s video generation app Sora has spent its first two weeks at the top of the charts.

The startup’s fast-and-loose approach to intellectual property rights has seen the app flooded with videos of trademarked characters in all sorts of ugly scenarios.

But another area where Sora users have been pushing the limits involves videos that reanimate dead celebrities.

And we’re not talking just JFK, MLK, and Einstein. Videos featuring more recently deceased figures such as Robin Williams (11 years ago), painter Bob Ross (30 years ago), Stephen Hawking (seven years ago), and even Queen Elizabeth II (three years ago) have been generated. Some of the videos are racist and offensive, shocking the relatives of the figures.

OpenAI told The Washington Post that it is now allowing representatives of “recently deceased” celebrities and public figures to request that their likenesses be blocked from the service, though the company did not give a precise time frame for what it considered recent.

