4 divergent AI takes: Sexy, scary, mysterious, and hyped
AI was the star at Bloomberg’s tech conference
We don’t have an official count of how many times the tech titans at Bloomberg’s tech summit in San Francisco last week mentioned artificial intelligence, but it was definitely a lot. Good, bad, sexy, scary — how they thought about AI varied widely depending on the person. Here are some notable takes:
Sexy AI: Goes on dates for us
Whitney Wolfe Herd, founder and executive chair of dating app Bumble, thinks there’s a better AI dating future than falling in love with bots.
“Our focus with AI is to help create more healthy and actionable relationships.”
She conceived of a hypothetical future in which AI acts as a “dating concierge” that could help you get over your particular hangups and “give you productive tips for communicating with other people.” Sounds good! Wolfe Herd then got a little more “out there.”
“There’s a world where your dating concierge could go on a date for you with other dating concierges.” The idea would be to winnow down the dating pool to the people you, the person, should actually go and meet.
Scary AI: influences the presidential election
How concerned is LinkedIn cofounder and venture capitalist Reid Hoffman about AI’s role in the upcoming election? “Very concerned.” Ruh roh!
He gave the example of people calling real court rulings about Donald Trump “deep fakes.” In other words, “true things will be called deep fakes,” which will create a “language of nontruth.”
Hyped AI: It’s all that and a bag of lenses
While some people think AI is a massive grift, Snap cofounder and CEO Evan Spiegel says “all the excitement around AI is warranted.”
How does that relate to the disappearing photo social media app?
“What’s been really exciting is the way we’ve been able to apply AI to image and video and 3D.” AR lens experiences that used to take graphic artists weeks can now be generated “on the fly using AI,” which Spiegel says will lead to an “explosion in creativity.”
Mysterious AI: We don’t know how it’s trained
OpenAI still won’t say whether it used YouTube to train its text-to-video generator Sora.
Bloomberg’s @shiringhaffary: “Was YouTube used to train OpenAI’s Sora?”
OpenAI’s Brad Lightcap: “We’re looking at this problem. It’s really hard.”
— Rani Molla (@ranimolla) May 9, 2024
Back in March the company’s chief technology officer caused quite a stir after being unable to answer a seemingly simple question from the Wall Street Journal about what data the company used to train the model.
Two months later, that’s still the case. When asked to clear up whether or not Sora is trained on YouTube videos, OpenAI COO Brad Lightcap gave a long, winding non-answer.
“The conversation around data is really important. We obviously need to know where data comes from,” he said, before mentioning a recent blog post that also doesn’t answer the question. Lightcap talked about the need for a content ID system that lets creators understand when their content is being used to train AI and to be able to monetize that. “We’re looking at this problem. It’s really hard,” he concluded.
At this point, OpenAI’s silence about what Sora was trained on has got to be for legal reasons rather than for lack of knowledge about whether it trained its AI on YouTube. Because what could be worse than “yes”?