Companies probably won’t switch to DeepSeek but they do think it should make AI cheaper
DeepSeek’s sudden arrival on the AI scene in January upended a lot of preexisting assumptions about AI. Namely, it subverted the idea that to get better models, companies would have to spend more.
To get an idea of what DeepSeek means for enterprise spending on AI — one of AI’s more promising revenue sources — Enterprise Technology Research surveyed more than 100 business leaders who are “very” or “extremely” familiar with their organization’s usage of large language models. Their companies either pay for subscriptions to tools like ChatGPT or Microsoft Copilot or otherwise integrate LLMs (third-party models or their own) into their businesses.
More than half of respondents said they believed DeepSeek-R1 offers performance comparable to better-known models from OpenAI, Meta, Google, and Alibaba, and that they had a strong interest in “evaluating” DeepSeek in the next six months. Even so, few said they trusted its data privacy measures, and partly as a result, most said DeepSeek wouldn’t influence their AI spending plans.
They did think, however, that the advent of DeepSeek should lower their AI business expenses. Some 65% of respondents said DeepSeek will substantially reduce the costs of integrating LLMs into their applications and workflows.
For what it’s worth, those surveyed also seemed to subscribe to the Jevons paradox — the idea that as a resource gets cheaper, overall consumption of it rises — with 68% saying that if AI tools were less expensive, their organizations would be investing “much more.”