The idiot’s guide to AI jargon
Don’t like politics at the Thanksgiving table? Here’s some performative AI jargon for you to weaponize so you can win the day.
Unless your entire family has been living under a rock — or you work for Apple, a company that continues to mostly ignore the trend — AI is going to come up at your Thanksgiving.
So, here’s a quick glossary of AI-adjacent vocab, each of which has gone modestly viral at the various peaks and troughs of the hype cycle this year, hopefully empowering you to seem artificially intelligent yourself, much like I am trying to do by writing this piece.
AI slop: We’ve all seen it by now, with our social media feeds clogged up with low-effort videos, images, and illustrations cooked up by AI models like Sora. The latest genre of awful to emerge from this hellscape of content is fake recipes, which have flooded the internet in the lead-up to Thanksgiving.
DeepSeek: There were two major market meltdowns this year. One was tariff-gate; the other was DeepSeek-gate, when the news broke at the end of January that Chinese researchers had replicated much of the performance of Western AI models at a fraction of the cost with DeepSeek — briefly blowing a $1 trillion hole in US stocks.
Vibe coding: I’m going to borrow straight from Replit’s blog: “Vibe coding refers to the practice of instructing AI agents to write code based on natural language prompts. It’s not about being lazy — it’s about focusing your time and energy on the creative aspects of app development rather than getting stuck in technical details.” In my opinion, it is very much about being lazy; getting AI to do your work for you seems like the core of the appeal.
Agentic AI: A particular favorite of CEOs across America, the term refers to AI chatbots and models that are specifically empowered to take actions on their own, with minimal human oversight — i.e. they have agency.
Behind the meter: As we entered the back half of 2025, much of the discussion turned to how to power the AI revolution, with energy-sapping data centers requiring vast amounts of electricity, often in remote areas. Behind-the-meter solutions provide power on-site, putting less strain on the grid. Goldman Sachs sees BTM solutions such as on-site gas turbines and fuel cells as a key source of electricity for the data center build-out.
ARC-AGI-2: A visual reasoning test that AI models often find very difficult. Other ways of testing AI chatbots include difficult math and physics questions, or running a fake vending machine business. Google’s Gemini 3 Pro scored the highest on ARC-AGI-2 of any model tested yet (according to Google, at least).
Synthetic data: AI models need lots of data to train on. They love text, images, video, and audio data. The problem is, we’re running low on fresh, human-made data, so AI labs are now generating synthetic data (made up, often by AI models themselves) to train on instead.
Blackwell chips: If Nvidia is the biggest team in the AI game, then Blackwell is its star player, an AI chip that succeeded the previous Hopper architecture and turned NVDA briefly into a $5 trillion stock. It’s widely regarded as the best on the market, and it’s priced accordingly, at $30,000 to $40,000 a pop. Nvidia can’t sell Blackwell chips in China — a sticking point of trade tensions this year and a source of frustration for the tech giant itself.
TPU: A tensor processing unit, often compared to a GPU (graphics processing unit). The term has exploded in the last week because Google’s well-received Gemini 3 was “trained and powered on Google homegrown TPU chips,” suggesting Nvidia might have a little more competition in the AI chip space than previously thought. Read Luke Kawa’s excellent piece on what this might mean for both Alphabet and the wider ecosystem.
Jevons Paradox: The subject of a short-lived debate that ignited in the wake of the DeepSeek drama. The paradox, named after William Stanley Jevons, an English economist born in 1835, observes that making a resource cheaper or more efficient to use (which you might assume means we’d need less of it) typically leads to a rise in demand, not a drop. Which, in the case of AI, has been true so far.
Multimodal: An AI tool or model that can interpret multiple kinds of media, such as text, images, and video.
Inference: What it’s all about. The ability of an AI model to recognize patterns, interpret signals, or draw conclusions from data it hasn’t explicitly seen before. IBM has a longer definition here.
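For the hands-on crowd, here’s a minimal sketch of what inference looks like in practice, using Hugging Face’s open-source transformers library; the model it downloads, the example sentence, and the exact output are purely illustrative.

```python
# A minimal sketch of inference: running an already-trained model on text it
# has never seen. Requires: pip install transformers torch
from transformers import pipeline

# Load a small pretrained sentiment-analysis model. No training happens here;
# the model only performs inference, i.e. draws conclusions from new input.
classifier = pipeline("sentiment-analysis")

result = classifier("The turkey was dry, but the AI small talk was surprisingly good.")
print(result)  # something like [{'label': 'POSITIVE', 'score': 0.99}]; output will vary
```

The distinction worth remembering: training is how a model gets built, while inference is what happens every time you actually use it.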
Tokens: The building blocks of AI’s output. As my colleague Jon Keegan concisely explains: a “token” is like an atomic unit of data. When text is input into a model, the words and sentences get broken down into these tokens for processing. In OpenAI’s case, one token is roughly four characters, and a paragraph is about 100 tokens.
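To make the token math concrete, here’s a minimal sketch using OpenAI’s open-source tiktoken library; the sentence is made up, and the roughly-four-characters-per-token figure is only an average, so your counts will vary.

```python
# A minimal sketch of tokenization using OpenAI's open-source tiktoken library.
# Requires: pip install tiktoken
import tiktoken

# cl100k_base is one of the encodings used by recent OpenAI models.
enc = tiktoken.get_encoding("cl100k_base")

text = "Happy Thanksgiving! Please pass the stuffing and the AI jargon."
token_ids = enc.encode(text)                    # split the string into integer token IDs
tokens = [enc.decode([t]) for t in token_ids]   # map each ID back to its chunk of text

print(f"{len(text)} characters -> {len(token_ids)} tokens")
print(tokens)  # common words tend to be one token each; rarer words get split up
```

Token counts matter because most AI providers meter both pricing and context-window limits in tokens, not characters or words.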
