
A 75-term AI Glossary for Product Teams
AI is transforming how products are designed, built, and experienced. This glossary is for anyone who wants an accessible, unified reference for the terms they're sure to encounter on their AI product journey. Each term includes a quick definition and a product-oriented example to help you apply these concepts directly.

When we started building AI products, there was always a new term being shared in a meeting, referenced by a team member, or introduced during research. It was a lot.
This list will grow over time. The market's moving fast, and so is our understanding of AI products.
1. AI (Artificial Intelligence)
The broad field of creating systems that can perform tasks that normally require human intelligence such as learning, reasoning, or decision-making.
Example: A navigation app using AI to predict traffic and reroute you in real time.
2. Agent
A software entity that can take actions autonomously on behalf of a user, often across multiple systems.
Example: A recruiting agent that screens resumes, schedules interviews, and drafts follow-up emails automatically.
3. AX (Agentic Experience)
An extension of UX for the AI age. The practice of designing agentic products that feel less like tools and more like relationships, with trust at the center. Pioneered by the team at LCA.
Example: Shortcut AI's agent asks open questions to refine its task, then shows its reasoning to build trust as it generates the output.
4. Alignment
The process of ensuring AI systems behave in line with human values, goals, and safety standards.
Example: Adjusting a customer service AI so it de-escalates angry users instead of responding aggressively.
5. Ambient AI
AI that operates in the background, surfacing value without explicit prompts.
Example: A smart thermostat that adjusts temperature by learning your habits, without asking.
6. Anthropomorphization
Assigning human-like traits to AI systems (or other entities), intentionally or unintentionally.
Example: Giving a customer-support bot a name, profile picture, and empathetic tone so users trust it more.
7. Automation
The use of AI to fully perform tasks that would otherwise require human effort.
Example: An e-commerce AI that writes, tags, and publishes product listings with no human edits.
8. Benchmark
A standardized test used to evaluate AI model performance on specific tasks.
Example: Using MMLU to compare reasoning ability between GPT-4 and Claude 3.
9. Chain of Thought (CoT)
A reasoning technique where a model outlines its intermediate steps before giving an answer.
Example: When asked a cost calculation, the AI shows line-by-line math before the final result.
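As a sketch, a chain-of-thought prompt simply instructs the model to show intermediate steps; the wording and the worked example below are illustrative, not a specific product's prompt.

```python
# A chain-of-thought style prompt: the instruction asks the model to reason
# step by step before committing to a final answer.
prompt = (
    "Q: A team buys 3 licenses at $40 each and 2 at $25 each. Total cost?\n"
    "Think step by step, then give the final answer.\n"
)

# A CoT-style response might look like:
#   3 x $40 = $120
#   2 x $25 = $50
#   $120 + $50 = $170
#   Final answer: $170
print(prompt)
```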
10. Cluster (GPU Cluster)
A group of high-performance computers (GPUs) linked together to train or run large-scale AI models.
Example: OpenAI uses GPU clusters with tens of thousands of NVIDIA chips to run GPT-5 at scale.
11. Computer Use
The ability of AI to directly control a computer; opening apps, clicking buttons, or filling out forms.
Example: An AI travel assistant booking flights by controlling your browser in real time.
12. Context
The information a model uses to respond, including conversation history, metadata, or external documents.
Example: A chatbot remembering you already asked about the refund policy, so it doesn’t repeat itself.
13. Context Window
The maximum amount of information (measured in tokens) a model can “see” and process at once.
Example: A 200k token context window can store the entire contents of an employee handbook in a single session.
14. Copilot
A design pattern where AI supports a human user but doesn’t act fully autonomously.
Example: GitHub Copilot suggests code while the developer still decides what to use.
15. Credits / Tokens
Billing units for AI use. A token is a chunk of text (~¾ of a word). Credits are pricing units tied to token usage.
Example: Generating a 1,000-word report may consume ~1,300 tokens, billed as credits by the API.
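The word-to-token arithmetic above can be sketched in a few lines; the ~0.75 words-per-token ratio is a common rule of thumb for English text, not a real tokenizer, and exact counts vary by model.

```python
# Rough token estimate using the "~0.75 words per token" rule of thumb.
# Real billing uses the provider's tokenizer, which counts differently.
def estimate_tokens(text: str) -> int:
    words = len(text.split())
    return round(words / 0.75)

report = "word " * 1000  # stand-in for a 1,000-word report
print(estimate_tokens(report))  # → 1333
```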
16. Deterministic
Systems that always produce the same output for the same input.
Example: A password validator that always accepts the correct password and rejects the wrong one.
17. Embeddings
Numeric vector representations of data (words, images, audio) that capture meaning and similarity.
Example: Using embeddings to let users search “How do I reset my password?” and retrieve the correct help doc even if the wording differs.
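A minimal sketch of how that semantic search works: compare the query's embedding to each document's embedding and return the closest match. The 3-dimensional vectors here are invented for illustration; real embedding models produce hundreds of dimensions.

```python
import math

# Toy embeddings: each help doc maps to a made-up 3-dimensional vector.
docs = {
    "How to reset your password": [0.9, 0.1, 0.0],
    "Change your account email": [0.6, 0.3, 0.2],
    "Track the status of an order": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query = [0.85, 0.15, 0.05]  # embedding of "How do I reset my password?"
best = max(docs, key=lambda d: cosine(query, docs[d]))
print(best)  # → "How to reset your password"
```

Even though the query never uses the exact words of the help doc, the nearest vector wins, which is the whole point of searching by meaning instead of keywords.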
18. Evals
Structured tests for measuring AI model performance, accuracy, and safety.
Example: Running evals to confirm an AI legal assistant consistently extracts “termination date” from contracts.
19. Escape Hatch
A mechanism that lets users exit an AI-driven process and return to a safe, human-controlled flow.
Example: A support chatbot offering a “Speak to a human” button when the AI struggles.
20. Evaluation Harness
A framework for testing and benchmarking AI systematically across tasks.
Example: Nightly automated evals to ensure a customer service AI stays accurate as new data arrives.
21. Explainability
The ability to interpret and understand how an AI system produced its output.
Example: A credit-risk AI that shows the top three factors influencing its loan approval recommendation.
22. Few-Shot Learning
Improving model performance by giving it a handful of labeled examples in the prompt.
Example: Feeding 5 example support tickets labeled “billing” or “technical” so the model classifies new tickets correctly.
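A few-shot prompt is just labeled examples laid out in the input so the model can infer the pattern. A hypothetical version of the ticket-classification prompt (no model call shown; the examples are invented):

```python
# Labeled examples embedded in the prompt; the model completes the final Label.
examples = [
    ("I was charged twice this month", "billing"),
    ("The app crashes when I open settings", "technical"),
    ("Why did my invoice go up?", "billing"),
]

def build_prompt(new_ticket: str) -> str:
    lines = ["Classify each support ticket as 'billing' or 'technical'.", ""]
    for text, label in examples:
        lines.append(f"Ticket: {text}\nLabel: {label}")
    lines.append(f"Ticket: {new_ticket}\nLabel:")
    return "\n".join(lines)

print(build_prompt("My payment failed at checkout"))
```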
23. Feedback Loop
The process of collecting user or system feedback to continuously improve an AI model.
Example: Thumbs up/down buttons in ChatGPT that retrain future responses.
24. Generative AI
AI systems that create new content (text, images, video, audio, code).
Example: MidJourney generating original product mockups from a text description.
25. Generative UI
Interfaces that adapt in real time, designed or modified by AI instead of being fixed.
Example: A product analytics tool that auto-builds the dashboard most relevant to your query.
26. GPT
“Generative Pre-trained Transformer,” OpenAI’s family of large language models.
Example: GPT-5 powers ChatGPT, capable of long-context reasoning and multimodal tasks.
27. Ground Truth
The verified, correct data used as a benchmark for training or evaluating models.
Example: Labeling 1,000 customer emails with the “correct” categories before training an AI classifier.
28. Grounding
Ensuring AI outputs are linked to verifiable, external facts or data sources.
Example: A medical AI answering from Mayo Clinic research instead of its training corpus.
29. Guardrails
Rules and constraints designed to keep AI outputs safe, reliable, and within intended scope.
Example: Blocking a health chatbot from giving unverified medical diagnoses.
30. Hallucination
When AI generates false, misleading, or fabricated information.
Example: A customer bot inventing a product feature that doesn’t exist.
31. Human-in-the-Loop (HITL)
Designing AI systems so humans remain involved in reviewing, approving, or correcting outputs.
Example: An AI drafts credit approvals, but a loan officer must sign off.
32. Inference
The process of running a trained AI model to generate predictions or outputs.
Example: Using a trained recommendation model to suggest your next YouTube video.
33. Instruction-Following Model
A model tuned to follow instructions precisely rather than just predicting text continuations.
Example: InstructGPT, trained to follow human commands, reliably summarizes text when asked.
34. Knowledge Graph
A structured way of organizing information where entities (people, places, concepts) are connected by relationships, enabling richer context for AI reasoning and retrieval.
Example: A customer support AI using a knowledge graph to understand that “password reset,” “login issue,” and “account recovery” are all related concepts.
35. Large Language Model (LLM)
An AI model trained on vast text data to understand and generate human-like language.
Example: Anthropic’s Claude 3 interpreting long policy documents and drafting recommendations.
36. Latency
The delay between a user’s input and the AI’s response.
Example: A 1-second latency feels conversational, but 10 seconds breaks the flow.
37. Latency Budget
The maximum acceptable time a system can take to respond before the user experience breaks.
Example: A shopping chatbot might have a 3-second latency budget; longer feels unusable.
38. Machine Learning (ML)
The broader field of training algorithms to improve from data without explicit programming.
Example: Spotify’s ML models learning your listening habits to recommend playlists.
39. Memory (AI Memory)
An agent’s ability to retain and use information from past interactions across sessions.
Example: A shopping assistant remembering your clothing sizes over time.
40. Middleware
Software that connects AI models to enterprise systems, orchestrating data flow and compliance.
Example: Middleware ensuring an AI copilot pulls only the latest HR policies when answering employee questions.
41. Mini Model
Smaller, cheaper AI models optimized for speed and efficiency.
Example: GPT-4o mini powering lightweight chatbots inside customer apps.
42. Model
A trained system that transforms input data into predictions, outputs, or actions.
Example: A spam detection model that flags unwanted emails.
43. Model Context Protocol (MCP)
A framework for securely connecting AI to private organizational data and workflows.
Example: Using MCP so an internal AI assistant can answer only from a company’s Confluence pages.
44. Multi-Agent Architecture
Systems composed of multiple AI agents with specialized roles, working together toward a goal.
Example: A “writer” agent drafting a blog, a “fact-checker” agent verifying claims, and an “editor” agent refining tone.
45. Multimodal
AI that processes and generates across multiple input/output types (text, image, audio, video).
Example: An AI that interprets a product photo and generates both a written description and a spoken ad script.
46. Natural Language
The way humans communicate using everyday spoken or written language, which AI models are trained to understand and generate.
Example: Asking “What’s the weather tomorrow?” is a natural language query that an AI parses and answers.
47. Natural Language Interface (NLI)
A user interface where people interact with software using natural language instead of structured commands.
Example: Typing “Book me a flight to New York next Tuesday” directly into a travel app’s chat box.
48. Natural Language Processing (NLP)
The field of AI focused on enabling machines to understand, interpret, and generate human language.
Example: Gmail’s “Smart Compose” uses NLP to finish your sentences as you type.
49. Observability
Monitoring, measuring, and debugging AI systems in production.
Example: Tracking hallucination rates or measuring response accuracy for a deployed AI chatbot.
50. One-Shot Learning
When a model learns from a single example to generalize to new cases.
Example: Showing one example of a custom invoice format so the model processes new invoices correctly.
51. Orchestration
The coordination layer that routes tasks across models, agents, and tools.
Example: LangChain orchestrating whether an AI should call search, summarization, or code execution tools.
52. Overfitting
When a model performs well on its training data but poorly on new, unseen data.
Example: A churn prediction model that works perfectly on historical customers but fails on new ones.
53. Personification
Giving AI agents a defined identity, role, or voice to shape how users interact with them.
Example: Naming your finance agent “Lexi” to feel like a trusted advisor.
54. Probabilistic
AI systems that produce outputs with an element of chance; even with the same input, results are almost never identical unless determinism is explicitly requested.
Example: Asking a chatbot the same question twice may yield slightly different answers.
55. Prompt
The input instruction or query provided to an AI model.
Example: “Write a one-paragraph summary of this meeting transcript.”
56. Prompt Bar
The user interface element where prompts are entered.
Example: The ChatGPT text box or Figma’s AI assistant input field.
57. Prompt Engineering
The practice of designing effective prompts to guide AI behavior and improve output.
Example: Reframing “Summarize” as “Summarize in 3 concise bullets for executives.”
58. RAG (Retrieval-Augmented Generation)
A technique where an LLM retrieves external data before generating an answer.
Example: A support bot pulling answers directly from your knowledge base.
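The retrieve-then-generate loop can be sketched in a few lines. This is a toy: `knowledge_base`, `retrieve`, and `call_llm` are invented stand-ins (real systems retrieve with embeddings and a vector database, and `call_llm` would be an actual model API call).

```python
# Minimal RAG sketch: fetch a relevant snippet, then ground the prompt in it.
knowledge_base = {
    "refunds": "Refunds are issued within 5 business days of approval.",
    "shipping": "Standard shipping takes 3-7 business days.",
}

def retrieve(query: str) -> str:
    # Toy keyword retrieval; production systems use embedding similarity.
    for topic, snippet in knowledge_base.items():
        if topic in query.lower():
            return snippet
    return ""

def call_llm(prompt: str) -> str:
    # Stub standing in for a real model call; it just echoes the prompt.
    return prompt

def answer(query: str) -> str:
    context = retrieve(query)
    prompt = f"Context: {context}\n\nQuestion: {query}\nAnswer:"
    return call_llm(prompt)

print(answer("How long do refunds take?"))
```

The key design point is that the model answers from retrieved context rather than from whatever its training data happens to contain.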
59. Reasoning Model
A model optimized for multi-step, logic-heavy reasoning tasks.
Example: A reasoning model used in legal tech to analyze arguments across hundreds of case files.
60. Reinforcement Learning (RL)
Training AI by rewarding desirable actions and penalizing poor ones.
Example: A recommendation system learning to maximize click-through rate.
61. Safety Layer
Protective filters and guard systems around AI outputs to prevent harm.
Example: A moderation system blocking unsafe chatbot responses before they reach users.
62. Self-Play
An AI training technique where the system learns by competing against itself.
Example: AlphaZero mastering chess and Go by generating its own training data through play.
63. Swarm
A loosely coordinated group of AI agents working on different subtasks of a larger goal.
Example: A swarm of agents each researching different competitors, then consolidating results.
64. Synthetic Data
Data generated artificially (often by AI) to augment training datasets.
Example: Creating synthetic patient data to train a healthcare model without exposing real records.
65. Synthetic Persona
AI-generated user profiles used for product testing, prototyping, or simulation.
Example: Creating 50 synthetic personas (e.g., “busy parent,” “budget traveler”) and running them through an AI-powered prototype.
66. Toolchain
The set of tools, APIs, or services an agent can use to complete tasks.
Example: An agent that calls Stripe for payments, Slack for messaging, and Google Maps for routing.
67. Transfer Learning
Reusing a pre-trained model for a new, related task with limited data.
Example: Fine-tuning a vision model trained on ImageNet to detect dental X-rays.
68. Transformer
A neural network architecture based on attention mechanisms that allow models to weigh relationships between tokens, enabling scale and accuracy.
Example: GPT, Claude, Gemini, and LLaMA all use transformer architectures.
69. Trust Boundary
The point where AI’s probabilistic outputs hand off to deterministic, rule-based systems.
Example: An AI recommends treatment options, but only a deterministic checklist approves prescriptions.
70. Tuning (Fine-Tuning)
Specializing a base model by training it further on domain-specific data.
Example: Fine-tuning GPT with customer support transcripts to reflect brand tone.
71. Vector Database
A database optimized for storing embeddings and enabling semantic similarity search.
Example: Using Pinecone or Weaviate to let users search company policies by meaning instead of keywords.
72. Vibe Coding
Building software through natural language conversation where product specification, design, and code generation happen simultaneously.
Example: A team “vibe codes” a new onboarding flow by chatting with an AI that outputs working code and UI instantly.
73. Vibe Marketing
Developing and executing marketing strategy conversationally, where AI handles planning, asset creation, and deployment via integrations and automations. Pioneered by the team at Boring Marketing.
Example: A CMO “vibe markets” a new campaign: the AI drafts strategy, designs assets, and pushes them live via ad integrations.
74. Voice Agent / Voice Mode
AI that communicates conversationally through speech, often real-time.
Example: ChatGPT’s voice mode acting as a live conversational tutor.
75. Zero-Shot Learning
When a model can generalize to a task without seeing any examples.
Example: Asking a model to summarize legal contracts without training it specifically on legal data.
If you know anyone building AI products, level them up with this glossary.
More to come.