25 terms
AEO (Answer Engine Optimization) #

Structuring your content so AI assistants — Claude, ChatGPT, Perplexity — cite it when answering questions. Like SEO, but for the AI answer box instead of Google's blue links.

Your SEO strategy is incomplete without it. AI-generated answers are eating traditional search traffic, and the sites that get cited are the ones with clear structure and schema markup.
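One common AEO tactic is FAQ schema markup. Here's a sketch in Python that builds a schema.org FAQPage block — the JSON-LD shape is standard; the questions and answers are just placeholders:

```python
import json

def faq_jsonld(pairs):
    """Build a schema.org FAQPage JSON-LD block from (question, answer) pairs."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }, indent=2)

snippet = faq_jsonld([
    ("What is AEO?", "Structuring content so AI assistants cite it when answering."),
])
# Embed `snippet` on the page inside a <script type="application/ld+json"> tag.
```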

Seen in: Field Note #001: Agent Swarm

Agent #

Software that takes actions autonomously — not just answering questions, but executing tasks, calling APIs, writing files, and making decisions within defined boundaries.

The building block of everything we discuss here. If you understand what an agent is and isn't, the rest of this glossary follows naturally.

Seen in: Field Note #001: Agent Swarm

Agent Swarm #

Multiple agents working in parallel on different parts of the same project, coordinated but independent. Each agent owns its scope and doesn't touch the others' work.

This is how one person does the work of a team. Four agents running simultaneously means four tasks finishing at once — not four tasks queued up.

Seen in: Field Note #001: Agent Swarm

Agentic Workflow #

A process where AI agents handle the steps, decisions, and tool calls — not just the text generation. The human sets the goal; the agents figure out the path.

The difference between using AI as a tool and using AI as a worker. A tool needs you to drive every step. A workflow runs on its own.

Seen in: Field Note #001: Agent Swarm

API (Application Programming Interface) #

A structured way for software to talk to other software. When an agent pulls your revenue data from Square or posts to LinkedIn, it's using an API.

Every agent connects to the outside world through APIs. If a tool has an API, an agent can use it. If it doesn't, the agent is blind to it.

Seen in: Field Note #001: Agent Swarm

Batch Processing #

Sending many requests to an AI model at once — usually at a discount — instead of processing them one at a time in real time.

Anthropic offers 50% off for batch jobs. We use this for overnight builds — queue up 50 tasks before bed, wake up to finished work at half the cost.
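The math is simple enough to sketch — the per-task cost below is made up; plug in your own:

```python
def batch_savings(tasks, cost_per_task, discount=0.5):
    """Compare real-time vs batch cost; discount is the batch price cut (50% here)."""
    realtime = tasks * cost_per_task
    batch = realtime * (1 - discount)
    return realtime, batch

rt, b = batch_savings(50, 0.40)  # 50 overnight tasks at a hypothetical $0.40 each
# rt = 20.0, b = 10.0 -- same work by morning, half the bill
```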

Seen in: Field Note #001: Agent Swarm

Chain-of-Thought #

Making an AI show its reasoning step by step before giving a final answer, rather than jumping straight to the conclusion.

Dramatically improves accuracy on complex tasks. It's the difference between asking someone "what's the answer?" and "walk me through how you got there."
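In its simplest form, chain-of-thought is just an instruction you add to the prompt. A minimal sketch:

```python
def cot_prompt(question):
    """Wrap a question so the model reasons step by step before answering."""
    return (
        f"{question}\n\n"
        "Think through this step by step, showing your reasoning, "
        "then give the final answer on its own line starting with 'Answer:'."
    )

print(cot_prompt("A cafe sells 120 drinks a day at $4.50 average. Monthly revenue?"))
```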

Seen in: General — referenced across FFN essays

Context Window #

How much text an AI can "see" at once, measured in tokens. Think of it as the model's working memory — everything it can consider when generating a response.

Determines whether an agent can read your entire codebase or just one file. A 200K-token window sees your whole project. A 4K window sees one function.
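You can sanity-check whether a job fits before sending it. A rough sketch using the ~4 characters per token rule of thumb, with a hypothetical reserve for the model's reply:

```python
def estimate_tokens(text):
    """Rough token count using the ~4 characters per token rule of thumb."""
    return len(text) // 4

def fits_in_window(files, window=200_000, reserve=8_000):
    """Check whether file contents fit, leaving room for the model's response."""
    total = sum(estimate_tokens(t) for t in files)
    return total <= window - reserve

# A 100-character file is ~25 tokens; it trivially fits a 200K window.
fits_in_window(["x" * 100])  # True
```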

Seen in: Field Note #001: Agent Swarm

Cost Per Token #

What you pay per unit of AI input and output. Input tokens (what you send) and output tokens (what you get back) are priced separately, usually per million.

The difference between a $5/month habit and a $500/month bill is understanding this number. Cheaper models for simple tasks, expensive models only when you need them.
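The arithmetic is worth internalizing. A back-of-napkin sketch — the $3/$15 per million rates below are illustrative, not any provider's actual price list:

```python
def call_cost(input_tokens, output_tokens, in_price, out_price):
    """Cost of one call; prices are dollars per million tokens (set your own)."""
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Hypothetical pricing: $3/M input, $15/M output.
cost = call_cost(10_000, 2_000, 3.00, 15.00)
# 10K tokens in + 2K tokens out = $0.06 at these rates
```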

Seen in: Field Note #001: Agent Swarm

Embeddings #

Converting text into numbers (vectors) so software can measure how similar two pieces of text are. "Coffee shop revenue" and "cafe sales" would have similar embeddings.

Powers search, recommendations, and RAG systems. If you want an AI to find relevant information in your documents, embeddings are how it knows what's "relevant."
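"Similar" is measured with cosine similarity — how closely two vectors point in the same direction. A toy sketch with made-up 3-dimensional vectors (real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Alignment of two vectors: near 1.0 = very similar, near 0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

coffee_revenue = [0.9, 0.8, 0.1]
cafe_sales     = [0.85, 0.75, 0.15]
weather_report = [0.1, 0.05, 0.9]

cosine_similarity(coffee_revenue, cafe_sales)      # close to 1.0
cosine_similarity(coffee_revenue, weather_report)  # much lower
```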

Seen in: General — referenced across FFN essays

Fine-Tuning #

Training an existing AI model on your specific data to improve its performance on your particular tasks. Like giving a general contractor specialized training in your industry.

Expensive and rarely necessary. Prompt engineering gets you 90% of the way there. Fine-tune only after you've exhausted every other option — most operators never need to.

Seen in: General — referenced across FFN essays

Function Calling / Tool Use #

An AI's ability to use external tools — run code, query databases, call APIs, read files — instead of just generating text. The model decides which tool to use and with what parameters.

This is what makes agents agents. Without tool use, you have a chatbot. With it, you have software that can interact with your entire business stack.
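The mechanics look roughly like this: you describe a tool in a JSON-schema style, the model picks it and fills in arguments, and your code runs it. A sketch — the exact schema shape varies by provider, and `get_revenue` is a stand-in for your real data source:

```python
# A tool definition in the JSON-schema style most providers use (shapes vary by API).
get_revenue_tool = {
    "name": "get_revenue",
    "description": "Fetch total revenue for a date range.",
    "input_schema": {
        "type": "object",
        "properties": {"start": {"type": "string"}, "end": {"type": "string"}},
        "required": ["start", "end"],
    },
}

def get_revenue(start, end):
    """Stand-in for a real query against your point-of-sale data."""
    return {"start": start, "end": end, "total": 42_000}

TOOLS = {"get_revenue": get_revenue}

def dispatch(tool_call):
    """Run whichever tool the model asked for, with the arguments it chose."""
    return TOOLS[tool_call["name"]](**tool_call["arguments"])

dispatch({"name": "get_revenue",
          "arguments": {"start": "2025-01-01", "end": "2025-01-31"}})
```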

Seen in: Field Note #001: Agent Swarm

Grounding #

Connecting AI responses to verifiable sources — your database, your documents, live data feeds — instead of letting the model rely on its training data alone.

The antidote to hallucination. A grounded agent that checks your actual revenue numbers before reporting them is reliable. An ungrounded one is making things up.

Seen in: General — referenced across FFN essays

Guardrails #

Rules and constraints that prevent an AI agent from doing something you didn't intend — spending limits, file access restrictions, approval requirements for sensitive actions.

The difference between a useful agent and an expensive liability. An agent without guardrails is a junior employee with admin access and no supervision.
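A spending limit is the simplest guardrail to picture. A minimal sketch — in practice this check sits between the agent and anything that costs money:

```python
class BudgetGuard:
    """Refuse any action once cumulative spend would exceed the limit."""

    def __init__(self, limit_usd):
        self.limit = limit_usd
        self.spent = 0.0

    def approve(self, cost_usd):
        if self.spent + cost_usd > self.limit:
            return False  # block the action; escalate to a human instead
        self.spent += cost_usd
        return True

guard = BudgetGuard(limit_usd=10.00)
guard.approve(6.00)  # True -- within budget
guard.approve(6.00)  # False -- would push spend to $12
```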

Seen in: Field Note #001: Agent Swarm

Hallucination #

When an AI generates plausible-sounding information that is factually wrong. It doesn't know it's wrong — the output looks exactly like a correct response.

The #1 risk in production AI systems. Every agent output that touches numbers, facts, or decisions needs verification. Trust but verify isn't optional.

Seen in: General — referenced across FFN essays

Inference #

The act of running a prompt through an AI model and getting a response. Every time an agent "thinks" — reads a file, decides what to do next, generates output — that's inference.

Every inference costs money. An agent that makes 50 model calls to complete a task pays for 50 inferences. Understanding this shapes how you design your agent workflows.

Seen in: Field Note #001: Agent Swarm

Large Language Model (LLM) #

The AI engine underneath tools like Claude, ChatGPT, and Gemini. A neural network trained on vast text data that predicts what comes next, scaled up to the point where "prediction" looks a lot like "reasoning."

Understanding what an LLM can and can't do is the foundation of everything else. It's not magic. It's pattern matching at extraordinary scale — and knowing its limits is how you use it safely.

Seen in: Field Note #001: Agent Swarm

Latency #

How long it takes to get a response from an AI model, measured from the moment you send a request to the moment you start receiving output.

The difference between a tool that feels instant and one that feels broken. For interactive agents, latency under 2 seconds matters. For batch jobs running overnight, it doesn't.
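Measuring it is a one-liner wrapper around any call. A sketch — the lambda stands in for a real model request:

```python
import time

def timed(fn, *args, **kwargs):
    """Run a call and report how long it took, in seconds."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    return result, elapsed

# Wrap any model call (a stand-in here) to see if it stays under your budget.
result, seconds = timed(lambda: "response")
interactive_ok = seconds < 2.0
```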

Seen in: Field Note #001: Agent Swarm

MCP (Model Context Protocol) #

Anthropic's open standard for connecting AI agents to external data sources and tools. One protocol that lets any agent talk to any tool — databases, calendars, file systems, APIs.

The USB-C of AI integrations. Before MCP, every tool needed custom integration code. Now you connect once and every agent can use it. We run 8 MCP servers across our agent system.

Seen in: Field Note #001: Agent Swarm

Multi-Agent Orchestration #

Coordinating multiple AI agents to work together on complex tasks — assigning roles, managing dependencies, routing outputs between agents, and handling failures.

This is how you scale from one agent to a system. A single agent automates a task. Orchestration automates a business function.
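At its core, orchestration is running agents in dependency order and routing outputs downstream. A minimal sketch with a hypothetical three-agent build, where research feeds both writing and design:

```python
def run_pipeline(agents, deps):
    """Run agents in dependency order, passing each its upstream outputs.

    agents: name -> function(upstream_results dict)
    deps:   name -> list of prerequisite agent names
    """
    done = {}
    remaining = set(agents)
    while remaining:
        ready = [n for n in remaining if all(d in done for d in deps.get(n, []))]
        if not ready:
            raise RuntimeError("circular dependency")
        for name in ready:
            done[name] = agents[name]({d: done[d] for d in deps.get(name, [])})
            remaining.remove(name)
    return done

results = run_pipeline(
    {"research": lambda up: "notes",
     "write":    lambda up: f"draft from {up['research']}",
     "design":   lambda up: f"layout from {up['research']}"},
    {"write": ["research"], "design": ["research"]},
)
```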

Seen in: Field Note #001: Agent Swarm

Prompt Engineering #

The craft of writing instructions that get reliable, useful output from an AI. Not just "asking nicely" — structuring context, examples, constraints, and output formats so the model does exactly what you need.

The highest-leverage skill in AI. Better prompts beat better models. I've seen a well-prompted Haiku outperform a lazily prompted Opus. The instructions matter more than the engine.
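"Structuring context, examples, constraints, and output formats" can be as mechanical as assembling named sections. A sketch — the sales numbers are invented for illustration:

```python
def build_prompt(task, context="", examples=None, output_format=""):
    """Assemble a prompt from the parts that matter most: context, examples, format."""
    parts = [f"Task: {task}"]
    if context:
        parts.append(f"Context:\n{context}")
    for ex in examples or []:
        parts.append(f"Example:\n{ex}")
    if output_format:
        parts.append(f"Respond in this format:\n{output_format}")
    return "\n\n".join(parts)

prompt = build_prompt(
    "Summarize this week's sales numbers for the owner.",
    context="Mon $810, Tue $920, Wed $760",
    output_format="Three bullet points, then one action item.",
)
```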

Seen in: Field Note #001: Agent Swarm

RAG (Retrieval-Augmented Generation) #

Giving an AI access to a knowledge base so it can look up facts before answering. The model retrieves relevant documents first, then generates a response grounded in that data.

How you make AI accurate about YOUR data without fine-tuning. Feed it your company docs, your SOPs, your financial records — and it answers from evidence, not from guessing.
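The retrieve-then-generate loop fits in a few lines. A toy sketch that ranks documents by word overlap — real systems use embeddings for this step — then puts the evidence in front of the question:

```python
def retrieve(query, docs, k=1):
    """Rank docs by word overlap with the query (real systems use embeddings)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def grounded_prompt(query, docs):
    """Put retrieved evidence in front of the question so the model answers from it."""
    context = "\n".join(retrieve(query, docs))
    return (f"Using only the sources below, answer the question.\n\n"
            f"Sources:\n{context}\n\nQuestion: {query}")

docs = ["Q3 revenue was $42,000, up 12% from Q2.",
        "The cafe opens at 7am daily."]
prompt = grounded_prompt("what was q3 revenue", docs)
```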

Seen in: General — referenced across FFN essays

Structured Output #

Making an AI return data in a specific format — JSON, tables, schemas — instead of free-form text. You define the shape of the response, and the model fills it in.

The bridge between AI output and software that needs to consume it. An agent that returns JSON can feed a dashboard. An agent that returns prose needs a human to read it.
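On the receiving end, that means parsing the reply and checking its shape before anything downstream touches it. A sketch with a hypothetical three-field schema:

```python
import json

REQUIRED_FIELDS = {"task": str, "status": str, "cost_usd": float}

def parse_agent_reply(raw):
    """Parse a model's JSON reply and verify it matches the shape we asked for."""
    data = json.loads(raw)
    for field, kind in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), kind):
            raise ValueError(f"bad or missing field: {field}")
    return data

reply = '{"task": "weekly report", "status": "done", "cost_usd": 0.31}'
record = parse_agent_reply(reply)  # now safe to feed a dashboard or database
```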

Seen in: Field Note #001: Agent Swarm

Token #

The unit of text AI models process — roughly 4 characters or 3/4 of a word. "Frontier Field Notes" is about 5 tokens. Everything in AI is measured, priced, and limited in tokens.

Understanding tokens is understanding costs. A 1,000-word essay is ~1,300 tokens of input. Knowing this lets you estimate costs before you run anything.
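That estimate is one line of arithmetic. A sketch — the $3 per million input rate is a made-up example, not a quoted price:

```python
def estimate_tokens(words):
    """Rough token count from a word count: about 4 tokens per 3 words."""
    return round(words * 4 / 3)

def estimate_input_cost(words, price_per_million=3.00):
    """Input cost for a document at a hypothetical $3 per million input tokens."""
    return estimate_tokens(words) * price_per_million / 1_000_000

estimate_tokens(1_000)  # ~1333 tokens, in line with the ~1,300 figure above
```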

Seen in: Field Note #001: Agent Swarm

Webhook #

An automatic notification sent from one system to another when something happens. When a new member signs up, a webhook can tell your agent to send a welcome sequence immediately.

How your agents find out about events in real-time instead of constantly checking. Polling wastes API calls. Webhooks deliver the news the moment it happens.
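Because webhooks arrive unsolicited, you verify they're genuine before acting. Most providers sign the payload; a sketch of the common HMAC-SHA256 pattern (header names and signature formats vary by provider):

```python
import hashlib
import hmac

def verify_webhook(payload: bytes, signature: str, secret: bytes) -> bool:
    """Check a webhook's HMAC-SHA256 signature so forged events get rejected."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

secret = b"shared-secret"
payload = b'{"event": "member.signup", "email": "new@example.com"}'
sig = hmac.new(secret, payload, hashlib.sha256).hexdigest()  # what the sender computes

verify_webhook(payload, sig, secret)        # True -- genuine event
verify_webhook(payload, "bad-sig", secret)  # False -- forged or tampered
```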

Seen in: General — referenced across FFN essays

Get field notes from the frontier.

Two essays a month. AI-first strategy, real numbers, tools you can ship today.