Glossary

Plain-English definitions for common vibe-coding terms. Updated: September 17, 2025.

Diff

A preview of changes between two versions of a file. Lines starting with + will be added; lines starting with - will be removed. Read a diff like a pull request before applying it.
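For instance, a small unified diff over a hypothetical greet function (the filename and code are invented for illustration):

```diff
--- a/greet.py
+++ b/greet.py
@@ -1,2 +1,3 @@
 def greet(name):
-    print("Hello " + name)
+    print(f"Hello, {name}!")
+    return name
```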

Patch

One or more diffs bundled together, typically the exact text you can apply to modify files.

Hunk

A contiguous block of changes within a diff. The header (e.g., @@ -1,4 +1,7 @@) shows the affected line ranges: the old file's range starts at line 1 and spans 4 lines; the new file's starts at line 1 and spans 7.

Repo-aware chat

Chat grounded in your repository’s files, not generic examples—answers reference your code, paths, and types.

Agent

A system that plans actions, calls tools (like file edits or web fetches), observes results, and iterates toward a goal.

Plan & Execute (plan-then-apply)

An agent workflow that first outlines steps (plan) and then performs them (execute), often safer for large changes.

Tool use

When an AI calls structured functions—e.g., “read file,” “write file,” “run tests”—instead of only producing text.
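A minimal sketch of the host side of tool use, assuming a hypothetical TOOLS registry and call format (real assistants typically exchange structured schemas such as JSON function calls):

```python
# Hypothetical tool registry: names mapped to callables the host can run.
TOOLS = {
    "read_file": lambda path: open(path).read(),
    "run_tests": lambda: "all tests passed",  # stub result for illustration
}

def dispatch(call):
    """Execute a structured tool call shaped like {'name': ..., 'args': {...}}."""
    tool = TOOLS[call["name"]]
    return tool(**call.get("args", {}))

print(dispatch({"name": "run_tests"}))
```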

Multi-file edit

A coordinated set of changes across multiple files. Good assistants explain the plan and provide reviewable diffs.

RAG (Retrieval-Augmented Generation)

Supplying relevant documents or code snippets to the model at answer time so responses are grounded in the right context.
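A toy sketch of the retrieve-then-answer pattern, using naive keyword overlap in place of real embedding search (the documents and query are invented):

```python
def retrieve(query, documents, top_k=2):
    """Naive keyword-overlap retrieval (real systems use embeddings)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

docs = [
    "auth.py handles login and session tokens",
    "billing.py computes invoices",
    "README explains project setup",
]
# Supply the retrieved context alongside the question at answer time.
context = retrieve("how does login work", docs)
prompt = "Answer using only this context:\n" + "\n".join(context)
print(prompt)
```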

Context window

The maximum amount of text (tokens) the model can consider at once—larger windows can read more files but still benefit from focused prompts.

Embedding

A numeric representation of text or code used for semantic search (e.g., to find relevant files for RAG).
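Semantic search compares embeddings by direction, most often with cosine similarity. A minimal sketch with invented 3-dimensional vectors (real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors: doc_a points roughly the same way as the query, doc_b does not.
query = [0.9, 0.1, 0.0]
doc_a = [0.8, 0.2, 0.1]
doc_b = [0.0, 0.1, 0.9]
print(cosine_similarity(query, doc_a))
print(cosine_similarity(query, doc_b))
```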

Prompt

Your instruction to the model. Clear prompts specify files, goals, constraints, and success checks.

System prompt

Hidden or fixed instructions that shape the assistant’s behavior (tone, safety, tools). You usually can’t see it in IDEs.

Temperature

Controls randomness. Lower values are more deterministic; higher values produce more varied text but can drift.
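Mechanically, temperature divides the model's logits before the softmax that produces sampling probabilities. A minimal sketch with invented logits:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then softmax.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more varied output).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.5))  # peaked on the top logit
print(softmax_with_temperature(logits, 2.0))  # flatter spread
```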

Hallucination

When a model outputs something that looks plausible but is false (e.g., inventing APIs or files that don’t exist).

Refactor

Change the structure of code without changing behavior—often across multiple files—with tests to confirm safety.

Test scaffolding

Minimal tests an assistant creates to verify a change. Start simple and extend by hand if needed.

Evals (lightweight evaluations, benchmarks)

Small, repeatable checks (manual or automated) used to compare assistants—e.g., “add a route,” “rename a util,” “write a unit test.”

Latency budget

The time you’re willing to wait for a step or answer. Keeping tasks small helps stay under budget.

Token

A chunk of text the model processes (roughly four characters in English). Context limits and costs are measured in tokens.
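The four-characters rule of thumb gives a quick back-of-envelope estimate; real tokenizers (BPE and similar) vary by model, so use the model's own tokenizer for exact counts. A minimal sketch of the heuristic:

```python
def rough_token_estimate(text):
    """Very rough heuristic: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

print(rough_token_estimate("Refactor the login handler to use async/await."))
```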

Chunking

Splitting large files/docs into smaller pieces for retrieval or processing. Good chunking improves relevance.
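A minimal fixed-size chunker with overlap, as a sketch only; real chunkers usually split on paragraph or sentence boundaries rather than fixed character offsets:

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping chunks of roughly chunk_size characters.

    Overlap keeps content that straddles a boundary retrievable
    from either neighboring chunk.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[start:start + chunk_size] for start in range(0, len(text), step)]

doc = "word " * 100  # 500 characters of filler text
print(len(chunk_text(doc, chunk_size=120, overlap=20)))
```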

Grounding

Tying answers to verifiable sources—your repo, docs, or data—so outputs cite where things came from.

Guardrails

Rules or checks that constrain what a model can do (e.g., lint/test gates, file allowlists).
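A minimal sketch of one such check, a file allowlist; the directory names and function are hypothetical:

```python
from pathlib import Path

# Hypothetical allowlist: the assistant may only edit files under src/ and tests/.
ALLOWED_ROOTS = (Path("src"), Path("tests"))

def is_edit_allowed(path_str):
    """Return True if the proposed edit path falls under an allowed root."""
    path = Path(path_str)
    return any(root in path.parents or path == root for root in ALLOWED_ROOTS)

print(is_edit_allowed("src/app/main.py"))  # True
print(is_edit_allowed(".env"))             # False
```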
