Builders, APIs & BasinKit
From APIs and MCP to private agents, observability, and BasinKit. Build the technical layer behind production systems, not just demos.
LLM Fundamentals
How transformers work, tokenization, context windows, temperature, and sampling. The foundational understanding every AI engineer needs before touching an API.
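Temperature is the easiest of these to make concrete. A minimal sketch of temperature sampling over a toy logit vector (the logit values are made up for illustration; real decoders work over full vocabularies):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=None):
    """Scale logits by 1/temperature, softmax, then sample one index."""
    rng = rng or random.Random(0)
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample an index in proportion to the probabilities.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i, probs
    return len(probs) - 1, probs

# Toy logits for three candidate tokens.
logits = [2.0, 1.0, 0.1]
_, cold = sample_with_temperature(logits, temperature=0.2)
_, hot = sample_with_temperature(logits, temperature=2.0)
# Low temperature sharpens the distribution; high temperature flattens it.
```

The takeaway: temperature does not change which token is most likely, only how concentrated the probability mass is around it.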
API Mastery
Claude API deep dive -- authentication, messages API, tool use, streaming, structured outputs, and error handling. Build reliable integrations that handle real-world edge cases.
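One reliability pattern this module covers is retrying transient failures with backoff. A minimal, SDK-agnostic sketch; the `RuntimeError` here is a stand-in for whatever rate-limit or overload exception your client library raises:

```python
import time

def with_retries(call, max_attempts=4, base_delay=0.01):
    """Retry a flaky call with exponential backoff (sketch).
    RuntimeError stands in for rate-limit / overload errors."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))

# Simulate an endpoint that fails twice (e.g. 429s) before succeeding.
attempts = {"n": 0}
def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("overloaded")
    return "ok"

result = with_retries(flaky_call)
```

In production you would catch the SDK's specific exception types rather than a blanket class, and add jitter to the delay so concurrent clients don't retry in lockstep.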
Prompt Engineering for Developers
Beyond basic prompting. System prompts, few-shot patterns, chain-of-thought, constitutional AI, output parsing, and the prompt engineering patterns that make production systems reliable.
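Few-shot patterns are often just careful message assembly. A sketch of building a few-shot message list (the examples and task are invented; the role/content schema mirrors common chat APIs):

```python
# Hypothetical worked examples shown to the model before the real task.
FEW_SHOT = [
    ("Extract the city: 'Flights from Lagos leave at noon.'", "Lagos"),
    ("Extract the city: 'She moved to Nairobi in 2019.'", "Nairobi"),
]

def build_messages(task_input):
    """Interleave example user/assistant turns, then append the real task."""
    messages = []
    for question, answer in FEW_SHOT:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": task_input})
    return messages

msgs = build_messages("Extract the city: 'The office in Osaka closed.'")
```

The point of the pattern: the model imitates the demonstrated input/output format, which makes the response far easier to parse downstream.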
Building Your First AI Agent
The agent loop: observe, reason, act, iterate. Build a working agent from scratch with tool calling, state management, error recovery, and human-in-the-loop control.
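The observe-reason-act loop can be sketched in a few lines. Here `decide` is a stand-in for the model call, and the scripted decider below exists only to make the loop runnable without an API key:

```python
def run_agent(goal, tools, decide, max_steps=5):
    """Minimal agent loop (sketch). `decide` maps history to either a
    tool invocation or a final answer."""
    history = [("goal", goal)]
    for _ in range(max_steps):
        action = decide(history)              # reason
        if action["type"] == "final":
            return action["answer"]
        tool = tools[action["tool"]]          # act
        try:
            observation = tool(**action["args"])
        except Exception as exc:              # error recovery: feed failure back
            observation = f"tool error: {exc}"
        history.append((action["tool"], observation))  # observe, iterate
    return "step budget exhausted"

# Toy tool plus a scripted decider standing in for the LLM.
tools = {"add": lambda a, b: a + b}
def decide(history):
    if history[-1][0] == "goal":
        return {"type": "tool", "tool": "add", "args": {"a": 2, "b": 3}}
    return {"type": "final", "answer": f"result is {history[-1][1]}"}

answer = run_agent("add 2 and 3", tools, decide)
```

Note the two production details already present even in this toy: tool errors are fed back as observations instead of crashing the loop, and `max_steps` bounds runaway iteration. Human-in-the-loop control slots in as a check before the `tool(**action["args"])` call.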
MCP Deep Dive
Model Context Protocol -- the open standard for connecting AI to tools and data. Build MCP servers, integrate MCP clients, and understand the protocol that's replacing one-off custom tool integrations.
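Illustrative only: the toy handler below mimics the shape of MCP's tool-listing and tool-call interactions over JSON-RPC-style messages. The real protocol and its official SDKs define the exact schema; every name here is a sketch, not the spec:

```python
import json

# Toy registry standing in for an MCP server's exposed tools.
TOOLS = {
    "read_file": {
        "description": "Read a small text file",
        "handler": lambda path: f"<contents of {path}>",
    }
}

def handle(request_json):
    """Dispatch MCP-style 'tools/list' and 'tools/call' requests (sketch)."""
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        result = [{"name": n, "description": t["description"]}
                  for n, t in TOOLS.items()]
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = tool["handler"](**req["params"]["arguments"])
    else:
        return json.dumps({"id": req["id"], "error": "unknown method"})
    return json.dumps({"id": req["id"], "result": result})

listing = json.loads(handle(json.dumps(
    {"id": 1, "method": "tools/list", "params": {}})))
```

The structural idea is what matters: a server advertises typed tools, and any compliant client can discover and invoke them without a bespoke integration per tool.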
Multi-Agent Systems
Agent-to-agent communication, delegation protocols, task decomposition, consensus mechanisms, and the coordination patterns we use to run hundreds of agents in production.
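Delegation and task decomposition reduce to a simple shape: a coordinator splits work and routes each piece to a specialist. A sketch with plain callables standing in for worker agents (all names invented for illustration):

```python
def coordinate(task, decompose, workers):
    """Decompose a task and delegate each subtask to its assigned worker."""
    results = {}
    for subtask in decompose(task):
        worker = workers[subtask["assignee"]]
        results[subtask["id"]] = worker(subtask["input"])
    return results

# Stand-ins for specialist agents.
workers = {
    "researcher": lambda text: f"notes on {text}",
    "writer": lambda text: f"draft about {text}",
}

def decompose(task):
    """A scripted planner; in practice a model produces this plan."""
    return [
        {"id": "t1", "assignee": "researcher", "input": task},
        {"id": "t2", "assignee": "writer", "input": task},
    ]

out = coordinate("pricing page", decompose, workers)
```

Real coordination layers add what this sketch omits: parallel execution, retries on worker failure, and a consensus or review step before results merge.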
RAG Architecture
Retrieval-Augmented Generation done right. Embedding strategies, vector databases, chunking, reranking, hybrid search, and the common failure modes that make RAG systems unreliable.
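The retrieval half can be sketched end to end with a toy embedding. The bag-of-words "embedding" below exists only so the example runs offline; real systems use learned embedding models and a vector database:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' (illustration only)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, k=2):
    """Rank chunks by similarity to the query; return the top k."""
    q = embed(query)
    scored = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return scored[:k]

chunks = [
    "invoices are emailed on the first of the month",
    "the vector database stores chunk embeddings",
    "reranking reorders retrieved chunks by relevance",
]
top = retrieve("how are chunk embeddings stored", chunks, k=1)
```

Notice a classic failure mode already visible here: "stored" does not match "stores" under naive tokenization, which is exactly why hybrid search and reranking exist.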
Claude Code & Vibe Coding
Ship with AI as your co-developer. Claude Code workflows, CLAUDE.md conventions, agentic coding patterns, and how to build at 30x velocity without sacrificing code quality.
Production Deployment
From working prototype to production system. Monitoring, observability, cost management, rate limiting, caching, safety guardrails, and the operational concerns that separate demos from real products.
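Rate limiting is one of those operational concerns with a small core. A sliding-window limiter sketch (the limit and window values are arbitrary; timestamps are injected so the example is deterministic):

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` calls per `window` seconds (sketch)."""

    def __init__(self, limit, window):
        self.limit, self.window = limit, window
        self.calls = deque()  # timestamps of recent allowed calls

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False

limiter = SlidingWindowLimiter(limit=2, window=1.0)
decisions = [limiter.allow(now=0.0), limiter.allow(now=0.1),
             limiter.allow(now=0.2), limiter.allow(now=1.5)]
# Third call is rejected; the fourth succeeds once the window slides.
```

The same shape generalizes: swap call counts for token counts and you have a budget guardrail against runaway API spend.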
Capstone: Build a Production Agent
Put it all together. Design, build, and deploy a production-grade AI agent that uses tools, manages state, handles failures gracefully, and solves a real problem.
Builders should leave with code, artifacts, and a reviewable capstone.
This path now has a real completion model: code sample, mini-project, quiz checkpoint, and a final artifact bundle. The goal is not to “understand AI.” The goal is to ship something inspectable.
Working implementation patterns
Each technical block should resolve into something a builder can read and adapt quickly.
- tool-calling operator skeleton
- MCP server starter pattern
- retrieval pipeline example
Projects that force a real operating shape
The middle of the path should stop being theory and start behaving like a small build sprint.
- approval-aware intake agent
- research handoff workflow
- multi-agent delegation exercise
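The first of these projects hinges on one decision point: where the human approval boundary sits. A sketch of that boundary, with invented risk labels and an `approve` callback standing in for a real review UI:

```python
def intake_agent(request, classify, approve):
    """Approval-aware intake (sketch): auto-handle low-risk requests,
    route risky ones through a human `approve` callback before acting."""
    risk = classify(request)
    if risk == "low":
        return {"action": "auto_handled", "request": request}
    if approve(request):  # the human-in-the-loop boundary
        return {"action": "handled_with_approval", "request": request}
    return {"action": "escalated", "request": request}

# Toy classifier: anything touching money is high risk.
classify = lambda r: "high" if "refund" in r else "low"

auto = intake_agent("update email address", classify, approve=lambda r: True)
gated = intake_agent("refund $500", classify, approve=lambda r: False)
```

The design question the exercise forces: the classifier decides *what* needs approval, but a denied approval must still terminate somewhere safe, which is why the escalation branch exists.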
A reviewable deployment packet
Completion should produce a package that another builder can inspect, run, and critique.
- architecture diagram
- repo link + run instructions
- failure modes and guardrails
Quiz and review prompts
Builders should be able to explain their choices before they ship the capstone.
- why agent vs workflow here?
- where is the human approval boundary?
- what fails first under real load?
Use the platform and research tracks while the GitHub artifact layer comes online.
The learning architecture is now set: code samples, quiz checkpoints, mini-projects, and capstones. The next live buildout is the shared learning-artifacts repo, starting with builder-grade examples that map directly to this path and resolve into runnable outputs.