AI agents struggle with tasks that require interacting with the live web — fetching a competitor’s pricing page, extracting structured data from a JavaScript-heavy dashboard, or automating a multi-step workflow on a real site. The tooling for these jobs is fragmented, forcing teams to stitch together separate providers for search, browser automation, and content retrieval.
TinyFish, the Palo Alto startup that previously released a standalone web agent, has now launched what it calls a complete infrastructure platform for AI agents operating on the web. The launch consists of four products unified under a single API key and credit system: Web Agent, Web Search, Browser, and Web Fetch.
What TinyFish Is Shipping
Each product has a specific function:
- Web Agent — Executes autonomous multi-step workflows on real websites. The agent can navigate sites, complete forms, follow flows and return structured results, all without scripting.
- Web Search — Returns structured search results as clean JSON using a custom Chromium engine, with a P50 latency of approximately 488ms. For the same search, competitors in this market average more than 2,800ms.
- Browser — Provides managed stealth Chrome sessions via the Chrome DevTools Protocol (CDP), with a sub-250ms cold start. Competitors typically take 5–10 seconds. The browser includes 28 anti-bot mechanisms built at the C++ level — not via JavaScript injection, which is the more common and more detectable approach.
- Web Fetch — Converts any URL into clean Markdown, HTML, or JSON with full browser rendering. Unlike the native fetch tools built into many AI coding agents, TinyFish Fetch strips irrelevant markup — CSS, scripts, navigation, ads, footers — and returns only the content the agent needs.
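Clean JSON output means an agent can consume search results programmatically rather than scraping a results page. The response shape below is purely illustrative; the article does not document TinyFish's actual schema, so the field names here are assumptions:

```python
import json

# Illustrative only: a plausible structured-search payload, NOT the
# documented TinyFish response schema.
raw = """{
  "results": [
    {"title": "Acme Pricing", "url": "https://acme.example/pricing",
     "snippet": "Plans start at $29/month."},
    {"title": "Acme Docs", "url": "https://acme.example/docs",
     "snippet": "Developer documentation."}
  ]
}"""

data = json.loads(raw)
# Structured fields can be used directly, with no HTML parsing step.
urls = [r["url"] for r in data["results"]]
print(urls)
```

For an agent pipeline, the practical difference is that every downstream step works on typed fields instead of fragile selectors.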
Tokens in Agent Pipelines
Context window pollution is a persistent performance problem in agent pipelines. When an AI agent uses a standard web fetch tool, it typically pulls the entire page — including thousands of tokens of navigation elements, ad code, and boilerplate markup — and puts all of it into the model’s context window before reaching the actual content.
TinyFish Fetch addresses this by rendering pages in a full browser and returning only the clean content, as Markdown or JSON. The company’s benchmarks show CLI-based operations using approximately 100 tokens per operation versus roughly 1,500 tokens when routing the same workflow over MCP — a reduction the company puts at 87% per operation.
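The savings from stripping boilerplate are easy to demonstrate with a toy example. The sketch below is not TinyFish's implementation; it is a minimal illustration using Python's standard-library HTML parser that drops script, style, nav, and footer subtrees and keeps only the visible text:

```python
from html.parser import HTMLParser

class ContentExtractor(HTMLParser):
    """Keep visible text; skip script/style/nav/footer/aside subtrees."""
    SKIP = {"script", "style", "nav", "footer", "aside"}

    def __init__(self):
        super().__init__()
        self.depth = 0      # nesting depth inside skipped elements
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

page = """
<html><head><style>body{font:12px}</style></head><body>
<nav><a href="/">Home</a><a href="/about">About</a></nav>
<article><h1>Pricing</h1><p>Pro plan: $49/month.</p></article>
<script>trackPageView();</script>
<footer>(c) 2026 Example Corp</footer>
</body></html>
"""

extractor = ContentExtractor()
extractor.feed(page)
clean = " ".join(extractor.chunks)

print(clean)
# Very rough token proxy: whitespace-separated words, before vs. after.
print(len(page.split()), "->", len(clean.split()))
```

Even on this tiny page, only the article text survives; on a real page with megabytes of navigation and ad markup, the ratio is far more dramatic.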
The key architectural difference: MCP returns output directly into the agent’s context window, while TinyFish writes to the filesystem and the agent reads only what it needs. This keeps the context window clean across multi-step tasks and enables composability through native Unix pipes and redirects, something sequential MCP round-trips cannot offer.
On complex multi-step tasks, TinyFish reports 2× higher task completion rates using CLI + Skills compared to MCP-based execution.
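The filesystem pattern in plain terms: instead of streaming a full fetch result into the model’s context, the tool writes it to disk and hands the agent a small manifest plus whatever slice it asks for. A minimal sketch of that contract (not TinyFish’s code; the function names and manifest fields are invented for illustration):

```python
import tempfile
from pathlib import Path

def fetch_to_file(url: str, content: str, out_dir: Path) -> dict:
    """Write the full fetch result to disk; return only a small manifest
    that goes into the agent's context window."""
    path = out_dir / "fetch_result.md"
    path.write_text(content, encoding="utf-8")
    return {"url": url, "path": str(path), "bytes": len(content.encode())}

def read_slice(manifest: dict, query: str) -> str:
    """Agent-side: pull in only the matching lines, not the whole file."""
    lines = Path(manifest["path"]).read_text(encoding="utf-8").splitlines()
    return "\n".join(l for l in lines if query.lower() in l.lower())

out_dir = Path(tempfile.mkdtemp())
big_page = "\n".join(
    ["# Competitor pricing", "Basic: $9/month", "Pro: $49/month"]
    + ["(boilerplate line)"] * 500  # stays on disk, never enters context
)

manifest = fetch_to_file("https://example.com/pricing", big_page, out_dir)
print(manifest["bytes"], "bytes on disk")
print(read_slice(manifest, "pro"))  # only the relevant slice is read back
```

Because results live on disk, each step of a multi-step task starts with a near-empty context, and intermediate files can be composed with ordinary shell tools.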
Agent Skill System and the CLI
The TinyFish API is accompanied by two developer-facing components.
The CLI installs with a single command:
npm install @tinyfish/cli
This gives terminal access to all four endpoints — Search, Fetch, Browser, and Agent — directly from the command line.
The Agent Skill is a Markdown instruction file (SKILL.md) that teaches AI coding agents — including Claude Code, Cursor, Codex, OpenClaw, and OpenCode — how to use the CLI. It installs with:
npx skills add https://github.com/tinyfish-io/skills --skill tinyfish
Once installed, the agent knows how to use TinyFish without any manual SDK configuration or integration. A developer can ask the coding agent in plain language — “get competitor pricing from these five sites” — and the agent autonomously recognizes the TinyFish skill, calls the appropriate CLI commands, and writes structured output to the filesystem, without the developer writing integration code.
The company says MCP will also be supported, but recommends CLI + Skills for heavy, multi-step web execution, with MCP better suited to discovery.
Why a Unified Stack?
TinyFish built Search, Fetch, Browser, and Agent entirely in-house, which the company positions as its key differentiator. Browserbase, for instance, relies on Exa for its search layer rather than owning one. Firecrawl offers search, crawl, and an agent endpoint, though TinyFish argues that endpoint is unreliable for many tasks.
Owning the infrastructure is about more than avoiding vendor dependence. When one team controls every layer, the whole system can be optimized for a single outcome. When TinyFish’s agent succeeds or fails using its own search and fetch, the company gets end-to-end signal at every step — what was searched, what was fetched, and exactly where failures occurred. That signal is unavailable to companies that run their search or fetch layer through a third-party API.
Teams that integrate multiple providers also pay an integration tax: search finds a page the fetch layer can’t render, fetch returns content the agent can’t parse, context is lost between browser sessions. The result is custom glue code, retry logic, fallback handlers, and validation layers — engineering work that adds up. A unified stack removes those component boundaries.
A unified platform also keeps session identity consistent across steps: the same IP, fingerprint, and cookies persist through the entire workflow. Separate tools, by contrast, appear to the target site as multiple unrelated clients, which increases the chance of detection.
Key Takeaways
- TinyFish moves from a single web agent to a four-product platform — Web Agent, Web Search, Web Browser, and Web Fetch — all accessible under one API key and one credit system, eliminating the need to manage multiple providers.
- The CLI + Agent Skill combination lets AI coding agents use the live web autonomously — install once and agents like Claude Code, Cursor, and Codex automatically know when and how to call each TinyFish endpoint, with no manual integration code.
- CLI-based operations produce 87% fewer tokens per task than MCP, and write output directly to the filesystem instead of dumping it into the agent’s context window — keeping context clean across multi-step workflows.
- Every layer of the stack — Search, Fetch, Browser, and Agent — is built in-house, giving end-to-end signals when a task succeeds or fails, a data feedback loop that cannot be replicated by assembling third-party APIs.
- TinyFish maintains a single session identity across an entire workflow — same IP, fingerprint, and cookies — whereas separate tools appear to target sites as multiple unrelated clients, increasing detection risk and failure rates.
Getting Started
TinyFish offers 500 free steps with no credit card required at tinyfish.ai. Open-source recipes and skill files are available at github.com/tinyfish-io/tinyfish-cookbook, and CLI documentation is at docs.tinyfish.ai/cli.
Note: TinyFish’s leadership provided details and support for this article.

