BotAnon

Anonymous web access for AI agents, scrapers & autonomous bots

Your AI agents need to fetch data from the web. But every request exposes your server IP, your location, and your identity. BotAnon is the missing layer between your agent and the internet.

One API call. Multiple proxy strategies. Automatic failover. Zero exposure. Whether you're running Claude Code, Hermes agents, OpenClaw scrapers, or custom bots — BotAnon keeps them anonymous and unblocked.

🔄 Proxy Rotation

Automatic failover through multiple proxy providers. If one fails, the next kicks in seamlessly.

💾 Smart Caching

7-day response cache reduces costs and latency. Force refresh when you need fresh data.

🌐 JS Rendering

Full headless browser via Cloudflare for JavaScript-heavy SPAs and dynamic content.

🛡 IP Anonymity

Your agent's requests come from rotating IPs worldwide. Your server is never exposed.

Built for AI Agents

Claude Code / MCP

AI-Powered Research Agents

Your Claude Code agent needs to scrape competitor pricing, gather market data, or pull content from multiple sources. BotAnon ensures every fetch is anonymous — no rate limits, no IP bans, no fingerprinting back to your infrastructure.

Hermes / Autonomous

Autonomous Bot Workflows

Running Hermes agents or custom autonomous systems that make hundreds of web requests? BotAnon rotates through proxy chains automatically, handles failures gracefully, and caches repeated lookups so your agents stay fast and invisible.

OpenClaw / Scrapers

Data Collection Pipelines

Scraping Google results, monitoring websites, extracting contact info, pulling WHOIS data? BotAnon handles the hard parts: proxy rotation, CAPTCHAs via JS rendering, connection pooling, and automatic retries with exponential backoff on failures.
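Exponential backoff is the retry pattern mentioned above: wait 1s, then 2s, 4s, and so on between attempts. BotAnon applies it server-side; as an illustrative client-side sketch (the helper names and schedule here are assumptions, not part of the API), the same idea looks like this in Python:

```python
import time


def backoff_delays(retries, base=1.0, cap=30.0):
    """Exponential backoff schedule: base, 2*base, 4*base, ... capped at `cap`."""
    return [min(base * (2 ** i), cap) for i in range(retries)]


def fetch_with_retries(get, url, retries=4):
    """Retry `get(url)` on 5xx or missing results, sleeping between attempts.

    `get` is any callable returning an object with a `status_code`,
    e.g. a requests.get wrapper pointed at the BotAnon endpoint.
    """
    last = None
    for delay in backoff_delays(retries):
        last = get(url)
        if last is not None and last.status_code < 500:
            return last
        time.sleep(delay)
    return last
```

The cap keeps a long outage from producing multi-minute sleeps between attempts.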

Any Agent / Any Language

Simple REST API Integration

One HTTP call from any language — PHP, Python, Node.js, Go, Rust. Send us a URL, get back the content with full timing data, HTTP codes, and headers. Your agent code stays clean while BotAnon handles the proxy complexity.

How It Works

1. Your Agent Calls BotAnon

Simple GET or POST to our API with the target URL and your API key

2. We Route Through Proxies

Request goes through our proxy chain — Cloudflare Workers, residential IPs, or premium APIs

3. Data Returned Anonymously

Full response with body, headers, timing, and HTTP codes. Cached for 7 days. Your IP never exposed.

About BotAnon

BotAnon was born from a simple problem: every project we built needed proxy logic. WHOIS lookups, social media scrapers, SEO tools, fraud detection systems — each one had its own hardcoded proxy setup, its own retry logic, its own failure handling. When a proxy provider went down, we had to fix it in ten different codebases.

So we built one service to rule them all. BotAnon centralises proxy management into a single API that any project, any language, and any AI agent can call. The proxy chain handles failover automatically. The cache eliminates redundant requests. The logging tells you exactly what's happening. And your agents stay completely anonymous.

We run multiple proxy strategies simultaneously — free Cloudflare Workers for basic fetches, premium APIs like Zyte and ScrapeOwl for tough targets, residential IPs for sites that block datacenters, and full headless browser rendering for JavaScript-heavy pages. Your agent doesn't need to know which one works best — BotAnon figures it out and falls back automatically.

API Usage

GET — Simple fetch
GET /api/fetch?url=https://example.com&apikey=YOUR_KEY&tag=my_agent
POST — Complex request with POST data
POST /api/fetch
{
  "url": "https://example.com/api/search",
  "apikey": "YOUR_KEY",
  "method": "POST",
  "post_data": "{\"query\": \"AI agents\"}",
  "post_type": "json",
  "tag": "research_agent",
  "ua": "random"
}
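The same POST can be issued from Python with requests. This sketch wraps the JSON body above in a helper; build_payload and search are illustrative names, not part of the API. Note that post_data is a JSON string, not a nested object:

```python
import json

import requests


def build_payload(query, apikey="YOUR_KEY"):
    """Mirror the POST body above; post_data is a JSON *string*."""
    return {
        "url": "https://example.com/api/search",
        "apikey": apikey,
        "method": "POST",
        "post_data": json.dumps({"query": query}),
        "post_type": "json",
        "tag": "research_agent",
        "ua": "random",
    }


def search(query, apikey="YOUR_KEY"):
    """Send the request through BotAnon and return the parsed response."""
    resp = requests.post("https://botanon.com/api/fetch",
                         json=build_payload(query, apikey))
    return resp.json()
```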
Python example
# Works with any HTTP library
import requests

resp = requests.get("https://botanon.com/api/fetch", params={
    "url": "https://target-site.com/data",
    "apikey": "YOUR_KEY",
    "tag": "my_agent",
})
data = resp.json()
print(data["body"])  # The scraped content
print(data["proxy"]["driver"])  # Which proxy was used

Response Format

{
  "success": true,
  "request_id": "a7c3e9f2-4b1d-...",
  "http_code": 200,
  "body": "<html>...",
  "body_size": 45230,
  "body_compressed_size": 8102,
  "timing": {
    "total_ms": 1230,
    "connect_ms": 150,
    "ssl_ms": 80
  },
  "proxy": {
    "driver": "eris",
    "name": "Eris Worker"
  },
  "cached": false,
  "customer": {
    "requests_today": 1423,
    "requests_this_month": 42350
  }
}
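A small sketch of consuming this response shape in Python; the field names follow the example above, and summarize is an illustrative helper, not part of the API:

```python
def summarize(response: dict) -> str:
    """One-line log entry for a BotAnon response dict (fields as above)."""
    if not response.get("success"):
        return f"request {response.get('request_id', '?')} failed"
    source = "cache" if response.get("cached") else response["proxy"]["name"]
    return (
        f"{response['http_code']} via {source}, "
        f"{response['body_size']} bytes in {response['timing']['total_ms']} ms"
    )
```

For the example response above this yields "200 via Eris Worker, 45230 bytes in 1230 ms".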

Parameters

Parameter        Description
url              Target URL to fetch (required)
apikey           Your API authentication key (required)
tag              Label for tracking — filter your logs by agent, task, or project
proxy            Force a specific proxy: eris, zyte, scrapeowl, residential, browser, none
refresh          Set to 1 to bypass cache and force a fresh request
timeout          Request timeout in seconds (default: 30, max: 120)
browser          Set to 1 to enable full JS rendering via headless browser
ua               Custom User-Agent, or random for a random modern browser UA
method           HTTP method: GET or POST
post_data        Request body for POST requests
post_type        POST encoding: form or json
compress         Set to 1 to receive gzip-compressed response
originator_ip    Pass the end-user IP for abuse tracking
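Parameters combine freely in one call. As a sketch, forcing a fresh JS-rendered fetch with a longer timeout (the target URL and tag are illustrative values):

```python
import urllib.parse

params = {
    "url": "https://spa-heavy-site.example/dashboard",
    "apikey": "YOUR_KEY",
    "browser": 1,    # full JS rendering for a JavaScript-heavy page
    "refresh": 1,    # bypass the 7-day cache
    "timeout": 90,   # seconds (max 120)
    "tag": "render_job",
}
full_url = "https://botanon.com/api/fetch?" + urllib.parse.urlencode(params)
print(full_url)
```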

Compression

Add compress=1 to get gzip-compressed responses (~80% smaller). Most HTTP clients handle decompression automatically.

curl --compressed 'https://botanon.com/api/fetch?url=...&apikey=xxx&compress=1'
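Clients like requests and curl --compressed decompress transparently when the server sets Content-Encoding. A quick local round-trip with Python's gzip module shows why the savings are large on repetitive markup (the sample HTML is made up for illustration):

```python
import gzip

# Repetitive markup compresses extremely well; real pages vary.
html = b"<html>" + b"<div class='row'>item</div>" * 500 + b"</html>"
packed = gzip.compress(html)
print(f"{len(html)} bytes -> {len(packed)} bytes gzipped")
assert gzip.decompress(packed) == html  # lossless round-trip
```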