Most proxy speed tests measure the wrong thing. They ping a single endpoint 10 times, calculate the average, and declare a winner. This tells you almost nothing about how a proxy will perform in a real scraping pipeline. Here's why, and how to test properly.
Why Average Latency Is Misleading
When a proxy provider says "average latency: 200ms," they're typically measuring time-to-first-byte (TTFB) against a fast endpoint like httpbin.org. This number is meaningless for three reasons:
1. **It ignores connection setup** — The first request through a residential proxy includes DNS resolution, TCP handshake, proxy authentication, and peer connection establishment. This is 2-5x slower than subsequent requests.
2. **Average hides the tail** — If 90% of requests take 150ms and 10% take 3,000ms (because the peer node is on a congested home connection), the average is 435ms. But your scraper experiences those 3-second requests as effective downtime. The P95 and P99 matter more than the mean.
3. **It doesn't account for target variability** — Proxy speed to httpbin.org is irrelevant when you're scraping Amazon or Google. What matters is the full round-trip: your server → proxy gateway → residential peer → target → back.
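The arithmetic in the second point is easy to verify with a synthetic sample mirroring the bimodal distribution described above:

```python
import statistics

# 90 fast requests at ~150ms, 10 slow ones at ~3,000ms
latencies_ms = [150] * 90 + [3000] * 10

mean = statistics.mean(latencies_ms)                        # 435.0 -- looks acceptable
p95 = sorted(latencies_ms)[int(len(latencies_ms) * 0.95)]   # 3000 -- the real story

print(f"mean={mean:.0f}ms  p95={p95}ms")
```

The mean lands at a comfortable-sounding 435ms even though one request in ten stalls for three full seconds.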
Metrics That Actually Matter
Here's what to measure and why:
| Metric | What it tells you | How to measure |
|---|---|---|
| P50 TTFB | Typical single-request latency | 50th percentile of time-to-first-byte |
| P95 TTFB | Worst-case you'll hit frequently | 95th percentile — this is your real bottleneck |
| P99 TTFB | Outlier behavior | 99th percentile — affects batch job completion time |
| Connection success rate | Proxy reliability | % of requests that complete without error |
| Throughput (req/sec) | Concurrent performance | Requests per second at your target concurrency |
| Effective bandwidth | Download speed through proxy | MB/s for large page downloads |
| Error categorization | Failure modes | Breakdown of timeouts, 502s, 407s, connection resets |
A Proper Benchmark Script
```python
import requests
import time
import statistics
import json
from concurrent.futures import ThreadPoolExecutor, as_completed


class ProxyBenchmark:
    def __init__(self, proxy_url):
        self.proxy_url = proxy_url
        self.proxies = {'http': proxy_url, 'https': proxy_url}

    def single_request(self, url, timeout=15):
        """Measure a single request with detailed timing."""
        start = time.monotonic()
        try:
            resp = requests.get(url, proxies=self.proxies, timeout=timeout, headers={
                'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'
            })
            elapsed = time.monotonic() - start
            return {
                'status': resp.status_code,
                # resp.elapsed measures until the response headers are parsed --
                # a close approximation of TTFB
                'ttfb': resp.elapsed.total_seconds(),
                'total_time': elapsed,
                'size': len(resp.content),
                'error': None,
            }
        except requests.exceptions.ConnectTimeout:
            return {'status': 0, 'ttfb': 0, 'total_time': time.monotonic() - start, 'error': 'connect_timeout'}
        except requests.exceptions.ReadTimeout:
            return {'status': 0, 'ttfb': 0, 'total_time': time.monotonic() - start, 'error': 'read_timeout'}
        except requests.exceptions.ProxyError:
            return {'status': 0, 'ttfb': 0, 'total_time': time.monotonic() - start, 'error': 'proxy_error'}
        except requests.exceptions.ConnectionError:
            return {'status': 0, 'ttfb': 0, 'total_time': time.monotonic() - start, 'error': 'connection_error'}
        except Exception as e:
            return {'status': 0, 'ttfb': 0, 'total_time': time.monotonic() - start, 'error': type(e).__name__}

    def run_benchmark(self, url, num_requests=100, concurrency=5):
        """Run a full benchmark with concurrent requests."""
        results = []
        with ThreadPoolExecutor(max_workers=concurrency) as executor:
            futures = [executor.submit(self.single_request, url) for _ in range(num_requests)]
            for future in as_completed(futures):
                results.append(future.result())
        return self._analyze(results)

    def _analyze(self, results):
        successful = [r for r in results if r['error'] is None and r['status'] == 200]
        ttfbs = sorted(r['ttfb'] for r in successful)
        total_times = sorted(r['total_time'] for r in successful)

        error_breakdown = {}
        for r in results:
            if r['error'] is not None:
                error_breakdown[r['error']] = error_breakdown.get(r['error'], 0) + 1
            elif r['status'] != 200:
                # Count non-200 responses (403s, 429s, ...) so blocks aren't invisible
                key = f"http_{r['status']}"
                error_breakdown[key] = error_breakdown.get(key, 0) + 1

        if not ttfbs:
            return {'error': 'No successful requests', 'error_breakdown': error_breakdown}

        return {
            'total_requests': len(results),
            'successful': len(successful),
            'success_rate': f"{len(successful) / len(results) * 100:.1f}%",
            'ttfb_p50': f"{ttfbs[len(ttfbs) // 2] * 1000:.0f}ms",
            'ttfb_p95': f"{ttfbs[int(len(ttfbs) * 0.95)] * 1000:.0f}ms",
            'ttfb_p99': f"{ttfbs[int(len(ttfbs) * 0.99)] * 1000:.0f}ms",
            'ttfb_mean': f"{statistics.mean(ttfbs) * 1000:.0f}ms",
            'total_p50': f"{total_times[len(total_times) // 2] * 1000:.0f}ms",
            'total_p95': f"{total_times[int(len(total_times) * 0.95)] * 1000:.0f}ms",
            'avg_size': f"{statistics.mean(r['size'] for r in successful) / 1024:.1f}KB",
            'error_breakdown': error_breakdown,
        }
```
Running the benchmark
```python
# Test against multiple target types
targets = {
    'fast_api': 'https://httpbin.org/ip',
    'medium_site': 'https://www.example.com',
    'real_target': 'https://www.amazon.com',  # Anti-bot protected
}

proxy = 'http://your-username:[email protected]:8080'
bench = ProxyBenchmark(proxy)

for name, url in targets.items():
    print(f"\n--- {name}: {url} ---")
    result = bench.run_benchmark(url, num_requests=100, concurrency=5)
    print(json.dumps(result, indent=2))
```
What Good Numbers Look Like
After running this benchmark across several providers in March 2026 (all tests from a Frankfurt data center):
Against httpbin.org (no anti-bot, fast server)
| Provider | P50 TTFB | P95 TTFB | P99 TTFB | Success rate |
|---|---|---|---|---|
| ProxyLabs residential | 145ms | 380ms | 820ms | 99.2% |
| Bright Data residential | 210ms | 580ms | 1,400ms | 98.5% |
| Oxylabs residential | 190ms | 520ms | 1,200ms | 98.8% |
| Budget datacenter | 85ms | 150ms | 290ms | 99.5% |
| Free proxy list | 2,100ms | 8,400ms | 15,000ms+ | 22% |
Against a Cloudflare-protected e-commerce site
| Provider | P50 TTFB | P95 TTFB | Success rate | Cloudflare challenge rate |
|---|---|---|---|---|
| ProxyLabs residential | 320ms | 890ms | 94% | 3% |
| Bright Data residential | 410ms | 1,100ms | 91% | 5% |
| Budget datacenter | 120ms | 280ms | 42% | 55% |
| Free proxy list | N/A | N/A | 3% | 89% |
Notice the pattern: datacenter proxies are faster (lower TTFB) but have much lower success rates against protected targets. Raw speed means nothing if 58% of your requests are blocked.
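One way to make this concrete is to compute the effective time per *successful* result. The sketch below uses the illustrative P50s and success rates from the tables above, plus an assumed 5,000ms cost per failed attempt (blocked requests typically burn a challenge round-trip or a full timeout, not a fast response):

```python
def effective_latency_ms(p50_ms, success_rate, failure_cost_ms=5000):
    """Expected wall-clock time per successful result: one P50 round-trip,
    plus (failures per success) * the cost of a failed attempt.
    The 5,000ms failure cost is an assumption for illustration."""
    failures_per_success = (1 - success_rate) / success_rate
    return p50_ms + failures_per_success * failure_cost_ms

# Numbers from the Cloudflare-protected table above
print(f"residential: ~{effective_latency_ms(320, 0.94):.0f}ms per success")
print(f"datacenter:  ~{effective_latency_ms(120, 0.42):.0f}ms per success")
```

Under these assumptions the "faster" datacenter proxy costs roughly 7 seconds per successful result once retries are accounted for, versus well under a second for residential.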
Common Benchmark Mistakes
Mistake 1: Testing during off-peak hours only
Residential proxy performance varies by time of day. Peers are home internet connections — they're fastest during business hours (fewer people streaming video) and slowest during evening prime time. Test at multiple times:
```python
# Run benchmarks at different times (reuses `bench` from the script above)
import schedule
import time
import json

def run_and_log():
    result = bench.run_benchmark('https://target.com', num_requests=50, concurrency=3)
    result['timestamp'] = time.strftime('%Y-%m-%d %H:%M')
    with open('benchmark_log.jsonl', 'a') as f:
        f.write(json.dumps(result) + '\n')

# Run every 4 hours for a full picture
schedule.every(4).hours.do(run_and_log)

# schedule only triggers jobs when you pump its loop
while True:
    schedule.run_pending()
    time.sleep(60)
```
Mistake 2: Testing with a single target
Proxy performance varies massively by target. A proxy that performs well against Amazon may perform poorly against Google (different anti-bot, different server locations, different response sizes). Always benchmark against your actual targets.
Mistake 3: Not testing at production concurrency
A proxy that performs well with 1 concurrent request won't necessarily handle 20. The proxy gateway, peer allocation, and bandwidth can all degrade under load. Test at your expected concurrency level:
```python
# Test at different concurrency levels
for concurrency in [1, 5, 10, 20, 50]:
    result = bench.run_benchmark('https://target.com',
                                 num_requests=100,
                                 concurrency=concurrency)
    if 'error' in result:  # _analyze returns an error dict when nothing succeeded
        print(f"Concurrency {concurrency}: no successful requests")
        continue
    print(f"Concurrency {concurrency}: P50={result['ttfb_p50']}, "
          f"P95={result['ttfb_p95']}, Success={result['success_rate']}")
```
Expected pattern for residential proxies:
| Concurrency | P50 TTFB | P95 TTFB | Success rate |
|---|---|---|---|
| 1 | 140ms | 350ms | 99% |
| 5 | 160ms | 420ms | 98% |
| 10 | 190ms | 580ms | 97% |
| 20 | 250ms | 850ms | 95% |
| 50 | 380ms | 1,400ms | 92% |
If a provider's P95 doubles between concurrency 5 and 10, their peer pool is thin for your target geo.
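That rule of thumb can be automated over a sweep's results. A minimal sketch, assuming the formatted P95 strings the benchmark above emits (the `find_saturation` helper and the sample sweep are hypothetical):

```python
def parse_ms(value):
    # The benchmark reports formatted strings like "1,400ms"; strip the suffix
    return float(value.rstrip('ms').replace(',', ''))

def find_saturation(sweep):
    """Return the first concurrency level whose P95 more than doubles
    relative to the previous level -- a sign the peer pool is saturating."""
    levels = sorted(sweep)
    for prev, cur in zip(levels, levels[1:]):
        if parse_ms(sweep[cur]) > 2 * parse_ms(sweep[prev]):
            return cur
    return None

# Hypothetical sweep results keyed by concurrency level
sweep = {5: '420ms', 10: '580ms', 20: '850ms', 50: '1,400ms'}
print(find_saturation(sweep))  # None -- P95 grows, but never doubles step-to-step
```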
Mistake 4: Ignoring error types
A 95% success rate sounds good until you break down the failures: if 3% of requests fail with ProxyError (a gateway issue) and 2% with ReadTimeout (a slow peer), you have two different problems. Proxy errors indicate infrastructure problems; timeouts indicate peer quality. These have different implications:
- Proxy errors → Gateway overloaded or misconfigured. Not your problem to fix. Contact provider.
- Connect timeouts → Peer allocation is slow. Try a different geo or provider.
- Read timeouts → Peer has slow upload speed. Increase timeout or filter for faster peers via session rotation.
- HTTP 407 → Authentication failure. Check credentials.
- HTTP 502 → Peer disconnected mid-request. Retry with new IP.
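The bullets above translate naturally into a retry-policy table. This is a hypothetical policy sketch, keyed on the error labels the benchmark script produces; the action names and thresholds are illustrative, not a prescribed scheme:

```python
# Map benchmark error categories to a next action (hypothetical policy)
RETRY_POLICY = {
    'proxy_error':     ('alert',  'Gateway issue -- surface to ops, contact provider'),
    'connect_timeout': ('switch', 'Slow peer allocation -- try another geo or provider'),
    'read_timeout':    ('retry',  'Slow peer -- retry with a longer timeout or new session'),
    'http_407':        ('halt',   'Auth failure -- fix credentials before retrying'),
    'http_502':        ('retry',  'Peer dropped mid-request -- retry with a fresh IP'),
}

def next_action(error_key):
    action, reason = RETRY_POLICY.get(error_key, ('retry', 'Unknown error -- retry once'))
    return action

print(next_action('http_502'))  # retry
print(next_action('http_407'))  # halt
```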
Mistake 5: Comparing providers at different geos
Latency to US targets from a US proxy is always lower than from a European proxy. Make sure you're comparing providers with the same proxy geo, same target, and same origin server location.
How to Interpret Provider Claims
When providers advertise speed, here's how to decode the claims:
| Claim | What they probably mean | What you should verify |
|---|---|---|
| "99.9% uptime" | Gateway uptime, not peer availability | Test actual request success rate |
| "Average 200ms" | P50 TTFB to httpbin.org | Measure P95 against your targets |
| "Sub-second response" | Most requests under 1s | Check P99 — how bad are the outliers? |
| "Unlimited bandwidth" | Fair use policy exists | Read the ToS for actual limits |
| "30M+ IPs" | Total pool, not concurrently available | Test unique IP count in 1,000 requests |
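The last row — verifying unique IP count — is the easiest to automate. A rough sketch using httpbin.org/ip, which echoes the caller's IP (the helper name and proxy URL are placeholders):

```python
import requests

def count_unique_ips(proxy_url, n=100, timeout=10):
    """Rough check of pool rotation: how many distinct exit IPs
    do n requests through a rotating proxy actually get?"""
    proxies = {'http': proxy_url, 'https': proxy_url}
    ips = set()
    for _ in range(n):
        try:
            resp = requests.get('https://httpbin.org/ip',
                                proxies=proxies, timeout=timeout)
            ips.add(resp.json()['origin'])
        except requests.RequestException:
            pass  # failed requests simply contribute no IP
    return len(ips)

# count_unique_ips('http://user:[email protected]:8080', n=1000)
```

A healthy rotating pool should come back with close to n unique IPs; a heavily oversold pool repeats the same addresses.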
Quick Test Before Committing
Before paying for any proxy service, run this minimum viable benchmark:
- 50 requests, concurrency 5, against your actual target — This takes 2 minutes and tells you success rate and P95
- 10 requests to httpbin.org/ip, check unique IPs — Confirms rotation is working
- 5 sticky session requests — Confirms session persistence
- 1 request with geo-targeting to your needed country — Confirms geo accuracy via IP lookup
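Step 3 can be scripted in a few lines. A minimal sketch — the exact sticky-session username format (e.g. a `-session-abc123` suffix) is provider-specific, so check your provider's docs:

```python
import requests

def check_sticky_session(session_proxy_url, n=5, timeout=10):
    """Sanity-check session persistence: every request through a
    sticky-session proxy URL should exit via the SAME IP."""
    proxies = {'http': session_proxy_url, 'https': session_proxy_url}
    ips = set()
    for _ in range(n):
        resp = requests.get('https://httpbin.org/ip',
                            proxies=proxies, timeout=timeout)
        ips.add(resp.json()['origin'])
    return len(ips) == 1  # True means the session held

# check_sticky_session('http://user-session-abc123:[email protected]:8080')
```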
If the provider doesn't offer a trial or small starter plan, that's a red flag. ProxyLabs and most reputable providers let you test with a small purchase. Use the proxy tester to validate against your specific use case before scaling up.
For context on which proxy types perform best for different targets, see datacenter vs residential proxies.