
Selenium Proxy Setup: Complete Guide for Python & Java

James Liu
Lead Engineer @ ProxyLabs
March 15, 2026
6 min read

Selenium's built-in proxy configuration has a fundamental limitation: it doesn't support authenticated proxies out of the box. If you're using residential proxies that require username/password auth (which is most of them), you'll hit a browser authentication popup that Selenium can't handle natively. This guide covers the actual working solutions for both Python and Java, including the workarounds you'll need.

The Authentication Problem

Selenium's Proxy class only sets the proxy address — it has no fields for credentials. When the proxy server returns a 407 Proxy Authentication Required response, Chrome shows a native dialog that WebDriver can't interact with.

There are three real solutions:

| Method | Language | Complexity | Reliability |
| --- | --- | --- | --- |
| Chrome extension injection | Python / Java | Medium | High — works with all auth proxies |
| selenium-wire | Python only | Low | High — drops in as replacement |
| CLI args + IP whitelist | Python / Java | Low | Depends on proxy provider |

Method 1: selenium-wire (Python — Recommended)

selenium-wire is a drop-in replacement for selenium that intercepts requests at the driver level and injects auth headers. It's the cleanest solution.

from seleniumwire import webdriver
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
chrome_options.add_argument('--headless=new')
chrome_options.add_argument('--disable-blink-features=AutomationControlled')

# Rotating proxy — new IP every request
proxy_options = {
    'proxy': {
        'http': 'http://your-username:[email protected]:8080',
        'https': 'http://your-username:[email protected]:8080',
        'no_proxy': 'localhost,127.0.0.1'
    }
}

driver = webdriver.Chrome(
    options=chrome_options,
    seleniumwire_options=proxy_options
)

driver.get('https://httpbin.org/ip')
print(driver.page_source)  # Shows the residential IP
driver.quit()

Geo-Targeted Requests

Append country and city modifiers to the username:

# US IP from New York
proxy_options = {
    'proxy': {
        'http': 'http://your-username-country-US-city-NewYork:[email protected]:8080',
        'https': 'http://your-username-country-US-city-NewYork:[email protected]:8080',
    }
}

Sticky Sessions

For multi-page flows (login → navigate → checkout), you need the same IP across requests. Add a session ID:

import uuid

session_id = uuid.uuid4().hex[:12]

proxy_options = {
    'proxy': {
        'http': f'http://your-username-session-{session_id}:[email protected]:8080',
        'https': f'http://your-username-session-{session_id}:[email protected]:8080',
    }
}

# This session ID keeps the same IP for up to 30 minutes
driver = webdriver.Chrome(
    options=chrome_options,
    seleniumwire_options=proxy_options
)

For more on when to use sticky vs rotating sessions, see our sticky sessions guide.

Method 2: Chrome Extension Injection (Python & Java)

This method creates a tiny Chrome extension at runtime that handles proxy auth. It works in both languages and doesn't require extra libraries beyond standard Selenium.

Python Implementation

import zipfile
import os
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def create_proxy_extension(proxy_host, proxy_port, proxy_user, proxy_pass):
    manifest = """{
        "version": "1.0.0",
        "manifest_version": 2,
        "name": "Proxy Auth",
        "permissions": ["proxy", "tabs", "unlimitedStorage", "storage",
                        "<all_urls>", "webRequest", "webRequestBlocking"],
        "background": {"scripts": ["background.js"]},
        "minimum_chrome_version": "22.0.0"
    }"""

    background = """var config = {
        mode: "fixed_servers",
        rules: {
            singleProxy: { scheme: "http", host: "%s", port: parseInt(%s) },
            bypassList: ["localhost"]
        }
    };
    chrome.proxy.settings.set({value: config, scope: "regular"}, function(){});
    function callbackFn(details) {
        return { authCredentials: { username: "%s", password: "%s" } };
    }
    chrome.webRequest.onAuthRequired.addListener(
        callbackFn, {urls: ["<all_urls>"]}, ['blocking']
    );""" % (proxy_host, proxy_port, proxy_user, proxy_pass)

    ext_path = '/tmp/proxy_auth_extension.zip'
    with zipfile.ZipFile(ext_path, 'w') as zp:
        zp.writestr("manifest.json", manifest)
        zp.writestr("background.js", background)
    return ext_path

# Build the extension
ext = create_proxy_extension(
    'gate.proxylabs.app', '8080',
    'your-username-country-GB', 'your-password'
)

chrome_options = Options()
chrome_options.add_extension(ext)
chrome_options.add_argument('--disable-blink-features=AutomationControlled')
# Note: extensions don't work in headless mode in Chrome < 130
# Use --headless=new with Chrome 130+

driver = webdriver.Chrome(options=chrome_options)
driver.get('https://httpbin.org/ip')
print(driver.find_element('tag name', 'pre').text)
driver.quit()

Java Implementation

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import java.io.*;
import java.util.zip.*;

public class ProxySetup {

    public static String createProxyExtension(
            String host, int port, String user, String pass) throws IOException {

        String manifest = """
            {"version":"1.0.0","manifest_version":2,"name":"Proxy Auth",
             "permissions":["proxy","tabs","unlimitedStorage","storage",
             "<all_urls>","webRequest","webRequestBlocking"],
             "background":{"scripts":["background.js"]},
             "minimum_chrome_version":"22.0.0"}""";

        String background = String.format("""
            var config = {
                mode: "fixed_servers",
                rules: {
                    singleProxy: {scheme:"http", host:"%s", port:%d},
                    bypassList: ["localhost"]
                }
            };
            chrome.proxy.settings.set({value:config, scope:"regular"}, function(){});
            chrome.webRequest.onAuthRequired.addListener(
                function(details) {
                    return {authCredentials:{username:"%s",password:"%s"}};
                }, {urls:["<all_urls>"]}, ['blocking']
            );""", host, port, user, pass);

        File ext = File.createTempFile("proxy_auth", ".zip");
        try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(ext))) {
            zos.putNextEntry(new ZipEntry("manifest.json"));
            zos.write(manifest.getBytes());
            zos.closeEntry();
            zos.putNextEntry(new ZipEntry("background.js"));
            zos.write(background.getBytes());
            zos.closeEntry();
        }
        return ext.getAbsolutePath();
    }

    public static void main(String[] args) throws Exception {
        String extPath = createProxyExtension(
            "gate.proxylabs.app", 8080,
            "your-username-country-US", "your-password"
        );

        ChromeOptions options = new ChromeOptions();
        options.addExtensions(new File(extPath));
        options.addArguments("--disable-blink-features=AutomationControlled");

        WebDriver driver = new ChromeDriver(options);
        driver.get("https://httpbin.org/ip");
        System.out.println(driver.getPageSource());
        driver.quit();
    }
}

Anti-Detection Essentials

Proxy auth is half the battle. If Selenium's automation markers are still exposed, you'll burn through bandwidth getting blocked. The minimum changes:

from seleniumwire import webdriver
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
chrome_options.add_argument('--headless=new')
chrome_options.add_argument('--disable-blink-features=AutomationControlled')
chrome_options.add_argument('--window-size=1920,1080')
chrome_options.add_experimental_option('excludeSwitches', ['enable-automation'])
chrome_options.add_experimental_option('useAutomationExtension', False)

proxy_options = {
    'proxy': {
        'http': 'http://your-username-session-sess001:[email protected]:8080',
        'https': 'http://your-username-session-sess001:[email protected]:8080',
    }
}

driver = webdriver.Chrome(
    options=chrome_options,
    seleniumwire_options=proxy_options
)

# Remove webdriver flag
driver.execute_cdp_cmd('Page.addScriptToEvaluateOnNewDocument', {
    'source': '''
        Object.defineProperty(navigator, 'webdriver', {get: () => undefined});
        window.chrome = { runtime: {}, loadTimes: function() {}, csi: function() {} };
    '''
})

driver.get('https://example.com')

For a deeper dive into all 6 automation markers that modern anti-bot systems check, see our Playwright proxy setup tutorial — the detection markers are the same across all Chromium-based tools.
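A quick way to spot-check that the navigator.webdriver patch above took effect (a tiny helper of my own, usable with any Selenium-compatible driver object):

```python
def webdriver_flag_masked(driver) -> bool:
    """True when navigator.webdriver no longer reports automation."""
    return bool(driver.execute_script(
        'return navigator.webdriver === undefined || navigator.webdriver === false'
    ))

# Usage with the patched driver from above:
# assert webdriver_flag_masked(driver)
```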

Parallel Scraping with Proxy Rotation

Running multiple Selenium instances with different IPs:

from seleniumwire import webdriver
from selenium.webdriver.chrome.options import Options
from concurrent.futures import ThreadPoolExecutor
import uuid

def scrape_url(url):
    session_id = uuid.uuid4().hex[:12]
    chrome_options = Options()
    chrome_options.add_argument('--headless=new')
    chrome_options.add_argument('--no-sandbox')
    chrome_options.add_argument('--disable-dev-shm-usage')

    proxy_options = {
        'proxy': {
            'http': f'http://your-username-session-{session_id}:[email protected]:8080',
            'https': f'http://your-username-session-{session_id}:[email protected]:8080',
        }
    }

    driver = webdriver.Chrome(
        options=chrome_options,
        seleniumwire_options=proxy_options
    )
    try:
        driver.set_page_load_timeout(30)
        driver.get(url)
        return driver.page_source
    except Exception as e:
        print(f"Failed on {url}: {e}")
        return None
    finally:
        driver.quit()

urls = ['https://example.com/page/1', 'https://example.com/page/2', ...]

with ThreadPoolExecutor(max_workers=5) as executor:
    results = list(executor.map(scrape_url, urls))

Each thread gets a unique session ID, so each gets a different residential IP from ProxyLabs' pool. Keep max_workers reasonable — 5-10 concurrent Selenium instances already use significant memory (~300MB per instance).

Verifying Your Proxy Connection

Before running any large scrape, verify the proxy is working and returning the correct geo:

from seleniumwire import webdriver
from selenium.webdriver.chrome.options import Options
import json

chrome_options = Options()
chrome_options.add_argument('--headless=new')

proxy_options = {
    'proxy': {
        'http': 'http://your-username-country-DE:[email protected]:8080',
        'https': 'http://your-username-country-DE:[email protected]:8080',
    }
}

driver = webdriver.Chrome(options=chrome_options, seleniumwire_options=proxy_options)
driver.get('https://httpbin.org/ip')
ip_data = json.loads(driver.find_element('tag name', 'pre').text)
print(f"Proxy IP: {ip_data['origin']}")
driver.quit()

You can also verify using the ProxyLabs IP Lookup tool to confirm the IP is residential and in the correct country.
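To confirm the geo as well as the IP, one option (assuming a service like ipinfo.io that returns a country field in its JSON body) is to load a geo API through the proxied driver and parse the response. A small parsing helper keeps the check readable:

```python
import json

def country_of(payload: str) -> str:
    """Extract the ISO country code from an ipinfo.io-style JSON body."""
    return json.loads(payload).get('country', '')

# With the proxied driver from above:
# driver.get('https://ipinfo.io/json')
# assert country_of(driver.find_element('tag name', 'pre').text) == 'DE'
```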

Common Pitfalls

DNS leaks: By default, Chrome may resolve DNS locally before sending requests through the proxy. The target site then sees your proxy IP, but the DNS lookups still come from your real resolver — a mismatch that can be a detection signal. Add --host-resolver-rules="MAP * 0.0.0.0, EXCLUDE localhost" to force DNS resolution through the proxy.

Extension + headless conflict: Chrome extensions didn't work in --headless mode until Chrome 130's --headless=new flag. If you're on an older Chrome, use selenium-wire instead of the extension method.

Memory leaks: Always call driver.quit() in a finally block. Selenium instances that aren't properly closed accumulate zombie Chrome processes. On a scraping server, this will eat all your RAM within hours.
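To make the finally-block discipline hard to forget, one option (a generic helper of my own, not part of Selenium's API) is a small context manager that owns the quit() call:

```python
from contextlib import contextmanager

@contextmanager
def managed_driver(factory):
    """Run a block with a WebDriver and guarantee quit() on every exit path."""
    driver = factory()
    try:
        yield driver
    finally:
        driver.quit()

# Usage (with chrome_options built as in the earlier examples):
# with managed_driver(lambda: webdriver.Chrome(options=chrome_options)) as d:
#     d.get('https://example.com')
```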

For more scraping strategies, check out our guide on scraping without getting blocked and how to avoid IP bans.
