How to Use Proxies with Selenium
Selenium is the industry standard for cross-browser testing and web automation, with support for every major browser engine. Native Selenium has long struggled with authenticated proxies: the browser shows a login popup that automation cannot dismiss. Extensions like 'selenium-wire' close this gap by intercepting network requests at the driver level, letting you inject ProxyLabs residential credentials and automate headed or headless browsers with full geographic flexibility. Because Selenium interacts with page elements the way a real user does, pairing it with high-quality residential IPs makes it an effective tool against anti-bot systems that rely on behavioral analysis and browser fingerprinting. Whether you are performing automated QA across multiple regions or scraping data from high-security targets, Selenium provides one of the most realistic browsing environments available.
Focus: working config first, then the mistakes that usually cause traffic to bypass the proxy or break under concurrency.
Using Proxies with Selenium: What to Know
Selenium's proxy management relies on the underlying WebDriver's ability to communicate with the browser's network layer. When you specify a proxy server, the WebDriver configures the browser process to route its HTTP and HTTPS traffic through that gateway. For secure connections, the browser uses the HTTP CONNECT method to establish an encrypted tunnel to the target server. This means that while the ProxyLabs gateway carries the transport, it never sees the plaintext of your HTTPS requests or the sensitive data they contain.
One of the biggest hurdles in Selenium is handling authenticated proxies. Because the W3C WebDriver standard does not define a mechanism for passing proxy credentials, the browser falls back to a native authentication dialog that automation cannot dismiss. Tools like 'selenium-wire' sidestep this limitation by running a local man-in-the-middle proxy that injects the 'Proxy-Authorization' header into every outbound request. This allows you to use high-quality residential proxies without sacrificing the automation's stability or resorting to fragile workarounds like AutoIt scripts.
Headed vs. Headless mode is an important consideration when using proxies. While headless mode is more resource-efficient, some advanced anti-bot systems can detect the subtle timing differences in JavaScript execution that occur when a display is not present. If you find your residential IPs being flagged despite using clean proxies, try running Selenium in a headed mode within a virtual framebuffer like Xvfb. This makes your automated session indistinguishable from a real user browsing on a standard desktop environment.
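As a sketch of the Xvfb approach, the helper below builds an 'xvfb-run' invocation that launches a headed scraper on a virtual display (the script name and screen geometry are placeholders; 'xvfb-run' must be installed on the host):

```python
def xvfb_command(script: str, width: int = 1920, height: int = 1080) -> list[str]:
    """Build an xvfb-run invocation that runs a headed browser on a virtual display."""
    return [
        "xvfb-run", "-a",                                   # pick a free display number
        "--server-args", f"-screen 0 {width}x{height}x24",  # screen geometry and depth
        "python", script,                                   # your Selenium script
    ]

print(" ".join(xvfb_command("scraper.py")))
```

Inside 'scraper.py' you then omit the '--headless' flag entirely, so Chrome starts with a real rendering pipeline.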
Bandwidth management is critical when using Selenium with metered residential plans. Unlike a simple HTTP client, a full browser will download every asset on a page, including tracking scripts, heavy images, and video advertisements. By leveraging Selenium's 'add_experimental_option' to block images and using 'selenium-wire' to abort requests to known analytics domains, you can significantly reduce your per-page cost. These optimizations not only save money but also improve page load times and overall scraper reliability.
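A minimal sketch of both optimizations, assuming selenium-wire's 'request_interceptor' hook (the blocklist below is illustrative, not a complete list of analytics domains; Chrome content-setting value 2 means "block"):

```python
# Prefs dict for options.add_experimental_option("prefs", LEAN_PREFS);
# setting the images content setting to 2 stops Chrome downloading images
LEAN_PREFS = {"profile.managed_default_content_settings.images": 2}

# Illustrative blocklist of analytics/ad hosts; extend it for your targets
BLOCKED_HOSTS = ("google-analytics.com", "googletagmanager.com", "doubleclick.net")

def should_abort(host: str) -> bool:
    """Return True when a request host matches the blocklist."""
    return any(blocked in host for blocked in BLOCKED_HOSTS)

# Wiring it up in selenium-wire:
# driver.request_interceptor = (
#     lambda req: req.abort() if should_abort(req.host) else None
# )
```

Every aborted request is bandwidth that never leaves the residential gateway, so the savings compound quickly on long crawls.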
Stealth in Selenium goes beyond just hiding your IP address. Anti-bot providers check for various browser markers that indicate automation. Even with a residential proxy, the presence of the 'navigator.webdriver' flag or the 'AutomationControlled' blink feature can lead to an immediate block. We recommend using the Chrome DevTools Protocol (CDP) through Selenium to patch these markers at runtime. This creates a multi-layered defense where your network identity is masked by the proxy and your browser identity is normalized to look like a standard consumer device.
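A minimal sketch of that CDP patch (the injected script runs before any page script; pairing it with the '--disable-blink-features=AutomationControlled' flag is a common convention, not a ProxyLabs-specific requirement):

```python
# JS evaluated before any page script runs; hides the automation flag
STEALTH_JS = "Object.defineProperty(navigator, 'webdriver', {get: () => undefined});"

def apply_stealth(driver) -> None:
    """Patch automation markers on a Chrome/Chromium driver via CDP."""
    driver.execute_cdp_cmd(
        "Page.addScriptToEvaluateOnNewDocument", {"source": STEALTH_JS}
    )

# Pair with, at options-build time:
# options.add_argument("--disable-blink-features=AutomationControlled")
```

Call 'apply_stealth(driver)' once right after creating the driver, before the first 'get()'.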
Parallelism in Selenium must be handled with care because of its high memory footprint. Each browser instance can consume between 200MB and 500MB of RAM, so running many threads with unique residential sessions can exhaust a server's resources quickly. We recommend using a task queue like Celery or a thread pool with a strictly limited worker count. Each worker should initialize its own driver with a unique ProxyLabs session ID, so that every task exits through a different residential IP.
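A sketch of the thread-pool pattern, with one fresh session ID per worker (the gateway host, username format, and 'scrape' body are placeholders; in the real worker you would create the driver from the returned options):

```python
from concurrent.futures import ThreadPoolExecutor
import uuid

GATEWAY = "gate.proxylabs.app:8080"            # placeholder gateway
USERNAME, PASSWORD = "your-username", "your-password"

def session_proxy_options() -> dict:
    """selenium-wire options pinning one worker to one residential session."""
    session_id = uuid.uuid4().hex[:8]          # unique session per driver
    proxy = f"http://{USERNAME}-session-{session_id}:{PASSWORD}@{GATEWAY}"
    return {"proxy": {"http": proxy, "https": proxy}}

def scrape(url: str) -> str:
    opts = session_proxy_options()
    # driver = webdriver.Chrome(seleniumwire_options=opts)  # one driver per task
    return url                                 # placeholder for the real result

with ThreadPoolExecutor(max_workers=4) as pool:  # cap workers to fit your RAM
    results = list(pool.map(scrape, ["https://example.com"] * 4))
```

Capping 'max_workers' is the load-control mechanism: four Chrome instances at ~500MB each is already 2GB of RAM.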
Finally, always monitor the health of your proxy connections. Residential peers can go offline without warning, leading to connection resets or timeouts in Selenium. A robust scraping script should include a global exception handler that catches these network-level errors and gracefully restarts the driver with a fresh session. By implementing these self-healing mechanisms, you can ensure that your Selenium-based scrapers can operate continuously for days or weeks without manual intervention.
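One way to sketch that self-healing loop ('make_driver' is any zero-argument factory returning a fresh proxy-configured driver; in production you would narrow the handler to 'WebDriverException'/'TimeoutException' rather than bare 'Exception'):

```python
import time

def resilient_get(make_driver, url: str, max_restarts: int = 3):
    """Fetch url, replacing the driver (and its proxy session) after failures."""
    for attempt in range(max_restarts + 1):
        driver = make_driver()
        try:
            driver.get(url)
            return driver          # caller is responsible for driver.quit()
        except Exception:          # WebDriverException / TimeoutException in practice
            driver.quit()          # never leak a proxy-enabled browser process
            if attempt == max_restarts:
                raise
            time.sleep(2 ** attempt)  # back off before starting a fresh session
```

Because each restart calls the factory again, a factory that builds a new session ID also rotates the residential IP on every retry.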
Installation
pip install selenium webdriver-manager selenium-wire
Working Examples
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
options = Options()
options.add_argument("--proxy-server=http://gate.proxylabs.app:8080")
options.add_argument("--headless=new")
driver = webdriver.Chrome(
    service=Service(ChromeDriverManager().install()),
    options=options,
)
try:
    # Note: --proxy-server accepts no credentials; use selenium-wire
    # (next example) when the gateway requires authentication
    driver.get("https://httpbin.org/ip")
    print(driver.find_element("tag name", "body").text)
finally:
    driver.quit()

from seleniumwire import webdriver
proxy_options = {
    "proxy": {
        "http": "http://your-username-session-abc123:your-password@gate.proxylabs.app:8080",
        "https": "http://your-username-session-abc123:your-password@gate.proxylabs.app:8080",
    }
}
driver = webdriver.Chrome(seleniumwire_options=proxy_options)
try:
    urls = [
        "https://example.com/login",
        "https://example.com/dashboard",
        "https://example.com/settings",
    ]
    for url in urls:
        driver.get(url)
        print(f"Loaded: {url} — {driver.title}")
finally:
    driver.quit()

from seleniumwire import webdriver
proxy_options = {
    "proxy": {
        "http": "http://your-username-country-US-city-NewYork:your-password@gate.proxylabs.app:8080",
        "https": "http://your-username-country-US-city-NewYork:your-password@gate.proxylabs.app:8080",
    }
}
driver = webdriver.Chrome(seleniumwire_options=proxy_options)
try:
    driver.get("https://httpbin.org/ip")
    ip_info = driver.find_element("tag name", "body").text
    print(f"NYC IP: {ip_info}")
finally:
    driver.quit()

What matters in practice
- Native support for all major browser engines, including Chrome, Firefox, and Edge, ensuring maximum compatibility with any web target.
- Sophisticated network request interception via Selenium Wire, allowing proxy authentication headers and custom cookies to be injected into individual requests.
- Dynamic proxy configuration that can be adjusted during the driver's lifecycle to simulate different geographic locations or network conditions.
- Full compatibility with both headless and headed browser modes, providing flexibility for both high-speed scraping and visual debugging.
- Extensive ecosystem of plugins and community tools for handling complex anti-bot challenges such as CAPTCHAs and behavioral tracking.
- Support for detailed browser logging and performance metrics, which can be used to diagnose proxy-related latency or connection failures.
Operational Notes
Always utilize 'selenium-wire' when your proxy gateway requires authentication. Standard Selenium 'ChromeOptions' do not support passing a username and password to the '--proxy-server' flag, which will lead to un-automatable auth popups.
Ensure you call 'driver.quit()' within a 'finally' block. If your script crashes without closing the driver, the proxy-enabled browser process will remain active, leaking system memory and maintaining unnecessary connections.
Set a realistic 'driver.set_page_load_timeout()' to handle the variable latency of residential proxy connections. A 30-to-60 second limit is generally recommended to ensure your script doesn't hang indefinitely on a slow residential peer.
For most scraping operations, prefer the 'headless=new' option in Chrome. This modern engine provides better performance and improved compatibility with advanced web features compared to the legacy headless mode.
When using Selenium in a Docker container, add the '--disable-dev-shm-usage' and '--no-sandbox' flags to your options. This prevents the browser from crashing due to memory limits in shared environments.
Combine residential proxies with Selenium's 'execute_cdp_cmd' to patch automation markers like navigator.webdriver. Masking these signals is essential when using proxies to access high-security domains protected by DataDome or Akamai.
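The container and timeout notes above can be collected into one configuration sketch (the flag list and the 45-second value are just the settings named in the notes; apply them to your Options/driver as shown in the comments):

```python
# Chrome flags for proxy scraping inside Docker (Linux container assumed)
DOCKER_SAFE_FLAGS = [
    "--headless=new",                                   # modern headless engine
    "--no-sandbox",                                     # sandbox often fails in containers
    "--disable-dev-shm-usage",                          # /dev/shm defaults to 64MB in Docker
    "--proxy-server=http://gate.proxylabs.app:8080",
]
PAGE_LOAD_TIMEOUT = 45  # seconds; mid-range of the 30-to-60 second recommendation

# for flag in DOCKER_SAFE_FLAGS:
#     options.add_argument(flag)
# driver.set_page_load_timeout(PAGE_LOAD_TIMEOUT)
```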
Frequently Asked Questions
Why does my residential proxy fail when I run Selenium in headless mode?
In older versions of Chromium, the headless engine handled command-line proxy flags differently than the headed version, often leading to connection failures. To resolve this, ensure you are using the '--headless=new' argument (or simply '--headless' in the latest Chrome versions), which uses the modern engine. Additionally, if you are using the Chrome extension method for proxy authentication, be aware that extensions were not supported in the legacy headless mode. Switching to 'selenium-wire' is the most reliable way to handle authenticated residential proxies in a headless environment.
Can I use Selenium with mobile proxies from ProxyLabs?
Yes, Selenium is fully compatible with mobile proxies. You simply need to update the proxy URL and credentials in your 'selenium-wire' options or Chrome extension configuration. From the browser's perspective, a mobile proxy is just another HTTP/HTTPS tunnel. Using mobile proxies with Selenium is particularly effective for scraping mobile-first websites or social media platforms, as the combination of a realistic browser fingerprint and a mobile IP is highly trusted by anti-bot systems.
How do I disable image loading in Selenium to save residential bandwidth?
To conserve your residential proxy data, you can configure Chrome to block images by setting the 'profile.managed_default_content_settings.images' preference to 2. This is done through the 'add_experimental_option' method in your Chrome Options. Because residential bandwidth is metered, this optimization is crucial for long-running scraping tasks, as it prevents the browser from downloading megabytes of unnecessary visual assets while still allowing it to execute the JavaScript needed for data extraction.
Need residential IPs for Selenium?
Get access to 30M+ residential IPs in 195+ countries. Pay-as-you-go from £2.50/GB. No subscriptions, no commitments.