Proxy-Seller Complete Guide: Setup, Configuration & Best Practices (2026)

By ExamineIP / Last Updated: April 26, 2026


Table of Contents

  1. Quick Start Guide
  2. Technical Setup by Use Case
  3. Advanced Configuration
  4. Troubleshooting Common Issues
  5. Best Practices & Optimization
  6. Frequently Asked Questions

💡 Affiliate Disclosure: ExamineIP earns a commission if you purchase Proxy-Seller through our links. Use code EXAMINEIP for 15% off. Learn more.


Quick Start Guide

Step 1: Purchase Your Proxies

  1. Visit Proxy-Seller
  2. Choose proxy type (Residential, ISP, Mobile, or Datacenter)
  3. Select quantity and locations
  4. Enter discount code: EXAMINEIP (15% off)
  5. Complete payment

Step 2: Receive Credentials

After purchase, you’ll receive:

Proxy format: IP:Port:Username:Password
Example: 45.67.89.123:8080:user12345:pass67890
Or in URL format: http://user12345:pass67890@45.67.89.123:8080
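If your dashboard exports a list of proxies in the IP:Port:Username:Password format, a small helper can convert each line to URL form. This is a sketch that assumes the field order shown in the example above:

```python
def to_proxy_url(line, scheme="http"):
    # Field order assumed: IP:Port:Username:Password, as in the example above
    ip, port, user, password = line.strip().split(":")
    return f"{scheme}://{user}:{password}@{ip}:{port}"

# to_proxy_url("45.67.89.123:8080:user12345:pass67890")
# → "http://user12345:pass67890@45.67.89.123:8080"
```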

Step 3: Test Your Proxy

Quick browser test:

  1. Configure proxy in Firefox/Chrome settings
  2. Visit tools.examineip.com
  3. Verify IP shows proxy location, not your real IP

Command line test:

curl -x http://username:password@proxy-ip:port https://tools.examineip.com/api/ip.php

Expected output:

{
  "ip": "45.67.89.123",
  "country": "United States",
  "city": "New York"
}


Technical Setup by Use Case

Web Scraping with Python

Using Requests library:

import requests

# Single proxy configuration
proxies = {
    'http': 'http://username:password@45.67.89.123:8080',
    'https': 'http://username:password@45.67.89.123:8080'
}

# Make request through proxy
response = requests.get('https://example.com', proxies=proxies)
print(response.text)

Rotating proxies (residential with sticky sessions):

import requests
import time

# List of residential proxies (session-based rotation)
proxy_list = [
    'http://user1:pass1@45.67.89.123:8080',
    'http://user2:pass2@45.67.89.124:8080',
    'http://user3:pass3@45.67.89.125:8080'
]

current_proxy_index = 0

def get_next_proxy():
    global current_proxy_index
    proxy = proxy_list[current_proxy_index]
    current_proxy_index = (current_proxy_index + 1) % len(proxy_list)
    return {'http': proxy, 'https': proxy}

# Scraping loop with rotation
urls = ['https://example.com/page1', 'https://example.com/page2', 'https://example.com/page3']
for url in urls:
    proxies = get_next_proxy()
    response = requests.get(url, proxies=proxies)
    print(f"Scraped {url} via {proxies['http']}")
    time.sleep(2)  # Polite delay

Using Scrapy framework:

# settings.py
# Enable proxy middleware (it is enabled by default; listed here for clarity)
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
}

# In your spider: HttpProxyMiddleware reads the proxy from request.meta,
# not from a settings variable
def start_requests(self):
    yield scrapy.Request(
        'https://example.com',
        meta={'proxy': 'http://username:password@45.67.89.123:8080'},
    )


Selenium / Browser Automation

Chrome with Selenium:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Configure Chrome to use proxy
chrome_options = Options()
chrome_options.add_argument('--proxy-server=http://45.67.89.123:8080')

# For authenticated proxies, use extension (see below)
driver = webdriver.Chrome(options=chrome_options)
driver.get('https://tools.examineip.com/')
print(driver.page_source)
driver.quit()

Authenticated proxy extension (Chrome):

import zipfile

def create_proxy_auth_extension(proxy_host, proxy_port, proxy_user, proxy_pass):
    manifest_json = """
    {
        "version": "1.0.0",
        "manifest_version": 2,
        "name": "Proxy Auth",
        "permissions": [
            "proxy", "tabs", "unlimitedStorage", "storage",
            "<all_urls>", "webRequest", "webRequestBlocking"
        ],
        "background": {"scripts": ["background.js"]},
        "minimum_chrome_version": "22.0.0"
    }
    """

    background_js = """
    var config = {
        mode: "fixed_servers",
        rules: {
            singleProxy: {scheme: "http", host: "%s", port: %d},
            bypassList: ["localhost"]
        }
    };
    chrome.proxy.settings.set({value: config, scope: "regular"}, function() {});

    function callbackFn(details) {
        return {authCredentials: {username: "%s", password: "%s"}};
    }

    chrome.webRequest.onAuthRequired.addListener(
        callbackFn,
        {urls: ["<all_urls>"]},
        ['blocking']
    );
    """ % (proxy_host, int(proxy_port), proxy_user, proxy_pass)

    with zipfile.ZipFile('proxy_auth_extension.zip', 'w') as zp:
        zp.writestr("manifest.json", manifest_json)
        zp.writestr("background.js", background_js)
    return 'proxy_auth_extension.zip'

# Usage
extension_path = create_proxy_auth_extension('45.67.89.123', 8080, 'username', 'password')
chrome_options.add_extension(extension_path)
driver = webdriver.Chrome(options=chrome_options)


Node.js / Puppeteer

Puppeteer with proxy:

const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch({
        args: ['--proxy-server=http://45.67.89.123:8080']
    });
    const page = await browser.newPage();

    // For authenticated proxies
    await page.authenticate({username: 'username', password: 'password'});

    await page.goto('https://tools.examineip.com/');
    const content = await page.content();
    console.log(content);

    await browser.close();
})();


cURL (Command Line)

Basic proxy request:

curl -x http://username:password@45.67.89.123:8080 https://example.com

With custom headers:

curl -x http://username:password@45.67.89.123:8080 \
  -H "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)" \
  -H "Accept-Language: en-US,en;q=0.9" \
  https://example.com

Save cookies:

curl -x http://username:password@45.67.89.123:8080 \
  -c cookies.txt \
  -b cookies.txt \
  https://example.com


Social Media Automation (Instagram, Twitter)

Best practices:

  • Use ISP proxies (static IP per account)
  • One proxy = One account (never share)
  • Maintain consistent IP for each account
  • Avoid rapid proxy switching

Example account management:

Account 1 (@business_account) → Proxy 45.67.89.123:8080 (New York ISP)
Account 2 (@personal_account) → Proxy 45.67.89.124:8080 (Los Angeles ISP)
Account 3 (@brand_account) → Proxy 45.67.89.125:8080 (Miami ISP)

Never:

  • Switch IPs for the same account
  • Log into multiple accounts from the same IP
  • Use datacenter IPs (easily flagged)
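The one-proxy-one-account rule is easy to enforce in code. A minimal sketch; the handles and proxy IPs below are placeholders, not real accounts or endpoints:

```python
# Hypothetical account-to-proxy assignment table (placeholder values)
ACCOUNT_PROXIES = {
    "@business_account": "http://user1:pass1@45.67.89.123:8080",
    "@personal_account": "http://user2:pass2@45.67.89.124:8080",
    "@brand_account":    "http://user3:pass3@45.67.89.125:8080",
}

def proxy_for(account):
    """Return the one proxy bound to this account; fail loudly if unmapped."""
    try:
        return ACCOUNT_PROXIES[account]
    except KeyError:
        raise ValueError(f"No proxy assigned to {account}; never reuse another account's IP")

# Sanity check: every account has its own IP (one proxy = one account)
assert len(set(ACCOUNT_PROXIES.values())) == len(ACCOUNT_PROXIES)
```

Failing on an unmapped account is deliberate: silently falling back to a shared IP is exactly the mistake this table is meant to prevent.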

Advanced Configuration

Sticky Sessions (Residential Proxies)

What are sticky sessions?
Sticky sessions keep the same IP address for a specified duration (e.g., 5 minutes, 10 minutes, 30 minutes) instead of rotating on every request.

Use cases:

  • Multi-step scraping (login → navigate → extract)
  • Shopping cart workflows
  • Session-based authentication

Configuration:

Proxy-Seller residential proxies support sticky sessions via session parameters:

Format: http://username-session-[ID]:password@proxy-ip:port
Example: http://user123-session-abc456:password@45.67.89.123:8080

Session ID = Any string you choose. Same ID = Same IP (for session duration).

import requests
import uuid

# Generate unique session ID
session_id = str(uuid.uuid4())

# Proxy with sticky session
proxy = f'http://username-session-{session_id}:password@45.67.89.123:8080'
proxies = {'http': proxy, 'https': proxy}

# All requests with this session ID will use the same IP
response1 = requests.get('https://example.com/step1', proxies=proxies)
response2 = requests.get('https://example.com/step2', proxies=proxies)
response3 = requests.get('https://example.com/step3', proxies=proxies)

# To rotate IP, generate a new session ID
session_id = str(uuid.uuid4())
proxy = f'http://username-session-{session_id}:password@45.67.89.123:8080'


Geographic Targeting

Country-level targeting:

Format: http://username-country-[CODE]:password@proxy-ip:port

Examples:
http://username-country-us:password@45.67.89.123:8080  # United States
http://username-country-uk:password@45.67.89.123:8080  # United Kingdom
http://username-country-de:password@45.67.89.123:8080  # Germany

City-level targeting (if supported):

Format: http://username-country-[CODE]-city-[CITY]:password@proxy-ip:port
Example: http://username-country-us-city-newyork:password@45.67.89.123:8080

Country codes: Use ISO 3166-1 alpha-2 codes (US, DE, FR, JP, etc.). The United Kingdom's official ISO code is GB, though many providers also accept UK.
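The targeting formats above can be assembled programmatically. The helper below is a sketch that follows the username patterns shown in this section; exact parameter syntax can vary by plan, so confirm it against your dashboard before relying on it:

```python
def build_proxy_url(user, password, host, port, country=None, city=None, session=None):
    """Assemble a proxy URL with optional targeting/session parameters
    embedded in the username, following the formats shown above."""
    parts = [user]
    if country:
        parts += ["country", country.lower()]
    if city:
        parts += ["city", city.lower()]
    if session:
        parts += ["session", session]
    return f"http://{'-'.join(parts)}:{password}@{host}:{port}"

# build_proxy_url('user123', 'password', '45.67.89.123', 8080, country='us')
# → 'http://user123-country-us:password@45.67.89.123:8080'
```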


Rotation Strategies

1. Time-based rotation (change IP every N minutes):

import requests
import time
import uuid

def scrape_with_timed_rotation(urls, rotation_interval=300):  # 5 minutes
    session_id = str(uuid.uuid4())
    last_rotation = time.time()

    for url in urls:
        # Check if it's time to rotate
        if time.time() - last_rotation > rotation_interval:
            session_id = str(uuid.uuid4())
            last_rotation = time.time()
            print(f"Rotated to new IP (session: {session_id})")

        proxy = f'http://username-session-{session_id}:password@45.67.89.123:8080'
        proxies = {'http': proxy, 'https': proxy}
        response = requests.get(url, proxies=proxies, timeout=10)
        print(f"Scraped {url} - Status: {response.status_code}")
        time.sleep(2)

2. Request-count rotation (change IP every N requests):

import requests
import time
import uuid

def scrape_with_request_rotation(urls, requests_per_ip=10):
    session_id = str(uuid.uuid4())
    request_count = 0

    for url in urls:
        if request_count >= requests_per_ip:
            session_id = str(uuid.uuid4())
            request_count = 0
            print("Rotated to new IP")

        proxy = f'http://username-session-{session_id}:password@45.67.89.123:8080'
        proxies = {'http': proxy, 'https': proxy}
        response = requests.get(url, proxies=proxies)
        request_count += 1
        time.sleep(1)

3. Random rotation (change IP randomly):

import random
import time
import uuid

import requests

def scrape_with_random_rotation(urls, rotation_probability=0.2):
    session_id = str(uuid.uuid4())

    for url in urls:
        if random.random() < rotation_probability:
            session_id = str(uuid.uuid4())
            print("Randomly rotated IP")

        proxy = f'http://username-session-{session_id}:password@45.67.89.123:8080'
        proxies = {'http': proxy, 'https': proxy}
        response = requests.get(url, proxies=proxies)
        time.sleep(random.uniform(1, 3))


Troubleshooting Common Issues

Issue 1: “Proxy Authentication Required” (407 Error)

Cause: Username/password incorrect or improperly formatted

Solutions:

  1. Double-check credentials from Proxy-Seller dashboard
  2. Ensure no extra spaces in username/password
  3. Try URL-encoding special characters in password:

from urllib.parse import quote

username = "user123"
password = "p@ss#word!"
password_encoded = quote(password, safe='')
proxy = f'http://{username}:{password_encoded}@45.67.89.123:8080'


Issue 2: “Connection Timed Out”

Cause: Proxy server unreachable or proxy IP blocked

Solutions:

  1. Check proxy status in Proxy-Seller dashboard
  2. Verify your IP isn’t blocked (some proxies whitelist client IPs)
  3. Try different proxy from your pool
  4. Increase request timeout:

response = requests.get(url, proxies=proxies, timeout=30)


Issue 3: IP Detected as Proxy / Bot

Cause: Datacenter proxies or poor quality IPs

Solutions:

  1. Switch to residential or ISP proxies
  2. Add realistic headers:

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'Accept-Language': 'en-US,en;q=0.5',
    'Accept-Encoding': 'gzip, deflate, br',
    'DNT': '1',
    'Connection': 'keep-alive',
    'Upgrade-Insecure-Requests': '1'
}

response = requests.get(url, proxies=proxies, headers=headers)

  3. Add delays between requests (1-3 seconds)
  4. Rotate IPs more frequently

Issue 4: Slow Proxy Speed

Cause: Distance to proxy server, network congestion, or proxy type

Solutions:

  1. Choose proxies geographically closer to target
  2. Use ISP proxies (faster than residential)
  3. Use datacenter proxies (fastest, but easier to detect)
  4. Enable compression:

headers = {'Accept-Encoding': 'gzip, deflate'}
response = requests.get(url, proxies=proxies, headers=headers)


Issue 5: IP Banned Despite Using Proxies

Cause: Too aggressive scraping, poor rotation, or detectable patterns

Solutions:

  1. Slow down: Add 2-5 second delays
  2. Rotate more: Change IP every 5-10 requests
  3. Randomize behavior:

import random
import time
import uuid

import requests

def human_like_delay():
    time.sleep(random.uniform(1.5, 4.5))

def scrape_safely(urls):
    for url in urls:
        # Rotate IP
        session_id = str(uuid.uuid4())
        proxy = f'http://username-session-{session_id}:password@45.67.89.123:8080'

        # Random headers
        user_agents = [
            'Mozilla/5.0 (Windows NT 10.0; Win64; x64)...',
            'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)...',
            'Mozilla/5.0 (X11; Linux x86_64)...'
        ]
        headers = {'User-Agent': random.choice(user_agents)}

        # Make request
        response = requests.get(url, proxies={'http': proxy, 'https': proxy}, headers=headers)

        # Random delay
        human_like_delay()


Best Practices & Optimization

1. Respect robots.txt

Always check https://example.com/robots.txt before scraping:

import requests
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

def can_fetch(url):
    # robots.txt lives at the site root, not under the page path
    base = urlparse(url)
    rp = RobotFileParser()
    rp.set_url(f'{base.scheme}://{base.netloc}/robots.txt')
    rp.read()
    return rp.can_fetch('*', url)

url = 'https://example.com/products'
if can_fetch(url):
    response = requests.get(url, proxies=proxies)
else:
    print(f"Blocked by robots.txt: {url}")


2. Implement Retry Logic

Handle temporary failures gracefully:

import time

import requests
from requests.exceptions import RequestException

def fetch_with_retry(url, proxies, max_retries=3):
    for attempt in range(max_retries):
        try:
            response = requests.get(url, proxies=proxies, timeout=10)
            response.raise_for_status()
            return response
        except RequestException as e:
            print(f"Attempt {attempt + 1} failed: {e}")
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)  # Exponential backoff
            else:
                print(f"Failed after {max_retries} attempts")
                return None


3. Monitor Proxy Health

Track success/failure rates:

import requests

class ProxyPool:
    def __init__(self, proxies):
        self.proxies = {p: {'success': 0, 'failure': 0} for p in proxies}

    def get_best_proxy(self):
        # Return proxy with highest success rate (+1 avoids division by zero)
        return max(
            self.proxies.items(),
            key=lambda x: x[1]['success'] / (x[1]['success'] + x[1]['failure'] + 1)
        )[0]

    def record_success(self, proxy):
        self.proxies[proxy]['success'] += 1

    def record_failure(self, proxy):
        self.proxies[proxy]['failure'] += 1

# Usage
pool = ProxyPool(['http://user:pass@45.67.89.123:8080', ...])
proxy = pool.get_best_proxy()
try:
    response = requests.get(url, proxies={'http': proxy, 'https': proxy})
    pool.record_success(proxy)
except requests.RequestException:
    pool.record_failure(proxy)


4. Use Concurrent Requests (Carefully)

Speed up scraping with threading (but don’t overwhelm target):

import concurrent.futures

import requests

def fetch_url(url, proxy):
    proxies = {'http': proxy, 'https': proxy}
    return requests.get(url, proxies=proxies)

urls = ['https://example.com/page1', 'https://example.com/page2', ...]
proxy_list = [...]  # Your proxy list

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:  # Limit to 5 concurrent
    futures = [
        executor.submit(fetch_url, url, proxy_list[i % len(proxy_list)])
        for i, url in enumerate(urls)
    ]
    for future in concurrent.futures.as_completed(futures):
        response = future.result()
        print(f"Got response: {response.status_code}")


5. Handle CAPTCHAs

Some sites use CAPTCHAs to block bots. Strategies:

  1. Slow down: reduce request rate to avoid triggering CAPTCHAs
  2. Rotate more: fresh IPs are less likely to be flagged
  3. CAPTCHA solving services: 2Captcha, Anti-Captcha (paid)
  4. Session persistence: once you solve a CAPTCHA, keep that session/IP

# Example with 2Captcha integration
from twocaptcha import TwoCaptcha

solver = TwoCaptcha('YOUR_API_KEY')
result = solver.recaptcha(sitekey='SITE_KEY', url='https://example.com')
# Submit solved captcha to site


Frequently Asked Questions

How many proxies do I need?

It depends on your use case:

  • Web scraping (1000 pages/day): 10-20 residential proxies with rotation
  • Social media (5 accounts): 5 ISP proxies (1 per account)
  • SEO rank tracking (100 keywords): 5-10 datacenter proxies
  • E-commerce scraping (5000 products/day): 50+ residential proxies or traffic-based plan

General rule: 1 proxy per 50-100 requests/hour for residential, 1 proxy per account for ISP.
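That rule of thumb can be turned into a quick lower-bound estimate. The sketch below assumes 75 requests/hour per residential proxy (the midpoint of the 50-100 range above); real pools are sized with extra headroom for bans and concurrency, which is why the recommendations above are larger:

```python
import math

def proxies_needed(requests_per_day, per_proxy_per_hour=75, hours_active=24):
    """Lower-bound pool size from the rule of thumb above.
    75 req/hour is an assumed midpoint of the 50-100 range."""
    per_proxy_per_day = per_proxy_per_hour * hours_active
    return max(1, math.ceil(requests_per_day / per_proxy_per_day))

# proxies_needed(5000)  → minimum pool for 5000 requests/day at 75 req/hour
```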


Can I use Proxy-Seller for streaming (Netflix, Hulu)?

Not recommended. Streaming services aggressively block datacenter and known proxy IPs.

For streaming: Use a dedicated VPN service (PureVPN or IPVanish) with streaming-optimized servers.

Proxy-Seller is for: Web scraping, automation, testing; not bypassing geo-restrictions for media.


What’s the difference between HTTP and SOCKS5 proxies?

Feature          | HTTP/HTTPS                 | SOCKS5
Use case         | Web browsing, API requests | Any TCP/UDP traffic (torrents, games, email)
Speed            | Fast                       | Slightly slower
Protocol support | HTTP/HTTPS only            | All protocols
Compatibility    | Most tools                 | Requires SOCKS5 support

Recommendation: Use HTTP/HTTPS for web scraping, SOCKS5 for applications requiring non-HTTP protocols.
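With the Requests library, SOCKS5 support needs the PySocks extra (pip install "requests[socks]"). A sketch; the host, port, and credentials below are placeholders:

```python
def socks5_proxies(host, port, user, password):
    """Build a requests-style proxies dict for a SOCKS5 endpoint.
    The socks5h:// scheme resolves DNS through the proxy as well,
    avoiding DNS leaks from your own resolver."""
    url = f"socks5h://{user}:{password}@{host}:{port}"
    return {"http": url, "https": url}

# Usage (requires: pip install "requests[socks]"):
# import requests
# r = requests.get("https://example.com",
#                  proxies=socks5_proxies("45.67.89.123", 1080, "user", "pass"))
```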


How do I avoid getting my proxies banned?

Best practices:

  1. ✅ Rotate IPs: Change proxy every 10-20 requests
  2. ✅ Add delays: 1-3 seconds between requests
  3. ✅ Use residential/ISP: More trusted than datacenter
  4. ✅ Mimic humans: Realistic headers, random user agents
  5. ✅ Respect limits: Don't exceed 1-2 requests/second per IP
  6. ✅ Monitor bans: If a proxy gets banned, retire it and use a fresh one

Anti-ban checklist:

  • [ ] Headers include realistic User-Agent
  • [ ] Accept-Language, Accept-Encoding headers present
  • [ ] Referer header when navigating between pages
  • [ ] Cookies enabled and persisted across requests
  • [ ] JavaScript execution (if using headless browser)
  • [ ] Random delays (1-5 seconds)
  • [ ] IP rotation (every 5-10 requests)

Can I share proxies across multiple team members?

Yes, but with limitations:

  • Residential proxies (traffic-based): Share credentials, but usage counts against total GB
  • Datacenter/ISP (per-IP): Technically shareable, but may violate TOS (check with Proxy-Seller)

Best practice: Assign specific proxies to specific team members/tasks to avoid conflicts.


What happens if my proxy gets banned from a website?

Short answer: Nothing catastrophic. Just rotate to a new proxy.

Long answer:

  • Datacenter proxies: Ban typically affects only that specific IP. Your other proxies unaffected.
  • Residential proxies: Banned IP rotates out automatically (you get a new IP on next request).
  • ISP proxies: If static IP is banned, you may need to wait (bans often temporary: 1 hour to 24 hours) or request replacement from Proxy-Seller.

Prevention is better than cure: Follow best practices above to avoid bans in the first place.


Do proxies hide my identity like a VPN?

No. Proxies mask your IP but typically don’t encrypt your traffic.

For privacy/anonymity: Use a VPN (comparison guide)

For scraping/automation: Use proxies (Proxy-Seller)

Key differences:

Feature                 | Proxy                | VPN
Hides IP                | ✅ Yes               | ✅ Yes
Encrypts traffic        | ❌ No (except HTTPS) | ✅ Yes
Protects on public WiFi | ❌ No                | ✅ Yes
Good for scraping       | ✅ Yes               | ⚠️ Limited IPs
Good for privacy        | ❌ No                | ✅ Yes

Can I use Proxy-Seller for illegal activities?

No. Proxy-Seller’s Terms of Service prohibit:

  • Hacking, unauthorized access
  • Fraud, phishing, identity theft
  • DDoS attacks
  • Copyright infringement (piracy)
  • Spamming, malware distribution
  • Any illegal activity under US or international law

Violation = Account termination + potential legal action.

Legal uses: Web scraping (with respect to robots.txt), market research, SEO tools, geo-testing, social media management, ad verification, QA testing.


How do I test if my proxy is working?

Method 1: ExamineIP Tools (Easiest)

  1. Configure your proxy
  2. Visit tools.examineip.com
  3. Check the displayed IP: it should be the proxy IP, not your real IP
  4. Run the VPN Leak Test: it should show no leaks

Method 2: Command line

curl -x http://username:password@proxy-ip:port https://tools.examineip.com/api/ip.php

Should return proxy IP in JSON.

Method 3: Python script

import requests

proxies = {'http': 'http://username:password@proxy-ip:port', 'https': '…'}
response = requests.get('https://tools.examineip.com/api/ip.php', proxies=proxies)
print(response.json())


What should I do if Proxy-Seller proxies aren’t working?

Troubleshooting checklist:

  1. ✅ Verify credentials: Copy/paste from dashboard (no typos)
  2. ✅ Check proxy format: Should be http://username:password@ip:port
  3. ✅ Test connectivity: curl -x http://proxy https://google.com
  4. ✅ Check proxy status: Log in to the Proxy-Seller dashboard → verify the proxy is active
  5. ✅ IP whitelist: Some plans require whitelisting your client IP
  6. ✅ Firewall: Ensure the outbound proxy port (8080, 3128, etc.) isn't blocked
  7. ✅ Try a different proxy: If one fails, try another from your pool
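Checklist item 2 can be automated offline before you blame the proxy itself. A sketch that flags common formatting mistakes in a proxy URL:

```python
from urllib.parse import urlparse

def check_proxy_format(proxy_url):
    """Offline sanity check for a http://user:pass@ip:port style proxy URL.
    Returns a list of problems found (empty list = format looks OK)."""
    problems = []
    parsed = urlparse(proxy_url)
    if parsed.scheme not in ("http", "https", "socks5", "socks5h"):
        problems.append(f"unexpected scheme: {parsed.scheme!r}")
    if not parsed.hostname:
        problems.append("missing host")
    try:
        if parsed.port is None:
            problems.append("missing port")
    except ValueError:
        problems.append("invalid port")  # non-numeric or out of range
    if parsed.username is None or parsed.password is None:
        problems.append("missing username:password (needed unless your IP is whitelisted)")
    return problems

# check_proxy_format("http://user:pass@45.67.89.123:8080")  → []
# check_proxy_format("http://user:pass@45.67.89.123")       → ["missing port"]
```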

Still not working? Contact Proxy-Seller support (24/7 live chat).


Get Started with Proxy-Seller

Ready to use professional proxies for your projects?

Get 15% Off with Code EXAMINEIP →

What you get:

  • ✅ Residential, ISP, Mobile, and Datacenter proxies
  • ✅ 100+ countries, city-level targeting
  • ✅ HTTP/HTTPS and SOCKS5 protocols
  • ✅ 24/7 customer support
  • ✅ Money-back guarantee (24 hours)
  • ✅ Instant proxy delivery


Author: ExamineIP Team | About Us
