Use Cases

Financial Data Scraping with CAPTCHA Handling

Financial platforms like stock screeners, SEC EDGAR, and trading platforms protect data with CAPTCHAs to prevent automated extraction. CaptchaAI handles these challenges programmatically so you can collect market data at scale.


Where CAPTCHAs Appear in Finance

| Source | CAPTCHA Type | Trigger | Data Value |
|---|---|---|---|
| SEC EDGAR | reCAPTCHA v2 | High-volume requests | Company filings |
| Yahoo Finance | reCAPTCHA v2 | Scraping detection | Stock quotes, history |
| Bloomberg | Cloudflare Turnstile | All automated access | Market data |
| Finviz | reCAPTCHA v2 | Stock screener access | Screening results |
| TradingView | Cloudflare Challenge | Rate limiting | Charts, indicators |
| Morningstar | reCAPTCHA v3 | Data export pages | Fund analytics |
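Which solver method to request depends on which widget the page actually serves. A minimal detection sketch — `detect_captcha` is an illustrative helper, not part of CaptchaAI, and the HTML markers are the commonly seen defaults (a given site's markup may differ):

```python
import re


def detect_captcha(html):
    """Return (captcha_type, sitekey) based on common widget markers,
    or (None, None) if no known CAPTCHA container is found."""
    # reCAPTCHA v2/v3 render into a div with class g-recaptcha and a data-sitekey
    m = re.search(r'class="g-recaptcha"[^>]*data-sitekey="([^"]+)"', html)
    if m:
        return "recaptcha", m.group(1)
    # Cloudflare Turnstile uses a cf-turnstile container with its own sitekey
    m = re.search(r'class="cf-turnstile"[^>]*data-sitekey="([^"]+)"', html)
    if m:
        return "turnstile", m.group(1)
    return None, None
```

The returned type maps directly onto the solver method names used below ("userrecaptcha" vs. "turnstile").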

Stock Screener Scraping

import requests
import time
from bs4 import BeautifulSoup
import re

CAPTCHAAI_KEY = "YOUR_API_KEY"
CAPTCHAAI_URL = "https://ocr.captchaai.com"


def solve_captcha(method, sitekey, pageurl, **kwargs):
    """Submit a CAPTCHA to CaptchaAI and poll until a token is returned."""
    # reCAPTCHA methods use the "googlekey" field; Turnstile (and hCaptcha)
    # use "sitekey" in 2Captcha-compatible APIs
    key_field = "googlekey" if method == "userrecaptcha" else "sitekey"
    data = {
        "key": CAPTCHAAI_KEY,
        "method": method,
        key_field: sitekey,
        "pageurl": pageurl,
        "json": 1,
    }
    data.update(kwargs)

    resp = requests.post(f"{CAPTCHAAI_URL}/in.php", data=data, timeout=30)
    submit = resp.json()
    if submit.get("status") != 1:
        raise RuntimeError(f"Submit failed: {submit.get('request')}")
    task_id = submit["request"]

    for _ in range(60):
        time.sleep(5)
        result = requests.get(f"{CAPTCHAAI_URL}/res.php", params={
            "key": CAPTCHAAI_KEY, "action": "get",
            "id": task_id, "json": 1,
        }, timeout=30)
        r = result.json()
        if r.get("status") == 1:
            return r["request"]
        if r["request"] != "CAPCHA_NOT_READY":
            raise RuntimeError(f"Solve failed: {r['request']}")

    raise TimeoutError("CAPTCHA solve timed out after 5 minutes")


class FinancialScraper:
    def __init__(self, proxy=None):
        self.session = requests.Session()
        if proxy:
            self.session.proxies = {"http": proxy, "https": proxy}
        self.session.headers.update({
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
            "AppleWebKit/537.36 Chrome/126.0.0.0 Safari/537.36",
            "Accept-Language": "en-US,en;q=0.9",
        })

    def scrape_screener(self, url):
        """Scrape stock screener, handling CAPTCHA if triggered."""
        resp = self.session.get(url, timeout=30)

        # Check for CAPTCHA
        sitekey_match = re.search(r'data-sitekey="([^"]+)"', resp.text)
        if sitekey_match:
            sitekey = sitekey_match.group(1)
            token = solve_captcha("userrecaptcha", sitekey, url)

            # Resubmit with the solved token
            resp = self.session.post(url, data={
                "g-recaptcha-response": token,
            }, timeout=30)

        return self._parse_stocks(resp.text)

    def _parse_stocks(self, html):
        soup = BeautifulSoup(html, "html.parser")
        stocks = []
        for row in soup.select("table.screener-table tr")[1:]:
            cols = row.select("td")
            if len(cols) >= 8:
                stocks.append({
                    "ticker": cols[1].get_text(strip=True),
                    "company": cols[2].get_text(strip=True),
                    "sector": cols[3].get_text(strip=True),
                    "price": cols[6].get_text(strip=True),
                    "change": cols[7].get_text(strip=True),
                })
        return stocks


# Usage
scraper = FinancialScraper(
    proxy="http://user:pass@residential.proxy.com:5000"
)
stocks = scraper.scrape_screener("https://screener.example.com/screener.ashx?v=111")
for stock in stocks[:5]:
    print(f"{stock['ticker']}: {stock['price']} ({stock['change']})")

SEC EDGAR Filing Extraction

SEC EDGAR implements rate limiting and CAPTCHAs for high-volume access:

import json


class SECFilingScraper:
    BASE_URL = "https://efts.sec.gov/LATEST"

    def __init__(self, user_agent_email, proxy=None):
        self.session = requests.Session()
        if proxy:
            self.session.proxies = {"http": proxy, "https": proxy}
        # SEC requires identifying User-Agent
        self.session.headers.update({
            "User-Agent": f"CompanyName admin@{user_agent_email}",
            "Accept": "application/json",
        })

    def search_filings(self, company, filing_type="10-K"):
        """Search EDGAR for specific filing types."""
        url = f"{self.BASE_URL}/search-index"
        params = {
            "q": company,
            "dateRange": "custom",
            "forms": filing_type,
        }

        resp = self.session.get(url, params=params, timeout=30)

        # Handle CAPTCHA if triggered
        if "captcha" in resp.text.lower() or resp.status_code == 403:
            sitekey = self._extract_sitekey(resp.text)
            if sitekey:
                token = solve_captcha("userrecaptcha", sitekey, url)
                resp = self.session.post(url, data={
                    **params,
                    "g-recaptcha-response": token,
                })

        return resp.json() if resp.status_code == 200 else {}

    def download_filing(self, filing_url):
        """Download individual filing document."""
        resp = self.session.get(filing_url, timeout=60)
        if resp.status_code == 200:
            return resp.text
        return None

    def _extract_sitekey(self, html):
        match = re.search(r'data-sitekey="([^"]+)"', html)
        return match.group(1) if match else None


# Usage
sec = SECFilingScraper(
    user_agent_email="example.com",
    proxy="http://user:pass@proxy.example.com:5000",
)
filings = sec.search_filings("Apple Inc", "10-K")

Turnstile-Protected Market Data

def scrape_turnstile_market_data(url, sitekey):
    """Handle Cloudflare Turnstile on financial data sites."""
    token = solve_captcha("turnstile", sitekey, url)

    session = requests.Session()
    session.headers.update({
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
        "AppleWebKit/537.36 Chrome/126.0.0.0 Safari/537.36",
    })

    resp = session.post(url, data={
        "cf-turnstile-response": token,
    }, timeout=30)

    return resp.json() if resp.status_code == 200 else None

Scheduled Data Collection

import csv
import os
from datetime import datetime


def daily_market_snapshot(tickers, output_dir="data"):
    """Collect daily stock data, handling CAPTCHAs automatically."""
    scraper = FinancialScraper(
        proxy="http://user:pass@residential.proxy.com:5000"
    )

    date_str = datetime.now().strftime("%Y-%m-%d")
    results = []

    for ticker in tickers:
        url = f"https://screener.example.com/quote.ashx?t={ticker}"
        try:
            data = scraper.scrape_screener(url)
            if data:
                results.extend(data)
            time.sleep(2)  # Rate limit between tickers
        except Exception as e:
            print(f"Error on {ticker}: {e}")

    # Save to CSV, creating the output directory if needed
    os.makedirs(output_dir, exist_ok=True)
    filepath = f"{output_dir}/market_{date_str}.csv"
    with open(filepath, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["ticker", "company", "sector", "price", "change"])
        writer.writeheader()
        writer.writerows(results)

    print(f"Saved {len(results)} records to {filepath}")
    return results


# Run daily
tickers = ["AAPL", "GOOGL", "MSFT", "AMZN", "TSLA"]
daily_market_snapshot(tickers)
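One way to trigger the snapshot each trading day is a cron entry; the script path, log path, and schedule below are placeholders:

```shell
# Run the snapshot at 16:30 local time, Monday through Friday (after US market close)
30 16 * * 1-5 /usr/bin/python3 /path/to/market_snapshot.py >> /var/log/market_snapshot.log 2>&1
```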

Rate Limiting Best Practices

Financial sites are stricter about automated access:

| Practice | Recommendation |
|---|---|
| Request delay | 2-5 seconds between pages |
| Concurrent connections | Max 3-5 per domain |
| Proxy type | Residential or ISP |
| Session length | 5-10 min sticky sessions |
| User-Agent | Realistic, consistent per session |
| SEC EDGAR | Include contact email in UA (required) |
| Market hours | Scrape during off-peak when possible |
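The delay and concurrency figures above can be enforced in code rather than remembered. A minimal sketch of a throttle wrapper — the `Throttle` class and its defaults are illustrative, not a CaptchaAI API:

```python
import threading
import time


class Throttle:
    """Enforce a minimum delay between requests and a cap on
    concurrent requests, per the recommendations above."""

    def __init__(self, min_delay=2.0, max_concurrent=3):
        self.min_delay = min_delay
        self.semaphore = threading.Semaphore(max_concurrent)
        self.lock = threading.Lock()
        self.last_request = 0.0

    def __enter__(self):
        self.semaphore.acquire()  # block if too many in flight
        with self.lock:
            wait = self.min_delay - (time.monotonic() - self.last_request)
            if wait > 0:
                time.sleep(wait)  # space requests out
            self.last_request = time.monotonic()
        return self

    def __exit__(self, *exc):
        self.semaphore.release()
        return False
```

Usage: create one `Throttle` per target domain and wrap each request in `with throttle: resp = session.get(url)`.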

Troubleshooting

| Issue | Cause | Fix |
|---|---|---|
| 403 on SEC EDGAR | Missing User-Agent with email | Add a `CompanyName email@domain` header |
| CAPTCHA on every request | Rate limit exceeded | Add 3-5s delays between requests |
| Stale price data | Cached response | Add a cache-busting query parameter |
| JSON parse error | CAPTCHA page returned instead of data | Check for a CAPTCHA before parsing |
| IP blocked | Too many requests from same IP | Switch to a rotating residential proxy |
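The "JSON parse error" row is worth guarding against in code: when a CAPTCHA interstitial comes back, the body is HTML, so check it before calling a JSON parser. A minimal sketch (`safe_json` is an illustrative helper, not a CaptchaAI API):

```python
import json


def safe_json(resp_text):
    """Return parsed JSON, or None if the body looks like a CAPTCHA/HTML page."""
    body = resp_text.lstrip()
    # CAPTCHA interstitials come back as HTML, not JSON
    if body.startswith("<") or "captcha" in body[:2000].lower():
        return None
    try:
        return json.loads(body)
    except json.JSONDecodeError:
        return None
```

A `None` return is the signal to run the CAPTCHA-solving path and retry, instead of crashing on a parse error.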

FAQ

Is it legal to scrape financial data?

Public financial data (SEC filings, stock quotes) is generally permissible. Always respect terms of service and rate limits. The SEC explicitly provides EDGAR access for research purposes.

Why do financial sites use CAPTCHAs?

To prevent high-volume automated extraction that could enable market manipulation, competitive intelligence gathering, or excessive server load.

How often should I scrape market data?

For stock prices: once per minute during market hours maximum. For filings: once daily is typical. Over-scraping triggers CAPTCHAs faster.
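The market-hours advice can be gated with a simple clock check before each scrape. `is_market_open` is an illustrative helper that covers regular US equity hours only and ignores exchange holidays and half days:

```python
from datetime import datetime, time as dtime
from zoneinfo import ZoneInfo


def is_market_open(now=None):
    """Rough check for regular US equity hours (9:30-16:00 ET, Mon-Fri).
    Does not account for exchange holidays or half days."""
    now = now or datetime.now(ZoneInfo("America/New_York"))
    if now.weekday() >= 5:  # Saturday/Sunday
        return False
    return dtime(9, 30) <= now.time() <= dtime(16, 0)
```

During open hours, poll prices at the per-minute cadence suggested above; outside them, fall back to the daily filing/snapshot schedule.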



Collect financial data without CAPTCHA interruptions — get your CaptchaAI key and automate market research.

