Use Cases

News and Media Aggregation with CAPTCHA Handling

News outlets and media platforms use CAPTCHAs to protect their content from automated aggregation, yet media monitors, PR firms, and research organizations still need to collect articles programmatically. This guide shows how to handle the CAPTCHA challenges you will meet across news sources using CaptchaAI.


CAPTCHAs on News Platforms

| Source Type | CAPTCHA | Trigger | Content |
|---|---|---|---|
| Major news outlets | Cloudflare Turnstile | Bot detection | Articles, headlines |
| Wire services (AP, Reuters) | reCAPTCHA v2 | Bulk access | Breaking news |
| Paywalled publications | reCAPTCHA v3 | Access attempts | Premium articles |
| Local news sites | reCAPTCHA v2 | Rate limiting | Regional news |
| News aggregators | Cloudflare Challenge | Scraping detection | Aggregated feeds |
| Press release sites | Image CAPTCHA | Download pages | PR content |
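Before dispatching a solve request, it helps to identify which challenge a page is actually serving. A minimal sketch, keyed on marker strings these widgets commonly embed in their HTML (not an exhaustive list):

```python
def classify_captcha(html):
    """Guess which CAPTCHA type a news page is serving, based on
    markers the common widgets embed in their HTML."""
    lowered = html.lower()
    if "cf-turnstile" in lowered or "challenges.cloudflare.com" in lowered:
        return "turnstile"
    if "g-recaptcha" in lowered or "www.google.com/recaptcha" in lowered:
        return "recaptcha"
    if "data-sitekey" in lowered:
        return "unknown-sitekey"  # a widget is present, vendor unclear
    return None  # no CAPTCHA detected
```

Routing on the result lets you pick the right solving method up front instead of guessing after a failed submit.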

News Aggregator

import requests
import time
import re
from bs4 import BeautifulSoup
from datetime import datetime

CAPTCHAAI_KEY = "YOUR_API_KEY"
CAPTCHAAI_URL = "https://ocr.captchaai.com"


def solve_captcha(method, sitekey, pageurl, **kwargs):
    # Turnstile submissions expect a "sitekey" parameter;
    # reCAPTCHA methods use "googlekey".
    key_param = "sitekey" if method == "turnstile" else "googlekey"
    data = {
        "key": CAPTCHAAI_KEY, "method": method,
        key_param: sitekey, "pageurl": pageurl, "json": 1,
    }
    data.update(kwargs)
    resp = requests.post(f"{CAPTCHAAI_URL}/in.php", data=data, timeout=30)
    submit = resp.json()
    if submit.get("status") != 1:
        raise RuntimeError(f"Submit failed: {submit.get('request')}")
    task_id = submit["request"]
    for _ in range(60):
        time.sleep(5)
        result = requests.get(f"{CAPTCHAAI_URL}/res.php", params={
            "key": CAPTCHAAI_KEY, "action": "get",
            "id": task_id, "json": 1,
        }, timeout=30)
        r = result.json()
        if r.get("status") == 1:
            return r["request"]
        if r["request"] != "CAPCHA_NOT_READY":
            raise RuntimeError(f"Solve failed: {r['request']}")
    raise TimeoutError("CAPTCHA not solved within 5 minutes")


class NewsAggregator:
    def __init__(self, proxy=None):
        self.session = requests.Session()
        if proxy:
            self.session.proxies = {"http": proxy, "https": proxy}
        self.session.headers.update({
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
            "AppleWebKit/537.36 Chrome/126.0.0.0 Safari/537.36",
            "Accept-Language": "en-US,en;q=0.9",
        })

    def collect_headlines(self, source_url, section=None):
        """Collect headlines from a news source."""
        url = f"{source_url}/{section}" if section else source_url
        resp = self.session.get(url, timeout=30)

        if self._has_captcha(resp.text):
            resp = self._solve_and_retry(resp.text, url)

        soup = BeautifulSoup(resp.text, "html.parser")
        articles = []
        seen = set()

        for item in soup.select("article, .story, .headline-item, h2 a, h3 a"):
            link = item if item.name == "a" else item.select_one("a")
            if not link:
                continue
            url = self._abs_url(source_url, link.get("href", ""))
            # The selectors overlap (an <article> may contain an <h2><a>),
            # so skip links we have already recorded.
            if url in seen:
                continue
            seen.add(url)
            articles.append({
                "title": link.get_text(strip=True),
                "url": url,
                "source": source_url,
                "collected_at": datetime.now().isoformat(),
            })

        return articles

    def get_article(self, article_url):
        """Fetch full article content."""
        resp = self.session.get(article_url, timeout=30)

        if self._has_captcha(resp.text):
            resp = self._solve_and_retry(resp.text, article_url)

        soup = BeautifulSoup(resp.text, "html.parser")

        # Remove unwanted elements
        for tag in soup.select("script, style, nav, footer, .ad, .sidebar"):
            tag.decompose()

        content_el = soup.select_one(
            "article, .article-body, .story-body, .entry-content"
        )

        return {
            "url": article_url,
            "title": self._text(soup, "h1, .article-title"),
            "author": self._text(soup, ".author, .byline, [rel='author']"),
            "date": self._text(soup, "time, .publish-date, .article-date"),
            "content": content_el.get_text(separator="\n", strip=True) if content_el else "",
            "word_count": len(content_el.get_text().split()) if content_el else 0,
        }

    def aggregate_sources(self, sources, max_articles_per=20):
        """Aggregate headlines across multiple sources."""
        all_articles = []

        for source in sources:
            try:
                articles = self.collect_headlines(source["url"], source.get("section"))
                all_articles.extend(articles[:max_articles_per])
                print(f"{source['name']}: {len(articles)} headlines")
            except Exception as e:
                print(f"{source['name']}: Error - {e}")
            time.sleep(3)

        return all_articles

    def _has_captcha(self, html):
        return any(tag in html.lower() for tag in [
            'data-sitekey', 'g-recaptcha', 'cf-turnstile',
        ])

    def _solve_and_retry(self, html, url):
        match = re.search(r'data-sitekey="([^"]+)"', html)
        if not match:
            return self.session.get(url)
        sitekey = match.group(1)
        if 'cf-turnstile' in html:
            token = solve_captcha("turnstile", sitekey, url)
            return self.session.post(url, data={"cf-turnstile-response": token})
        token = solve_captcha("userrecaptcha", sitekey, url)
        return self.session.post(url, data={"g-recaptcha-response": token})

    def _text(self, soup, selector):
        el = soup.select_one(selector)
        return el.get_text(strip=True) if el else ""

    def _abs_url(self, base, href):
        # urljoin resolves absolute, root-relative, and
        # protocol-relative hrefs correctly.
        from urllib.parse import urljoin
        return urljoin(base, href)


# Usage
aggregator = NewsAggregator(
    proxy="http://user:pass@residential.proxy.com:5000"
)

sources = [
    {"name": "Tech News A", "url": "https://technews-a.example.com", "section": "latest"},
    {"name": "Business B", "url": "https://business-b.example.com", "section": "tech"},
    {"name": "Industry C", "url": "https://industry-c.example.com"},
]

headlines = aggregator.aggregate_sources(sources)
print(f"Total: {len(headlines)} headlines collected")
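Aggregating across outlets often surfaces the same story under slightly different headlines. One way to deduplicate by title similarity, sketched here with the standard library's difflib (the 0.85 threshold is an assumption to tune per feed):

```python
from difflib import SequenceMatcher

def dedupe_by_title(articles, threshold=0.85):
    """Drop articles whose titles are near-duplicates of one
    already kept (similarity ratio above the threshold)."""
    kept = []
    for article in articles:
        title = article["title"].lower()
        if not any(
            SequenceMatcher(None, title, k["title"].lower()).ratio() >= threshold
            for k in kept
        ):
            kept.append(article)
    return kept
```

Pairwise comparison is quadratic in the number of kept articles, which is fine for a few hundred headlines per run; switch to hashing on normalized titles if your volume is much larger.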

Keyword-Based News Monitoring

class NewsMonitor:
    def __init__(self, keywords, sources, proxy=None):
        self.keywords = [kw.lower() for kw in keywords]
        self.aggregator = NewsAggregator(proxy=proxy)
        self.sources = sources
        self.seen_urls = set()

    def scan(self):
        """Scan for articles matching keywords."""
        headlines = self.aggregator.aggregate_sources(self.sources)
        matches = []

        for article in headlines:
            if article["url"] in self.seen_urls:
                continue

            title_lower = article["title"].lower()
            matched_kws = [kw for kw in self.keywords if kw in title_lower]

            if matched_kws:
                article["matched_keywords"] = matched_kws
                matches.append(article)
                self.seen_urls.add(article["url"])

        return matches

    def continuous_monitor(self, interval_min=30):
        """Run continuous monitoring with alerts."""
        while True:
            matches = self.scan()
            if matches:
                print(f"\n=== {len(matches)} new matches found ===")
                for m in matches:
                    print(f"  [{', '.join(m['matched_keywords'])}] {m['title']}")
                    print(f"    {m['url']}")
            else:
                print(f"No new matches at {datetime.now().strftime('%H:%M')}")

            time.sleep(interval_min * 60)


# Monitor for specific topics
monitor = NewsMonitor(
    keywords=["captcha", "bot detection", "web scraping", "automation"],
    sources=sources,
    proxy="http://user:pass@residential.proxy.com:5000",
)
matches = monitor.scan()
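The monitor above matches keywords by substring (`kw in title_lower`), which can fire inside longer words. A stricter word-boundary variant, usable as a drop-in for that check:

```python
import re

def match_keywords(title, keywords):
    """Return the keywords that appear in the title as whole words,
    avoiding substring false positives (e.g. "ai" inside "aircraft")."""
    matched = []
    for kw in keywords:
        # \b anchors the keyword at word boundaries; re.escape handles
        # keywords that contain regex metacharacters.
        if re.search(rf"\b{re.escape(kw)}\b", title, re.IGNORECASE):
            matched.append(kw)
    return matched
```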

Troubleshooting

| Issue | Cause | Fix |
|---|---|---|
| Cloudflare blocking all requests | Aggressive bot detection | Use residential proxy + realistic UA |
| Paywall instead of article | Content behind subscription | Detect paywall, skip or handle |
| CAPTCHA on every page | IP flagged | Rotate proxy, add 5+ sec delays |
| Article content empty | JS-rendered content | Use Selenium/Puppeteer for SPA sites |
| Duplicate articles | Same story from multiple sources | Deduplicate by title similarity |

FAQ

Is it legal to aggregate news content?

Collecting headlines and metadata for research or monitoring is common practice. Reproducing full copyrighted articles without permission is not. Use snippets and link back to the original source.

How do I handle paywalled content?

Detect the paywall (look for paywall CSS classes or limited content length) and flag it. Only access content you're authorized to view.
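A simple heuristic along those lines, assuming illustrative marker class names (real markers vary by site) and a word-count floor:

```python
PAYWALL_MARKERS = ["paywall", "subscription-required", "meter-expired"]  # illustrative; adjust per target site

def looks_paywalled(html, body_text, min_words=150):
    """Heuristic paywall check: a known paywall marker appears in the
    page, or the extracted body text is suspiciously short."""
    lowered = html.lower()
    if any(marker in lowered for marker in PAYWALL_MARKERS):
        return True
    return len(body_text.split()) < min_words
```

Run it on the output of `get_article` and skip or flag hits rather than retrying; repeated requests against a paywall only burn solves.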

Which proxy type works best for news sites?

Rotating residential proxies work best. Major news outlets use Cloudflare, which blocks datacenter IPs aggressively.
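A minimal rotation sketch, assuming a pool of proxy URLs from your provider (the endpoints below are placeholders):

```python
from itertools import cycle

class ProxyRotator:
    """Cycle through a pool of proxy URLs, one per aggregator run."""
    def __init__(self, proxy_urls):
        self._pool = cycle(proxy_urls)

    def next_proxy(self):
        return next(self._pool)

# Hypothetical pool; substitute your provider's endpoints.
rotator = ProxyRotator([
    "http://user:pass@res1.proxy.example:5000",
    "http://user:pass@res2.proxy.example:5000",
])
# aggregator = NewsAggregator(proxy=rotator.next_proxy())
```

Rotating per run (rather than per request) keeps each session's cookies consistent with one IP, which matters for Cloudflare-fronted sites.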



Aggregate news from any source — get your CaptchaAI key and automate content collection.
