
Academic Research Web Scraping with CAPTCHA Solving

Academic databases and journal portals use CAPTCHAs to limit automated access. Researchers conducting literature reviews, bibliometric analysis, and meta-studies need to collect data from these sources at scale. CaptchaAI handles the CAPTCHA challenges automatically.


Academic Sources and CAPTCHAs

| Source | CAPTCHA Type | Trigger | Data |
| --- | --- | --- | --- |
| Google Scholar | reCAPTCHA v3 | High-volume queries | Citations, papers |
| PubMed | reCAPTCHA v2 | Repeated searches | Biomedical literature |
| Web of Science | Cloudflare Turnstile | Bulk downloads | Citation metrics |
| Scopus | reCAPTCHA v2 | Export operations | Bibliometric data |
| IEEE Xplore | reCAPTCHA v2 | Search + download | Engineering papers |
| JSTOR | reCAPTCHA v2 | Access pages | Humanities/social science |
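The table above can be turned into a quick detection heuristic. This sketch (the function name and markers are my own choices, not an official API) inspects returned HTML for common challenge fingerprints:

```python
def detect_captcha_type(html):
    """Rough heuristic: which challenge is this page serving?"""
    h = html.lower()
    # Cloudflare Turnstile widgets carry a cf-turnstile class or load
    # their script from challenges.cloudflare.com
    if "cf-turnstile" in h or "challenges.cloudflare.com" in h:
        return "turnstile"
    # reCAPTCHA v2/v3 pages embed a g-recaptcha container or api.js
    if "g-recaptcha" in h or "recaptcha/api.js" in h:
        return "recaptcha"
    # A bare sitekey attribute with no known wrapper
    if "data-sitekey" in h:
        return "unknown-sitekey"
    return None
```

A scraper can call this before parsing results and branch to the appropriate solving method.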

Citation Data Collector

import requests
import time
import re
from bs4 import BeautifulSoup
import csv

CAPTCHAAI_KEY = "YOUR_API_KEY"
CAPTCHAAI_URL = "https://ocr.captchaai.com"


def solve_captcha(method, sitekey, pageurl, **kwargs):
    # Turnstile tasks use the "sitekey" parameter; reCAPTCHA methods use "googlekey"
    key_param = "sitekey" if method == "turnstile" else "googlekey"
    data = {
        "key": CAPTCHAAI_KEY, "method": method,
        key_param: sitekey, "pageurl": pageurl, "json": 1,
    }
    data.update(kwargs)
    resp = requests.post(f"{CAPTCHAAI_URL}/in.php", data=data, timeout=30)
    submit = resp.json()
    if submit.get("status") != 1:
        raise RuntimeError(f"Submit failed: {submit.get('request')}")
    task_id = submit["request"]
    for _ in range(60):
        time.sleep(5)
        result = requests.get(f"{CAPTCHAAI_URL}/res.php", params={
            "key": CAPTCHAAI_KEY, "action": "get",
            "id": task_id, "json": 1,
        }, timeout=30)
        r = result.json()
        if r["request"] != "CAPCHA_NOT_READY":
            if r.get("status") == 1:
                return r["request"]
            raise RuntimeError(f"Solve failed: {r['request']}")
    raise TimeoutError("CAPTCHA solve timed out after 5 minutes")


class AcademicScraper:
    def __init__(self, proxy=None):
        self.session = requests.Session()
        if proxy:
            self.session.proxies = {"http": proxy, "https": proxy}
        self.session.headers.update({
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
            "AppleWebKit/537.36 Chrome/126.0.0.0 Safari/537.36",
            "Accept-Language": "en-US,en;q=0.9",
        })

    def search_papers(self, search_url, query, max_pages=10):
        """Search an academic database for papers matching a query."""
        all_papers = []

        for page in range(max_pages):
            # Pass the query via params so requests URL-encodes it safely
            resp = self.session.get(search_url, params={
                "q": query, "start": page * 10,
            }, timeout=30)

            # Handle CAPTCHA
            if self._has_captcha(resp.text):
                resp = self._solve_and_retry(resp.text, resp.url)

            papers = self._parse_results(resp.text)
            if not papers:
                break  # No more results

            all_papers.extend(papers)
            print(f"Page {page + 1}: {len(papers)} papers")
            time.sleep(5)  # Respectful delay

        return all_papers

    def get_paper_details(self, paper_url):
        """Get detailed metadata for a single paper."""
        resp = self.session.get(paper_url, timeout=30)

        if self._has_captcha(resp.text):
            resp = self._solve_and_retry(resp.text, paper_url)

        soup = BeautifulSoup(resp.text, "html.parser")
        return {
            "title": self._safe_text(soup, "h1, .article-title"),
            "authors": self._safe_text(soup, ".authors, .author-list"),
            "abstract": self._safe_text(soup, ".abstract, #abstract"),
            "doi": self._safe_text(soup, ".doi, [data-doi]"),
            "journal": self._safe_text(soup, ".journal-name, .publication"),
            "year": self._safe_text(soup, ".pub-date, .year"),
            "citations": self._safe_text(soup, ".citation-count, .cited-by"),
        }

    def export_to_csv(self, papers, filename):
        """Export collected papers to CSV."""
        if not papers:
            return
        keys = papers[0].keys()
        with open(filename, "w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=keys)
            writer.writeheader()
            writer.writerows(papers)
        print(f"Exported {len(papers)} papers to {filename}")

    def _has_captcha(self, html):
        return any(tag in html.lower() for tag in [
            'data-sitekey', 'g-recaptcha', 'cf-turnstile',
        ])

    def _solve_and_retry(self, html, url):
        match = re.search(r'data-sitekey="([^"]+)"', html)
        if not match:
            return self.session.get(url, timeout=30)

        sitekey = match.group(1)
        if 'cf-turnstile' in html:
            token = solve_captcha("turnstile", sitekey, url)
            return self.session.post(url, data={"cf-turnstile-response": token}, timeout=30)
        else:
            token = solve_captcha("userrecaptcha", sitekey, url)
            return self.session.post(url, data={"g-recaptcha-response": token}, timeout=30)

    def _parse_results(self, html):
        soup = BeautifulSoup(html, "html.parser")
        papers = []
        for item in soup.select(".gs_r, .search-result, article.result"):
            title_el = item.select_one("h3 a, .result-title a")
            if title_el:
                papers.append({
                    "title": title_el.get_text(strip=True),
                    "url": title_el.get("href", ""),
                    "snippet": self._safe_text(item, ".gs_rs, .abstract-snippet"),
                    "authors": self._safe_text(item, ".gs_a, .author-info"),
                })
        return papers

    def _safe_text(self, soup, selector):
        el = soup.select_one(selector)
        return el.get_text(strip=True) if el else ""


# Usage — Literature review
scraper = AcademicScraper(
    proxy="http://user:pass@residential.proxy.com:5000"
)

papers = scraper.search_papers(
    "https://scholar.example.com/scholar",
    query="machine learning CAPTCHA solving",
    max_pages=5,
)

# Get details for top papers
detailed = []
for paper in papers[:20]:
    if paper["url"]:
        detail = scraper.get_paper_details(paper["url"])
        detailed.append(detail)
        time.sleep(3)

scraper.export_to_csv(detailed, "literature_review.csv")
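A long collection run can be cut short by an IP block halfway through. One way to make it resumable is a small checkpoint file of completed URLs. This is a sketch; the file name and helper names are assumptions, not part of CaptchaAI:

```python
import json
import os

# Hypothetical checkpoint path; pick any writable location
CHECKPOINT = "scrape_progress.json"

def load_done():
    """Return the set of paper URLs already scraped."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, encoding="utf-8") as f:
            return set(json.load(f))
    return set()

def mark_done(done, url):
    """Record a finished URL so an interrupted run can resume."""
    done.add(url)
    with open(CHECKPOINT, "w", encoding="utf-8") as f:
        json.dump(sorted(done), f)
```

Before calling get_paper_details on a URL, skip it if it is already in the set, and call mark_done after each successful fetch.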

Bibliometric Analysis

def bibliometric_analysis(scraper, seed_papers, depth=2):
    """Follow "cited by" links to build a citation network (nodes + edges)."""
    visited = set()
    nodes = []
    edges = []  # (citing_url, cited_url) pairs

    def _crawl(paper_url, current_depth, parent=None):
        # A paper found on the parent's "cited by" page cites the parent
        if parent:
            edges.append((paper_url, parent))
        if current_depth > depth or paper_url in visited:
            return
        visited.add(paper_url)

        try:
            details = scraper.get_paper_details(paper_url)
            details["url"] = paper_url
            nodes.append(details)

            # Follow "cited by" links
            citations_url = f"{paper_url}/citations"
            resp = scraper.session.get(citations_url, timeout=30)
            if scraper._has_captcha(resp.text):
                resp = scraper._solve_and_retry(resp.text, citations_url)

            citations = scraper._parse_results(resp.text)
            for cite in citations[:5]:  # Limit breadth
                if cite["url"]:
                    _crawl(cite["url"], current_depth + 1, parent=paper_url)
                    time.sleep(3)

        except Exception as e:
            print(f"Error crawling {paper_url}: {e}")

    for paper in seed_papers:
        _crawl(paper["url"], 0)

    return {"nodes": nodes, "edges": edges}

Rate Limiting for Academic Sites

| Source | Recommended Delay | Max Pages/Hour |
| --- | --- | --- |
| Google Scholar | 10-15 sec | 40-50 |
| PubMed | 3-5 sec | 100 |
| Web of Science | 5-10 sec | 60 |
| Scopus | 5-10 sec | 60 |
| IEEE | 3-5 sec | 100 |
| JSTOR | 5-10 sec | 60 |

Academic sites ban IPs quickly. Use conservative delays.
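The delays above can be enforced mechanically per domain instead of sprinkling time.sleep calls around. A minimal sketch (class name and defaults are my own):

```python
import time

class DomainRateLimiter:
    """Enforce a minimum delay between requests to each domain."""

    def __init__(self, delays, default=5):
        self.delays = delays      # domain -> minimum seconds between requests
        self.default = default    # fallback delay for unlisted domains
        self.last_hit = {}        # domain -> monotonic timestamp of last request

    def wait(self, domain):
        """Block until the domain's minimum delay has elapsed, then stamp it."""
        min_delay = self.delays.get(domain, self.default)
        last = self.last_hit.get(domain)
        if last is not None:
            elapsed = time.monotonic() - last
            if elapsed < min_delay:
                time.sleep(min_delay - elapsed)
        self.last_hit[domain] = time.monotonic()

# Delays drawn from the table above (seconds)
limiter = DomainRateLimiter({
    "scholar.google.com": 12,
    "pubmed.ncbi.nlm.nih.gov": 4,
})
```

Call limiter.wait(domain) immediately before each session.get; the first request to a domain goes through instantly and later ones are throttled.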


Troubleshooting

| Issue | Cause | Fix |
| --- | --- | --- |
| CAPTCHA on every search | Academic site flagged IP | Switch proxy, increase delay to 15+ sec |
| No results returned | CAPTCHA page returned instead | Check for CAPTCHA before parsing |
| Abstract missing | Behind paywall | Use institutional proxy or open access |
| Scholar blocks IP | Exceeded rate limit | Wait 30 min, use different IP |
| Export limited | Site caps bulk downloads | Download in smaller batches |
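Two of the fixes above amount to "switch proxy". One lightweight way to do that is cycling through a pool; this is a sketch, and the proxy URLs are placeholders, not real endpoints:

```python
import itertools

# Hypothetical proxy pool; substitute your own endpoints
PROXY_POOL = itertools.cycle([
    "http://user:pass@proxy1.example.com:5000",
    "http://user:pass@proxy2.example.com:5000",
])

def rotate_proxy(scraper):
    """Point the scraper's session at the next proxy in the pool."""
    proxy = next(PROXY_POOL)
    scraper.session.proxies = {"http": proxy, "https": proxy}
    return proxy
```

Call rotate_proxy(scraper) whenever a CAPTCHA appears on every request or Scholar starts returning block pages, then back off before retrying.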

FAQ

Is scraping academic databases allowed?

Public metadata (titles, authors, abstracts) is generally accessible. Full-text access depends on licensing. PubMed explicitly supports programmatic access via their E-utilities API. Always prefer official APIs when available.
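As noted, PubMed's E-utilities API is the sanctioned route, so no CAPTCHA handling is needed there at all. A minimal search sketch against the real esearch endpoint, using only the standard library (the query term is just an example):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Official NCBI E-utilities base URL
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def pubmed_search(term, retmax=20):
    """Return PubMed IDs matching a query via the official E-utilities API."""
    query = urlencode({
        "db": "pubmed",       # search the PubMed database
        "term": term,
        "retmode": "json",
        "retmax": retmax,     # cap the number of IDs returned
    })
    with urlopen(f"{EUTILS}/esearch.fcgi?{query}", timeout=30) as resp:
        return json.load(resp)["esearchresult"]["idlist"]

# Usage (live request):
# pmids = pubmed_search("machine learning captcha")
```

The returned PMIDs can then be fed to the efetch endpoint for abstracts and metadata, which is far more reliable than scraping the PubMed web interface.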

How do I avoid getting blocked on Google Scholar?

Use 10-15 second delays between requests, rotate residential proxies, and limit to 50 queries per hour. Scholar is aggressive about blocking automated access.

Can I use CaptchaAI with an institutional proxy?

Yes. Set your institutional proxy for the browsing session and CaptchaAI for CAPTCHA solving — they work independently.



Accelerate your literature review — get your CaptchaAI key and automate academic data collection.
