This skill provides access to Parallel Web Systems APIs for web search, deep research, and content extraction. It is the primary tool for all web-related operations in the scientific writer workflow.
Primary interface: Parallel Chat API (OpenAI-compatible) for search and research. Secondary interface: Extract API for URL verification and special cases only.
API Documentation: https://docs.parallel.ai
API Key: https://platform.parallel.ai
Environment Variable: PARALLEL_API_KEY
Use this skill for ALL of the following:
Use Extract API only for:
Do NOT use this skill for:
- Academic paper searches (use `research-lookup`, which routes to Perplexity for purely academic queries)
- Citation database searches (use the `citation-management` skill)

## Web Search (`search` command)

Search the web via the Parallel Chat API (base model) and get a synthesized summary with cited sources.
Best for: General web searches, current events, fact-finding, technical lookups, news, market data.
```bash
# Basic search
python scripts/parallel_web.py search "latest advances in quantum computing 2025"

# Use the core model for more complex queries
python scripts/parallel_web.py search "compare EV battery chemistries NMC vs LFP" --model core

# Save results to file
python scripts/parallel_web.py search "renewable energy policy updates" -o results.txt

# JSON output for programmatic use
python scripts/parallel_web.py search "AI regulation landscape" --json -o results.json
```
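The invocations above follow one argv shape, so they can be assembled programmatically before being handed to `subprocess`. A minimal sketch — the helper name `build_cmd` is illustrative and not part of the script:

```python
import shlex

def build_cmd(action, query, model=None, output=None, as_json=False):
    """Assemble an argv list for scripts/parallel_web.py (illustrative helper)."""
    cmd = ["python", "scripts/parallel_web.py", action, query]
    if model:                      # e.g. "base" or "core"
        cmd += ["--model", model]
    if as_json:
        cmd.append("--json")
    if output:                     # path under sources/ per the saving convention
        cmd += ["-o", output]
    return cmd

# Example: the core-model comparison search from above
argv = build_cmd("search", "compare EV battery chemistries NMC vs LFP", model="core")
print(shlex.join(argv))
```

The returned list can be passed directly to `subprocess.run`, avoiding shell-quoting issues with multi-word queries.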
Key Parameters:
- `objective`: Natural language description of what you want to find
- `--model`: Chat model to use (`base` default, or `core` for deeper research)
- `-o`: Output file path
- `--json`: Output as JSON

Response includes: a synthesized summary organized by themes, with inline citations and a sources list.
## Deep Research (`research` command)

Run comprehensive multi-source research via the Parallel Chat API (core model) to produce detailed intelligence reports with citations.
Best for: Market research, comprehensive analysis, competitive intelligence, technology surveys, industry reports, any research question requiring synthesis of multiple sources.
```bash
# Default deep research (core model)
python scripts/parallel_web.py research "comprehensive analysis of the global EV battery market"

# Save research report to file
python scripts/parallel_web.py research "AI adoption in healthcare 2025" -o report.md

# Use base model for faster, lighter research
python scripts/parallel_web.py research "latest funding rounds in AI startups" --model base

# JSON output
python scripts/parallel_web.py research "renewable energy storage market in Europe" --json -o data.json
```
Key Parameters:
- `query`: Research question or topic
- `--model`: Chat model to use (`core` default for deep research, or `base` for faster results)
- `-o`: Output file path
- `--json`: Output as JSON

## URL Extraction (`extract` command) — Verification Only

Extract content from specific URLs. Use only for citation verification and special cases. For general research, use `search` or `research` instead.
```bash
# Verify a citation's content
python scripts/parallel_web.py extract "https://example.com/article" --objective "key findings"

# Get full page content for verification
python scripts/parallel_web.py extract "https://docs.example.com/api" --full-content

# Save extraction to file
python scripts/parallel_web.py extract "https://paper-url.com" --objective "methodology" -o extracted.md
```
The Chat API supports two research models. Use base for most searches and core for deep research.
| Model | Latency | Strengths | Use When |
|---|---|---|---|
| `base` | 15s-100s | Standard research, factual queries | Web searches, quick lookups |
| `core` | 60s-5min | Complex research, multi-source synthesis | Deep research, comprehensive reports |
Recommendations:
- The `search` command defaults to `base` — fast, good for most queries
- The `research` command defaults to `core` — thorough, good for comprehensive reports
- Override with `--model` when you need a different depth/speed tradeoff

## Python Usage

### Search

```python
from parallel_web import ParallelSearch

searcher = ParallelSearch()
result = searcher.search(
    objective="Find latest information about transformer architectures in NLP",
    model="base",
)
if result["success"]:
    print(result["response"])  # Synthesized summary
    for src in result["sources"]:
        print(f"  {src['title']}: {src['url']}")
```
### Deep Research

```python
from parallel_web import ParallelDeepResearch

researcher = ParallelDeepResearch()
result = researcher.research(
    query="Comprehensive analysis of AI regulation in the EU and US",
    model="core",
)
if result["success"]:
    print(result["response"])  # Full research report
    print(f"Citations: {result['citation_count']}")
```
### Extract

```python
from parallel_web import ParallelExtract

extractor = ParallelExtract()
result = extractor.extract(
    urls=["https://docs.example.com/api-reference"],
    objective="API authentication methods and rate limits",
)
if result["success"]:
    for r in result["results"]:
        print(r["excerpts"])
```
Every web search and deep research result MUST be saved to the project's sources/ folder.
This ensures all research is preserved for reproducibility, auditability, and context window recovery.
| Operation | `-o` Flag Target | Filename Pattern |
|---|---|---|
| Web Search | `sources/search_<topic>.md` | `search_YYYYMMDD_HHMMSS_<brief_topic>.md` |
| Deep Research | `sources/research_<topic>.md` | `research_YYYYMMDD_HHMMSS_<brief_topic>.md` |
| URL Extract | `sources/extract_<source>.md` | `extract_YYYYMMDD_HHMMSS_<brief_source>.md` |
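The filename patterns above can be generated mechanically. A sketch — the helper name `sources_filename` and the 40-character slug cap are illustrative choices, not part of the script:

```python
import re
from datetime import datetime

def sources_filename(operation, topic, now=None):
    """Build a sources/ path matching <op>_YYYYMMDD_HHMMSS_<brief_topic>.md."""
    stamp = (now or datetime.now()).strftime("%Y%m%d_%H%M%S")
    # Reduce the topic to a short, filesystem-safe slug
    slug = re.sub(r"[^a-z0-9]+", "_", topic.lower()).strip("_")[:40]
    return f"sources/{operation}_{stamp}_{slug}.md"

print(sources_filename("search", "Quantum computing 2025",
                       now=datetime(2025, 2, 17, 14, 30, 0)))
# → sources/search_20250217_143000_quantum_computing_2025.md
```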
### Saving Results (the `-o` Flag)

CRITICAL: Every call to `parallel_web.py` MUST include the `-o` flag pointing to the `sources/` folder.
```bash
# Web search — ALWAYS save to sources/
python scripts/parallel_web.py search "latest advances in quantum computing 2025" \
  -o sources/search_20250217_143000_quantum_computing.md

# Deep research — ALWAYS save to sources/
python scripts/parallel_web.py research "comprehensive analysis of the global EV battery market" \
  -o sources/research_20250217_144000_ev_battery_market.md

# URL extraction (verification only) — save to sources/
python scripts/parallel_web.py extract "https://example.com/article" --objective "key findings" \
  -o sources/extract_20250217_143500_example_article.md
```
- All results are saved to `sources/`, preserving research for reproducibility and context window recovery
- The `sources/` folder provides complete transparency into how information was gathered
- Check `sources/` for existing results before making new API calls

When saving research results, always log:
```
[HH:MM:SS] SAVED: Search results to sources/search_20250217_143000_quantum_computing.md
[HH:MM:SS] SAVED: Deep research report to sources/research_20250217_144000_ev_battery_market.md
```
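Log lines in this convention can be produced with a one-line formatter. A sketch — the function name `log_saved` is illustrative:

```python
from datetime import datetime

def log_saved(kind, path, now=None):
    """Format a SAVED log line in the [HH:MM:SS] convention used above."""
    stamp = (now or datetime.now()).strftime("%H:%M:%S")
    return f"[{stamp}] SAVED: {kind} to {path}"

print(log_saved("Search results",
                "sources/search_20250217_143000_quantum_computing.md",
                now=datetime(2025, 2, 17, 14, 30, 0)))
# → [14:30:00] SAVED: Search results to sources/search_20250217_143000_quantum_computing.md
```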
Before calling parallel_web.py, check if a relevant result already exists in sources/:
```bash
ls sources/  # Check existing saved results
```
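The same check can be scripted before issuing a new API call. A sketch — the helper name `find_existing` and the substring-match heuristic are illustrative assumptions:

```python
from pathlib import Path

def find_existing(topic_slug, sources_dir="sources"):
    """Return saved result files whose names mention the topic slug (illustrative dedup check)."""
    root = Path(sources_dir)
    if not root.is_dir():
        return []
    return sorted(p.name for p in root.glob("*.md") if topic_slug in p.name)
```

If this returns matches, reuse the saved file instead of spending a duplicate API call.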
| Task | Tool | Command |
|---|---|---|
| Web search (any) | `parallel_web.py search` | `python scripts/parallel_web.py search "query" -o sources/search_<topic>.md` |
| Deep research | `parallel_web.py research` | `python scripts/parallel_web.py research "query" -o sources/research_<topic>.md` |
| Citation verification | `parallel_web.py extract` | `python scripts/parallel_web.py extract "url" -o sources/extract_<source>.md` |
| Academic paper search | `research_lookup.py` | Routes to Perplexity `sonar-pro-search` |
| DOI/metadata lookup | `parallel_web.py extract` | Extract from DOI URLs (verification) |
- Use `search` or `research` to gather background information — save results to `sources/`
- Use `research-lookup` (which routes academic queries to Perplexity) — save results to `sources/`
- Verify citations with `parallel_web.py extract` — save results to `sources/`
- Run deep research with `parallel_web.py research --model core` — save results to `sources/`
- Always check `sources/` for existing results to avoid duplicate API calls

## Setup

```bash
# Required: Set your Parallel API key
export PARALLEL_API_KEY="your_api_key_here"

# Required Python packages
pip install openai        # For Chat API (search/research)
pip install parallel-web  # For Extract API (verification only)
```
Get your API key at https://platform.parallel.ai
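A wrapper script can fail fast when the key is missing rather than surfacing an opaque API error later. A sketch — the helper name `require_api_key` is illustrative:

```python
import os

def require_api_key(env=os.environ):
    """Fail fast with a clear message if PARALLEL_API_KEY is missing (illustrative check)."""
    key = env.get("PARALLEL_API_KEY")
    if not key:
        raise RuntimeError(
            "PARALLEL_API_KEY not set; export it before running parallel_web.py"
        )
    return key
```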
The script handles errors gracefully and returns structured error responses:
```json
{
  "success": false,
  "error": "Error description",
  "timestamp": "2025-02-14 12:00:00"
}
```
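A caller can branch on the `success` flag of the structured response. A minimal sketch — the wrapper name `handle_response` is illustrative:

```python
import json

def handle_response(raw):
    """Parse a parallel_web.py JSON response and surface errors (illustrative wrapper)."""
    result = json.loads(raw)
    if not result.get("success"):
        # Mirror the structured error shape shown above
        raise RuntimeError(f"parallel_web call failed: {result.get('error', 'unknown error')}")
    return result

ok = handle_response('{"success": true, "response": "summary text"}')
print(ok["response"])
# → summary text
```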
Common issues:
- `PARALLEL_API_KEY` not set: set the environment variable
- `openai` not installed: run `pip install openai`
- `parallel-web` not installed: run `pip install parallel-web` (only needed for `extract`)
- Rate limit exceeded: wait and retry (default: 300 req/min for Chat API)

## Related Skills

| Skill | Use For |
|---|---|
| `research-lookup` | Academic paper searches (routes to Perplexity for scholarly queries) |
| `citation-management` | Google Scholar, PubMed, CrossRef database searches |
| `literature-review` | Systematic literature reviews across academic databases |
| `scientific-schematics` | Generate diagrams from research findings |