
Your First Scrape

Make your first scraping request with Wryn in just a few minutes.

Prerequisites

Before starting, ensure you have:

  • ✅ Created a Wryn account
  • ✅ Obtained your API key
  • ✅ Set up your environment

If not, see Account Setup.

Quick Start

Let's scrape a simple webpage to extract its title and content.

cURL Example

curl -X POST https://api.wryn.io/v1/<end_point> \
  -H "x-api-key: wryn_live_1234567890abcdefghijklmnopqrstuvwxyz" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com/",
    "action": "extract_title",
    "engine": "simple",
    "retries": 2
  }'

Python Example

import os
from wrynai import WrynAI, Engine

# Read the API key from the environment, with a placeholder fallback.
api_key = os.environ.get("WRYNAI_API_KEY", "your-api-key-here")

client = WrynAI(api_key=api_key)

# Extract the main text content of the page with the simple engine.
result = client.extract_text(
    url="https://example.com",
    extract_main_content=True,
    engine=Engine.SIMPLE,
)
print(f"Text (first 200 chars): {result.text[:200]}...")
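If you prefer raw HTTP from Python instead of the SDK, a minimal sketch using the requests library could look like the following. The <end_point> placeholder, header names, and request body mirror the cURL example above.

import os

import requests

# Placeholder endpoint from the cURL example; substitute the endpoint for your action.
API_URL = "https://api.wryn.io/v1/<end_point>"

headers = {
    # Read the key from the environment instead of hardcoding it.
    "x-api-key": os.environ.get("WRYNAI_API_KEY", "your-api-key-here"),
    "Content-Type": "application/json",
}

payload = {
    "url": "https://example.com/",
    "action": "extract_title",
    "engine": "simple",
    "retries": 2,
}

response = requests.post(API_URL, json=payload, headers=headers)
response.raise_for_status()
print(response.json())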

Understanding the Response

A successful scrape returns:

{
  "status": "success",
  "scrape_id": "scr_1234567890",
  "data": {
    "title": "Example Domain",
    "description": "This domain is for use in illustrative examples in documents."
  },
  "metadata": {
    "url": "https://example.com",
    "scraped_at": "2025-12-06T10:30:00Z",
    "response_time": 1.2,
    "status_code": 200
  }
}
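
As a rough illustration, once this JSON is parsed into a Python dict (for example via response.json() from the raw HTTP sketch above), reading the fields could look like this:

# Parsed response, e.g. result = response.json()
result = {
    "status": "success",
    "scrape_id": "scr_1234567890",
    "data": {"title": "Example Domain"},
    "metadata": {"status_code": 200, "response_time": 1.2},
}

if result["status"] == "success":
    # Extracted fields live under "data"; request details live under "metadata".
    print("Title:", result["data"]["title"])
    print("Fetched in", result["metadata"]["response_time"], "seconds")
else:
    print("Scrape did not succeed:", result)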

Common errors:

Code  Message              Solution
401   Unauthorized         Check API key
404   Page not found       Verify URL
429   Rate limit exceeded  Wait and retry
500   Server error         Contact support
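
One way to handle these codes with raw HTTP is sketched below, reusing the requests-based call from earlier (the <end_point> placeholder and x-api-key header come from the cURL example). The retry count and backoff are arbitrary choices, not documented limits.

import os
import time

import requests

API_URL = "https://api.wryn.io/v1/<end_point>"  # placeholder endpoint
headers = {"x-api-key": os.environ.get("WRYNAI_API_KEY", "your-api-key-here")}
payload = {"url": "https://example.com/", "action": "extract_title", "engine": "simple"}

for attempt in range(3):
    response = requests.post(API_URL, json=payload, headers=headers)
    if response.status_code == 200:
        print(response.json())
        break
    elif response.status_code == 401:
        raise RuntimeError("Unauthorized: check your API key")
    elif response.status_code == 404:
        raise RuntimeError("Page not found: verify the URL")
    elif response.status_code == 429:
        # Rate limited: back off and retry.
        time.sleep(2 ** attempt)
    else:
        # 500 or other unexpected codes: contact support if this persists.
        raise RuntimeError(f"Unexpected error: {response.status_code}")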


Troubleshooting

No Data Returned

Problem: The response contains an empty data field

Solutions:

  • Verify field names match page content
  • Try using custom selectors (see the sketch after this list)
  • Enable JavaScript rendering if needed
  • Check if page requires authentication
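
A hypothetical call combining custom selectors with a JavaScript-rendering engine is sketched below. The selectors parameter and Engine.BROWSER value are assumptions made for illustration; only Engine.SIMPLE appears in the examples above, so check the SDK reference for the actual names.

from wrynai import WrynAI, Engine

client = WrynAI(api_key="your-api-key-here")

# "selectors" and Engine.BROWSER are assumed names used for illustration only.
result = client.extract_text(
    url="https://example.com",
    selectors={"title": "h1", "body": "article p"},  # assumed parameter
    engine=Engine.BROWSER,  # assumed JavaScript-rendering engine
)
print(result.text[:200])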

Slow Responses

Problem: Requests take too long

Solutions:

  • Reduce wait_time if set too high
  • Request fewer fields
  • Use async mode for large jobs
  • Check target website performance

Rate Limited

Problem: Getting 429 errors

Solutions:

  • Add delays between requests (see the sketch below)
  • Upgrade to a higher-tier plan
  • Use batch API for bulk scraping
  • Contact support for custom limits
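
As a simple way to space out requests, the loop below reuses the SDK client from the Python example and sleeps between calls; the one-second delay and the page URLs are arbitrary illustrations, not documented limits.

import time

from wrynai import WrynAI, Engine

client = WrynAI(api_key="your-api-key-here")

urls = [
    "https://example.com/page-1",
    "https://example.com/page-2",
]

for url in urls:
    result = client.extract_text(url=url, engine=Engine.SIMPLE)
    print(url, "->", result.text[:80])
    time.sleep(1)  # pause between requests to stay under the rate limit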

Need Help?