
Introduction to the REST API

The Scrapest REST API provides programmatic access to Twitter/X data, webhooks, tracking management, and real-time streaming capabilities.

API Overview

Base URL

https://api.scrape.st

Authentication

All API requests (except public endpoints) require authentication using an API key:
Authorization: Bearer YOUR_API_KEY
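
As a minimal Python sketch (standard library only), the header can be attached to every request with a small helper; the `authed_request` name is our own:

```python
import urllib.request

API_BASE = "https://api.scrape.st"

def authed_request(path: str, api_key: str) -> urllib.request.Request:
    # Attach the bearer token required by all non-public endpoints.
    return urllib.request.Request(
        API_BASE + path,
        headers={"Authorization": f"Bearer {api_key}"},
    )

if __name__ == "__main__":
    req = authed_request("/webhooks", "YOUR_API_KEY")
    print(req.get_header("Authorization"))
```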

Rate Limiting

  • Standard Endpoints: 1000 requests per hour per API key
  • Streaming Endpoints: 100 requests per hour per API key
  • Health Endpoints: 100 requests per hour per API key

Response Format

All API responses follow a consistent JSON format:
{
  "data": { ... },
  "message": "Success",
  "timestamp": "2024-01-15T10:30:00Z"
}
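
Because the envelope is uniform, client code can unwrap the payload the same way for every endpoint. A minimal sketch (the contents of `data` here are made up for illustration):

```python
import json

# Envelope shaped like the example above; the "data" contents are invented.
raw = '{"data": {"status": "ok"}, "message": "Success", "timestamp": "2024-01-15T10:30:00Z"}'

def unwrap(body: str) -> dict:
    # The payload is always under "data"; "message" and "timestamp"
    # are metadata shared by every endpoint.
    return json.loads(body)["data"]

payload = unwrap(raw)
```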

Available API Categories

Webhooks

Manage webhook subscriptions for real-time data delivery:
  • Create Webhook: Set up new webhook endpoints
  • List Webhooks: Retrieve all active webhooks
  • Delete Webhook: Remove webhook subscriptions
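
As an illustration only, here is a sketch of building a create-webhook request, assuming a `POST /webhooks` endpoint that takes a callback `url` field; the actual path and payload shape are documented in the Create Webhook reference:

```python
import json
import urllib.request

API_BASE = "https://api.scrape.st"

def build_create_webhook_request(api_key: str, callback_url: str) -> urllib.request.Request:
    # Hypothetical payload: the real field names are in the
    # Create Webhook endpoint reference.
    body = json.dumps({"url": callback_url}).encode()
    return urllib.request.Request(
        API_BASE + "/webhooks",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
```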

Tracking

Manage data tracking and monitoring:
  • Create Tracking: Set up new tracking configurations
  • List Tracking: Retrieve active tracking configurations
  • Delete Tracking: Remove tracking configurations

X Queries

Access Twitter/X data and user information:
  • User Information: Get user profile data
  • Tweet Data: Retrieve tweet information and metrics

Getting Started

1. Get Your API Key

  1. Sign up at Scrapest Dashboard
  2. Navigate to API Keys section
  3. Generate a new API key
  4. Copy and securely store your API key

2. Make Your First Request

curl -X GET https://api.scrape.st/health \
  -H "Authorization: Bearer YOUR_API_KEY"
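
The same health check in Python, standard library only; `check_health` and `parse_health` are illustrative helper names:

```python
import json
import urllib.request

API_BASE = "https://api.scrape.st"

def parse_health(body: bytes) -> dict:
    # The health endpoint answers with the common JSON envelope.
    return json.loads(body)

def check_health(api_key: str) -> dict:
    req = urllib.request.Request(
        API_BASE + "/health",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return parse_health(resp.read())

if __name__ == "__main__":
    print(check_health("YOUR_API_KEY"))
```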

3. Explore Endpoints

Browse the API categories above (Webhooks, Tracking, X Queries) to find the endpoints that match your use case.

API Design Principles

RESTful Design

  • HTTP Methods: Use appropriate HTTP verbs (GET, POST, DELETE)
  • Resource URLs: Clear, hierarchical resource naming
  • Status Codes: Standard HTTP status codes for responses
  • Stateless: Each request contains all necessary information

Consistency

  • Response Format: Uniform response structure across all endpoints
  • Error Handling: Consistent error response format
  • Pagination: Standardized pagination for list endpoints
  • Filtering: Consistent query parameter patterns

Performance

  • Caching: Appropriate caching headers for static data
  • Compression: gzip compression for response payloads
  • Rate Limiting: Fair usage limits with clear headers
  • Async Processing: Long-running operations use async patterns

Common Patterns

Error Handling

All API errors follow this format:
{
  "error": "Error description",
  "code": 400,
  "timestamp": "2024-01-15T10:30:00Z",
  "requestId": "req_1234567890"
}
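
Since every error body carries the same fields, a single helper can format failures from any endpoint. A sketch using the standard library; `describe_error` is our own name:

```python
import json
import urllib.error
import urllib.request

def describe_error(err: urllib.error.HTTPError) -> str:
    # Every error body carries "error", "code", and "requestId",
    # so one formatter works for failures from any endpoint.
    details = json.loads(err.read())
    return f'{details["code"]}: {details["error"]} (request {details["requestId"]})'

if __name__ == "__main__":
    req = urllib.request.Request(
        "https://api.scrape.st/webhooks",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
    )
    try:
        urllib.request.urlopen(req, timeout=10)
    except urllib.error.HTTPError as err:
        print(describe_error(err))
```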

Pagination

List endpoints support pagination:
GET /webhooks?page=1&limit=20
Response includes pagination metadata:
{
  "data": [...],
  "pagination": {
    "page": 1,
    "limit": 20,
    "total": 150,
    "totalPages": 8
  }
}
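
A sketch of walking every page of a list endpoint by following the `totalPages` counter from the pagination metadata; the helper names are our own:

```python
import json
import urllib.parse
import urllib.request

API_BASE = "https://api.scrape.st"

def page_url(path: str, page: int, limit: int = 20) -> str:
    # Build e.g. https://api.scrape.st/webhooks?page=1&limit=20
    query = urllib.parse.urlencode({"page": page, "limit": limit})
    return f"{API_BASE}{path}?{query}"

def iter_pages(path: str, api_key: str):
    # Yield items from each page until totalPages is reached.
    page = 1
    while True:
        req = urllib.request.Request(
            page_url(path, page),
            headers={"Authorization": f"Bearer {api_key}"},
        )
        with urllib.request.urlopen(req, timeout=10) as resp:
            body = json.loads(resp.read())
        yield from body["data"]
        if page >= body["pagination"]["totalPages"]:
            break
        page += 1
```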

Filtering

Many endpoints support filtering:
GET /tracking?status=active&source=twitter

SDKs and Libraries

Official SDKs

  • JavaScript/Node.js: npm install @scrapest/api
  • Python: pip install scrapest-api
  • cURL: Every endpoint can also be called directly from the command line (no SDK required)

Community Libraries

  • Go: Community-maintained Go client
  • Ruby: Community-maintained Ruby gem
  • PHP: Community-maintained PHP package

Best Practices

Security

  • API Key Protection: Never expose API keys in client-side code
  • HTTPS Only: Always use HTTPS for API requests
  • Input Validation: Validate all user inputs before sending to API
  • Rate Limiting: Implement client-side rate limiting

Performance

  • Batch Requests: Use batch operations when possible
  • Caching: Cache responses to reduce API calls
  • Connection Reuse: Reuse HTTP connections for multiple requests
  • Async Operations: Use async/await for non-blocking operations
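
A sketch combining two of the practices above: the key is read from an environment variable rather than hard-coded (API Key Protection; the `SCRAPEST_API_KEY` name is our choice), and a single HTTPS connection is held open so consecutive calls reuse it (Connection Reuse):

```python
import http.client
import json
import os

def make_client():
    # Assumed env var name; keep the key out of source code and
    # client-side bundles (API Key Protection).
    api_key = os.environ["SCRAPEST_API_KEY"]
    # One HTTPS connection, reused across calls (Connection Reuse).
    conn = http.client.HTTPSConnection("api.scrape.st", timeout=10)
    headers = {"Authorization": f"Bearer {api_key}"}
    return conn, headers

def get_json(conn, headers, path):
    # Issue a GET over the shared connection and decode the envelope.
    conn.request("GET", path, headers=headers)
    resp = conn.getresponse()
    return json.loads(resp.read())
```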

Error Handling

  • Retry Logic: Implement exponential backoff for failed requests
  • Status Code Handling: Handle different HTTP status codes appropriately
  • Logging: Log API requests and responses for debugging
  • Graceful Degradation: Handle API unavailability gracefully
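
A sketch of retry logic with exponential backoff; the delay schedule, cap, and `OSError` catch are illustrative choices, not part of the API:

```python
import time

def backoff_delays(retries: int, base: float = 1.0, cap: float = 60.0):
    # 1s, 2s, 4s, ... doubling each attempt, capped at 60s.
    return [min(cap, base * 2 ** attempt) for attempt in range(retries)]

def with_retries(call, retries: int = 5, base: float = 1.0):
    # Retry transient failures; re-raise once all attempts are used up.
    for attempt, delay in enumerate(backoff_delays(retries, base)):
        try:
            return call()
        except OSError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)
```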

Support and Resources

Documentation

  • API Reference: Detailed endpoint documentation
  • Code Examples: Practical implementation examples
  • Best Practices: Recommended patterns and guidelines
  • Troubleshooting: Common issues and solutions

Community

  • GitHub: Open-source issues and discussions
  • Discord: Real-time community support
  • Stack Overflow: Technical questions and answers
  • Blog: Product updates and technical articles

Support

  • Email Support: support@scrape.st
  • Status Page: Real-time system status
  • API Status: Health monitoring and metrics
  • Documentation Feedback: Report documentation issues

Next Steps

Ready to dive in? Pick a starting point from the API categories above, or, for streaming capabilities, see the Streams documentation.