Introduction to REST API
The Scrapest REST API provides programmatic access to Twitter/X data, webhooks, tracking management, and real-time streaming capabilities.
API Overview
Base URL
Authentication
All API requests (except public endpoints) require authentication using an API key.
Rate Limiting
- Standard Endpoints: 1000 requests per hour per API key
- Streaming Endpoints: 100 requests per hour per API key
- Health Endpoints: 100 requests per hour per API key
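A minimal sketch of how a client might attach its API key and track its remaining quota. The Bearer authorization scheme and the X-RateLimit-* response header names are assumptions for illustration, not confirmed by this page; check the API reference for the exact names.

```python
# Attach the API key to requests and read back rate-limit state.
# NOTE: the "Bearer" scheme and "X-RateLimit-*" header names are
# assumptions; substitute the names from the official API reference.

def auth_headers(api_key: str) -> dict:
    """Build headers for an authenticated API request."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Accept": "application/json",
    }

def parse_rate_limit(headers: dict) -> dict:
    """Extract rate-limit state from response headers, if present."""
    return {
        "limit": int(headers.get("X-RateLimit-Limit", 0)),
        "remaining": int(headers.get("X-RateLimit-Remaining", 0)),
        "reset": int(headers.get("X-RateLimit-Reset", 0)),
    }
```

Tracking the `remaining` value client-side lets you throttle before the server rejects requests.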
Response Format
All API responses follow a consistent JSON format.
Available API Categories
Webhooks
Manage webhook subscriptions for real-time data delivery:
- Create Webhook: Set up new webhook endpoints
- List Webhooks: Retrieve all active webhooks
- Delete Webhook: Remove webhook subscriptions
Tracking
Manage data tracking and monitoring:
- Create Tracking: Set up new tracking configurations
- List Tracking: Retrieve active tracking configurations
- Delete Tracking: Remove tracking configurations
X Queries
Access Twitter/X data and user information:
- User Information: Get user profile data
- Tweet Data: Retrieve tweet information and metrics
Getting Started
1. Get Your API Key
- Sign up at Scrapest Dashboard
- Navigate to API Keys section
- Generate a new API key
- Copy and securely store your API key
2. Make Your First Request
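As a sketch of a first request: the snippet below builds an authenticated request with the Python standard library. The base URL shown is a placeholder and the `/health` path is an assumption; substitute the real values from your dashboard and the API reference.

```python
import os
import urllib.request

# Load the key from the environment rather than hard-coding it.
API_KEY = os.environ.get("SCRAPEST_API_KEY", "your-api-key")

# Placeholder base URL and path -- replace with the documented values.
BASE_URL = "https://api.example.com"

req = urllib.request.Request(
    f"{BASE_URL}/health",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
# urllib.request.urlopen(req) would send the request; it is omitted
# here so the snippet runs without network access.
```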
3. Explore Endpoints
- Webhooks: Create your first webhook
- Tracking: Set up data tracking
- X Queries: Query Twitter/X data
API Design Principles
RESTful Design
- HTTP Methods: Use appropriate HTTP verbs (GET, POST, DELETE)
- Resource URLs: Clear, hierarchical resource naming
- Status Codes: Standard HTTP status codes for responses
- Stateless: Each request contains all necessary information
Consistency
- Response Format: Uniform response structure across all endpoints
- Error Handling: Consistent error response format
- Pagination: Standardized pagination for list endpoints
- Filtering: Consistent query parameter patterns
Performance
- Caching: Appropriate caching headers for static data
- Compression: gzip compression for response payloads
- Rate Limiting: Fair usage limits with clear headers
- Async Processing: Long-running operations use async patterns
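The "Caching" principle above can be sketched with standard HTTP validators: keep the ETag from a previous response and send it back as `If-None-Match`; a 304 means the cached body is still valid. Whether this API emits ETags is an assumption here, though the header mechanics are standard HTTP.

```python
# Client-side response cache keyed by URL, reusing cached bodies on 304.
# Assumption: the API returns ETag headers for cacheable resources.

class ETagCache:
    def __init__(self):
        self._store = {}  # url -> (etag, body)

    def conditional_headers(self, url: str) -> dict:
        """Headers for a conditional GET if we hold a cached copy."""
        if url in self._store:
            return {"If-None-Match": self._store[url][0]}
        return {}

    def update(self, url: str, status: int, etag, body):
        """Record a fresh response, or reuse the cached body on 304."""
        if status == 304:
            return self._store[url][1]
        if etag is not None:
            self._store[url] = (etag, body)
        return body
```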
Common Patterns
Error Handling
All API errors follow a consistent format.
Pagination
List endpoints support pagination.
Filtering
Many endpoints support filtering via query parameters.
SDKs and Libraries
Official SDKs
- JavaScript/Node.js: npm install @scrapest/api
- Python: pip install scrapest-api
- cURL: Native command-line support
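Whatever client you use, the pagination and filtering patterns described under Common Patterns reduce to query parameters on list endpoints. The parameter names below (`page`, `per_page`) and the filter keys are assumptions for illustration, not documented names.

```python
from urllib.parse import urlencode

# Build a list-endpoint URL with pagination and filter parameters.
# NOTE: "page"/"per_page" and any filter keys are assumed names;
# consult the endpoint reference for the real ones.

def build_list_url(base: str, path: str, page: int = 1,
                   per_page: int = 50, **filters) -> str:
    params = {"page": page, "per_page": per_page, **filters}
    return f"{base}{path}?{urlencode(sorted(params.items()))}"
```

For example, `build_list_url("https://api.example.com", "/webhooks", page=2, status="active")` yields a URL whose query string carries both the page cursor and the filter.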
Community Libraries
- Go: Community-maintained Go client
- Ruby: Community-maintained Ruby gem
- PHP: Community-maintained PHP package
Best Practices
Security
- API Key Protection: Never expose API keys in client-side code
- HTTPS Only: Always use HTTPS for API requests
- Input Validation: Validate all user inputs before sending to API
- Rate Limiting: Implement client-side rate limiting
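One way to follow the "API Key Protection" guidance above is to load the key from the environment so it never appears in source code or client-side bundles. The variable name `SCRAPEST_API_KEY` is a convention chosen for this sketch, not an official one.

```python
import os

def load_api_key(env_var: str = "SCRAPEST_API_KEY") -> str:
    """Read the API key from the environment; fail loudly if unset."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"Set {env_var}; never hard-code API keys in client-side code."
        )
    return key
```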
Performance
- Batch Requests: Use batch operations when possible
- Caching: Cache responses to reduce API calls
- Connection Reuse: Reuse HTTP connections for multiple requests
- Async Operations: Use async/await for non-blocking operations
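The "Async Operations" point can be sketched with `asyncio`: independent calls run concurrently instead of sequentially. Here `fetch()` is a stand-in for a real HTTP call, since no SDK call signature is documented on this page; with a real async client you would await the request inside it.

```python
import asyncio

async def fetch(path: str) -> dict:
    """Stand-in for an async HTTP call to the API."""
    await asyncio.sleep(0)  # placeholder for network I/O
    return {"path": path, "status": 200}

async def fetch_many(paths: list) -> list:
    # Issue all requests concurrently rather than one after another.
    return await asyncio.gather(*(fetch(p) for p in paths))
```

Combine this with client-side rate limiting so concurrency does not burn through your hourly quota.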
Error Handling
- Retry Logic: Implement exponential backoff for failed requests
- Status Code Handling: Handle different HTTP status codes appropriately
- Logging: Log API requests and responses for debugging
- Graceful Degradation: Handle API unavailability gracefully
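The "Retry Logic" guidance above is commonly implemented as exponential backoff: double the wait after each failed attempt, cap it, and optionally add jitter to avoid synchronized retries. The base delay, cap, and retryable status set below are illustrative choices, not values mandated by the API.

```python
import random

# Statuses generally worth retrying (rate limit + transient server errors).
RETRYABLE = {429, 500, 502, 503, 504}

def should_retry(status: int) -> bool:
    return status in RETRYABLE

def backoff_delays(attempts: int, base: float = 0.5, cap: float = 30.0,
                   jitter: bool = False):
    """Yield the wait time (seconds) before each retry attempt."""
    for n in range(attempts):
        delay = min(cap, base * (2 ** n))
        yield delay * random.random() if jitter else delay
```

A retry loop would sleep for each yielded delay between attempts, giving up once the generator is exhausted.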
Support and Resources
Documentation
- API Reference: Detailed endpoint documentation
- Code Examples: Practical implementation examples
- Best Practices: Recommended patterns and guidelines
- Troubleshooting: Common issues and solutions
Community
- GitHub: Open-source issues and discussions
- Discord: Real-time community support
- Stack Overflow: Technical questions and answers
- Blog: Product updates and technical articles
Support
- Email Support: support@scrape.st
- Status Page: Real-time system status
- API Status: Health monitoring and metrics
- Documentation Feedback: Report documentation issues
Next Steps
Ready to dive in? Choose your starting point:
- Webhooks: Webhooks Introduction - Set up real-time data delivery
- Tracking: Tracking Introduction - Configure data monitoring
- X Queries: X Queries Introduction - Access Twitter/X data
For streaming capabilities, see the Streams documentation.