Performance Metrics
Access general performance metrics for the Scrapest platform through the metrics API endpoint.
Metrics Endpoint
HTTP Request
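The request details are not included in this document; the sketch below assumes a GET endpoint at `/metrics` on `api.scrapest.io` with bearer-token authentication (the path, host, and auth scheme are all assumptions — substitute your actual values):

```
GET /metrics HTTP/1.1
Host: api.scrapest.io
Authorization: Bearer YOUR_API_KEY
```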
Response Format
Response Fields
Time Window
- window_hours: Number of hours included in the metrics calculation (always 24)
- timestamp: When the metrics were calculated
Request Counts
- count.source: Number of source API requests in the time window
- count.internal: Number of internal processing operations
Latency Percentiles
- source_latency_ms: Source API latency in milliseconds
- p50: Median latency (50th percentile)
- p95: 95th percentile latency
- p99: 99th percentile latency
- internal_latency_ms: Internal processing latency in milliseconds
- p50: Median internal processing time
- p95: 95th percentile internal processing time
- p99: 99th percentile internal processing time
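Taken together, the fields above suggest a response body like the following. The values are hypothetical and the exact nesting is inferred from the dotted field names (`count.source` implying a nested `count` object):

```
{
  "timestamp": "2024-01-15T12:00:00Z",
  "window_hours": 24,
  "count": {
    "source": 18452,
    "internal": 96210
  },
  "source_latency_ms": {"p50": 120, "p95": 480, "p99": 950},
  "internal_latency_ms": {"p50": 35, "p95": 110, "p99": 240}
}
```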
Implementation Examples
JavaScript (Node.js)
Python
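The Python example was missing from this document; below is a minimal sketch using only the standard library. The base URL `https://api.scrapest.io` and the `Authorization: Bearer` header are assumptions — substitute your actual endpoint and auth scheme.

```python
import json
import urllib.request

# Assumed base URL and bearer-token auth -- adjust to your deployment.
BASE_URL = "https://api.scrapest.io"


def build_metrics_request(api_key: str) -> urllib.request.Request:
    """Build an authenticated GET request for the metrics endpoint."""
    return urllib.request.Request(
        f"{BASE_URL}/metrics",
        headers={"Authorization": f"Bearer {api_key}"},
        method="GET",
    )


def fetch_metrics(api_key: str) -> dict:
    """Fetch and decode the metrics JSON document."""
    with urllib.request.urlopen(build_metrics_request(api_key)) as resp:
        return json.load(resp)
```

Calling `fetch_metrics("YOUR_API_KEY")["source_latency_ms"]["p95"]` would then return the 95th-percentile source latency, assuming the response shape described under Response Fields.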
cURL
Performance Monitoring
Setting Up Monitoring
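A basic monitor is just a fetch-and-handle loop run on a fixed cadence. The sketch below is one way to structure it; the 10-minute interval is an assumption sitting in the middle of the 5-15 minute cadence recommended under Best Practices, and `fetch` stands in for any callable that returns the decoded metrics document.

```python
import time
from typing import Callable

# Assumed cadence: 10 minutes, the middle of the recommended 5-15 min range.
POLL_INTERVAL_SECONDS = 600


def poll_metrics(fetch: Callable[[], dict],
                 handle: Callable[[dict], None],
                 iterations: int,
                 sleep: Callable[[float], None] = time.sleep) -> None:
    """Repeatedly fetch metrics and pass each document to a handler.

    `fetch` is any callable returning the decoded metrics document
    (e.g. a wrapper around the metrics endpoint); `sleep` is injectable
    so the loop can be exercised in tests without real delays.
    """
    for _ in range(iterations):
        try:
            handle(fetch())
        except Exception as exc:  # keep the monitor alive on transient errors
            print(f"metrics poll failed: {exc}")
        sleep(POLL_INTERVAL_SECONDS)
```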
Data Visualization
Creating Dashboards
Historical Analysis
Tracking Trends
Error Handling
Common Error Responses
Authentication Error
Rate Limit Error
Service Unavailable
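The three error sections above have no example payloads in this document. The status codes and body shapes below are conventional guesses based on the error names (401 for authentication failures, 429 for rate limiting, 503 for unavailability), not confirmed Scrapest responses:

```
HTTP/1.1 401 Unauthorized
{"error": "authentication_failed", "message": "Invalid or missing API key"}

HTTP/1.1 429 Too Many Requests
{"error": "rate_limit_exceeded", "message": "Too many requests", "retry_after": 60}

HTTP/1.1 503 Service Unavailable
{"error": "service_unavailable", "message": "Metrics temporarily unavailable"}
```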
Error Handling Implementation
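One reasonable implementation: fail fast on authentication errors and retry transient ones with exponential backoff. The status codes 429 and 503 as the retryable set are an assumption carried over from the error names above, and the endpoint URL is whatever your deployment uses.

```python
import time
import urllib.error
import urllib.request
from typing import Callable

# Assumed transient codes: rate-limited and temporarily unavailable.
RETRYABLE = {429, 503}


def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Exponential backoff: 1s, 2s, 4s, ... capped at `cap` seconds."""
    return min(cap, base * (2 ** attempt))


def fetch_with_retries(url: str, api_key: str, max_attempts: int = 4,
                       opener: Callable = urllib.request.urlopen,
                       sleep: Callable[[float], None] = time.sleep) -> bytes:
    """GET the metrics endpoint, retrying transient failures.

    Authentication errors (e.g. 401) are raised immediately; codes in
    RETRYABLE are retried with exponential backoff. `opener` and `sleep`
    are injectable for testing.
    """
    for attempt in range(max_attempts):
        req = urllib.request.Request(
            url, headers={"Authorization": f"Bearer {api_key}"})
        try:
            with opener(req) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code not in RETRYABLE or attempt == max_attempts - 1:
                raise  # non-transient, or out of attempts
            sleep(backoff_delay(attempt))
    raise RuntimeError("unreachable")
```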
Best Practices
For Monitoring
- Regular Checks: Monitor metrics every 5-15 minutes
- Alert Thresholds: Set appropriate alert thresholds for your use case
- Historical Analysis: Track trends over time to identify patterns
- Baseline Establishment: Establish performance baselines for comparison
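The threshold and baseline advice above can be sketched as a small check that runs on each polled metrics document. The response shape is the one described under Response Fields; the threshold values themselves are inputs you derive from your own baselines.

```python
def check_thresholds(metrics: dict, p95_limit_ms: float,
                     p99_limit_ms: float) -> list[str]:
    """Compare source-latency percentiles against alert thresholds.

    Returns human-readable alert strings, empty when all percentiles
    are within limits. Thresholds should come from your established
    baselines rather than arbitrary absolute values.
    """
    alerts = []
    latency = metrics.get("source_latency_ms", {})
    if latency.get("p95", 0) > p95_limit_ms:
        alerts.append(f"p95 {latency['p95']}ms exceeds {p95_limit_ms}ms")
    if latency.get("p99", 0) > p99_limit_ms:
        alerts.append(f"p99 {latency['p99']}ms exceeds {p99_limit_ms}ms")
    return alerts
```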
For Performance Optimization
- Focus on P95: P95 latency reflects what 95% of requests experience, making it more representative than the average and less noisy than P99
- Monitor Trends: Watch for gradual performance degradation
- Correlate Events: Match latency regressions against deployments, traffic spikes, and other system events
- Proactive Monitoring: Set up alerts before issues become critical
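A minimal sketch of the trend-watching advice above: compare a rolling window of recent p95 samples against an established baseline, and flag drift before it crosses a hard alert threshold. The 20% tolerance is an illustrative default, not a recommendation from this document.

```python
def degradation_ratio(samples: list[float], baseline: float) -> float:
    """Ratio of the recent average latency to an established baseline.

    Values above 1.0 mean latency has drifted above the baseline;
    e.g. 1.2 indicates a 20% regression.
    """
    if not samples or baseline <= 0:
        raise ValueError("need samples and a positive baseline")
    return (sum(samples) / len(samples)) / baseline


def is_degrading(samples: list[float], baseline: float,
                 tolerance: float = 0.2) -> bool:
    """Flag gradual degradation once the rolling average exceeds the
    baseline by more than `tolerance` (an assumed 20% by default)."""
    return degradation_ratio(samples, baseline) > 1.0 + tolerance
```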