API Documentation
A complete guide to integrating with our AI-powered chat API. Build intelligent conversations into your applications.
Quick Start
Get started in minutes with our simple API; no authentication is required.
Secure & Reliable
Enterprise-grade security and 99.9% uptime, production ready.
Global CDN
Low latency worldwide with edge deployment at global scale.
Base URL
https://python-test-server-uld3.onrender.com
API Endpoints
GET /health
Check if the API is running and healthy
Response:
{
  "status": "healthy",
  "timestamp": "2025-01-27T10:30:00",
  "environment": "production",
  "version": "1.0.0"
}
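For example, you can check the endpoint with a single fetch call (a minimal sketch; it assumes a runtime with the global fetch API, such as Node.js 18+ or a browser):

// Quick health check against the base URL
async function checkHealth() {
  const response = await fetch('https://python-test-server-uld3.onrender.com/health');
  if (!response.ok) {
    throw new Error(`Health check failed with status ${response.status}`);
  }
  return await response.json(); // e.g. { status: "healthy", ... }
}

checkHealth().then(health => console.log(health.status));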
Code Examples
JavaScript/Node.js Example
// Simple chat example: send one message to /chat/simple and return the reply text
async function sendMessage(message, systemPrompt = null) {
  const response = await fetch('https://python-test-server-uld3.onrender.com/chat/simple', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      message: message,
      system_prompt: systemPrompt
    })
  });

  const data = await response.json();
  return data.response;
}
// Full chat completions example: multi-turn messages with optional settings
async function chatCompletion(messages, options = {}) {
  const response = await fetch('https://python-test-server-uld3.onrender.com/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      messages: messages,
      system_prompt: options.systemPrompt,
      max_tokens: options.maxTokens || 1000,   // defaults to 1000 tokens
      temperature: options.temperature || 0.7  // defaults to 0.7
    })
  });

  return await response.json();
}
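A typical call to these helpers might look like the following (illustrative only; the message role format and the exact shape of the response body are assumptions about the server):

// Example usage of the helpers above
async function main() {
  // Single-turn helper
  const reply = await sendMessage('Hello!', 'You are a concise assistant.');
  console.log(reply);

  // Multi-turn chat completion with explicit options
  const result = await chatCompletion(
    [{ role: 'user', content: 'Summarize the health endpoint in one sentence.' }],
    { maxTokens: 200, temperature: 0.3 }
  );
  console.log(result);
}

main().catch(console.error);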
Error Handling
The API returns standard HTTP status codes:
200 - Success: Request completed successfully
400 - Bad Request: Invalid parameters
500 - Server Error: Internal server error
Error Response Format:
{
  "error": "Validation Error",
  "detail": "Invalid temperature value. Must be between 0.0 and 2.0",
  "timestamp": "2025-01-27T10:30:00"
}
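In client code you can branch on the status code and surface the detail field when a request fails (a sketch assuming the error body matches the format above):

// Wrap a POST request and surface API error details
async function requestWithErrorHandling(url, payload) {
  const response = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload)
  });

  if (!response.ok) {
    // 400 and 500 responses carry { error, detail, timestamp }
    const err = await response.json();
    throw new Error(`${response.status} ${err.error}: ${err.detail}`);
  }

  return await response.json();
}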
Need Help?
Check our support resources or get in touch