# API Reference
Complete reference for the DeepSeek API, providing detailed documentation for all endpoints, parameters, and response formats.
## Base URL

```
https://api.deepseek.com
```
## Authentication

All API requests require authentication using an API key in the `Authorization` header:

```
Authorization: Bearer YOUR_API_KEY
```
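As a minimal sketch using only the Python standard library, an authenticated request can be assembled like this (the endpoint path and payload shape mirror the chat completion examples later in this reference):

```python
import json
import urllib.request

API_BASE = "https://api.deepseek.com"

def build_request(path: str, payload: dict, api_key: str) -> urllib.request.Request:
    """Build an authenticated POST request carrying the bearer token."""
    return urllib.request.Request(
        url=API_BASE + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_request(
    "/chat/completions",
    {"model": "deepseek-chat", "messages": [{"role": "user", "content": "Hi"}]},
    api_key="YOUR_API_KEY",
)
# Send with urllib.request.urlopen(req); most real clients would use the
# official SDK or an HTTP library such as requests instead.
```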
## Chat Completions

### Create Chat Completion

Creates a chat completion for the given conversation.

**Endpoint:** `POST /chat/completions`

#### Request Body
```json
{
  "model": "deepseek-chat",
  "messages": [
    {
      "role": "user",
      "content": "Hello, how are you?"
    }
  ],
  "max_tokens": 1024,
  "temperature": 0.7,
  "top_p": 1,
  "frequency_penalty": 0,
  "presence_penalty": 0,
  "stream": false
}
```
#### Parameters
Parameter | Type | Required | Default | Description |
---|---|---|---|---|
model | string | Yes | - | Model to use for completion |
messages | array | Yes | - | List of messages in the conversation |
max_tokens | integer | No | 1024 | Maximum tokens to generate |
temperature | number | No | 0.7 | Sampling temperature (0-2) |
top_p | number | No | 1 | Nucleus sampling parameter |
frequency_penalty | number | No | 0 | Frequency penalty (-2 to 2) |
presence_penalty | number | No | 0 | Presence penalty (-2 to 2) |
stream | boolean | No | false | Enable streaming responses |
stop | string/array | No | null | Stop sequences |
logit_bias | object | No | null | Token logit bias |
#### Response

```json
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "deepseek-chat",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I'm doing well, thank you for asking. How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 19,
    "total_tokens": 31
  }
}
```
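Once the JSON body is decoded, the fields above can be read directly; a short sketch pulling out the usual values from the example response:

```python
# Decoded form of the example response body documented above.
response = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": 1677652288,
    "model": "deepseek-chat",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Hello! I'm doing well, thank you for asking. How can I help you today?",
            },
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 12, "completion_tokens": 19, "total_tokens": 31},
}

# The assistant's reply and why generation stopped.
reply = response["choices"][0]["message"]["content"]
finish_reason = response["choices"][0]["finish_reason"]

# Token accounting, useful for cost tracking.
total_tokens = response["usage"]["total_tokens"]
```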
### Streaming Chat Completion

For streaming responses, set `"stream": true` in the request body. The response is then delivered as server-sent events, one JSON chunk per `data:` line.

#### Streaming Response Format
```
data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"deepseek-chat","choices":[{"index":0,"delta":{"role":"assistant","content":"Hello"},"finish_reason":null}]}

data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"deepseek-chat","choices":[{"index":0,"delta":{"content":"!"},"finish_reason":null}]}

data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"deepseek-chat","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}

data: [DONE]
```
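The chunks can be reassembled client-side by concatenating the `delta.content` fragments; a sketch that parses `data:` lines of the documented shape (transport-level HTTP handling omitted):

```python
import json

def iter_stream_content(lines):
    """Yield content fragments from SSE 'data:' lines until [DONE]."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            yield delta["content"]

# The three chunks from the example above, reassembled:
sample = [
    'data: {"choices":[{"index":0,"delta":{"role":"assistant","content":"Hello"},"finish_reason":null}]}',
    'data: {"choices":[{"index":0,"delta":{"content":"!"},"finish_reason":null}]}',
    'data: {"choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}',
    'data: [DONE]',
]
text = "".join(iter_stream_content(sample))
# text == "Hello!"
```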
## Code Completions

### Create Code Completion

Generate code completions using DeepSeek Coder models.

**Endpoint:** `POST /code/completions`

#### Request Body
```json
{
  "model": "deepseek-coder",
  "prompt": "def fibonacci(n):",
  "max_tokens": 150,
  "temperature": 0.2,
  "language": "python"
}
```
#### Parameters
Parameter | Type | Required | Default | Description |
---|---|---|---|---|
model | string | Yes | - | Code model to use |
prompt | string | Yes | - | Code prompt to complete |
max_tokens | integer | No | 150 | Maximum tokens to generate |
temperature | number | No | 0.2 | Sampling temperature |
language | string | No | auto | Programming language |
stop | string/array | No | null | Stop sequences |
#### Response

```json
{
  "id": "codecmpl-123",
  "object": "code.completion",
  "created": 1677652288,
  "model": "deepseek-coder",
  "choices": [
    {
      "text": "\n    if n <= 1:\n        return n\n    return fibonacci(n-1) + fibonacci(n-2)",
      "index": 0,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 5,
    "completion_tokens": 25,
    "total_tokens": 30
  }
}
```
## Function Calling

### Function Call Format

Functions can be defined in chat completion requests to enable structured outputs.

#### Request with Functions
```json
{
  "model": "deepseek-chat",
  "messages": [
    {
      "role": "user",
      "content": "What's the weather like in San Francisco?"
    }
  ],
  "functions": [
    {
      "name": "get_weather",
      "description": "Get current weather information",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "City name"
          },
          "unit": {
            "type": "string",
            "enum": ["celsius", "fahrenheit"],
            "description": "Temperature unit"
          }
        },
        "required": ["location"]
      }
    }
  ],
  "function_call": "auto"
}
```
#### Function Call Response

```json
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "deepseek-chat",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "function_call": {
          "name": "get_weather",
          "arguments": "{\"location\": \"San Francisco\", \"unit\": \"celsius\"}"
        }
      },
      "finish_reason": "function_call"
    }
  ]
}
```
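Note that `arguments` arrives as a JSON *string*, not an object, so it must be decoded before dispatching. A sketch of the client-side handling (the `get_weather` handler here is hypothetical; a real application would call an actual weather service and send the result back in a follow-up request):

```python
import json

def get_weather(location, unit="celsius"):
    """Hypothetical local handler standing in for a real weather lookup."""
    return {"location": location, "unit": unit, "temperature": 18}

# Map function names the model may call to local handlers.
HANDLERS = {"get_weather": get_weather}

# The message from the example response above.
message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_weather",
        "arguments": "{\"location\": \"San Francisco\", \"unit\": \"celsius\"}",
    },
}

call = message["function_call"]
args = json.loads(call["arguments"])  # decode the JSON-encoded argument string
result = HANDLERS[call["name"]](**args)
# The result would then be sent back to the model in a follow-up request so it
# can produce a natural-language answer.
```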
## Vision API

### Analyze Images

Analyze images using DeepSeek Vision models.

**Endpoint:** `POST /vision/analyze`

#### Request Body
```json
{
  "model": "deepseek-vision",
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "What's in this image?"
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "https://example.com/image.jpg"
          }
        }
      ]
    }
  ],
  "max_tokens": 300
}
```
#### Parameters
Parameter | Type | Required | Default | Description |
---|---|---|---|---|
model | string | Yes | - | Vision model to use |
messages | array | Yes | - | Messages with image content |
max_tokens | integer | No | 300 | Maximum tokens to generate |
detail | string | No | auto | Image detail level (low/high/auto) |
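Building the mixed text-and-image message shape by hand is error-prone; a small helper that produces the content array documented above can keep it consistent (`detail` is a top-level request parameter per the table, so it is not part of the message itself):

```python
def image_message(text: str, image_url: str) -> dict:
    """Build a user message pairing a text prompt with an image URL,
    in the content-array shape shown in the request body above."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

msg = image_message("What's in this image?", "https://example.com/image.jpg")
# msg plugs directly into the "messages" array of a /vision/analyze request.
```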
## Audio API

### Speech to Text

Convert audio to text using DeepSeek Audio models.

**Endpoint:** `POST /audio/transcriptions`

#### Request Body (multipart/form-data)
```
file: audio_file.mp3
model: deepseek-audio
language: en
response_format: json
```
#### Parameters
Parameter | Type | Required | Default | Description |
---|---|---|---|---|
file | file | Yes | - | Audio file to transcribe |
model | string | Yes | - | Audio model to use |
language | string | No | auto | Audio language |
response_format | string | No | json | Response format (json/text/srt/vtt) |
temperature | number | No | 0 | Sampling temperature |
#### Response

```json
{
  "text": "Hello, this is a transcription of the audio file."
}
```
## Models

### List Available Models

Get a list of available models.

**Endpoint:** `GET /models`

#### Response
```json
{
  "object": "list",
  "data": [
    {
      "id": "deepseek-chat",
      "object": "model",
      "created": 1677610602,
      "owned_by": "deepseek",
      "permission": [],
      "root": "deepseek-chat",
      "parent": null
    },
    {
      "id": "deepseek-coder",
      "object": "model",
      "created": 1677610602,
      "owned_by": "deepseek",
      "permission": [],
      "root": "deepseek-coder",
      "parent": null
    }
  ]
}
```
### Retrieve Model

Get details about a specific model.

**Endpoint:** `GET /models/{model_id}`

#### Response
```json
{
  "id": "deepseek-chat",
  "object": "model",
  "created": 1677610602,
  "owned_by": "deepseek",
  "permission": [],
  "root": "deepseek-chat",
  "parent": null,
  "description": "Advanced conversational AI model"
}
```
## Error Handling

### Error Response Format
```json
{
  "error": {
    "message": "Invalid API key provided",
    "type": "invalid_request_error",
    "param": null,
    "code": "invalid_api_key"
  }
}
```
### Common Error Codes
Code | Status | Description |
---|---|---|
invalid_api_key | 401 | Invalid or missing API key |
insufficient_quota | 429 | Quota exhausted or rate limit exceeded |
model_not_found | 404 | Requested model not found |
invalid_request_error | 400 | Invalid request parameters |
server_error | 500 | Internal server error |
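A client can turn the error body into a typed exception and decide whether a retry makes sense; a sketch, assuming only the error shape and codes documented above (which codes are worth retrying is a judgment call shown here as an example policy):

```python
class DeepSeekError(Exception):
    """Raised from a DeepSeek API error body (codes as in the table above)."""
    def __init__(self, message, code=None, param=None):
        super().__init__(message)
        self.code = code
        self.param = param

# Example policy: retry transient/server-side failures with backoff;
# client errors (400/401/404) should be fixed, not retried.
RETRYABLE_CODES = {"insufficient_quota", "server_error"}

def raise_for_error(body: dict) -> None:
    """Raise DeepSeekError if the decoded response body contains an error object."""
    err = body.get("error")
    if err is not None:
        raise DeepSeekError(err.get("message", ""), err.get("code"), err.get("param"))

def should_retry(exc: DeepSeekError) -> bool:
    return exc.code in RETRYABLE_CODES

# The documented invalid_api_key body is a client error, so no retry:
caught = None
try:
    raise_for_error({"error": {"message": "Invalid API key provided",
                               "type": "invalid_request_error",
                               "param": None, "code": "invalid_api_key"}})
except DeepSeekError as exc:
    caught = exc
```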
## Rate Limits
Rate limits are applied per API key:
- Chat Completions: 60 requests per minute
- Code Completions: 100 requests per minute
- Vision API: 30 requests per minute
- Audio API: 20 requests per minute
Rate limit headers are included in responses:

```
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 59
X-RateLimit-Reset: 1677652348
```
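These headers are enough to pace a client without guessing; a sketch that computes how long to wait before the next request, assuming `X-RateLimit-Reset` is a Unix timestamp as in the example above:

```python
def seconds_until_reset(headers: dict, now: float) -> float:
    """Seconds to wait before the next request, based on rate limit headers.
    Returns 0 while requests remain in the current window."""
    remaining = int(headers.get("X-RateLimit-Remaining", "1"))
    if remaining > 0:
        return 0.0
    reset = float(headers.get("X-RateLimit-Reset", now))  # Unix timestamp
    return max(0.0, reset - now)

# With the example headers and the window exhausted, wait until the reset time:
wait = seconds_until_reset(
    {"X-RateLimit-Limit": "60",
     "X-RateLimit-Remaining": "0",
     "X-RateLimit-Reset": "1677652348"},
    now=1677652340.0,
)
# wait == 8.0
```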
## SDKs and Libraries

### Python SDK

```bash
pip install deepseek-python
```

```python
from deepseek import DeepSeek

client = DeepSeek(api_key="your-api-key")

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Hello!"}]
)
```
### JavaScript SDK

```bash
npm install deepseek-js
```

```javascript
import DeepSeek from 'deepseek-js';

const client = new DeepSeek({
  apiKey: 'your-api-key'
});

const response = await client.chat.completions.create({
  model: 'deepseek-chat',
  messages: [{ role: 'user', content: 'Hello!' }]
});
```
## cURL Examples

### Basic Chat Completion

```bash
curl https://api.deepseek.com/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "deepseek-chat",
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 100
  }'
```
### Streaming Response

```bash
curl https://api.deepseek.com/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "deepseek-chat",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": true
  }' \
  --no-buffer
```
## Best Practices

### Optimization Tips
- Use appropriate models for your use case
- Set reasonable token limits to control costs
- Implement proper error handling for robust applications
- Cache responses when appropriate
- Use streaming for real-time applications
### Security Considerations
- Never expose API keys in client-side code
- Use environment variables for API key storage
- Implement rate limiting in your applications
- Validate user inputs before sending to API
- Monitor API usage for unusual patterns
## Support and Resources
- Documentation: https://docs.deepseek.com
- API Status: https://status.deepseek.com
- Support: support@deepseek.com
- Community: https://community.deepseek.com
For additional help and examples, visit our comprehensive documentation and community forums.