A unified, provider-agnostic chat completions API server supporting OpenAI and AWS Bedrock
This guide will help you get the Open Bedrock Server up and running quickly.
Before you begin, ensure you have:

- A recent Python 3 installation
- Either `uv` or `pip` for package installation
- An OpenAI API key and/or AWS credentials with Bedrock access

If you don't have `uv` installed:
```bash
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"

# Or with pip
pip install uv
```
```bash
git clone https://github.com/teabranch/open-bedrock-server.git
cd open-bedrock-server

# Install in development mode
uv pip install -e .

# Or with pip
pip install -e .
```
Verify the installation:

```bash
bedrock-chat --version
```
The easiest way to configure the server is the interactive setup:

```bash
bedrock-chat config set
```

This will prompt you for your provider API keys, AWS credentials, and server settings.
Alternatively, create a `.env` file in your project directory:
```bash
# Required
OPENAI_API_KEY=sk-your_openai_api_key
API_KEY=your-server-api-key

# File Storage (optional - for file query features)
S3_FILES_BUCKET=your-s3-bucket-name

# AWS Configuration (choose one method)
# Method 1: Static credentials
AWS_ACCESS_KEY_ID=your_aws_access_key_id
AWS_SECRET_ACCESS_KEY=your_aws_secret_access_key
AWS_REGION=us-east-1

# Method 2: AWS Profile
AWS_PROFILE=your_aws_profile
AWS_REGION=us-east-1

# Optional settings
DEFAULT_OPENAI_MODEL=gpt-4o-mini
LOG_LEVEL=INFO
```
The server supports multiple AWS authentication methods, including static credentials and named profiles. For detailed AWS authentication setup, see the AWS Authentication Guide.
```bash
bedrock-chat serve
```

This starts the server on http://localhost:8000.

```bash
# Bind all interfaces with multiple workers
bedrock-chat serve --host 0.0.0.0 --port 8000 --workers 4

# Development mode: auto-reload with debug logging
bedrock-chat serve --reload --log-level debug
```
Once the server is running, test it with a simple API call:
```bash
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "user", "content": "Hello! How are you?"}
    ],
    "stream": false
  }'
```
Or from Python:

```python
import requests

response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer your-api-key",
    },
    json={
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "user", "content": "Hello! How are you?"}
        ],
        "stream": False,
    },
)
print(response.json())
```
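The examples above set `"stream"` to false; the endpoint also accepts `"stream": true`. A minimal streaming sketch, assuming the server emits OpenAI-style server-sent events (the exact chunk shape is an assumption, not confirmed by this guide):

```python
import json

import requests

def parse_sse_line(line: bytes):
    """Parse one server-sent-events line from a streaming response.

    Returns the decoded JSON chunk, or None for blank lines and the
    terminating "[DONE]" sentinel.
    """
    if not line or not line.startswith(b"data: "):
        return None
    data = line[len(b"data: "):]
    if data.strip() == b"[DONE]":
        return None
    return json.loads(data)

def stream_chat(prompt, api_key="your-api-key",
                url="http://localhost:8000/v1/chat/completions"):
    """Print a completion as it streams from a running server."""
    with requests.post(
        url,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"model": "gpt-4o-mini",
              "messages": [{"role": "user", "content": prompt}],
              "stream": True},
        stream=True,
    ) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            chunk = parse_sse_line(line)
            if chunk:
                # Each chunk carries an incremental content delta.
                print(chunk["choices"][0]["delta"].get("content", ""),
                      end="", flush=True)
```

Call `stream_chat("Hello!")` against a running server to see tokens arrive incrementally.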
The server supports uploading and querying files in chat completions. This enables you to analyze documents, data files, and other content.
Add S3 configuration to your `.env` file:
```bash
# Required for file operations
S3_FILES_BUCKET=your-s3-bucket-name
AWS_REGION=us-east-1

# AWS credentials (same as for Bedrock)
AWS_ACCESS_KEY_ID=your_aws_access_key_id
AWS_SECRET_ACCESS_KEY=your_aws_secret_access_key
```
```bash
# Upload a CSV data file
curl -X POST http://localhost:8000/v1/files \
  -H "Authorization: Bearer your-api-key" \
  -F "file=@sales_data.csv" \
  -F "purpose=assistants"

# Response includes a file ID: {"id": "file-abc123def456", ...}
```
Use the file ID from the upload response in your chat completions:
```bash
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "user", "content": "What trends do you see in this sales data?"}
    ],
    "file_ids": ["file-abc123def456"]
  }'
```
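The two steps (upload, then reference the returned file ID) can be chained from Python. A sketch with `requests`; `upload_and_ask` is a hypothetical helper name, and the payload fields mirror the curl examples:

```python
import requests

BASE = "http://localhost:8000/v1"
HEADERS = {"Authorization": "Bearer your-api-key"}

def chat_payload(model, prompt, file_ids):
    """Build a chat request that references previously uploaded files."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "file_ids": list(file_ids),
    }

def upload_and_ask(path, prompt):
    """Upload a file, then ask about it (requires a running server)."""
    with open(path, "rb") as f:
        upload = requests.post(
            f"{BASE}/files",
            headers=HEADERS,
            files={"file": f},
            data={"purpose": "assistants"},
        )
    file_id = upload.json()["id"]
    payload = chat_payload("gpt-4o-mini", prompt, [file_id])
    return requests.post(f"{BASE}/chat/completions",
                         headers=HEADERS, json=payload).json()
```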
The system automatically resolves the referenced file IDs and makes the file content available to the model.
```bash
# List uploaded files
curl -H "Authorization: Bearer your-api-key" \
  http://localhost:8000/v1/files

# Get file details
curl -H "Authorization: Bearer your-api-key" \
  http://localhost:8000/v1/files/file-abc123def456

# Download file content
curl -H "Authorization: Bearer your-api-key" \
  http://localhost:8000/v1/files/file-abc123def456/content \
  -o downloaded_file.csv

# Delete a file
curl -X DELETE \
  -H "Authorization: Bearer your-api-key" \
  http://localhost:8000/v1/files/file-abc123def456
```
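The same file-management operations can be scripted. A small sketch over the `/v1/files` routes shown above; the helper names are hypothetical:

```python
import requests

BASE = "http://localhost:8000/v1"
HEADERS = {"Authorization": "Bearer your-api-key"}

def file_url(file_id=None, content=False):
    """Build a /v1/files endpoint URL for a given operation."""
    url = f"{BASE}/files"
    if file_id:
        url += f"/{file_id}"
        if content:
            url += "/content"
    return url

def list_files():
    return requests.get(file_url(), headers=HEADERS).json()

def download_file(file_id, dest):
    resp = requests.get(file_url(file_id, content=True), headers=HEADERS)
    with open(dest, "wb") as f:
        f.write(resp.content)

def delete_file(file_id):
    return requests.delete(file_url(file_id), headers=HEADERS).json()
```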
Try the built-in chat interface:

```bash
bedrock-chat chat --model gpt-4o-mini
```

This starts an interactive chat session with commands such as:

- `/model <model-name>` - switch the active model
- `/settings` - adjust session settings
- `/save <filename>` - save the conversation
- `/help` - list available commands
The unified endpoint supports multiple input/output formats.

OpenAI format:

```bash
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
Bedrock Claude format:

```bash
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{
    "anthropic_version": "bedrock-2023-05-31",
    "model": "anthropic.claude-3-haiku-20240307-v1:0",
    "max_tokens": 1000,
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
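The two request shapes can be built side by side in Python; every field below is copied from the curl examples above, so this is just the same payloads expressed as small builder functions:

```python
def openai_payload(model, prompt):
    """OpenAI-style request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def bedrock_claude_payload(model, prompt, max_tokens=1000):
    """Bedrock Claude-style request body; the anthropic_version value
    matches the example above."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
```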
Convert between formats using the `target_format` parameter:

```bash
# OpenAI input → Bedrock Claude output
curl -X POST "http://localhost:8000/v1/chat/completions?target_format=bedrock_claude" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
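The same conversion from Python; the query parameter matches the curl example, while the helper names are hypothetical:

```python
import requests

def conversion_url(target_format,
                   base="http://localhost:8000/v1/chat/completions"):
    """Build the endpoint URL with the target_format query parameter."""
    req = requests.Request("POST", base,
                           params={"target_format": target_format})
    return req.prepare().url

def convert_chat(payload, target_format, api_key="your-api-key"):
    """Send an OpenAI-style payload, asking for a different output
    format (requires a running server)."""
    return requests.post(
        conversion_url(target_format),
        headers={"Authorization": f"Bearer {api_key}"},
        json=payload,
    ).json()
```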
Verify the server is running properly:
```bash
# General health check
curl http://localhost:8000/health

# Unified endpoint health check
curl http://localhost:8000/v1/chat/completions/health

# List available models
curl -H "Authorization: Bearer your-api-key" \
  http://localhost:8000/v1/models
```
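The health checks above are easy to automate, e.g. for a deployment script or monitor. A small sketch hitting the same two endpoints:

```python
import requests

HEALTH_ENDPOINTS = [
    "http://localhost:8000/health",
    "http://localhost:8000/v1/chat/completions/health",
]

def is_healthy(url, timeout=5):
    """Return True if the endpoint answers with HTTP 200."""
    try:
        return requests.get(url, timeout=timeout).status_code == 200
    except requests.RequestException:
        # Connection refused, DNS failure, timeout, etc.
        return False

def check_all(urls=HEALTH_ENDPOINTS):
    return {url: is_healthy(url) for url in urls}
```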
Now that you have the server running, see the troubleshooting tips and configuration reference below if you run into issues.
- **Server won't start:** check your configuration with `bedrock-chat config show`.
- **API calls fail:** make sure the `Authorization: Bearer` header matches the `API_KEY` you configured.
- **AWS/Bedrock errors:** verify your AWS credentials and region; see the AWS Authentication Guide.
For a complete list of configuration options, see your `.env` file or run:

```bash
bedrock-chat config show
```
Key configuration variables:
| Variable | Description | Required |
|---|---|---|
| `OPENAI_API_KEY` | OpenAI API key | For OpenAI models |
| `API_KEY` | Server authentication key | Yes |
| `S3_FILES_BUCKET` | S3 bucket for file storage | For file operations |
| `AWS_REGION` | AWS region | For Bedrock/file operations |
| `AWS_PROFILE` | AWS profile name | Alternative to static credentials |
| `DEFAULT_OPENAI_MODEL` | Default OpenAI model | No |
| `LOG_LEVEL` | Logging level | No |