A unified, provider-agnostic chat completions API server supporting OpenAI and AWS Bedrock
This directory contains comprehensive integration tests that use real API credentials from your `.env` file to test the actual functionality of OpenAI and AWS Bedrock services.
These tests make REAL API calls and incur costs!
The tests are designed with safety in mind:

- `real_api` marker: tests only run when explicitly requested

The `test_real_api_integration.py` file contains the integration tests themselves.

Ensure your `.env` file is properly configured with API credentials:

```bash
# OpenAI Configuration
OPENAI_API_KEY="sk-proj-..."

# AWS Bedrock Configuration
AWS_ACCESS_KEY_ID="AKIA..."
AWS_SECRET_ACCESS_KEY="..."
AWS_SESSION_TOKEN="..."  # If using temporary credentials
AWS_REGION="us-east-1"
```
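The `OPENAI_AVAILABLE` / `AWS_AVAILABLE` flags used by the skipif markers later in this document can be derived from these variables. A minimal sketch; the helper name `check_credentials` and the exact set of required variables are assumptions, not the project's actual code:

```python
import os

def check_credentials(env: dict) -> dict:
    """Report which providers have usable credentials (hypothetical helper).

    Mirrors the OPENAI_AVAILABLE / AWS_AVAILABLE flags the skipif markers
    rely on; which variables are required is an assumption.
    """
    openai_ok = bool(env.get("OPENAI_API_KEY"))
    aws_ok = all(env.get(k) for k in
                 ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_REGION"))
    return {"openai": openai_ok, "bedrock": aws_ok}

# At import time a test module would typically evaluate the real environment:
flags = check_credentials(dict(os.environ))
```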
Install the test dependencies:

```bash
pip install pytest pytest-asyncio pytest-cov
```
The `real_api` marker separates out the tests that make actual API calls and cost money:

```bash
# Safe: run only configuration tests (NO API CALLS)
pytest tests/test_real_api_integration.py -k "not real_api"

# COSTS MONEY: run real API tests (requires explicit marker)
pytest tests/test_real_api_integration.py -m real_api
```
The test runner includes safety prompts and cost warnings:

```bash
# Run quick smoke tests (includes cost warning)
python run_real_api_tests.py

# Skip the cost confirmation prompt
python run_real_api_tests.py --yes

# Run all real API tests
python run_real_api_tests.py --mode all

# Run only OpenAI tests
python run_real_api_tests.py --mode openai

# Run only Bedrock tests
python run_real_api_tests.py --mode bedrock

# Run ONLY configuration tests (NO API CALLS, NO COSTS)
python run_real_api_tests.py --mode config

# Run with verbose output
python run_real_api_tests.py --verbose

# Stop on first failure
python run_real_api_tests.py --failfast
```
⚠️ These commands make real API calls and cost money!

```bash
# Run all real API tests (COSTS MONEY)
pytest tests/test_real_api_integration.py -m real_api -v

# Run only OpenAI tests (COSTS MONEY)
pytest tests/test_real_api_integration.py -m real_api -k "TestRealOpenAI" -v

# Run only Bedrock tests (COSTS MONEY)
pytest tests/test_real_api_integration.py -m real_api -k "TestRealBedrock" -v

# Run SAFE configuration tests only (NO API CALLS)
pytest tests/test_real_api_integration.py -k "not real_api" -v

# Run with logging (COSTS MONEY)
pytest tests/test_real_api_integration.py -m real_api --log-cli-level=INFO -s
```
The tests use pytest markers to ensure safety:

- `@pytest.mark.real_api`: REQUIRED for tests that make real API calls
- `@pytest.mark.skipif(not OPENAI_AVAILABLE)`: skip if OpenAI is not configured
- `@pytest.mark.skipif(not AWS_AVAILABLE)`: skip if AWS is not configured

When tests pass, you should see every test reported as passed or skipped. Tests are automatically skipped if the required credentials are missing or the `real_api` marker is not selected (for safety).

💰 API Usage Costs:
Cost Minimization Features:
The `real_api` marker is opt-in, so a plain pytest invocation stays safe:

```bash
# This will NOT run real API tests (safe)
pytest tests/test_real_api_integration.py

# This WILL run real API tests (costs money)
pytest tests/test_real_api_integration.py -m real_api
```
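One way this safe-by-default behavior can be implemented is a guard in `conftest.py`. The following is a sketch of that pattern, not necessarily this project's actual conftest; the policy is kept in a plain helper so it is easy to test:

```python
def should_skip(markexpr: str, test_markers: set) -> bool:
    """A real_api-marked test is skipped unless -m explicitly selects it."""
    return "real_api" in test_markers and "real_api" not in markexpr

# Wired into pytest in conftest.py, roughly:
#
# import pytest
#
# def pytest_collection_modifyitems(config, items):
#     markexpr = config.getoption("-m") or ""
#     for item in items:
#         if should_skip(markexpr, set(item.keywords)):
#             item.add_marker(pytest.mark.skip(reason="needs -m real_api"))
```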
The test runner will prompt before running costly tests:

```text
⚠️  COST WARNING:
These tests make REAL API calls that will incur costs!

Estimated cost per run:
  • Quick mode: ~$0.01-0.02
  • Full test suite: ~$0.05-0.10

Continue? [y/N]:
```
```bash
# Run ONLY configuration tests (zero API calls)
python run_real_api_tests.py --mode config
```
Troubleshooting common failures:

- `No tests ran matching the given pattern`: use `-m real_api` to run the real API tests, or `--mode config` for configuration-only tests
- `ConfigurationError: API key not configured`: check that your `.env` file contains valid credentials
- `ModelNotFoundError: Model not supported in region`: verify the model is available in your configured `AWS_REGION`
- `RateLimitError: Too many requests`: wait briefly and retry, or rerun a smaller subset of tests
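For rate-limit errors, a simple exponential backoff around the failing call usually suffices. A generic sketch; `retry_with_backoff` is a hypothetical helper, not part of the project, and `RateLimitError` stands for whichever exception your client actually raises:

```python
import time

def retry_with_backoff(fn, retries=3, base_delay=1.0,
                       exceptions=(Exception,), sleep=time.sleep):
    """Call fn(), retrying transient failures with exponential backoff."""
    for attempt in range(retries):
        try:
            return fn()
        except exceptions:
            if attempt == retries - 1:
                raise  # out of retries: propagate the last error
            sleep(base_delay * 2 ** attempt)  # wait 1s, 2s, 4s, ...
```

Usage would look like `retry_with_backoff(lambda: client.chat(...), exceptions=(RateLimitError,))`, where `client.chat` is whatever call is being rate-limited.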
To debug test failures:

```bash
# 1. Verify configuration first (no API calls)
python run_real_api_tests.py --mode config

# 2. Re-run with verbose output
python run_real_api_tests.py --verbose --yes

# 3. Run a single test in isolation
pytest tests/test_real_api_integration.py::TestRealOpenAIIntegration::test_openai_chat_completion_basic -m real_api -v -s
```
For automated testing, consider these safety measures. Example GitHub Actions configuration:

```yaml
- name: Run Configuration Tests (Safe)
  run: python run_real_api_tests.py --mode config
  # This runs on every push - no API calls

- name: Run Real API Tests (Costs Money)
  if: github.event_name == 'workflow_dispatch'  # Manual trigger only
  env:
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  run: python run_real_api_tests.py --mode quick --yes
```
When adding new tests, use the `real_api` marker for any test that makes API calls.

```bash
# Test your setup without any API calls (FREE)
python run_real_api_tests.py --mode config

# Test that your APIs work with minimal cost (~$0.01)
python run_real_api_tests.py --mode quick

# Run basic validation directly (costs money)
PYTHONPATH=. python tests/test_real_api_integration.py
```
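A new real-API test might look like the following sketch. The test name, the `OPENAI_AVAILABLE` derivation, and the omitted client setup are illustrative assumptions; the point is stacking the `real_api` marker with a `skipif` guard:

```python
import os
import pytest

# Assumed availability check; see the project's test module for the real one.
OPENAI_AVAILABLE = bool(os.getenv("OPENAI_API_KEY"))

@pytest.mark.real_api
@pytest.mark.skipif(not OPENAI_AVAILABLE, reason="OpenAI not configured")
def test_openai_minimal_prompt():
    # Keep prompts tiny and token limits low to minimize cost.
    # (Client setup is project-specific and omitted here.)
    ...
```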
Remember: Always check your API usage dashboards after running real API tests to monitor costs!