Open Bedrock Server

A unified, provider-agnostic chat completions API server supporting OpenAI and AWS Bedrock


Real API Integration Tests

This directory contains comprehensive integration tests that use real API credentials from your .env file to test the actual functionality of OpenAI and AWS Bedrock services.

⚠️ IMPORTANT SAFETY NOTICE

These tests make REAL API calls and incur costs!

The tests are designed with safety in mind: anything that touches a real provider is gated behind the real_api marker, the test runner prompts with a cost warning before spending money, and a configuration-only mode validates your setup without making a single API call.

Overview

The test_real_api_integration.py file contains tests that exercise real OpenAI and AWS Bedrock chat completions, compare responses across the two providers, validate your local configuration, and probe performance and rate limits.

Prerequisites

  1. Environment Configuration: Ensure your .env file is properly configured with API credentials (a sketch after this list shows one way to verify these values are picked up):
# OpenAI Configuration
OPENAI_API_KEY="sk-proj-..."

# AWS Bedrock Configuration
AWS_ACCESS_KEY_ID="AKIA..."
AWS_SECRET_ACCESS_KEY="..."
AWS_SESSION_TOKEN="..."  # If using temporary credentials
AWS_REGION="us-east-1"
  2. Dependencies: Make sure all required packages are installed:
pip install pytest pytest-asyncio pytest-cov
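
If you want to confirm the credentials above are actually visible to the test process, a quick check along the following lines can help. This is only an illustrative sketch (it assumes the python-dotenv package and the variable names shown above), not part of the test suite:

# check_env.py -- illustrative sketch; assumes python-dotenv is installed
import os
from dotenv import load_dotenv

load_dotenv()  # pull values from .env into the process environment

for key in ("OPENAI_API_KEY", "AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_REGION"):
    print(f"{key}: {'set' if os.getenv(key) else 'MISSING'}")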

Running the Tests

🚨 Safety First: Understanding the Markers

# Safe: Run only configuration tests (NO API CALLS)
pytest tests/test_real_api_integration.py -k "not real_api"

# COSTS MONEY: Run real API tests (requires explicit marker)
pytest tests/test_real_api_integration.py -m real_api

The test runner includes safety prompts and cost warnings:

# Run quick smoke tests (includes cost warning)
python run_real_api_tests.py

# Skip the cost confirmation prompt
python run_real_api_tests.py --yes

# Run all real API tests
python run_real_api_tests.py --mode all

# Run only OpenAI tests
python run_real_api_tests.py --mode openai

# Run only Bedrock tests
python run_real_api_tests.py --mode bedrock

# Run ONLY configuration tests (NO API CALLS, NO COSTS)
python run_real_api_tests.py --mode config

# Run with verbose output
python run_real_api_tests.py --verbose

# Stop on first failure
python run_real_api_tests.py --failfast

Using pytest directly

⚠️ These commands make real API calls and cost money!

# Run all real API tests (COSTS MONEY)
pytest tests/test_real_api_integration.py -m real_api -v

# Run only OpenAI tests (COSTS MONEY)
pytest tests/test_real_api_integration.py -m real_api -k "TestRealOpenAI" -v

# Run only Bedrock tests (COSTS MONEY)
pytest tests/test_real_api_integration.py -m real_api -k "TestRealBedrock" -v

# Run SAFE configuration tests only (NO API CALLS)
pytest tests/test_real_api_integration.py -k "not real_api" -v

# Run with logging (COSTS MONEY)
pytest tests/test_real_api_integration.py -m real_api --log-cli-level=INFO -s

Test Categories

💰 TestRealOpenAIIntegration (Costs Money)

💰 TestRealBedrockIntegration (Costs Money)

💰 TestRealAPIComparison (Costs Money)

🆓 TestConfigurationValidation (Free)

💰 TestPerformanceAndLimits (Costs Money)
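
The test file is organized around these categories. The sketch below is illustrative only (class bodies omitted, marker placement assumed): the billable categories carry the real_api marker while the configuration tests do not.

import pytest

@pytest.mark.real_api
class TestRealOpenAIIntegration:
    ...  # billable: real OpenAI calls

@pytest.mark.real_api
class TestRealBedrockIntegration:
    ...  # billable: real Bedrock calls

class TestConfigurationValidation:
    ...  # free: inspects configuration only, no API calls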

Test Markers

The tests use pytest markers to ensure safety: every test that calls a real provider is marked real_api, so a plain pytest run deselects it and you must opt in explicitly with -m real_api.
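
For reference, marker-based gating typically looks like the following. This is a generic sketch of registering and applying the marker, not necessarily the project's exact conftest.py:

# conftest.py (sketch): register the marker so pytest does not warn about it
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "real_api: tests that call real provider APIs and cost money"
    )

# test module (sketch): opt a test into the billable group
import pytest

@pytest.mark.real_api
def test_openai_chat_completion_basic():
    ...  # makes a real, billable API call when selected with -m real_api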

Expected Behavior

Successful Test Run

When tests pass, you should see all selected tests reported as passed, with tests for any provider you have not configured reported as skipped.

Skipped Tests

Tests will be automatically skipped if the credentials a test needs (for example OPENAI_API_KEY or the AWS keys) are missing from your .env file.
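
The usual way to implement that kind of credential-based skip is a skipif marker keyed on the environment. The snippet below is an assumed pattern for illustration, not a copy of the project's code:

import os
import pytest

requires_openai = pytest.mark.skipif(
    not os.getenv("OPENAI_API_KEY"),
    reason="OPENAI_API_KEY not configured in .env",
)

@requires_openai
@pytest.mark.real_api
def test_openai_chat_completion_basic():
    ...  # skipped automatically when the key is absent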

Cost Considerations

💰 API Usage Costs: a quick run costs roughly $0.01-0.02 and the full suite roughly $0.05-0.10 (see the cost warning below); actual charges depend on the models and regions you have configured.

Cost Minimization Features: tests request minimal token counts, the quick mode runs only a small smoke-test subset, and config mode makes no API calls at all.

Safety Features

1. Marker Protection

# This will NOT run real API tests (safe)
pytest tests/test_real_api_integration.py

# This WILL run real API tests (costs money)
pytest tests/test_real_api_integration.py -m real_api

2. Interactive Cost Warnings

The test runner will prompt before running costly tests:

⚠️  COST WARNING:
   These tests make REAL API calls that will incur costs!
   Estimated cost per run:
   • Quick mode: ~$0.01-0.02
   • Full test suite: ~$0.05-0.10

   Continue? [y/N]:
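
Behind that prompt the runner only needs a simple confirmation gate. A hypothetical equivalent (not the script's actual code) looks like this, with --yes mapping to skip_prompt=True:

def confirm_costs(skip_prompt: bool = False) -> bool:
    """Return True only if the user explicitly accepts the cost warning."""
    if skip_prompt:  # the --yes flag bypasses the prompt
        return True
    answer = input("Continue? [y/N]: ").strip().lower()
    return answer == "y"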

3. Configuration-Only Mode

# Run ONLY configuration tests (zero API calls)
python run_real_api_tests.py --mode config

Troubleshooting

Common Issues

  1. Tests Not Running
No tests ran matching the given pattern
  2. Authentication Errors
ConfigurationError: API key not configured
  3. Model Not Available
ModelNotFoundError: Model not supported in region
  4. Rate Limiting
RateLimitError: Too many requests

Debugging

To debug test failures:

  1. Test configuration safely:
python run_real_api_tests.py --mode config
  2. Enable verbose logging:
python run_real_api_tests.py --verbose --yes
  3. Run individual test methods:
pytest tests/test_real_api_integration.py::TestRealOpenAIIntegration::test_openai_chat_completion_basic -m real_api -v -s

Integration with CI/CD

For automated testing, consider these safety measures:

  1. Manual Triggers Only: Never run real API tests on every commit
  2. Separate API Keys: Use dedicated testing API keys with spending limits
  3. Cost Monitoring: Set up billing alerts
  4. Conditional Execution: Only run on specific branches

Example GitHub Actions configuration:

- name: Run Configuration Tests (Safe)
  run: python run_real_api_tests.py --mode config
  # This runs on every push - no API calls

- name: Run Real API Tests (Costs Money)
  env:
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  run: python run_real_api_tests.py --mode quick --yes
  if: github.event_name == 'workflow_dispatch'  # Manual trigger only

Contributing

When adding new tests (a sketch follows this list):

  1. Always use the real_api marker for tests that make API calls
  2. Test configuration separately without the marker for free validation
  3. Include proper assertions for response validation
  4. Add logging for debugging and monitoring
  5. Consider cost implications of new API calls
  6. Test both success and failure scenarios
  7. Use minimal tokens to keep costs low
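
Putting those guidelines together, a new test might look roughly like the sketch below. The openai_service fixture, the chat_completion signature, and the model name are assumptions for illustration, not the project's confirmed API:

import logging
import pytest

logger = logging.getLogger(__name__)

@pytest.mark.real_api                       # guideline 1: mark real API calls
@pytest.mark.asyncio
async def test_openai_minimal_completion(openai_service):
    response = await openai_service.chat_completion(
        model="gpt-4o-mini",                # a cheap model keeps costs low
        messages=[{"role": "user", "content": "Say 'ok'."}],
        max_tokens=5,                       # guideline 7: minimal tokens
    )
    logger.info("OpenAI response: %s", response)                 # guideline 4: logging
    assert response["choices"], "expected at least one choice"   # guideline 3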

Quick Start Examples

Safe Configuration Check

# Test your setup without any API calls (FREE)
python run_real_api_tests.py --mode config

Quick Validation

# Test that your APIs work with minimal cost (~$0.01)
python run_real_api_tests.py --mode quick

Direct Script Execution

# Run basic validation directly (costs money)
PYTHONPATH=. python tests/test_real_api_integration.py
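
Direct execution works when the test module ends with a __main__ hook that hands control to pytest. The snippet below is one plausible form of that hook, assumed rather than taken from the file:

if __name__ == "__main__":
    import sys
    import pytest
    # Run this file's real_api tests verbosely when executed as a script.
    sys.exit(pytest.main([__file__, "-m", "real_api", "-v"]))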

Remember: Always check your API usage dashboards after running real API tests to monitor costs!