# Bedrock Agents
Call Bedrock Agents in the OpenAI Request/Response format.
| Property | Details | 
|---|---|
| Description | Amazon Bedrock Agents use the reasoning of foundation models (FMs), APIs, and data to break down user requests, gather relevant information, and efficiently complete tasks. | 
| Provider Route on LiteLLM | `bedrock/agent/{AGENT_ID}/{ALIAS_ID}` | 
| Provider Doc | [AWS Bedrock Agents ↗](https://docs.aws.amazon.com/bedrock/latest/userguide/agents.html) | 
## Quick Start
### Model Format to LiteLLM
To call a Bedrock agent through LiteLLM, use the following model format. The `bedrock/agent/` prefix in `model` tells LiteLLM to call the Bedrock `InvokeAgent` API.
```
bedrock/agent/{AGENT_ID}/{ALIAS_ID}
```
Examples:
- `bedrock/agent/L1RT58GYRW/MFPSBCXYTW`
- `bedrock/agent/ABCD1234/LIVE`
You can find these IDs in your AWS Bedrock console under Agents.
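For example, you can assemble the model string from the two IDs programmatically (a minimal sketch; the IDs below are placeholders):

```python
# Placeholder IDs -- substitute the values from your AWS Bedrock console
agent_id = "L1RT58GYRW"   # Agent ID
alias_id = "MFPSBCXYTW"   # Alias ID

# The "bedrock/agent/" prefix routes the call to the InvokeAgent API
model = f"bedrock/agent/{agent_id}/{alias_id}"
# -> "bedrock/agent/L1RT58GYRW/MFPSBCXYTW"
```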
### LiteLLM Python SDK
**Basic Agent Completion**

```python
import litellm

# Make a completion request to your Bedrock Agent
response = litellm.completion(
    model="bedrock/agent/L1RT58GYRW/MFPSBCXYTW",  # agent/{AGENT_ID}/{ALIAS_ID}
    messages=[
        {
            "role": "user",
            "content": "Hi, I need help with analyzing our Q3 sales data and generating a summary report"
        }
    ],
)

print(response.choices[0].message.content)
print(f"Response cost: ${response._hidden_params['response_cost']}")
```
**Streaming Agent Responses**

```python
import litellm

# Stream responses from your Bedrock Agent
response = litellm.completion(
    model="bedrock/agent/L1RT58GYRW/MFPSBCXYTW",
    messages=[
        {
            "role": "user",
            "content": "Can you help me plan a marketing campaign and provide step-by-step execution details?"
        }
    ],
    stream=True,
)

for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
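If you are calling the agent from async code, the same pattern should work with `litellm.acompletion`. A sketch, reusing the placeholder agent from above:

```python
import asyncio
import litellm

async def main():
    # Async streaming variant of the call above
    response = await litellm.acompletion(
        model="bedrock/agent/L1RT58GYRW/MFPSBCXYTW",
        messages=[{"role": "user", "content": "Summarize our Q3 sales numbers"}],
        stream=True,
    )
    async for chunk in response:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="")

asyncio.run(main())
```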
### LiteLLM Proxy
#### 1. Configure your model in config.yaml
**LiteLLM Proxy Configuration**

```yaml
model_list:
  - model_name: bedrock-agent-1
    litellm_params:
      model: bedrock/agent/L1RT58GYRW/MFPSBCXYTW
      aws_access_key_id: os.environ/AWS_ACCESS_KEY_ID
      aws_secret_access_key: os.environ/AWS_SECRET_ACCESS_KEY
      aws_region_name: us-west-2
  - model_name: bedrock-agent-2
    litellm_params:
      model: bedrock/agent/AGENT456/ALIAS789
      aws_access_key_id: os.environ/AWS_ACCESS_KEY_ID
      aws_secret_access_key: os.environ/AWS_SECRET_ACCESS_KEY
      aws_region_name: us-east-1
```
#### 2. Start the LiteLLM Proxy
**Start LiteLLM Proxy**

```bash
litellm --config config.yaml
```
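To confirm the proxy is up and both agents are registered, one option is to list the models the proxy exposes. This sketch assumes the proxy's OpenAI-compatible `/v1/models` endpoint and the placeholder API key used elsewhere on this page:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",
    api_key="your-litellm-api-key",
)

# Both model_name entries from config.yaml (bedrock-agent-1, bedrock-agent-2)
# should appear in this listing
for model in client.models.list():
    print(model.id)
```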
#### 3. Make requests to your Bedrock Agents
**Basic Agent Request (curl)**

```bash
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LITELLM_API_KEY" \
  -d '{
    "model": "bedrock-agent-1",
    "messages": [
      {
        "role": "user",
        "content": "Analyze our customer data and suggest retention strategies"
      }
    ]
  }'
```
**Streaming Agent Request (curl)**

```bash
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LITELLM_API_KEY" \
  -d '{
    "model": "bedrock-agent-2",
    "messages": [
      {
        "role": "user",
        "content": "Create a comprehensive social media strategy for our new product"
      }
    ],
    "stream": true
  }'
```
**Using OpenAI SDK with LiteLLM Proxy**

```python
from openai import OpenAI

# Initialize client with your LiteLLM proxy URL
client = OpenAI(
    base_url="http://localhost:4000",
    api_key="your-litellm-api-key"
)

# Make a completion request to your agent
response = client.chat.completions.create(
    model="bedrock-agent-1",
    messages=[
        {
            "role": "user",
            "content": "Help me prepare for the quarterly business review meeting"
        }
    ]
)

print(response.choices[0].message.content)
```
**Streaming with OpenAI SDK**

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",
    api_key="your-litellm-api-key"
)

# Stream agent responses
stream = client.chat.completions.create(
    model="bedrock-agent-2",
    messages=[
        {
            "role": "user",
            "content": "Walk me through launching a new feature beta program"
        }
    ],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
```
## Provider-specific Parameters
Any non-OpenAI parameters are passed through to the agent as custom parameters (see the session example at the end of this section).
**Using custom parameters (SDK)**

```python
import litellm

response = litellm.completion(
    model="bedrock/agent/L1RT58GYRW/MFPSBCXYTW",
    messages=[
        {
            "role": "user",
            "content": "Hi who is ishaan cto of litellm, tell me 10 things about him",
        }
    ],
    invocationId="my-test-invocation-id",  # PROVIDER-SPECIFIC VALUE
)
```
**LiteLLM Proxy Configuration**

```yaml
model_list:
  - model_name: bedrock-agent-1
    litellm_params:
      model: bedrock/agent/L1RT58GYRW/MFPSBCXYTW
      aws_access_key_id: os.environ/AWS_ACCESS_KEY_ID
      aws_secret_access_key: os.environ/AWS_SECRET_ACCESS_KEY
      aws_region_name: us-west-2
      invocationId: my-test-invocation-id
```
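For example, Bedrock's `InvokeAgent` API uses a `sessionId` to tie multiple turns to one agent-side session. Assuming LiteLLM forwards it like any other provider-specific parameter, a multi-turn sketch could look like:

```python
import litellm

# A sketch, assuming sessionId is forwarded to InvokeAgent like other
# provider-specific parameters; the session value below is a placeholder
response = litellm.completion(
    model="bedrock/agent/L1RT58GYRW/MFPSBCXYTW",
    messages=[{"role": "user", "content": "Pick up where we left off on the Q3 report"}],
    sessionId="user-1234-session",  # PROVIDER-SPECIFIC VALUE (assumed forwarded)
)
print(response.choices[0].message.content)
```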