Langsmith - Logging LLM Input/Output
An all-in-one developer platform for every step of the application lifecycle: https://smith.langchain.com/
Pre-Requisites
pip install litellm
Quick Start
Use just 2 lines of code to instantly log your responses across all providers with Langsmith.
litellm.callbacks = ["langsmith"]
- SDK
- LiteLLM Proxy
import litellm
import os
os.environ["LANGSMITH_API_KEY"] = ""
os.environ["LANGSMITH_PROJECT"] = "" # defaults to litellm-completion
os.environ["LANGSMITH_DEFAULT_RUN_NAME"] = "" # defaults to LLMRun
# LLM API Keys
os.environ['OPENAI_API_KEY']=""
# set langsmith as a callback, litellm will send the data to langsmith
litellm.callbacks = ["langsmith"] 
 
# openai call
response = litellm.completion(
  model="gpt-3.5-turbo",
  messages=[
    {"role": "user", "content": "Hi ๐ - i'm openai"}
  ]
)
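The same callback logs calls to every provider LiteLLM supports, not just OpenAI. A minimal sketch of the identical setup logging an Anthropic call (the model name and ANTHROPIC_API_KEY value here are illustrative assumptions):

import litellm
import os

os.environ["LANGSMITH_API_KEY"] = ""
os.environ["ANTHROPIC_API_KEY"] = ""

# the same callback sends this provider's calls to Langsmith too
litellm.callbacks = ["langsmith"]

response = litellm.completion(
    model="anthropic/claude-3-haiku-20240307",  # illustrative model name
    messages=[
        {"role": "user", "content": "Hi 👋 - i'm anthropic"}
    ]
)
print(response)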
- Setup config.yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: openai/gpt-3.5-turbo
      api_key: os.environ/OPENAI_API_KEY
litellm_settings:
  callbacks: ["langsmith"]
- Start LiteLLM Proxy
litellm --config /path/to/config.yaml
- Test it!
curl -L -X POST 'http://0.0.0.0:4000/v1/chat/completions' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer sk-eWkpOhYaHiuIZV-29JDeTQ' \
-d '{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "Hey, how are you?"
    }
  ],
  "max_completion_tokens": 250
}'
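Because the proxy exposes an OpenAI-compatible API, you can also test it from Python with the official openai SDK. A minimal sketch, assuming the proxy is running on port 4000 with the key from the curl example above:

from openai import OpenAI

# point the OpenAI SDK at the LiteLLM Proxy
client = OpenAI(
    base_url="http://0.0.0.0:4000",
    api_key="sk-eWkpOhYaHiuIZV-29JDeTQ",
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hey, how are you?"}],
)
print(response.choices[0].message.content)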
Advanced
Local Testing - Control Batch Size
Set how many log events LiteLLM batches together before sending them to Langsmith; the default is 512.
Set langsmith_batch_size=1 when testing locally, so logs land quickly.
- SDK
- LiteLLM Proxy
import litellm
import os
os.environ["LANGSMITH_API_KEY"] = ""
# LLM API Keys
os.environ['OPENAI_API_KEY']=""
# set langsmith as a callback, litellm will send the data to langsmith
litellm.callbacks = ["langsmith"] 
litellm.langsmith_batch_size = 1 # 👈 KEY CHANGE
 
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hi 👋 - i'm openai"}
    ]
)
print(response)
- Setup config.yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: openai/gpt-3.5-turbo
      api_key: os.environ/OPENAI_API_KEY
litellm_settings:
  langsmith_batch_size: 1
  callbacks: ["langsmith"]
- Start LiteLLM Proxy
litellm --config /path/to/config.yaml
- Test it!
curl -L -X POST 'http://0.0.0.0:4000/v1/chat/completions' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer sk-eWkpOhYaHiuIZV-29JDeTQ' \
-d '{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "Hey, how are you?"
    }
  ],
  "max_completion_tokens": 250
}'
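With a batch size of 1, every event is flushed to Langsmith immediately, which makes local debugging easy but adds network overhead on each request; leave the default batch size in place for production traffic.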
Set Langsmith fields
import litellm
import os
os.environ["LANGSMITH_API_KEY"] = ""
# LLM API Keys
os.environ['OPENAI_API_KEY']=""
# set langsmith as a callback, litellm will send the data to langsmith
litellm.success_callback = ["langsmith"] 
 
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hi 👋 - i'm openai"}
    ],
    metadata={
        "run_name": "litellmRUN",                                   # langsmith run name
        "project_name": "litellm-completion",                       # langsmith project name
        "run_id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",           # langsmith run id
        "parent_run_id": "f8faf8c1-9778-49a4-9004-628cdb0047e5",    # langsmith run parent run id
        "trace_id": "df570c03-5a03-4cea-8df0-c162d05127ac",         # langsmith run trace id
        "session_id": "1ffd059c-17ea-40a8-8aef-70fd0307db82",       # langsmith run session id
        "tags": ["model1", "prod-2"],                               # langsmith run tags
        "metadata": {                                               # langsmith run metadata
            "key1": "value1"
        },
        "dotted_order": "20240429T004912090000Z497f6eca-6276-4993-bfeb-53cbbbba6f08"
    }
)
print(response)
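These fields can also be attached to requests going through the LiteLLM Proxy. A sketch using the openai SDK's extra_body, assuming the proxy config registers the langsmith callback as shown earlier and forwards the request-body metadata to it:

from openai import OpenAI

client = OpenAI(
    base_url="http://0.0.0.0:4000",
    api_key="sk-eWkpOhYaHiuIZV-29JDeTQ",
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}],
    extra_body={
        "metadata": {  # assumed: forwarded to the langsmith callback
            "run_name": "litellmRUN",
            "project_name": "litellm-completion",
            "tags": ["model1", "prod-2"],
        }
    },
)
print(response.choices[0].message.content)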
Make LiteLLM Proxy use Custom LANGSMITH_BASE_URL
If you're using a custom LangSmith instance, you can set the
LANGSMITH_BASE_URL environment variable to point to your instance.
For example, you can make LiteLLM Proxy log to a local LangSmith instance with
this config:
litellm_settings:
  success_callback: ["langsmith"]
environment_variables:
  LANGSMITH_BASE_URL: "http://localhost:1984"
  LANGSMITH_PROJECT: "litellm-proxy"
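The SDK reads the same environment variable, so the equivalent for direct litellm.completion calls is a sketch like this (assuming the same local LangSmith endpoint):

import litellm
import os

os.environ["LANGSMITH_API_KEY"] = ""
os.environ["LANGSMITH_BASE_URL"] = "http://localhost:1984"  # your self-hosted instance
os.environ["LANGSMITH_PROJECT"] = "litellm-proxy"
os.environ["OPENAI_API_KEY"] = ""

litellm.callbacks = ["langsmith"]

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}],
)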
Support & Talk to Founders
- Schedule Demo 👋
- Community Discord 💭
- Our numbers 📞 +1 (770) 8783-106 / +1 (412) 618-6238
- Our emails ✉️ ishaan@berri.ai / krrish@berri.ai