📊 Neatlogs - Observability Platform

Neatlogs is a comprehensive observability platform that provides detailed logging, monitoring, and analytics for LLM applications in production environments.

Using Neatlogs with LiteLLM

LiteLLM provides success_callback and failure_callback hooks, allowing you to easily integrate Neatlogs for comprehensive tracing and monitoring of your LLM operations.

Integration

Use just a few lines of code to instantly log your LLM responses across all providers with Neatlogs:

import litellm

# Configure LiteLLM to use Neatlogs
litellm.success_callback = ["neatlogs"]

# Make your LLM calls as usual
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)

Complete Code Example

import os
import litellm
from litellm import completion

# Set environment variables
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["NEATLOGS_API_KEY"] = "your-neatlogs-api-key"

# Configure LiteLLM to use Neatlogs
litellm.success_callback = ["neatlogs"]

# Make LLM call
response = completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hi 👋 - I'm using Neatlogs!"}],
)

print(response)

Configuration Options

The Neatlogs integration can be configured through environment variables:

  • NEATLOGS_API_KEY (str, required): Your Neatlogs API key
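
Since the key is read from the environment, a startup check can catch a missing key before any calls are made. A minimal sketch (the guard itself is illustrative, not part of the integration):

import os

# Illustrative guard: fail fast if the required key is missing,
# before any calls run with the neatlogs callback enabled
if not os.environ.get("NEATLOGS_API_KEY"):
    raise RuntimeError("Set NEATLOGS_API_KEY before enabling the neatlogs callback")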

Advanced Usage

Async Support

Neatlogs fully supports async operations:

import asyncio
import litellm

litellm.success_callback = ["neatlogs"]

async def main():
    response = await litellm.acompletion(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello async world!"}],
    )
    print(response)

asyncio.run(main())

Streaming Support

Neatlogs automatically handles streaming responses:

import litellm

litellm.success_callback = ["neatlogs"]

response = litellm.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True,
)

for chunk in response:
    # The final chunk's delta may have no content, so guard against None
    print(chunk.choices[0].delta.content or "", end="")

Error Tracking

Neatlogs also tracks failed requests:

import litellm

# Enable both success and failure callbacks
litellm.success_callback = ["neatlogs"]
litellm.failure_callback = ["neatlogs"]

try:
    response = litellm.completion(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}],
        # This will fail due to an invalid parameter
        temperature=3.0,  # OpenAI only accepts temperatures from 0 to 2
    )
except Exception as e:
    print(f"Error: {e}")
    # The error is logged to Neatlogs automatically

Data Tracked

Neatlogs captures comprehensive data for each LLM request:

  • Request Details: Model, provider, messages, parameters
  • Response Data: Completion text, token usage, cost
  • Performance Metrics: Response time, latency
  • Error Information: Failure reasons and stack traces
  • Metadata: Session IDs
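
Much of this data can also be inspected locally on the LiteLLM response object, which is useful for cross-checking what appears in Neatlogs. A minimal sketch using LiteLLM's standard usage field and cost helper:

import litellm

litellm.success_callback = ["neatlogs"]

response = litellm.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)

# Token usage and estimated cost are among the fields Neatlogs records per request
print(response.usage)  # prompt, completion, and total token counts
print(litellm.completion_cost(completion_response=response))  # estimated cost in USD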

Supported Models and Providers

Neatlogs works with all LiteLLM-supported providers:

  • OpenAI (GPT-3.5, GPT-4, etc.)
  • Anthropic (Claude)
  • Google (Gemini)
  • Azure OpenAI
  • AWS Bedrock
  • And 100+ more providers
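
Because the callback is attached at the LiteLLM level rather than per provider, the same one-line configuration covers all of them. For example, switching to an Anthropic model (assuming ANTHROPIC_API_KEY is set; the model name is illustrative):

import litellm

litellm.success_callback = ["neatlogs"]

# Same callback, different provider: LiteLLM routes on the model name
response = litellm.completion(
    model="claude-3-haiku-20240307",
    messages=[{"role": "user", "content": "Hello from Anthropic via LiteLLM!"}],
)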

Support

For issues or questions, please refer to: