Overview

Simli Auto is our managed API. It provides a streamlined way to integrate conversational AI agents into your applications with minimal setup.

Simli Auto now supports custom LLMs, but if you need even more composability (for example, building your own custom voice bot), check out the Simli Compose API Reference instead.

To change the interaction language, see the list of supported languages on Deepgram’s website.

API Endpoints

There are two ways to build with Simli Auto. The recommended approach for most applications is to authenticate with session tokens:

  • POST /createE2ESessionToken: Creates a new session token for authentication. Requires your Simli API key and optionally a TTS API key.
  • GET /auto/session/{agent_id}: Starts a session, using the session token in the request header for authentication (see the API reference).
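
As an illustration, the token-based flow might look like the following TypeScript sketch. The base URL, the token header name, and the response field names are assumptions here; confirm the exact values in the API reference.

const BASE_URL = "https://api.simli.ai"; // assumed base URL — check the API reference

async function startTokenSession(simliAPIKey: string, agentId: string) {
  // Step 1: create a session token (a TTS API key can be included if needed).
  const tokenRes = await fetch(`${BASE_URL}/createE2ESessionToken`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ simliAPIKey }),
  });
  if (!tokenRes.ok) throw new Error(`Token request failed: ${tokenRes.status}`);
  const { sessionToken } = await tokenRes.json(); // response field name assumed

  // Step 2: start the session, passing the token in a header.
  // The header name below is a placeholder — see the API reference.
  const sessionRes = await fetch(`${BASE_URL}/auto/session/${agentId}`, {
    headers: { Authorization: sessionToken },
  });
  if (!sessionRes.ok) throw new Error(`Session request failed: ${sessionRes.status}`);
  return sessionRes.json();
}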

An alternative way to start a session without creating a token first is to use this endpoint:

  • POST /startE2ESession: Initializes a new end-to-end interactive session with an AI avatar. Configure TTS provider, language, and other session parameters.

For conversation transcripts:

  • GET /getE2ETranscript/{sessionId}: Retrieves the conversation transcript for a specific session if transcript generation was enabled.
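
If transcripts are enabled, retrieving one is a single GET request. A minimal sketch (base URL assumed, as above):

async function getTranscript(sessionId: string) {
  const res = await fetch(`https://api.simli.ai/getE2ETranscript/${sessionId}`);
  if (!res.ok) throw new Error(`Transcript request failed: ${res.status}`);
  return res.json(); // transcript payload — see the API reference for its shape
}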

Getting Started

To start using the Simli Auto API, you’ll need:

  1. A Simli API key
  2. (Optional) API keys for your preferred TTS provider
  3. (Optional) Custom LLM configuration if not using the default model

Quick Start

Here’s a basic flow to get started:

  1. Create a session token:
POST /createE2ESessionToken
{
  "simliAPIKey": "your-api-key",
  "ttsAPIKey": "your-tts-api-key"  // Optional
}
  2. Start an end-to-end session:
POST /startE2ESession
{
  "apiKey": "your-api-key",
  "faceId": "your-face-id",
  "ttsProvider": "Cartesia",  // Or "ElevenLabs"
  "language": "en",
  "createTranscript": false
}
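
The second step maps directly onto an HTTP call. Here is a sketch in TypeScript; the token call from step 1 follows the same pattern, and the base URL and response shape are assumptions to verify against the API reference.

async function startSession(apiKey: string, faceId: string) {
  const res = await fetch("https://api.simli.ai/startE2ESession", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      apiKey,
      faceId,
      ttsProvider: "Cartesia", // or "ElevenLabs"
      language: "en",
      createTranscript: false,
    }),
  });
  if (!res.ok) throw new Error(`startE2ESession failed: ${res.status}`);
  return res.json(); // session details — see the API reference for the exact fields
}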

Using Custom LLMs

Simli Auto supports integration with any OpenAI-compatible LLM API. To use your own LLM:

{
  "customLLMConfig": {
    "model": "your-model-name",
    "baseURL": "https://your-llm-api-endpoint.com",
    "llmAPIKey": "your-llm-api-key"
  }
}
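
Assuming customLLMConfig is sent as part of the startE2ESession request body, the call could look like the sketch below; the model name, endpoint URL, and key are placeholders.

// Sketch: including customLLMConfig in the startE2ESession request body.
const body = {
  apiKey: "your-api-key",
  faceId: "your-face-id",
  ttsProvider: "Cartesia",
  language: "en",
  customLLMConfig: {
    model: "your-model-name",
    baseURL: "https://your-llm-api-endpoint.com",
    llmAPIKey: "your-llm-api-key",
  },
};

const res = await fetch("https://api.simli.ai/startE2ESession", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(body),
});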

Session Configuration

You can customize various session parameters:

  • maxSessionLength: Maximum duration of the session in seconds (default: 3600)
  • maxIdleTime: Maximum idle time before session timeout in seconds (default: 300)
  • systemPrompt: Custom prompt to define the AI’s behavior
  • firstMessage: Initial message from the AI when session starts
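
These parameters are assumed here to be passed in the startE2ESession request body alongside the fields shown earlier; a brief sketch:

const body = {
  apiKey: "your-api-key",
  faceId: "your-face-id",
  maxSessionLength: 3600, // end the session after one hour at most
  maxIdleTime: 300,       // time out after 5 minutes of inactivity
  systemPrompt: "You are a friendly support agent for Acme Inc.",
  firstMessage: "Hi! How can I help you today?",
};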

Best Practices

  1. Session Management

    • Keep track of session tokens and handle expiration appropriately
    • Configure idle times based on your use case
  2. Resource Optimization

    • Use appropriate batch sizes for audio processing
    • Handle session cleanup when no longer needed
  3. Error Handling

    • Implement proper error handling for API responses
    • Monitor session status and handle timeouts gracefully
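
As a sketch of the error-handling point above, a thin wrapper around fetch can surface API errors and enforce a client-side timeout. The status handling below is generic and not specific to Simli’s error format.

// Generic error-handling sketch: abort slow requests and surface non-2xx responses.
async function callSimli(url: string, init: RequestInit = {}, timeoutMs = 10_000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(url, { ...init, signal: controller.signal });
    if (!res.ok) {
      // Include the body to aid debugging; the error format is API-specific.
      throw new Error(`Request failed (${res.status}): ${await res.text()}`);
    }
    return await res.json();
  } finally {
    clearTimeout(timer);
  }
}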

Rate Limits and Quotas

Please refer to your API plan for specific rate limits and quotas. Ensure your application handles rate limiting appropriately to maintain optimal performance.
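
One common pattern, sketched below, is to retry with exponential backoff when the API signals rate limiting; HTTP 429 is assumed here, so check your plan’s documentation for the exact behavior.

// Retry-with-backoff sketch for rate-limited requests (assumes HTTP 429).
async function fetchWithBackoff(url: string, init: RequestInit = {}, maxRetries = 3) {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, init);
    if (res.status !== 429 || attempt >= maxRetries) return res;
    // Wait 1s, 2s, 4s, ... before retrying.
    await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
  }
}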

Next Steps

  • Explore the detailed API reference for each endpoint
  • Check out our example implementations
  • Join our community for support and updates

For more detailed information about specific endpoints and features, navigate through the API reference sections.