Welcome to Inception Platform

Inception Platform provides powerful AI capabilities through an OpenAI-compatible API interface. This means you can use existing OpenAI client libraries or direct REST calls to access our services.

Account Setup

1. Create an Inception Platform account, or sign in if you already have one.

2. Navigate to Billing in your dashboard and add your payment information to activate your account.

3. Go to API Keys and create a new API key. Make sure to copy and securely store your API key - you won't be able to see it again.
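Rather than pasting the key into source files, a common pattern is to read it from an environment variable at runtime. A minimal sketch (the variable name INCEPTION_API_KEY and the `auth_headers` helper are our conventions, not requirements of the API):

```python
import os

def auth_headers(env_var: str = "INCEPTION_API_KEY") -> dict:
    """Build the Authorization header from an environment variable."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; export it before making API calls")
    return {"Authorization": f"Bearer {key}"}
```

The examples below hard-code the placeholder INCEPTION_API_KEY for brevity; substituting a helper like this keeps keys out of version control.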

Quick Start

import requests

# Replace INCEPTION_API_KEY with the key from your dashboard.
response = requests.post(
    'https://api.inceptionlabs.ai/v1/chat/completions',
    headers={
        'Content-Type': 'application/json',
        'Authorization': 'Bearer INCEPTION_API_KEY'
    },
    json={
        'model': 'mercury',
        'messages': [
            {'role': 'user', 'content': 'What is a diffusion model?'}
        ],
        'max_tokens': 1000
    }
)
response.raise_for_status()
data = response.json()
print(data['choices'][0]['message']['content'])

External Libraries Compatibility

Because the Inception API is OpenAI-compatible, you can use the official OpenAI Python SDK by pointing it at the Inception base URL:

from openai import OpenAI

client = OpenAI(
    api_key="INCEPTION_API_KEY",
    base_url="https://api.inceptionlabs.ai/v1"
)

response = client.chat.completions.create(
    model="mercury",
    messages=[{"role": "user", "content": "What is a diffusion model?"}],
    max_tokens=1000
)
print(response.choices[0].message.content)

Streaming and Diffusing

The Inception API supports two modes for viewing output as it is generated:

  • Streaming: Get responses block-by-block for real-time feedback—ideal for chat and live applications.
  • Diffusing: Optionally visualize how noisy outputs are refined into final text, showcasing the model's iterative denoising process.

For example, to stream a response with curl:

curl https://api.inceptionlabs.ai/v1/chat/completions \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer INCEPTION_API_KEY" \
    -d '{
      "model": "mercury",
      "messages": [
        {"role": "user", "content": "What is a diffusion model?"}
      ],
      "max_tokens": 1000,
      "stream": true
    }'
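With "stream": true, the response arrives as server-sent events: one `data:` line per chunk, terminated by `data: [DONE]`. A minimal parser sketch for pulling the incremental text out of each line (this follows the standard OpenAI streaming wire format; `parse_sse_line` is our helper, not part of any SDK):

```python
import json

def parse_sse_line(line: str):
    # Each event looks like: data: {"choices":[{"delta":{"content":"Hi"}}]}
    # The stream ends with the sentinel line: data: [DONE]
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):].strip()
    if payload == "[DONE]":
        return None
    event = json.loads(payload)
    # The first chunk may carry only a role, so "content" can be absent.
    return event["choices"][0]["delta"].get("content")
```

Concatenating the non-None return values in arrival order reconstructs the full response text.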

Tool Calling

The Inception API supports tool calling on the chat completions endpoint, letting the model request calls to functions you define.

from openai import OpenAI
import json

client = OpenAI(base_url="https://api.inceptionlabs.ai/v1", api_key="INCEPTION_API_KEY")

def get_weather(location: str, unit: str):
    return f"Getting the weather for {location} in {unit}..."

# Map tool names to local implementations so calls can be dispatched by name.
tool_functions = {"get_weather": get_weather}

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City and state, e.g., 'San Francisco, CA'"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["location", "unit"]
        }
    }
}]

response = client.chat.completions.create(
    model="mercury",
    messages=[{"role": "user", "content": "What's the weather like in San Francisco?"}],
    tools=tools
)

tool_call = response.choices[0].message.tool_calls[0].function
print(f"Function called: {tool_call.name}")
print(f"Arguments: {tool_call.arguments}")
print(f"Result: {tool_functions[tool_call.name](**json.loads(tool_call.arguments))}")
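To let the model turn the tool's result into a natural-language answer, the usual OpenAI-style round trip appends the assistant's tool-call message plus a role "tool" message and calls the endpoint again. A sketch of the message construction (`tool_result_messages` is our helper, not an SDK function):

```python
def tool_result_messages(prior_messages, assistant_message, result):
    # Echo the assistant message that requested the call, then attach our
    # result keyed by tool_call_id so the model can compose a final answer.
    return prior_messages + [
        assistant_message,
        {
            "role": "tool",
            "tool_call_id": assistant_message.tool_calls[0].id,
            "content": result,
        },
    ]
```

The returned list would be passed as messages in a second client.chat.completions.create call.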

Structured Outputs

Inception API supports structured outputs to ensure responses follow specific formats:

  • JSON: Use JSON schemas to enforce structured data output with specific properties, types, and constraints.
  • Choice: Present multiple options to choose from.
  • Regex: Constrain outputs to match specific regular expression patterns.
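The JSON example below passes its schema via the `guided_json` field of extra_body; the Choice and Regex modes are typically exposed the same way. The field names here follow the vLLM-style guided-decoding convention that `guided_json` comes from - treat `guided_choice` and `guided_regex` as assumptions to verify against the current API reference:

```python
# One guided-decoding option per mode; pass exactly one via extra_body,
# e.g. extra_body={"guided_choice": ["positive", "negative", "neutral"]}.
guided_examples = {
    "guided_json": {"type": "object", "properties": {"ok": {"type": "boolean"}}},
    "guided_choice": ["positive", "negative", "neutral"],
    "guided_regex": r"\d{4}-\d{2}-\d{2}",  # e.g., constrain output to an ISO date
}
```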

For example, using a JSON schema to guide sentiment analysis:

from openai import OpenAI
import json

client = OpenAI(base_url="https://api.inceptionlabs.ai/v1", api_key="INCEPTION_API_KEY")

response_schema = {
    "type": "object",
    "properties": {
        "sentiment": {
            "type": "string",
            "enum": ["positive", "negative", "neutral"]
        },
        "confidence": {
            "type": "number",
            "minimum": 0,
            "maximum": 1
        },
        "key_phrases": {
            "type": "array",
            "items": {"type": "string"}
        }
    },
    "required": ["sentiment", "confidence", "key_phrases"]
}

response = client.chat.completions.create(
    model="mercury-coder",
    messages=[
        {"role": "user", "content": "Analyze the sentiment of this text: 'I absolutely love this feature! It works perfectly and saves me so much time.'"}
    ],
    extra_body={"guided_json": response_schema},
    max_tokens=50,
)

result = json.loads(response.choices[0].message.content)
print(f"Sentiment: {result['sentiment']}")
print(f"Confidence: {result['confidence']}")
print(f"Key phrases: {result['key_phrases']}")
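Guided decoding should already constrain the output to the schema, but a light client-side sanity check costs little. A sketch that mirrors the schema above in plain Python (`check_sentiment_payload` is our helper, not part of the API):

```python
def check_sentiment_payload(result):
    # Mirror the guided_json schema's constraints as a defensive check.
    return (
        result.get("sentiment") in {"positive", "negative", "neutral"}
        and isinstance(result.get("confidence"), (int, float))
        and 0 <= result["confidence"] <= 1
        and isinstance(result.get("key_phrases"), list)
        and all(isinstance(p, str) for p in result["key_phrases"])
    )
```

A failed check usually means the request omitted the guided_json option or the response was truncated by a low max_tokens.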