---
description: AI Observability and Evaluation
---

# Arize Phoenix

Phoenix is an open-source observability library designed for experimentation, evaluation, and troubleshooting. It allows AI engineers and data scientists to quickly visualize their data, evaluate performance, track down issues, and export data to drive improvements.

Phoenix is built by Arize AI, the company behind the industry-leading AI observability platform, and a set of core contributors.

## Install Phoenix

{% tabs %} {% tab title="pip" %} In your Python, Jupyter, or Colab environment, run the following command to install:

```bash
pip install arize-phoenix
```

For full details on how to run Phoenix in various environments such as Databricks, consult our environments guide. {% endtab %}

{% tab title="conda" %}

```bash
conda install -c conda-forge arize-phoenix
```

{% endtab %}

{% tab title="Container" %} Phoenix can also run via a container. The image can be found at:

{% embed url="https://hub.docker.com/r/arizephoenix/phoenix" %} Images for phoenix are published to dockerhub {% endembed %}

Checkout the environments section and deployment guide for details. {% endtab %}

{% tab title="npm" %} The Phoenix server can be run as a #containerand be interacted with using the phoenix-client and OpenTelelemetry. See #packages below. {% endtab %} {% endtabs %}

Phoenix works with OpenTelemetry and OpenInference instrumentation. If you are looking to deploy Phoenix as a service rather than a library, see the deployment guide.
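For example, a tracing setup might look like the sketch below; the project name and endpoint are illustrative, and this assumes the arize-phoenix-otel and openinference-instrumentation-openai packages are installed:

```python
from openinference.instrumentation.openai import OpenAIInstrumentor
from phoenix.otel import register

# Point OpenTelemetry at a running Phoenix instance; the endpoint
# below assumes a local Phoenix server on the default port.
tracer_provider = register(
    project_name="my-llm-app",  # illustrative project name
    endpoint="http://localhost:6006/v1/traces",
)

# Auto-instrument OpenAI SDK calls so their spans flow to Phoenix.
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```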

## What you can do in Phoenix

{% tabs %} {% tab title="Prompt Engineering" %} {% embed url="https://storage.googleapis.com/arize-phoenix-assets/assets/gifs/prompt_playground.mp4" %} Phoenix Prompt Playground {% endembed %}

Phoenix offers tools to streamline your prompt engineering workflow.

  • Prompt Management - Create, store, modify, and deploy prompts for interacting with LLMs
  • Prompt Playground - Experiment with prompts, models, and invocation parameters, and track your progress via tracing and experiments
  • Span Replay - Replay the invocation of an LLM. Whether it's an LLM step in a larger workflow or a router query, you can step into the LLM invocation and see whether any modifications would have yielded a better outcome.
  • Prompts in Code - Phoenix offers client SDKs to keep your prompts in sync across different applications and environments. {% endtab %}

{% tab title="Tracing" %} {% embed url="https://storage.googleapis.com/arize-phoenix-assets/assets/gifs/tracing.mp4" %} Tracing in Phoenix {% endembed %}

Tracing is a helpful tool for understanding how your LLM application works. Phoenix's open-source library offers comprehensive tracing capabilities that are not tied to any specific LLM vendor or framework.

Phoenix accepts traces over the OpenTelemetry protocol (OTLP) and supports first-class instrumentation for a variety of frameworks (LlamaIndex, LangChain, DSPy), SDKs (OpenAI, Bedrock, Mistral, Vertex), and languages (Python, JavaScript, etc.). {% endtab %}

{% tab title="Evaluation" %} {% embed url="https://storage.googleapis.com/arize-phoenix-assets/assets/gifs/evals.mp4" %} Evals in the Phoenix UI {% endembed %}

Phoenix is built to help you evaluate your application and understand its true performance.
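As a rough sketch of what an eval run can look like (assuming the arize-phoenix-evals package and an OpenAI API key; the dataframe columns follow the hallucination template's expected inputs):

```python
import pandas as pd
from phoenix.evals import (
    HALLUCINATION_PROMPT_RAILS_MAP,
    HALLUCINATION_PROMPT_TEMPLATE,
    OpenAIModel,
    llm_classify,
)

# Example data to evaluate: each row pairs a question and retrieved
# reference text with the model's answer.
df = pd.DataFrame(
    {
        "input": ["Where is the Eiffel Tower?"],
        "reference": ["The Eiffel Tower is located in Paris, France."],
        "output": ["The Eiffel Tower is in Paris."],
    }
)

# Classify each row as factual or hallucinated using an LLM judge.
results = llm_classify(
    dataframe=df,
    model=OpenAIModel(model="gpt-4o-mini"),  # illustrative model choice
    template=HALLUCINATION_PROMPT_TEMPLATE,
    rails=list(HALLUCINATION_PROMPT_RAILS_MAP.values()),
)
print(results["label"])
```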

{% tab title="Datasets & Experiments" %} {% embed url="https://storage.googleapis.com/arize-phoenix-assets/assets/gifs/experiments.mp4" %} Experiments in Phoenix {% endembed %}

Phoenix Datasets & Experiments let you test different versions of your application, store relevant traces for evaluation and analysis, and build robust evaluations into your development process.

{% endtab %} {% endtabs %}

## Quickstarts

Running Phoenix for the first time? Select a quickstart below.

  • Tracing - llm-traces-1.md
  • Prompt Playground - quickstart-prompts.md
  • Datasets and Experiments - quickstart-datasets.md
  • Evaluation - evals.md
  • Inferences - phoenix-inferences.md

## Packages

The main Phoenix package is arize-phoenix. We offer several packages below for specific use cases.

{% tabs %} {% tab title="Python" %}

Package What It's For Pypi
arize-phoenix

Running and connecting to the Phoenix client. Used:
- Self-hosting Phoenix
- Connecting to a Phoenix client (either Phoenix Developer Edition or self-hosted) to query spans, run evaluations, generate datasets, etc.

*arize-phoenix automatically includes arize-phoenix-otel and arize-phoenix evals

PyPI - Version
arize-phoenix-otel Sending OpenTelemetry traces to a Phoenix instance PyPI - Version
arize-phoenix-evals Running evaluations in your environment PyPI - Version
openinference-semantic-conventions Our semantic layer to add LLM telemetry to OpenTelemetry PyPI - Version
openinference-instrumentation-xxxx Automatically instrumenting popular packages. See integrations-tracing
{% endtab %}

{% tab title="TypeScript" %}

Package What It's For npm
@arizeai/phoenix-client

Running and connecting to the Phoenix server.

coming soon
@arizeai/openinference-semantic-conventions Our semantic layer to add LLM telemetry to OpenTelemetry NPM Version
@aizeai/openinference-instrumentation-xxxx Automatically instrumenting popular packages. See integrations-tracing
{% endtab %}
{% endtabs %}
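For instance, the main arize-phoenix package can pull trace data out of a running Phoenix instance for analysis. A minimal sketch, assuming a local Phoenix server at the default address:

```python
import phoenix as px

# Connect to a running Phoenix instance (http://localhost:6006 by default).
client = px.Client()

# Pull collected spans into a pandas DataFrame for ad-hoc analysis
# or to feed into evaluations.
spans_df = client.get_spans_dataframe()
print(spans_df.head())
```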

## Next Steps

Check out a comprehensive list of example notebooks for LLM Traces, Evals, RAG Analysis, and more.

Join the Phoenix Slack community to ask questions, share findings, provide feedback, and connect with other developers.