Modern users expect more than buttons and forms. They want intelligence:
- Smart search
- AI writing assistants
- Predictive dashboards
- Personalized experiences
With Phoenix LiveView + Python, you can deliver exactly that: real-time UI blended with machine learning intelligence.
Why Python?
Python is the go-to language for AI/ML thanks to:
- TensorFlow and PyTorch for deep learning
- spaCy and NLTK for NLP
- scikit-learn for classical models
- FastAPI and Flask for fast API serving
You don't need to rewrite your models in Elixir.
Just wrap them in an API.
Step 1: Wrap Your Model in FastAPI
Here's a simple Python ML service:
```python
from fastapi import FastAPI
from pydantic import BaseModel

import my_model  # your ML model

app = FastAPI()

class Input(BaseModel):
    text: str

@app.post("/predict")
def predict(input: Input):
    result = my_model.predict(input.text)
    return {"sentiment": result.label, "confidence": result.score}
```
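The `my_model` import above is a placeholder for your own model code. As a point of reference, a minimal stub that satisfies the same `predict/label/score` interface might look like this (the word lists and scoring are purely illustrative, not a real sentiment model):

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    score: float

_POSITIVE = {"great", "good", "love", "excellent"}
_NEGATIVE = {"bad", "terrible", "hate", "awful"}

def predict(text: str) -> Prediction:
    # Count positive vs. negative words and report the majority,
    # with a confidence equal to the winning side's share.
    words = text.lower().split()
    pos = sum(w in _POSITIVE for w in words)
    neg = sum(w in _NEGATIVE for w in words)
    if pos == neg:
        return Prediction("neutral", 0.5)
    label = "positive" if pos > neg else "negative"
    return Prediction(label, max(pos, neg) / (pos + neg))
```

Anything exposing `.label` and `.score` works here; swap in your real TensorFlow, PyTorch, or scikit-learn inference call.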
Run it with `uvicorn` on localhost:8000, or package it in Docker.
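If you go the Docker route, a minimal image for the service above might look like this (the base image tag, file names, and `main:app` module path are assumptions about your project layout):

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```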
Step 2: Call from LiveView with Finch
In your LiveView:
```elixir
def handle_event("analyze", %{"text" => input}, socket) do
  Task.async(fn ->
    body = Jason.encode!(%{text: input})
    headers = [{"content-type", "application/json"}]

    {:ok, response} =
      Finch.build(:post, "http://localhost:8000/predict", headers, body)
      |> Finch.request(MyApp.Finch)

    Jason.decode!(response.body)
  end)
  |> then(&{:noreply, assign(socket, predicting: true, prediction_task: &1)})
end

def handle_info({ref, %{"sentiment" => s, "confidence" => c}}, socket) do
  Process.demonitor(ref, [:flush])
  {:noreply, assign(socket, predicting: false, sentiment: s, confidence: c)}
end
```
Show a spinner while the task runs:

```heex
<%= if @predicting do %>
  <div class="animate-spin text-gray-500">Analyzing...</div>
<% end %>
```
Step 3: Use phx-change for Live Input
```heex
<form phx-change="analyze">
  <textarea name="text" phx-debounce="500" class="w-full p-2 border"></textarea>
</form>
```

Note that `phx-debounce` is set on the input itself, not the form.
```heex
<%= if @sentiment do %>
  <p class="mt-2 text-sm">
    Sentiment: <strong><%= @sentiment %></strong>
    (Confidence: <%= (@confidence * 100) |> round %>%)
  </p>
<% end %>
```
The result updates live as the user types: no reload, no custom JavaScript.
Use Case: AI Writing Assistant
Let users start a sentence; your model suggests the next paragraph:
```elixir
def handle_event("continue", %{"text" => text}, socket) do
  Task.async(fn -> MyApp.AI.generate(text) end)
  {:noreply, assign(socket, loading: true)}
end

def handle_info({ref, continuation}, socket) do
  Process.demonitor(ref, [:flush])
  {:noreply, assign(socket, loading: false, ai_continuation: continuation)}
end
```

The `handle_info/2` clause receives the task result, clears the loading flag, and stores the continuation.
And render the result:
```heex
<%= if @loading do %>
  <p class="text-gray-400">Generating...</p>
<% else %>
  <p class="mt-4 italic"><%= @ai_continuation %></p>
<% end %>
```
Want to get fancy? Stream characters back one by one.
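On the Python side, streaming boils down to yielding the completion in small chunks instead of returning it whole. A minimal sketch, with a stand-in `fake_generate` since the real model isn't shown (a FastAPI `StreamingResponse` could wrap a generator like this):

```python
def fake_generate(prompt: str) -> str:
    # Stand-in for a real language-model call.
    return prompt + " ... and the story continued from there."

def stream_tokens(prompt: str, chunk_size: int = 4):
    # Yield the completion a few characters at a time,
    # so the client can render text as it arrives.
    completion = fake_generate(prompt)
    for i in range(0, len(completion), chunk_size):
        yield completion[i:i + chunk_size]
```

On the LiveView side you would push each chunk to the socket as it arrives, appending to `@ai_continuation`.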
Use Case: Predictive Pricing Dashboard
When a user selects a product, your Python model forecasts its price:
```elixir
def handle_event("select_product", %{"sku" => sku}, socket) do
  forecast = MyApp.Forecaster.predict(sku)
  {:noreply, assign(socket, forecast: forecast)}
end
```
Render it with a chart using LiveView.ChartComponent, or with plotly.js via a client-side hook.
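`MyApp.Forecaster.predict/1` above would call out to the Python service. As a sketch of what the Python side might compute, here is a naive moving-average forecast; the price history input and the function name are hypothetical, and a real forecaster would use a trained model:

```python
def forecast_price(history: list[float], window: int = 3) -> float:
    # Forecast the next price as the mean of the last `window`
    # observed prices (a simple moving average).
    if not history:
        raise ValueError("need at least one observed price")
    tail = history[-window:]
    return sum(tail) / len(tail)
```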
Production Tips
- Use Finch or Req for HTTP calls
- Add API keys or OAuth to your Python server
- Rate-limit or cache frequent predictions
- Use Task.async to keep LiveView responsive
- Log inputs/outputs for traceability
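Caching frequent predictions can be as simple as a TTL map in front of the HTTP call. A stdlib-only Python sketch of the idea (`predict_fn` stands in for the real request; in an Elixir app you would reach for an ETS table or Cachex instead):

```python
import time

_cache: dict[str, tuple[float, dict]] = {}
TTL_SECONDS = 60.0

def cached_predict(text: str, predict_fn, now=time.monotonic) -> dict:
    # Return a cached prediction if it is younger than TTL_SECONDS;
    # otherwise call through and store the fresh result.
    entry = _cache.get(text)
    if entry and now() - entry[0] < TTL_SECONDS:
        return entry[1]
    result = predict_fn(text)
    _cache[text] = (now(), result)
    return result
```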
Need more speed? Consider:
- Moving models into Elixir NIFs (Rustler, Zigler)
- Using Axon (Elixir-native ML) for simple inference
- Pre-batching predictions with GenServer queues
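The pre-batching idea is language-agnostic: buffer incoming inputs in a queue, then run the model once per batch instead of once per request. A Python sketch of the two halves (`batch_predict` is a stand-in for a real batched model call):

```python
from queue import Queue, Empty

def drain_batch(q: Queue, max_batch: int = 8) -> list:
    # Pull up to max_batch pending items off the queue without blocking.
    batch = []
    while len(batch) < max_batch:
        try:
            batch.append(q.get_nowait())
        except Empty:
            break
    return batch

def batch_predict(texts: list[str]) -> list[str]:
    # Stand-in: a real model would score the whole batch in one pass,
    # amortizing per-call overhead.
    return ["positive" if "good" in t else "neutral" for t in texts]
```

In Elixir, a GenServer would play the role of the queue, flushing on a timer or when the batch fills.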
This Pattern Is Powerful
LiveView handles:
- UI
- Events
- Realtime rendering
Python handles:
- ML logic
- Heavy compute
- Prediction engines
The result: real-time, intelligent UIs, built with a lean team.
Learn More in my PDF Guide
Phoenix LiveView: The Pro's Guide to Scalable Interfaces and UI Patterns
- AI/ML integration with Python
- Real-time dashboards and workflows
- Background tasks, streaming, security
- Examples for NLP, forecasting, assistants, and more
Download it now and bring the power of Python into your LiveView stack without the JS baggage.