Insights: openai/openai-agents-python
Overview
9 Pull requests merged by 6 people

- Fix #892 by adding proper tool description in context.md (#937, merged Jun 25, 2025)
- Fix #604: Chat Completion model raises runtime error when response.choices is empty (#935, merged Jun 25, 2025)
- Remove duplicate entry from `__init__.py` (#897, merged Jun 25, 2025)
- Add exempt-issue-labels to the issue triage GH Action job (#936, merged Jun 25, 2025)
- Fix #890 by adjusting the guardrail document page (#903, merged Jun 24, 2025)
- Point CLAUDE.md to AGENTS.md (#930, merged Jun 24, 2025)
- Bugfix: fixed a bug when calling reasoning models with `store=False` (#920, merged Jun 24, 2025)
- Removed lines to avoid duplicate output in REPL utility (#928, merged Jun 24, 2025)
- Add is_enabled to handoffs (#925, merged Jun 24, 2025)
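The #935 fix above guards against Chat Completion responses whose `choices` list is empty (a shape some providers return on errors, see #604). A minimal, hypothetical sketch of that defensive pattern follows; `first_choice_message` is an illustrative helper, not the SDK's actual code:

```python
from types import SimpleNamespace

def first_choice_message(response):
    """Return the first choice's message, or None if the provider
    sent an empty or missing `choices` list (provider-error payload)."""
    choices = getattr(response, "choices", None) or []
    if not choices:
        # Callers can map None to an empty model output or a typed error
        # instead of hitting an IndexError on `response.choices[0]`.
        return None
    return choices[0].message

# Stand-in response objects to show both paths:
empty = SimpleNamespace(choices=[])
ok = SimpleNamespace(choices=[SimpleNamespace(message="hi")])

print(first_choice_message(empty))  # None
print(first_choice_message(ok))     # hi
```

The point of the fix is that the empty case surfaces as a handled condition rather than an unhandled runtime error mid-run.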
6 Pull requests opened by 5 people

- Add on_start support to VoiceWorkflowBase and VoicePipeline (#922, opened Jun 23, 2025)
- Add safety check handling for ComputerTool (#923, opened Jun 23, 2025)
- feat(tools): add conversation history support to ToolContext #904 (#926, opened Jun 24, 2025)
- Ensure that input_guardrails can block tools from running (#931, opened Jun 24, 2025)
- Annotate the openai.Omit type so that ModelSettings can be serialized by pydantic (#938, opened Jun 25, 2025)
- Support Claude extended thinking (#941, opened Jun 25, 2025)
27 Issues closed by 4 people

- [Bug]: SDK crashes when `choices` is `None` (provider-error payload) (#604, closed Jun 25, 2025)
- Configure `service_tier` param (#934, closed Jun 25, 2025)
- The same `isinstance(output, ResponseFunctionToolCall)` check appears twice in `_run_impl.py` (#658, closed Jun 25, 2025)
- How to ensure a specific tool sequencing? (#918, closed Jun 25, 2025)
- Cannot run Gemini (#829, closed Jun 25, 2025)
- Make FuncTool and @function_tool decorated function callable (#708, closed Jun 25, 2025)
- Make intermediate results available when `MaxTurnExceededException` is thrown (#719, closed Jun 25, 2025)
- `ValidationError` from `InputTokensDetails` when using `LitellmModel` with `None` cached_tokens (#760, closed Jun 25, 2025)
- Bug Report: Reasoning Mode Incompatible with Tools on Bedrock Claude via LiteLLM model class (#810, closed Jun 25, 2025)
- Bug: style guideline and formatting inconsistencies (#611, closed Jun 25, 2025)
- Attribute error occurs while calling MCP (#630, closed Jun 25, 2025)
- Feature Request: Allow Separate Models for Tool Execution and Final Response in OpenAI Agent SDK (#684, closed Jun 25, 2025)
- ImportError: cannot import name 'MCPServerStdio' from 'agents.mcp' (#691, closed Jun 25, 2025)
- Streaming Fails Due to `{"include_usage": True}` (#442, closed Jun 25, 2025)
- input_guardrail is skipped (#576, closed Jun 25, 2025)
- Context restrictions (#283, closed Jun 25, 2025)
- Errors from custom model providers aren't handled (#380, closed Jun 25, 2025)
- Add the possibility to add extra header fields in the RunConfig or Agents (#549, closed Jun 25, 2025)
- Add HTTP Streamable support for MCPs (#600, closed Jun 25, 2025)
- [Feature Request] Add REPL-style `run_demo_loop` utility for rapid agent testing and debugging (#784, closed Jun 24, 2025)
- Bug: Reasoning on OpenAI's models doesn't work with `store=False` (#919, closed Jun 24, 2025)
- Missing docstring in fetch_user_age breaks tool understanding (#892, closed Jun 23, 2025)
- Multiplying by 10 in @function_tool return value does not work (#896, closed Jun 23, 2025)
- documentation (#916, closed Jun 23, 2025)
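Among the closed issues, #760 describes a crash where `LitellmModel` forwards `cached_tokens: None` into strictly int-validated usage fields. A hypothetical sketch of the normalization behind that kind of fix follows; the function and field names here are illustrative, not the SDK's actual code:

```python
def normalize_usage_details(details: dict) -> dict:
    """Coerce missing/None token counts to 0 so that downstream
    strict-int validation (e.g. a pydantic usage model) does not fail
    on providers that omit fields like `cached_tokens`."""
    return {
        key: (value if isinstance(value, int) else 0)
        for key, value in details.items()
    }

# A provider payload (via LiteLLM) that would otherwise fail validation:
raw = {"cached_tokens": None, "audio_tokens": 12}
print(normalize_usage_details(raw))  # {'cached_tokens': 0, 'audio_tokens': 12}
```

Treating an absent count as 0 keeps usage accounting well-typed without inventing data the provider never reported.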
5 Issues opened by 5 people

- Go version of the Agents SDK? (#942, opened Jun 26, 2025)
- Expose tool call arguments in RunHooks on_tool_start and on_tool_end (#939, opened Jun 25, 2025)
- Document upload across Gemini, Claude and GPT (Completions API) (#933, opened Jun 24, 2025)
- Provided By OpenAI Kittens SSO (#929, opened Jun 24, 2025)
- OpenAI SDK does not expose MCP session ID, making stateful streamable HTTP sessions impossible (#924, opened Jun 23, 2025)
158 Unresolved conversations

Sometimes conversations happen on old items that aren't yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.

- feat: add MCP tool filtering support (#861, commented on Jun 25, 2025; 6 new comments)
- add voice option "ballad" in src/agents/voice/model.py (#278, commented on Jun 25, 2025; 1 new comment)
- Fix typos in event names and comments across multiple files (#155, commented on Jun 25, 2025; 1 new comment)
- parallel_tool_calls default is True in the client, despite docs saying False (#762, commented on Jun 25, 2025; 0 new comments)
- Yield reasoning delta in the response or add hooks to handle it (#825, commented on Jun 25, 2025; 0 new comments)
- Streaming output includes unnecessary text (#824, commented on Jun 25, 2025; 0 new comments)
- Error initializing MCP server: Connection closed (#822, commented on Jun 25, 2025; 0 new comments)
- openai_stt.py: _turn_audio_buffer may be empty (#821, commented on Jun 25, 2025; 0 new comments)
- Adding a ToolRAG that the user can enable when giving the agent many tools (#913, commented on Jun 25, 2025; 0 new comments)
- Adding a knowledge parameter within the Agent class (#914, commented on Jun 25, 2025; 0 new comments)
- CodeInterpreter tool is appending data to the end of the filename in containers (#911, commented on Jun 25, 2025; 0 new comments)
- When using an LLM proxy to access a model, the code removes the model prefix, but the proxy needs it (#910, commented on Jun 25, 2025; 0 new comments)
- Support Anthropic Prompt Caching (#905, commented on Jun 25, 2025; 0 new comments)
- When running in non-streaming mode, retrieve past history during tool execution (#904, commented on Jun 25, 2025; 0 new comments)
- `output_type=List[str]` silently disables "thinking" in Qwen-3 (and similar models), cutting run-time by ~50% compared with `output_type=str` (#902, commented on Jun 25, 2025; 0 new comments)
- OpenAI 'file' message type is supported by Chat Completions (#893, commented on Jun 25, 2025; 0 new comments)
- KeyError: ~TContext while importing with Python 3.11.0 (#891, commented on Jun 25, 2025; 0 new comments)
- FileSearchTool runs despite InputGuardrailTripwireTriggered (#889, commented on Jun 25, 2025; 0 new comments)
- Support for short-term and long-term memory in Agents SDK (#887, commented on Jun 25, 2025; 0 new comments)
- Add uv as well (#884, commented on Jun 25, 2025; 0 new comments)
- Pydantic serializer warnings when calling Runner.run (#880, commented on Jun 25, 2025; 0 new comments)
- MCP tools turn upstream HTTP errors into AgentsException, crashing the run (#879, commented on Jun 25, 2025; 0 new comments)
- LiteLLM + Gemini 2.5 Pro: cached_tokens=None crashes Agents SDK with Pydantic int-validation error (#758, commented on Jun 25, 2025; 0 new comments)
- Support for Bedrock prompt caching (#750, commented on Jun 25, 2025; 0 new comments)
- Add Session Memory (#745, commented on Jun 25, 2025; 0 new comments)
- Unable to use tool calling when using Llama 4 Scout via LiteLLM Proxy (#723, commented on Jun 25, 2025; 0 new comments)
- Allow agent to return logprobs (#715, commented on Jun 25, 2025; 0 new comments)
- LLM Cannot Access tool_context.context (e.g. user_uuid) for Tool Input Interpolation in MCP Workflows (#711, commented on Jun 25, 2025; 0 new comments)
- Prevent Token Wastage When Input Guardrails Delay Error Generation (#867, commented on Jun 25, 2025; 0 new comments)
- Agent.as_tool hides nested tool-call events, blocking parallel sub-agents with streaming (#864, commented on Jun 25, 2025; 0 new comments)
- Tools selection from MCP server (#863, commented on Jun 25, 2025; 0 new comments)
- Mixture of Tool-Calling and Handoff (#858, commented on Jun 25, 2025; 0 new comments)
- Filter MCPServerStdio server tools (#851, commented on Jun 25, 2025; 0 new comments)
- Support automatic "back" handoffs to orchestrating Agents (#847, commented on Jun 25, 2025; 0 new comments)
- Inconsistent timeout and sse_read_timeout Types in MCPServerSseParams and MCPServerStreamableHttpParams (#845, commented on Jun 25, 2025; 0 new comments)
- acknowledged_safety_checks during response (#843, commented on Jun 25, 2025; 0 new comments)
- When using the Azure LLM key and employing "Runner.run_streamed", the token usage returned is 0 (#838, commented on Jun 25, 2025; 0 new comments)
- Support Predicted Outputs (#837, commented on Jun 25, 2025; 0 new comments)
- Connection Error on Windows When Running Official LiteLLM Example (#836, commented on Jun 25, 2025; 0 new comments)
- Support Streaming of Function Call Arguments (#834, commented on Jun 25, 2025; 0 new comments)
- separate stream items for "tool_call_item" and "tool_call_output_item" (#831, commented on Jun 25, 2025; 0 new comments)
- Update all examples details and links in examples.md (#250, commented on Jun 25, 2025; 0 new comments)
- Add tool call parameters for `on_tool_start` hook (#253, commented on Jun 25, 2025; 0 new comments)
- replace .py file with .ipynb for Jupyter example (#262, commented on Jun 25, 2025; 0 new comments)
- Update examples.md (#271, commented on Jun 25, 2025; 0 new comments)
- Support a callable model (#312, commented on Jun 25, 2025; 0 new comments)
- add reasoning content to ChatCompletions (#494, commented on Jun 25, 2025; 0 new comments)
- Make input/new items available in the run context (#572, commented on Jun 25, 2025; 0 new comments)
- fix: add ensure_ascii=False to json.dumps for correct Unicode output (#639, commented on Jun 26, 2025; 0 new comments)
- Added support for gpt4o-realtime models for Speech-to-Speech interactions (#659, commented on Jun 25, 2025; 0 new comments)
- Add Sessions for Automatic Conversation History Management (#752, commented on Jun 26, 2025; 0 new comments)
- Fix and Document `parallel_tool_calls` Attribute in ModelSettings (#763, commented on Jun 25, 2025; 0 new comments)
- feat(tools): run sync tools in a thread to avoid event loop blocking (#820, commented on Jun 25, 2025; 0 new comments)
- feat: separate stream items for tool_call_item and tool_call_output_item (#833, commented on Jun 25, 2025; 0 new comments)
- [Fix]: Get logger from tracing folder instead of agents folder (#839, commented on Jun 25, 2025; 0 new comments)
- Added support for "return" handoffs (#1) (#869, commented on Jun 25, 2025; 0 new comments)
- Add reasoning content (fix on #494) (#871, commented on Jun 25, 2025; 0 new comments)
- add example of Gradio in streaming agent with MCP cases (#888, commented on Jun 25, 2025; 0 new comments)
- feat: Support Anthropic Claude prompt caching key "cache_control" (#908, commented on Jun 25, 2025; 0 new comments)
- Add uv as an alternative Python environment setup option for issue #884 (#909, commented on Jun 25, 2025; 0 new comments)
- Can tools ingest images attached to image_url? (#875, commented on Jun 25, 2025; 0 new comments)
- Invalid parameter: messages with role 'tool' must be a response to a preceding message with 'tool_calls' (#873, commented on Jun 25, 2025; 0 new comments)
- OpenAI authentication error (#870, commented on Jun 25, 2025; 0 new comments)
- Add `method_tool` Functionality (#94, commented on Jun 25, 2025; 0 new comments)
- Very high response times at random during hand-offs (#717, commented on Jun 25, 2025; 0 new comments)
- [Bug]: UnicodeDecodeError when importing litellm_model on Windows (#610, commented on Jun 25, 2025; 0 new comments)
- Processing image/multi-modal responses in function tool results? (#787, commented on Jun 25, 2025; 0 new comments)
- When will realtime capabilities be added like they are in the TypeScript SDK? (#894, commented on Jun 25, 2025; 0 new comments)
- Mem0ai integration as an example for creating memory agents (#832, commented on Jun 26, 2025; 0 new comments)
- results field in Code Interpreter always returns None (#917, commented on Jun 26, 2025; 0 new comments)
- fix(#94): handle special parameters (self/cls) correctly with context parameter (#137, commented on Jun 25, 2025; 0 new comments)
- perf: Optimize has_more_than_n_keys function and BatchTraceProcessor (#143, commented on Jun 25, 2025; 0 new comments)
- Add FastAPI example (#168, commented on Jun 25, 2025; 0 new comments)
- Add PDF extraction example agent with verification (#176, commented on Jun 25, 2025; 0 new comments)
- Create a Tauri desktop app with Lynx UI for OpenAI Agent SDK (#179, commented on Jun 25, 2025; 0 new comments)
- Update README.md (#180, commented on Jun 25, 2025; 0 new comments)
- Add Firecrawl integration for web scraping and information extraction (#182, commented on Jun 25, 2025; 0 new comments)
- UI-for-examples (#228, commented on Jun 25, 2025; 0 new comments)
- update bullets for documentation in tool.py (#247, commented on Jun 25, 2025; 0 new comments)
- Custom model provider agent does not use tool calling when an output type is specified (#332, commented on Jun 25, 2025; 0 new comments)
- Feature Request: Support Manual Interruption of OpenAI Agent During Execution (#329, commented on Jun 25, 2025; 0 new comments)
- `tts_settings` should be available per Agent (#327, commented on Jun 25, 2025; 0 new comments)
- Question about Tool Call Streaming (#326, commented on Jun 25, 2025; 0 new comments)
- Retry mechanism for ModelBehaviorError (#325, commented on Jun 25, 2025; 0 new comments)
- Callable model (#310, commented on Jun 25, 2025; 0 new comments)
- Streamed Voice Agent Demo - Multiple Performance Issues (#301, commented on Jun 25, 2025; 0 new comments)
- Integrate more 3rd-party tools into AgentSDK tool (#299, commented on Jun 25, 2025; 0 new comments)
- Best practices for long-running tools? (#295, commented on Jun 25, 2025; 0 new comments)
- agents.exceptions.ModelBehaviorError: Invalid JSON when parsing (#280, commented on Jun 25, 2025; 0 new comments)
- Handoff does not work with Claude 3.7 Sonnet (#270, commented on Jun 25, 2025; 0 new comments)
- Truncate span input when input is too large (#260, commented on Jun 25, 2025; 0 new comments)
- Provide an agent hook `on_get_response` to recalculate the messages passed to the model (#257, commented on Jun 25, 2025; 0 new comments)
- Ordering of events in Runner.run_streamed is incorrect (#583, commented on Jun 25, 2025; 0 new comments)
- Missing Handling for `delta.reasoning_content` in `agents.models.chatcmpl_stream_handler.ChatCmplStreamHandler.handle_stream` (#578, commented on Jun 25, 2025; 0 new comments)
- bugs in run.py (#570, commented on Jun 25, 2025; 0 new comments)
- Reasoning model items provided to general model (#569, commented on Jun 25, 2025; 0 new comments)
- Why does the Computer protocol not have the goto method? (#547, commented on Jun 25, 2025; 0 new comments)
- History Cleaning (#545, commented on Jun 25, 2025; 0 new comments)
- from agents.extensions.models.litellm_model import LitellmModel (#666, commented on Jun 25, 2025; 0 new comments)
- Best practices for integrating Gradio's Agent and Tool (#862, commented on Jun 24, 2025; 0 new comments)
- Empty assistant message after a hand-off (#856, commented on Jun 24, 2025; 0 new comments)
- Best setup for tracing with Azure (#846, commented on Jun 24, 2025; 0 new comments)
- Add llms.txt in the documentation (#556, commented on Jun 25, 2025; 0 new comments)
- Usage tokens no longer automatically show (#548, commented on Jun 25, 2025; 0 new comments)
- Passing the encoded image to @tool (#883, commented on Jun 25, 2025; 0 new comments)
- BadRequestError while trying out input guardrails (#129, commented on Jun 25, 2025; 0 new comments)
- Accessing Reasoning Summary for ComputerUseTool-based Agent (#202, commented on Jun 25, 2025; 0 new comments)
- Computer tool cannot be used with Web search call (#254, commented on Jun 25, 2025; 0 new comments)
- Agents SDK v0.0.4: Local Traces Missing Critical LLM Outputs and Tool Interaction Data (#222, commented on Jun 25, 2025; 0 new comments)
- invalid_request_error when using "chat_completions" with triage agent (gemini -> any other model) (#237, commented on Jun 25, 2025; 0 new comments)
- Add `reasoning_content` to ChatCompletions (#415, commented on Jun 25, 2025; 0 new comments)
- Rate limit in RunConfig (#381, commented on Jun 25, 2025; 0 new comments)
- Only use certain tools from MCP server (#376, commented on Jun 25, 2025; 0 new comments)
- Random transcript gets printed/generated when talking to the voice agent implemented using VoicePipeline (e.g. "Transcription: Kurs."), even with no background noise (#368, commented on Jun 25, 2025; 0 new comments)
- Implementing a comprehensive workflow (#366, commented on Jun 25, 2025; 0 new comments)
- Mermaid-based visualization (#352, commented on Jun 25, 2025; 0 new comments)
- Access to Chat History in Hooks (#346, commented on Jun 25, 2025; 0 new comments)
- Add Support for Image Return in Agent Tools (#341, commented on Jun 25, 2025; 0 new comments)
- Custom model provider ignored when using agents as tools (#663, commented on Jun 25, 2025; 0 new comments)
- First-class streaming tool output (#661, commented on Jun 25, 2025; 0 new comments)
- OAuth support for MCPServerSse (#657, commented on Jun 25, 2025; 0 new comments)
- Function calling fails on "application/json" MIME type with the latest Gemini models (#656, commented on Jun 25, 2025; 0 new comments)
- Human-in-the-loop architecture should be implemented as a top priority (#636, commented on Jun 25, 2025; 0 new comments)
- Intent Classifier Support (#628, commented on Jun 25, 2025; 0 new comments)
- on_agent_start hook should be more performant (#623, commented on Jun 25, 2025; 0 new comments)
- Can we use agent.run instead of Runner.run(starting_agent=agent)? (#622, commented on Jun 25, 2025; 0 new comments)
- Resource tracker warning (leaked semaphores) with MCPServerStdio (#618, commented on Jun 25, 2025; 0 new comments)
- Random Invalid URL completion error on Agents SDK (#816, commented on Jun 25, 2025; 0 new comments)
- Using LiteLLM router (with fallback and retry/cooldown) in an Agent (#813, commented on Jun 25, 2025; 0 new comments)
- Azure OpenAI rejects system prompt from prompt_with_handoff_instructions (#806, commented on Jun 25, 2025; 0 new comments)
- Feature Request: Enhanced Run Lifecycle Management - Interrupt and Update Active Runs (#798, commented on Jun 25, 2025; 0 new comments)
- SDK support for retrieving historical traces (#793, commented on Jun 25, 2025; 0 new comments)
- Rate Limit Support (#782, commented on Jun 25, 2025; 0 new comments)
- Bug: Pydantic Warnings and `.final_output` Serialization Issue with Code Interpreter File Citations (#772, commented on Jun 25, 2025; 0 new comments)
- Only 1 handoff getting called no matter what (#771, commented on Jun 25, 2025; 0 new comments)
- function_tool calling and agent orchestration (#769, commented on Jun 25, 2025; 0 new comments)
- Agent reproducibility: seed? top_p=0? (#768, commented on Jun 25, 2025; 0 new comments)
- Support for MCP prompts and resources (#544, commented on Jun 25, 2025; 0 new comments)
- WebSocket streaming audio in realtime from client (#536, commented on Jun 25, 2025; 0 new comments)
- Cannot get the last tool_call_output event in stream_events when MaxTurnsExceeded (#526, commented on Jun 25, 2025; 0 new comments)
- Timeout after 300 seconds without any error message. Could it be rate limiting? (#511, commented on Jun 25, 2025; 0 new comments)
- Obvious Pauses Between Text Segments in Current `OpenAITTSModel` Implementation Affect Speech Fluency (#493, commented on Jun 25, 2025; 0 new comments)
- Add reasoning support for custom models (#492, commented on Jun 25, 2025; 0 new comments)
- Automatic Audio Feedback on Silence in VoicePipeline (#489, commented on Jun 25, 2025; 0 new comments)
- Add Intro message function for VoicePipeline (#488, commented on Jun 25, 2025; 0 new comments)
- Duplicate tool names across MCP servers cause errors (#464, commented on Jun 25, 2025; 0 new comments)
- Support hosted tools across different model providers (#461, commented on Jun 25, 2025; 0 new comments)
- SSL error for streaming voice terminal app (#423, commented on Jun 25, 2025; 0 new comments)
- Custom-LLM `<think>` tag handling (#703, commented on Jun 25, 2025; 0 new comments)
- MCP server restart causes Agent to fail (#693, commented on Jun 25, 2025; 0 new comments)
- Feature Request: Support streaming tool call outputs (#692, commented on Jun 25, 2025; 0 new comments)
- Please add time travel (#688, commented on Jun 25, 2025; 0 new comments)
- Agent attempts to use non-existing tool (#686, commented on Jun 25, 2025; 0 new comments)
- Add response cost in the Usage (#683, commented on Jun 25, 2025; 0 new comments)
- Unable to use reasoning models with tool calls using LitellmModel (#678, commented on Jun 25, 2025; 0 new comments)
- Tool calling with LiteLLM and thinking models fails (#765, commented on Jun 25, 2025; 0 new comments)
- Error code: 400 "No tool output found for function call" (#673, commented on Jun 25, 2025; 0 new comments)