Commit: Python: Fix openai assistant function calling bug (#8817)
### Motivation and Context

We weren't properly handling the available FunctionResultContent items in an
OpenAI assistant run when multiple tool calls were included in the chat
history.


### Description

Fix this issue by using the function call contents (`fccs`) list as the
source of truth, since the model expects us to return the tool call ids it
sent. We then use these function call content ids to look up the
corresponding FunctionResultContent in the chat history and send those
results back to the model.
- Fixes #8701 

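The pattern of the fix can be sketched outside the SDK as follows. This is a minimal, self-contained sketch: `FunctionCallContent` and `FunctionResultContent` here are simplified stand-in dataclasses, not the actual semantic-kernel types, and `format_tool_outputs` is a hypothetical free function mirroring the new `_format_tool_outputs` logic.

```python
from dataclasses import dataclass


# Simplified stand-ins for the semantic-kernel content types (not the real classes).
@dataclass
class FunctionCallContent:
    id: str
    name: str


@dataclass
class FunctionResultContent:
    id: str
    result: object


def format_tool_outputs(fccs, history_items):
    # Index every function result by its tool call id.
    lookup = {item.id: item for item in history_items if isinstance(item, FunctionResultContent)}
    # Drive the output from the function *call* contents, so each tool call id
    # the model sent is answered, regardless of result ordering in the history.
    return [
        {"tool_call_id": fcc.id, "output": str(lookup[fcc.id].result)}
        for fcc in fccs
        if fcc.id in lookup
    ]


calls = [FunctionCallContent("call_1", "menu"), FunctionCallContent("call_2", "specials")]
results = [FunctionResultContent("call_2", 9.99), FunctionResultContent("call_1", "Clam Chowder")]
print(format_tool_outputs(calls, results))
# → [{'tool_call_id': 'call_1', 'output': 'Clam Chowder'}, {'tool_call_id': 'call_2', 'output': '9.99'}]
```

Note that the outputs come back in the order of the calls, not the order the results happen to appear in the history, and every result is coerced to a string before submission.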

### Contribution Checklist


- [X] The code builds clean without any errors or warnings
- [X] The PR follows the [SK Contribution
Guidelines](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md)
and the [pre-submission formatting
script](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md#development-scripts)
raises no violations
- [X] All unit tests pass, and I have added new tests where possible
- [X] I didn't break anyone 😄
moonbox3 committed Sep 16, 2024
1 parent 77aa4e3 commit db0faca
Showing 4 changed files with 19 additions and 15 deletions.
python/samples/getting_started_with_agents/step2_plugins.py (1 addition, 2 deletions)

@@ -70,15 +70,14 @@ async def main():
     # Create the instance of the Kernel
     kernel = Kernel()
 
-    # Add the OpenAIChatCompletion AI Service to the Kernel
     service_id = "agent"
     kernel.add_service(AzureChatCompletion(service_id=service_id))
 
     settings = kernel.get_prompt_execution_settings_from_service_id(service_id=service_id)
     # Configure the function choice behavior to auto invoke kernel functions
     settings.function_choice_behavior = FunctionChoiceBehavior.Auto()
 
-    kernel.add_plugin(plugin=MenuPlugin(), plugin_name="menu")
+    kernel.add_plugin(MenuPlugin(), plugin_name="menu")
 
     # Create the agent
     agent = ChatCompletionAgent(
(second changed file; path not shown)

@@ -20,7 +20,7 @@
 HOST_INSTRUCTIONS = "Answer questions about the menu."
 
 # Note: you may toggle this to switch between AzureOpenAI and OpenAI
-use_azure_openai = True
+use_azure_openai = False
 
 
 # Define a sample plugin for the sample
python/semantic_kernel/agents/open_ai/open_ai_assistant_base.py (15 additions, 10 deletions)

@@ -735,7 +735,7 @@ async def _invoke_internal(
             chat_history = ChatHistory()
             _ = await self._invoke_function_calls(fccs=fccs, chat_history=chat_history)
 
-            tool_outputs = self._format_tool_outputs(chat_history)
+            tool_outputs = self._format_tool_outputs(fccs, chat_history)
             await self.client.beta.threads.runs.submit_tool_outputs(
                 run_id=run.id,
                 thread_id=thread_id,
@@ -982,22 +982,27 @@ async def _invoke_function_calls(self, fccs: list[FunctionCallContent], chat_his
         ]
         return await asyncio.gather(*tasks)
 
-    def _format_tool_outputs(self, chat_history: ChatHistory) -> list[dict[str, str]]:
+    def _format_tool_outputs(self, fccs: list[FunctionCallContent], chat_history: ChatHistory) -> list[dict[str, str]]:
         """Format tool outputs from chat history for submission.
 
         Args:
+            fccs: The function call contents.
             chat_history: The chat history.
 
         Returns:
             The formatted tool outputs as a list of dictionaries.
         """
-        tool_outputs = []
-        for tool_call in chat_history.messages[0].items:
-            if isinstance(tool_call, FunctionResultContent):
-                tool_outputs.append({
-                    "tool_call_id": tool_call.id,
-                    "output": tool_call.result,
-                })
-        return tool_outputs
+        tool_call_lookup = {
+            tool_call.id: tool_call
+            for message in chat_history.messages
+            for tool_call in message.items
+            if isinstance(tool_call, FunctionResultContent)
+        }
+
+        return [
+            {"tool_call_id": fcc.id, "output": str(tool_call_lookup[fcc.id].result)}
+            for fcc in fccs
+            if fcc.id in tool_call_lookup
+        ]
 
     # endregion
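The change to scan every message in the chat history matters because the old code only inspected `chat_history.messages[0].items`. The sketch below shows the failure mode under the assumption that each invoked function appends its result as a separate message; `Message` and `FunctionResultContent` are hypothetical stand-ins, not the semantic-kernel classes.

```python
from dataclasses import dataclass, field


# Hypothetical stand-ins for illustration only (not semantic-kernel types).
@dataclass
class FunctionResultContent:
    id: str
    result: object


@dataclass
class Message:
    items: list = field(default_factory=list)


# Two tool calls produced two result messages in the history.
history = [
    Message(items=[FunctionResultContent("call_1", "soup")]),
    Message(items=[FunctionResultContent("call_2", "salad")]),
]

# Old behavior: only the first message was inspected, so one result was lost.
old = [i.id for i in history[0].items if isinstance(i, FunctionResultContent)]
print(old)  # → ['call_1']

# New behavior: every message is scanned when building the id-keyed lookup.
new = {i.id for m in history for i in m.items if isinstance(i, FunctionResultContent)}
print(sorted(new))  # → ['call_1', 'call_2']
```

With the lookup built over the whole history, the submit call can answer every tool call id the model is waiting on, instead of timing out on the missing ones.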
python/tests/unit/agents/test_open_ai_assistant_base.py (2 additions, 2 deletions)

@@ -965,8 +965,8 @@ def test_format_tool_outputs(azure_openai_assistant_agent, openai_unit_test_env)
     frc = FunctionResultContent.from_function_call_content_and_result(fcc, 123, {"test2": "test2"})
     chat_history.add_message(message=frc.to_chat_message_content())
 
-    tool_outputs = azure_openai_assistant_agent._format_tool_outputs(chat_history)
-    assert tool_outputs[0] == {"tool_call_id": "test", "output": 123}
+    tool_outputs = azure_openai_assistant_agent._format_tool_outputs([fcc], chat_history)
+    assert tool_outputs[0] == {"tool_call_id": "test", "output": "123"}
