# Python: Auto Function Invocation Filter is not handled properly for OpenAI models #8020
moonbox3 added the `bug` (Something isn't working), `python` (Pull requests for the Python Semantic Kernel), and `filters` (Anything related to filters) labels on Aug 9, 2024.
The chat completions can be simplified, since we're adding the function results to the chat every time.
github-merge-queue bot pushed a commit that referenced this issue on Aug 12, 2024:
…onality for OpenAI models. (#8071)

### Motivation and Context
Our auto function invoke filter does not have the correct behavior. Currently, it returns the raw chat completion results, which contain the function call content, instead of the function result content.

### Description
This PR fixes this by adding the function result content to the chat history; if the auto function invoke filter marks the context as terminated, it returns the last item of the chat history (the function result content), otherwise it returns the completions (in the event that the function isn't run for some reason).
- It also fixes the `auto_function_invoke_filters` sample to show the FunctionResultContent if it exists.
- Aligns with the same behavior as in dotnet.
- Fixes #8020

### Contribution Checklist
- [X] The code builds clean without any errors or warnings
- [X] The PR follows the [SK Contribution Guidelines](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md) and the [pre-submission formatting script](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md#development-scripts) raises no violations
- [X] All unit tests pass, and I have added new tests where possible
- [X] I didn't break anyone 😄
LudoCorporateShark pushed a commit to LudoCorporateShark/semantic-kernel that referenced this issue on Aug 25, 2024:
…onality for OpenAI models. (microsoft#8071)
When using the Auto Function Invocation Filter with a `context.terminate` condition, the original completions are returned to the caller instead of the auto invocation function result(s).

Item 1:
Non-streaming:
Streaming:
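To make the failure mode concrete, here is a minimal, library-free model of the return step. The class names mirror Semantic Kernel's content types, but everything below is an illustrative stand-in written for this sketch, not the real API:

```python
from dataclasses import dataclass

# Illustrative stand-ins for Semantic Kernel content types (not the real classes).
@dataclass
class FunctionCallContent:
    name: str

@dataclass
class FunctionResultContent:
    name: str
    result: str

def buggy_return(completions, chat_history, terminated):
    # Pre-fix behavior: the raw completions (which hold the function *call*,
    # not its result) are returned even when the filter set terminate.
    return completions

def fixed_return(completions, chat_history, terminated):
    # Post-fix behavior: on terminate, return the last chat-history item
    # (the FunctionResultContent); otherwise fall back to the completions.
    if terminated and chat_history:
        return [chat_history[-1]]
    return completions

call = FunctionCallContent(name="math-Add")
result = FunctionResultContent(
    name="math-Add",
    result="Stop trying to ask me to do math, I don't like it!",
)
history = [call, result]

print(type(buggy_return([call], history, terminated=True)[0]).__name__)  # FunctionCallContent
print(type(fixed_return([call], history, terminated=True)[0]).__name__)  # FunctionResultContent
```

The caller asking for a terminated invocation gets the injected function result back only in the fixed version; the buggy version hands back a tool call that was already executed.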
If I run the auto invoke function filter example, I should see "Stop trying to ask me to do math, I don't like it!" as the result, but instead I see a manual tool call as the result (the underlying function was already run, so the result should be returned instead of a tool call):
With the updates shown in the image, I get the correct invocation context back with the injected filter result value of "Stop asking me to do math, I don't like it!"
Item 2:
If using the auto invoke kernel function filter, the function result was never added to the chat history -- thus, in the chat history, there would be a tool call entry with no FunctionResultContent entry following it.
We should still add the function result to the chat history so it's persisted, even if terminating with the invocation context's function result or the completions result. We also need to make sure that the chat history is still included in the response's metadata, similar to how it's done for completions.
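That handling can be sketched as a library-free model. The dict shapes and field names below are assumptions chosen for illustration, not Semantic Kernel's actual types:

```python
def handle_tool_call(chat_history, function_name, run_function, terminated):
    """Model of the fixed flow: always persist the function result in the
    chat history, pick the response, and carry the history in metadata."""
    result = run_function(function_name)
    # Item 2 fix: every tool-call entry gets a matching result entry,
    # even when the filter terminates the invocation.
    chat_history.append({"role": "tool", "name": function_name, "result": result})
    # Item 1 fix: on terminate, return the persisted function result
    # rather than the raw completions.
    response = chat_history[-1] if terminated else {"role": "assistant", "content": result}
    # The full chat history travels in the response metadata either way.
    return {"response": response, "metadata": {"chat_history": list(chat_history)}}

history = [{"role": "assistant", "tool_call": "math-Add"}]
out = handle_tool_call(history, "math-Add", lambda name: "8", terminated=True)
print(out["response"]["result"])             # 8
print(len(out["metadata"]["chat_history"]))  # 2
```

The point of the model is that persistence and the return value are decided independently: the result lands in the history first, and termination only changes which entry the caller sees.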