Python: Handling Multiple AI Services #5083

Closed
HuskyDanny opened this issue Feb 20, 2024 · 3 comments · Fixed by #5077
Labels
python Pull requests for the Python Semantic Kernel

HuskyDanny commented Feb 20, 2024

Describe the bug
Essentially, I want a clear path for handling different AI services, so that I can use GPT-4 for semantic function A and GPT-3.5 for semantic function B.

I tried using a single kernel and specifying the service_id in the request settings, but it still uses the original default service.
To Reproduce
Steps to reproduce the behavior:

  1. Add both chat services and set the default:
        kernel.add_chat_service(
            "azure_openai_chat35_service",
            sk_oai.AzureChatCompletion(
                deployment_name=chat35_api_config.deployment_model_id,
                api_key=chat35_api_config.key,
                endpoint=chat35_api_config.endpoint,
            ),
        )

        kernel.add_chat_service(
            "azure_openai_chat4_service",
            sk_oai.AzureChatCompletion(
                deployment_name=chat4_api_config.deployment_model_id,
                api_key=chat4_api_config.key,
                endpoint=chat4_api_config.endpoint,
            ),
        )
        kernel.set_default_chat_service("azure_openai_chat35_service")
  2. Call the semantic function with settings that specify the other service:
        refactor_code = self.get_semantic_function(kernel, "skills", "Generate")
        settings = AzureChatRequestSettings(
            service_id="azure_openai_chat4_service", ai_model_id="gpt4-32k"
        )
        response = await refactor_code.invoke_async(context=context, settings=settings)
  3. The GPT-3.5 service is still used for the call.

Expected behavior
The azure_openai_chat4_service should be used for the call instead.
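
A minimal workaround sketch under the 0.4.x API shown above, assuming one kernel per service is acceptable; the config objects and the get_semantic_function helper are taken from the report, and the second function name is hypothetical:

    import semantic_kernel as sk
    import semantic_kernel.connectors.ai.open_ai as sk_oai

    # One kernel per deployment, each with a single default chat service, so
    # every semantic function bound to that kernel runs on the intended model.
    kernel_35 = sk.Kernel()
    kernel_35.add_chat_service(
        "azure_openai_chat35_service",
        sk_oai.AzureChatCompletion(
            deployment_name=chat35_api_config.deployment_model_id,
            api_key=chat35_api_config.key,
            endpoint=chat35_api_config.endpoint,
        ),
    )

    kernel_4 = sk.Kernel()
    kernel_4.add_chat_service(
        "azure_openai_chat4_service",
        sk_oai.AzureChatCompletion(
            deployment_name=chat4_api_config.deployment_model_id,
            api_key=chat4_api_config.key,
            endpoint=chat4_api_config.endpoint,
        ),
    )

    # Bind each semantic function to the kernel whose model it should use.
    generate = self.get_semantic_function(kernel_4, "skills", "Generate")     # GPT-4
    summarize = self.get_semantic_function(kernel_35, "skills", "Summarize")  # GPT-3.5 (hypothetical)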

Platform

  • OS: Windows
  • IDE: VS Code
  • Language: Python
  • Source: 0.4.6dev

@markwallace-microsoft markwallace-microsoft self-assigned this Feb 20, 2024
@markwallace-microsoft markwallace-microsoft added the python Pull requests for the Python Semantic Kernel label Feb 20, 2024
@markwallace-microsoft markwallace-microsoft removed their assignment Feb 20, 2024
@github-actions github-actions bot changed the title Handling Multiple AI Services Python: Handling Multiple AI Services Feb 20, 2024
@markwallace-microsoft
Member

@moonbox3 here's a sample showing the pattern supported in .NET: https://github.com/microsoft/semantic-kernel/blob/main/dotnet/samples/KernelSyntaxExamples/Example61_MultipleLLMs.cs

@eavanvalkenburg
Member

@HuskyDanny We have support for this coming soon. It is already present in the feature branch where we are pulling together a set of large changes: it adds an AIServiceSelector class that uses the service_id and the execution settings to pick the right service (similar to how this is done in .NET). Check it out here: https://github.com/microsoft/semantic-kernel/blob/python_kernel_args_latest/python/semantic_kernel/services/ai_service_selector.py
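
For orientation, a rough sketch of the service_id-based selection as it later landed in the Python SDK; treat the module paths, deployment names, endpoints, and the prompt here as illustrative assumptions rather than the exact feature-branch API:

    import asyncio

    from semantic_kernel import Kernel
    from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion
    from semantic_kernel.connectors.ai.prompt_execution_settings import PromptExecutionSettings

    async def main() -> None:
        kernel = Kernel()

        # Register two services on the same kernel, keyed by service_id.
        kernel.add_service(AzureChatCompletion(
            service_id="gpt35", deployment_name="gpt-35-turbo",
            endpoint="https://<resource>.openai.azure.com", api_key="<key>"))
        kernel.add_service(AzureChatCompletion(
            service_id="gpt4", deployment_name="gpt-4-32k",
            endpoint="https://<resource>.openai.azure.com", api_key="<key>"))

        # The execution settings carry the service_id; the AIServiceSelector
        # resolves it to the matching registered service at invocation time.
        generate = kernel.add_function(
            plugin_name="skills",
            function_name="Generate",
            prompt="{{$input}}",
            prompt_execution_settings=PromptExecutionSettings(service_id="gpt4"),
        )

        result = await kernel.invoke(generate, input="refactor this code")
        print(result)

    asyncio.run(main())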

@HuskyDanny
Author

sounds good, excited for the new release
