Python: major features for the Kernel Arguments, Function Result and new Prompt Templating #5077

Merged Feb 24, 2024 · 32 commits

Commits
76af75b
kernel args and func result
moonbox3 Feb 14, 2024
895b2da
remove functions view
moonbox3 Feb 14, 2024
838c68b
updated blocks, small updates to samples and tests
moonbox3 Feb 14, 2024
1e6543f
Python: Kernel service revamp and AIServiceSelector (#5050)
eavanvalkenburg Feb 16, 2024
3f500ff
Python: fix for using complete_chat (#5072)
eavanvalkenburg Feb 19, 2024
a2aaf8c
Python: implemented multiple arguments for functions in templates, wi…
eavanvalkenburg Feb 21, 2024
15ed9a8
fix annotated version in stepwise_planner
eavanvalkenburg Feb 22, 2024
07f5e02
fix 3.9- isinstance check validity
eavanvalkenburg Feb 22, 2024
3a723a1
fixed annotation parsing for py3.9-
eavanvalkenburg Feb 22, 2024
52d043b
type fix
eavanvalkenburg Feb 22, 2024
ec02d89
Python: implement Chat history, and refactor prompt config/prompt con…
moonbox3 Feb 22, 2024
b8cbd00
Python: Clean up kernel examples and notebooks and, allow plugins to …
moonbox3 Feb 23, 2024
33bcd8e
Python: Updated chat history (#5120)
eavanvalkenburg Feb 23, 2024
ad23100
Spelling fix and clear notebook output.
moonbox3 Feb 23, 2024
be96376
Fix type issue for python 3.8.
moonbox3 Feb 23, 2024
5995ba7
Use Python 3.8 Type instead of type
moonbox3 Feb 23, 2024
cf99fb8
Fix broken links and imports.
moonbox3 Feb 23, 2024
d5a524f
Fix dict type to Dict for Python 3.8
moonbox3 Feb 23, 2024
b206a8c
Fix type to Type
moonbox3 Feb 23, 2024
82fd40b
Merge branch 'main' into python_kernel_args_latest
moonbox3 Feb 23, 2024
57a04fa
Fix streaming return: add a toggle to return a function result with t…
moonbox3 Feb 23, 2024
b5164b7
Merge branch 'python_kernel_args_latest' of https://github.com/micros…
moonbox3 Feb 23, 2024
6f2e785
formatting fixes
moonbox3 Feb 23, 2024
675580c
Fix logging spacing to make it more readable.
moonbox3 Feb 23, 2024
b04997d
Addressed PR feedback
moonbox3 Feb 23, 2024
0e251c6
ruff formatting fix
moonbox3 Feb 23, 2024
3e44d01
Merge branch 'main' into python_kernel_args_latest
moonbox3 Feb 24, 2024
180c39b
Use AzureOpenAI eastus resource
moonbox3 Feb 24, 2024
c4ffd72
Merge branch 'python_kernel_args_latest' of https://github.com/micros…
moonbox3 Feb 24, 2024
cfc7626
Add secret name key to test yml
moonbox3 Feb 24, 2024
69d006a
Add missing underscore
moonbox3 Feb 24, 2024
99a9b83
Adjust embedding resource env var name to one that exists in eastus.
moonbox3 Feb 24, 2024
6 changes: 6 additions & 0 deletions .github/workflows/python-integration-tests.yml
@@ -79,7 +79,10 @@ jobs:
AzureOpenAI__DeploymentName: ${{ vars.AZUREOPENAI__DEPLOYMENTNAME }}
AzureOpenAIChat__DeploymentName: ${{ vars.AZUREOPENAI__CHAT__DEPLOYMENTNAME }}
AzureOpenAIEmbeddings__DeploymentName: ${{ vars.AZUREOPENAIEMBEDDINGS__DEPLOYMENTNAME2 }}
AzureOpenAIEmbeddings_EastUS__DeploymentName: ${{ vars.AZUREOPENAIEMBEDDINGS_EASTUS__DEPLOYMENTNAME}}
AzureOpenAI__Endpoint: ${{ secrets.AZUREOPENAI__ENDPOINT }}
AzureOpenAI_EastUS__Endpoint: ${{ secrets.AZUREOPENAI_EASTUS__ENDPOINT }}
AzureOpenAI_EastUS__ApiKey: ${{ secrets.AZUREOPENAI_EASTUS__APIKEY }}
AzureOpenAIEmbeddings__Endpoint: ${{ secrets.AZUREOPENAI__ENDPOINT }}
AzureOpenAI__ApiKey: ${{ secrets.AZUREOPENAI__APIKEY }}
AzureOpenAIEmbeddings__ApiKey: ${{ secrets.AZUREOPENAI__APIKEY }}
@@ -142,9 +145,12 @@ jobs:
AzureOpenAI__DeploymentName: ${{ vars.AZUREOPENAI__DEPLOYMENTNAME }}
AzureOpenAIChat__DeploymentName: ${{ vars.AZUREOPENAI__CHAT__DEPLOYMENTNAME }}
AzureOpenAIEmbeddings__DeploymentName: ${{ vars.AZUREOPENAIEMBEDDINGS__DEPLOYMENTNAME2 }}
AzureOpenAIEmbeddings_EastUS__DeploymentName: ${{ vars.AZUREOPENAIEMBEDDINGS_EASTUS__DEPLOYMENTNAME}}
AzureOpenAI__Endpoint: ${{ secrets.AZUREOPENAI__ENDPOINT }}
AzureOpenAIEmbeddings__Endpoint: ${{ secrets.AZUREOPENAI__ENDPOINT }}
AzureOpenAI__ApiKey: ${{ secrets.AZUREOPENAI__APIKEY }}
AzureOpenAI_EastUS__Endpoint: ${{ secrets.AZUREOPENAI_EASTUS__ENDPOINT }}
AzureOpenAI_EastUS__ApiKey: ${{ secrets.AZUREOPENAI_EASTUS__APIKEY }}
AzureOpenAIEmbeddings__ApiKey: ${{ secrets.AZUREOPENAI__APIKEY }}
Bing__ApiKey: ${{ secrets.BING__APIKEY }}
OpenAI__ApiKey: ${{ secrets.OPENAI__APIKEY }}
8 changes: 4 additions & 4 deletions python/.coveragerc
@@ -2,10 +2,10 @@
source = semantic_kernel
omit =
semantic_kernel/connectors/memory/*
semantic_kernel/connectors/openapi/*
semantic_kernel/connectors/search_engine/*
semantic_kernel/connectors/ai/google_palm/*
semantic_kernel/connectors/ai/hugging_face/*
semantic_kernel/utils/settings.py
semantic_kernel/utils/null_logger.py
semantic_kernel/utils/logging.py



[report]
5 changes: 0 additions & 5 deletions python/.vscode/settings.json
@@ -19,11 +19,6 @@
"python.testing.unittestEnabled": false,
"python.testing.pytestEnabled": true,
"pythonTestExplorer.testFramework": "pytest",
"pythonTestExplorer.pytestPath": "poetry",
"pythonTestExplorer.pytestArguments": [
"run",
"pytest"
],
"[python]": {
"editor.codeActionsOnSave": {
"source.organizeImports": "explicit",
65 changes: 50 additions & 15 deletions python/README.md
@@ -33,14 +33,34 @@ kernel = sk.Kernel()

# Prepare OpenAI service using credentials stored in the `.env` file
api_key, org_id = sk.openai_settings_from_dot_env()
kernel.add_chat_service("chat-gpt", OpenAIChatCompletion("gpt-3.5-turbo", api_key, org_id))
service_id="chat-gpt"
kernel.add_service(
OpenAIChatCompletion(
service_id=service_id,
ai_model_id="gpt-3.5-turbo",
api_key=api_key,
org_id=org_id
)
)

# Alternative using Azure:
# deployment, api_key, endpoint = sk.azure_openai_settings_from_dot_env()
# kernel.add_chat_service("dv", AzureChatCompletion(deployment, endpoint, api_key))

# Wrap your prompt in a function
prompt = kernel.create_semantic_function("""
# kernel.add_service(
# AzureChatCompletion(
# service_id="dv",
# deployment_name=deployment,
# base_url=endpoint,
# api_key=api_key
# )
# )

# Define the request settings
req_settings = kernel.get_service(service_id).get_prompt_execution_settings_class()(service_id=service_id)
req_settings.max_tokens = 2000
req_settings.temperature = 0.7
req_settings.top_p = 0.8

prompt = """
1) A robot may not injure a human being or, through inaction,
allow a human being to come to harm.

@@ -50,37 +70,52 @@ such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection
does not conflict with the First or Second Law.

Give me the TLDR in exactly 5 words.""")
Give me the TLDR in exactly 5 words."""

prompt_template_config = sk.PromptTemplateConfig(
template=prompt,
name="tldr",
template_format="semantic-kernel",
execution_settings=req_settings,
)

function = kernel.create_function_from_prompt(
prompt_template_config=prompt_template_config,
)

# Run your prompt
# Note: functions are run asynchronously
async def main():
print(await prompt()) # => Robots must not harm humans.
result = await kernel.invoke(function)
print(result) # => Robots must not harm humans.

if __name__ == "__main__":
asyncio.run(main())
```

# **Semantic functions** are Prompts with input parameters
# **Semantic Prompt Functions** are Prompts with input parameters

```python
# Create a reusable function with one input parameter
summarize = kernel.create_semantic_function("{{$input}}\n\nOne line TLDR with the fewest words.")
# Create a reusable summarize function
summarize = kernel.create_function_from_prompt(
template="{{$input}}\n\nOne line TLDR with the fewest words.",
execution_settings=req_settings,
)

# Summarize the laws of thermodynamics
print(await summarize("""
print(await kernel.invoke(summarize, input="""
1st Law of Thermodynamics - Energy cannot be created or destroyed.
2nd Law of Thermodynamics - For a spontaneous process, the entropy of the universe increases.
3rd Law of Thermodynamics - A perfect crystal at zero Kelvin has zero entropy."""))

# Summarize the laws of motion
print(await summarize("""
print(await kernel.invoke(summarize, input="""
1. An object at rest remains at rest, and an object in motion remains in motion at constant speed and in a straight line unless acted on by an unbalanced force.
2. The acceleration of an object depends on the mass of the object and the amount of force applied.
3. Whenever one object exerts a force on another object, the second object exerts an equal and opposite force on the first."""))

# Summarize the law of universal gravitation
print(await summarize("""
print(await kernel.invoke(summarize, input="""
Every point mass attracts every single other point mass by a force acting along the line intersecting both points.
The force is proportional to the product of the two masses and inversely proportional to the square of the distance between them."""))

@@ -100,8 +135,8 @@ Python notebooks:
- [Getting started with Semantic Kernel](./notebooks/00-getting-started.ipynb)
- [Loading and configuring Semantic Kernel](./notebooks/01-basic-loading-the-kernel.ipynb)
- [Running AI prompts from file](./notebooks/02-running-prompts-from-file.ipynb)
- [Creating Semantic Functions at runtime (i.e. inline functions)](./notebooks/03-semantic-function-inline.ipynb)
- [Using Context Variables to Build a Chat Experience](./notebooks/04-context-variables-chat.ipynb)
- [Creating Prompt Functions at runtime (i.e. inline functions)](./notebooks/03-prompt-function-inline.ipynb)
- [Using Context Variables to Build a Chat Experience](./notebooks/04-kernel-arguments-chat.ipynb)
- [Introduction to planners](./notebooks/05-using-the-planner.ipynb)
- [Building Memory with Embeddings](./notebooks/06-memory-and-embeddings.ipynb)
- [Using Hugging Face for Plugins](./notebooks/07-hugging-face-for-plugins.ipynb)
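The README hunks above only exercise the single `{{$input}}` template variable. As a rough sketch of the multi-argument templating this PR's commits describe ("Python: implemented multiple arguments for functions in templates…"), the snippet below passes two named variables through `KernelArguments`; the variable names `text` and `style`, the prompt wording, and the model id are illustrative assumptions, not content from the PR:

```python
import asyncio

import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

kernel = sk.Kernel()

# Service registration and request settings mirror the README snippet above.
api_key, org_id = sk.openai_settings_from_dot_env()
service_id = "chat-gpt"
kernel.add_service(
    OpenAIChatCompletion(service_id=service_id, ai_model_id="gpt-3.5-turbo", api_key=api_key, org_id=org_id)
)
req_settings = kernel.get_service(service_id).get_prompt_execution_settings_class()(service_id=service_id)

# A template with two variables instead of the single {{$input}} shown in the README.
rewrite = kernel.create_function_from_prompt(
    template="Rewrite the following text in a {{$style}} style:\n\n{{$text}}",
    execution_settings=req_settings,
)


async def main():
    # KernelArguments carries every template variable by name.
    result = await kernel.invoke(
        rewrite,
        sk.KernelArguments(text="Robots must not harm humans.", style="pirate"),
    )
    print(result)


if __name__ == "__main__":
    asyncio.run(main())
```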
75 changes: 49 additions & 26 deletions python/notebooks/00-getting-started.ipynb
@@ -31,20 +31,10 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Option 1: using OpenAI\n",
"\n",
"**Step 2**: Add your [OpenAI Key](https://openai.com/product/) key to a `.env` file in the same folder (org Id only if you have multiple orgs):\n",
"\n",
"```\n",
"OPENAI_API_KEY=\"sk-...\"\n",
"OPENAI_ORG_ID=\"\"\n",
"```\n",
"\n",
"Use \"keyword arguments\" to instantiate an OpenAI Chat Completion service and add it to the kernel:"
"### Configure the service you'd like to use via the `Service` Enum."
]
},
{
@@ -53,21 +43,28 @@
"metadata": {},
"outputs": [],
"source": [
"from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion\n",
"from services import Service\n",
"\n",
"api_key, org_id = sk.openai_settings_from_dot_env()\n",
"\n",
"kernel.add_chat_service(\n",
" \"chat-gpt\",\n",
" OpenAIChatCompletion(ai_model_id=\"gpt-3.5-turbo-1106\", api_key=api_key, org_id=org_id),\n",
")"
"# Select a service to use for this notebook (available services: OpenAI, AzureOpenAI, HuggingFace)\n",
"selectedService = Service.OpenAI"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Option 1: using OpenAI\n",
"\n",
"**Step 2**: Add your [OpenAI Key](https://openai.com/product/) key to a `.env` file in the same folder (org Id only if you have multiple orgs):\n",
"\n",
"```\n",
"OPENAI_API_KEY=\"sk-...\"\n",
"OPENAI_ORG_ID=\"\"\n",
"```\n",
"\n",
"Use \"keyword arguments\" to instantiate an OpenAI Chat Completion service and add it to the kernel:\n",
"\n",
"## Option 2: using Azure OpenAI\n",
"\n",
"**Step 2**: Add your [Azure Open AI Service key](https://learn.microsoft.com/azure/cognitive-services/openai/quickstart?pivots=programming-language-studio) settings to a `.env` file in the same folder:\n",
@@ -87,14 +84,23 @@
"metadata": {},
"outputs": [],
"source": [
"from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion\n",
"service_id = None\n",
"if selectedService == Service.OpenAI:\n",
" from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion\n",
"\n",
"deployment, api_key, endpoint = sk.azure_openai_settings_from_dot_env()\n",
" api_key, org_id = sk.openai_settings_from_dot_env()\n",
" service_id = \"chat-gpt\"\n",
" kernel.add_service(\n",
" OpenAIChatCompletion(service_id=service_id, ai_model_id=\"gpt-3.5-turbo-1106\", api_key=api_key, org_id=org_id),\n",
" )\n",
"elif selectedService == Service.AzureOpenAI:\n",
" from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion\n",
"\n",
"kernel.add_chat_service(\n",
" \"chat_completion\",\n",
" AzureChatCompletion(deployment_name=deployment, endpoint=endpoint, api_key=api_key),\n",
")"
" deployment, api_key, endpoint = sk.azure_openai_settings_from_dot_env()\n",
" service_id = \"chat_completion\"\n",
" kernel.add_service(\n",
" AzureChatCompletion(service_id=service_id, deployment_name=deployment, endpoint=endpoint, api_key=api_key),\n",
" )"
]
},
{
@@ -113,10 +119,27 @@
"metadata": {},
"outputs": [],
"source": [
"plugin = kernel.import_semantic_plugin_from_directory(\"../../samples/plugins\", \"FunPlugin\")\n",
"try:\n",
" plugin = kernel.import_plugin_from_prompt_directory(service_id, \"../../samples/plugins\", \"FunPlugin\")\n",
"except ValueError as e:\n",
" # Don't fail if we try to add the plug in again\n",
" # This is just for the sake of the example\n",
" # Once the plugin has been added to the kernel it will fail\n",
" # to add it again, if this cell is run multiple times without\n",
" # restarting the kernel\n",
" pass"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"joke_function = plugin[\"Joke\"]\n",
"\n",
"print(await joke_function(\"time travel to dinosaur age\"))"
"joke = await kernel.invoke(joke_function, sk.KernelArguments(input=\"time travel to dinosaur age\", style=\"super silly\"))\n",
"print(joke)"
]
}
],
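Both updated notebooks import a `Service` enum from a local `services` module (`from services import Service`) that is not included in the hunks shown here. A minimal sketch of what such a helper might contain, offered only as an assumption for illustration and not the file added by this PR:

```python
# services.py - illustrative sketch of the notebook helper (assumed, not part of the shown diff)
from enum import Enum


class Service(Enum):
    """AI services the sample notebooks can target."""

    OpenAI = "openai"
    AzureOpenAI = "azureopenai"
    HuggingFace = "huggingface"
```

With an enum like this, setting `selectedService = Service.OpenAI` in the cells above simply selects which branch of the service-registration code runs.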
81 changes: 24 additions & 57 deletions python/notebooks/01-basic-loading-the-kernel.ipynb
@@ -34,29 +34,7 @@
"metadata": {},
"outputs": [],
"source": [
"import semantic_kernel as sk\n",
"from semantic_kernel.connectors.ai.open_ai import (\n",
" AzureChatCompletion,\n",
" OpenAIChatCompletion,\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"You can instantiate the kernel in a few ways, depending on your use case."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"# Simple instance\n",
"kernel_1 = sk.Kernel()"
"import semantic_kernel as sk"
]
},
{
@@ -66,7 +44,7 @@
"source": [
"When using the kernel for AI requests, the kernel needs some settings like URL and credentials to the AI models.\n",
"\n",
"The SDK currently supports OpenAI and Azure OpenAI, other services will be added over time.\n",
"The SDK currently supports OpenAI and Azure OpenAI, among other connectors.\n",
"\n",
"If you need an Azure OpenAI key, go [here](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/quickstart?pivots=rest-api)."
]
@@ -77,39 +55,10 @@
"metadata": {},
"outputs": [],
"source": [
"kernel = sk.Kernel()\n",
"from services import Service\n",
"\n",
"kernel.add_chat_service( # We are adding a text service\n",
" \"Azure_curie\", # The alias we can use in prompt templates' config.json\n",
" AzureChatCompletion(\n",
" deployment_name=\"my-finetuned-Curie\", # Azure OpenAI *Deployment name*\n",
" endpoint=\"https://contoso.openai.azure.com/\", # Azure OpenAI *Endpoint*\n",
" api_key=\"...your Azure OpenAI Key...\", # Azure OpenAI *Key*\n",
" ),\n",
")\n",
"\n",
"kernel.add_chat_service( # We are adding a text service\n",
" \"OpenAI_chat_gpt\", # The alias we can use in prompt templates' config.json\n",
" OpenAIChatCompletion(\n",
" ai_model_id=\"gpt-3.5-turbo\", # OpenAI Model Name\n",
" api_key=\"...your OpenAI API Key...\", # OpenAI API key\n",
" org_id=\"...your OpenAI Org ID...\", # *optional* OpenAI Organization ID\n",
" ),\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"When working with multiple services and multiple models, the **first service** defined\n",
"is also the \"**default**\" used in these scenarios:\n",
"\n",
"* a prompt configuration doesn't specify which AI service to use\n",
"* a prompt configuration requires a service unknown to the kernel\n",
"\n",
"The default can be set and changed programmatically:"
"# Select a service to use for this notebook (available services: OpenAI, AzureOpenAI, HuggingFace)\n",
"selectedService = Service.OpenAI"
]
},
{
@@ -118,7 +67,25 @@
"metadata": {},
"outputs": [],
"source": [
"kernel.set_default_text_completion_service(\"Azure_curie\")"
"kernel = sk.Kernel()\n",
"\n",
"service_id = None\n",
"if selectedService == Service.OpenAI:\n",
" from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion\n",
"\n",
" api_key, org_id = sk.openai_settings_from_dot_env()\n",
" service_id = \"oai_chat_gpt\"\n",
" kernel.add_service(\n",
" OpenAIChatCompletion(service_id=service_id, ai_model_id=\"gpt-3.5-turbo-1106\", api_key=api_key, org_id=org_id),\n",
" )\n",
"elif selectedService == Service.AzureOpenAI:\n",
" from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion\n",
"\n",
" deployment, api_key, endpoint = sk.azure_openai_settings_from_dot_env()\n",
" service_id = \"aoai_chat_completion\"\n",
" kernel.add_service(\n",
" AzureChatCompletion(service_id=service_id, deployment_name=deployment, endpoint=endpoint, api_key=api_key),\n",
" )"
]
},
{