Azure SDK Review - [Introduction to Azure AI Studio] #7903

Open
azure-sdk opened this issue Aug 16, 2024 · 1 comment
Labels
needs-triage Workflow: This is a new issue that needs to be triaged to the appropriate team.

Comments

@azure-sdk
Collaborator

New SDK Review meeting has been requested.

Service Name: Azure AI Studio
Review Created By: Neehar Duvvuri
Review Date: 08/22/2024 02:05 PM PT

Release Plan: 1285
Hero Scenarios Link: Not Provided
Architecture Diagram Link: Not Provided
Core Concepts Doc Link: Not Provided
APIView Links: Python

Description: The Azure AI team proposes to introduce the package azure-ai-evals. This package already exists as promptflow-evals, but we are aiming to deprecate that package and rename it to azure-ai-evals. An API view for the current version of promptflow-evals has been provided below, and reference documentation can be found here: https://microsoft.github.io/promptflow/reference/python-library-reference/promptflow-evals/promptflow.html.

This package will allow customers to evaluate their LLM applications on various quality metrics, such as groundedness, coherence, and content safety. It will also allow customers to simulate adversarial/jailbreak scenarios against their LLM applications to test their robustness.
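For reference, here is a minimal sketch of how one of the existing quality evaluators in promptflow-evals might be invoked today. The class and parameter names are assumptions drawn from the reference documentation linked above; the renamed azure-ai-evals package would presumably expose an equivalent surface.

```python
# Minimal sketch (assumed promptflow-evals API; names may change under azure-ai-evals).
from promptflow.core import AzureOpenAIModelConfiguration
from promptflow.evals.evaluators import GroundednessEvaluator

# Configuration for the judge model (placeholder values).
model_config = AzureOpenAIModelConfiguration(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    azure_deployment="<your-deployment>",
)

# Score a single answer for groundedness against the supplied context.
groundedness = GroundednessEvaluator(model_config)
result = groundedness(
    answer="The capital of France is Paris.",
    context="France's capital city is Paris.",
)
print(result)  # e.g. {"gpt_groundedness": 5.0}
```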

Detailed meeting information and the documents provided can be accessed here.

github-actions bot added the needs-triage label on Aug 16, 2024
@azure-sdk
Collaborator Author

Meeting updated by Neehar Duvvuri

Service Name: Azure AI Studio
Review Created By: Neehar Duvvuri
Review Date: 08/22/2024 02:05 PM PT

Hero Scenarios Link: here
Architecture Diagram Link: Not Provided
Core Concepts Doc Link: here
APIView Links: Python

Description: The Azure AI team proposes to introduce the package azure-ai-evals. This package already exists as promptflow-evals, but we are aiming to deprecate that package and rename it to azure-ai-evals. An API view for the current version of promptflow-evals has been provided below, and reference documentation can be found here: https://microsoft.github.io/promptflow/reference/python-library-reference/promptflow-evals/promptflow.html.

This package will allow customers to evaluate their LLM applications on various quality metrics, such as groundedness, coherence, and content safety. It will also allow customers to simulate adversarial/jailbreak scenarios against their LLM applications to test their robustness.
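To illustrate the adversarial/jailbreak simulation mentioned above, here is a rough sketch assuming the synthetic simulator shipped with the current promptflow-evals package; the simulator class, scenario enum, and target-callback shape are assumptions and may change under the azure-ai-evals rename.

```python
# Rough sketch of adversarial simulation against a target LLM application
# (assumed promptflow-evals API; names may change under azure-ai-evals).
import asyncio

from azure.identity import DefaultAzureCredential
from promptflow.evals.synthetic import AdversarialScenario, AdversarialSimulator

# Azure AI Studio project that backs the adversarial simulation service (placeholders).
azure_ai_project = {
    "subscription_id": "<subscription-id>",
    "resource_group_name": "<resource-group>",
    "project_name": "<ai-studio-project>",
}

async def target_app(messages, stream=False, session_state=None, context=None):
    # Stand-in for the customer's LLM application under test: return a canned refusal.
    reply = {"role": "assistant", "content": "I can't help with that request."}
    messages["messages"].append(reply)
    return {"messages": messages["messages"], "stream": stream,
            "session_state": session_state, "context": context}

async def main():
    simulator = AdversarialSimulator(
        azure_ai_project=azure_ai_project,
        credential=DefaultAzureCredential(),
    )
    # Run a handful of single-turn adversarial Q&A conversations against the target app.
    outputs = await simulator(
        scenario=AdversarialScenario.ADVERSARIAL_QA,
        target=target_app,
        max_conversation_turns=1,
        max_simulation_results=3,
    )
    print(outputs)

asyncio.run(main())
```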

Detailed meeting information and the documents provided can be accessed here.
