
LLM Monitoring not working for Async OpenAI requests #3494

Open
AltafHussain4748 opened this issue Aug 21, 2024 · 8 comments

@AltafHussain4748

Problem Statement

I just experimented with LLM monitoring and could not make it work with AsyncOpenAI.

Below is my code.

My Sentry init is:

sentry_sdk.init(
    dsn=os.environ.get("SENTRY_DSN"),
    integrations=[sentry_logging],  # logging integration, defined elsewhere
    environment=os.environ.get("ENVIRONMENT", "prod"),
    send_default_pii=True,
)

The function I am using:

@ai_track("Tracking Name")
@async_retry(retries=4)
async def func():
    client = AsyncOpenAI()
    with sentry_sdk.start_transaction(op="ai-inference", name="Structured Data Prompt"):
        response = await client.chat.completions.create(
            model=model,
            messages=messages,
            functions=functions,
            temperature=0.0,
            timeout=120,
        )

Versions

sentry-sdk==2.13.0
openai==1.37.1

Solution Brainstorm

No response

Product Area

Insights

@getsantry

getsantry bot commented Aug 21, 2024

Assigning to @getsentry/support for routing ⏲️

@getsantry

getsantry bot commented Aug 21, 2024

Routing to @getsentry/product-owners-insights for triage ⏲️

@bcoe
Member

bcoe commented Aug 22, 2024

@AltafHussain4748 thank you for the bug report; I just added it to our backlog of work to triage next week. We'll update soon.

@bcoe bcoe transferred this issue from getsentry/sentry Sep 3, 2024
@antonpirker
Member

Hey @AltafHussain4748!

Your code looks good. I think the only thing you are missing is that you need to enable tracing (set traces_sample_rate=1.0 in your init() call).

You can check whether data is sent to Sentry by setting debug=True in your init() call; you should then see a message like this in your console:

Sending envelope [envelope with 1 items (transaction)] project:5461230 host:o447951.ingest.sentry.io

(The LLM data is in the transaction envelope.)

Hope this helps!
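For clarity, here is a minimal sketch of the suggested change, assuming the rest of the original init() call stays the same (sentry_logging is the integration from the original snippet):

import os

import sentry_sdk

sentry_sdk.init(
    dsn=os.environ.get("SENTRY_DSN"),
    integrations=[sentry_logging],  # logging integration from the original snippet
    environment=os.environ.get("ENVIRONMENT", "prod"),
    send_default_pii=True,
    traces_sample_rate=1.0,  # enable tracing so transactions (and their LLM spans) are sent
    debug=True,              # log envelope submissions to the console while testing
)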

@vetyy

vetyy commented Sep 4, 2024

Hey, sorry, I didn't notice this issue before and created a new one. It doesn't work because of this: #3496

@antonpirker
Member

Cool @vetyy, thanks for linking!

@vetyy

vetyy commented Sep 4, 2024

@AltafHussain4748

This is the configuration I am using:

import sentry_sdk
from sentry_sdk.integrations.openai import OpenAIIntegration

sentry_sdk.init(
    dsn=dsn,               # dsn, release, and environment are defined elsewhere
    release=release,
    environment=environment,
    send_default_pii=True,
    enable_tracing=True,
    integrations=[
        OpenAIIntegration(
            include_prompts=False,  # exclude prompts from being sent to Sentry, despite send_default_pii=True
            tiktoken_encoding_name="cl100k_base",
        )
    ],
)

Don't forget to specify tiktoken_encoding_name; otherwise the integration will calculate 0 tokens, as you can see in its count_tokens implementation:

    def count_tokens(self, s):
        # type: (OpenAIIntegration, str) -> int
        if self.tiktoken_encoding is not None:
            return len(self.tiktoken_encoding.encode_ordinary(s))
        return 0

Also make sure you have tiktoken installed (pip install tiktoken).
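As a quick sanity check that the encoding name resolves and produces nonzero counts, you can call tiktoken directly; a standalone sketch (the sample string is arbitrary):

import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")
# encode_ordinary ignores special tokens, matching count_tokens above
print(len(encoding.encode_ordinary("Hello, world!")))  # prints the token count (here 4)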

@AltafHussain4748
Author

Quoting @antonpirker's suggestion above (enable tracing with traces_sample_rate=1.0 and verify with debug=True):

Thanks for your reply. Even with this parameter I was not able to make it work.

4 participants