
TypeError: __init__() got an unexpected keyword argument 'client_id' #61

Closed
sentry-io bot opened this issue Aug 24, 2024 · 11 comments
Labels: bug (Something isn't working)

Comments


sentry-io bot commented Aug 24, 2024

Sentry Issue: PATCHBACK-20

TypeError: __init__() got an unexpected keyword argument 'client_id'
  File "octomachinery/routing/webhooks_dispatcher.py", line 69, in route_github_event
    github_install = await github_app.get_installation(github_event)
  File "octomachinery/github/api/app_client.py", line 113, in get_installation
    return await self.get_installation_by_id(install_id)
  File "octomachinery/github/api/app_client.py", line 118, in get_installation_by_id
    GitHubAppInstallationModel(

Task exception was never retrieved
future: <Task finished name='Task-33452' coro=<route_github_event() done, defined at /opt/app-root/lib64/python3.9/site-packages/octomachinery/routing/webhooks_dispatcher.py:29> exception=TypeError("__init__() got an unexpected keyword argument 'client_id'")>
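
For context: the installation payload coming back from GitHub evidently started carrying a new client_id key, and unpacking it into a model constructor with a fixed signature blows up. A minimal sketch of the same failure mode (hypothetical InstallationModel, not the actual octomachinery class):

from dataclasses import dataclass

@dataclass
class InstallationModel:
    """Hypothetical stand-in for a strict installation data model."""
    id: int
    app_id: int

# GitHub started including an extra key in the installation payload:
payload = {'id': 1, 'app_id': 2, 'client_id': 'Iv1.abc123'}

# Unpacking the raw payload into a fixed-signature constructor raises
# the same TypeError as in the traceback above:
InstallationModel(**payload)
# TypeError: __init__() got an unexpected keyword argument 'client_id'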

sentry-io bot added the bug (Something isn't working) label Aug 24, 2024
@webknjaz (Member) commented Aug 24, 2024

I need to go AFK for a couple of hours, but ultimately I'm planning to apply the following patch to prevent this from happening in the future (since it's not the first time GH has suddenly updated their API responses on us without actually versioning the changes):

index 304c479..db5ade0 100644
--- a/octomachinery/utils/asynctools.py
+++ b/octomachinery/utils/asynctools.py
@@ -1,12 +1,17 @@
 """Asynchronous tools set."""
 
 from functools import wraps
+from inspect import signature as _inspect_signature
+from logging import getLogger as _get_logger
 from operator import itemgetter
 
 from anyio import create_queue
 from anyio import create_task_group as all_subtasks_awaited
 
 
+logger = _get_logger(__name__)
+
+
 def auto_cleanup_aio_tasks(async_func):
     """Ensure all subtasks finish."""
     @wraps(async_func)
@@ -86,6 +91,24 @@ async def amap(callback, async_iterable):
 
 def dict_to_kwargs_cb(callback):
     """Return a callback mapping dict to keyword arguments."""
+    cb_arg_names = set(_inspect_signature(callback).parameters.keys())
+
     async def callback_wrapper(args_dict):
-        return await try_await(callback(**args_dict))
+        excessive_arg_names = set(args_dict.keys()) - cb_arg_names
+        filtered_args_dict = {
+            arg_name: arg_value for arg_name, arg_value in args_dict.items()
+            if arg_name not in excessive_arg_names
+        }
+        if excessive_arg_names:
+            logger.warning(
+                'Excessive arguments passed to callback %(callable)s',
+                {'callable': callback},
+                extra={
+                    'callable': callback,
+                    'excessive-arg-names': excessive_arg_names,
+                    'passed-in-args': args_dict,
+                    'forwarded-args': filtered_args_dict,
+                },
+            )
+        return await try_await(callback(**filtered_args_dict))
     return callback_wrapper
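
Outside the diff context, the core of the patch is: inspect the callback's signature and drop any payload keys it doesn't accept before calling it. A simplified, synchronous sketch of that technique (hypothetical names; the real dict_to_kwargs_cb() above also logs the dropped keys and awaits the result):

from inspect import signature

def filter_kwargs_for(callback, args_dict):
    """Return only the items of args_dict that callback's signature accepts."""
    accepted = set(signature(callback).parameters)
    return {key: value for key, value in args_dict.items() if key in accepted}

def make_installation(id, app_id):  # hypothetical strict callback
    return {'id': id, 'app_id': app_id}

payload = {'id': 1, 'app_id': 2, 'client_id': 'Iv1.abc123'}  # extra key from GitHub
make_installation(**filter_kwargs_for(make_installation, payload))  # no TypeError now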

@webknjaz (Member) commented:

Re-deployed Chronographer and Patchback with this. Should be good now.

@webknjaz (Member) commented:

Forgot to update the hashes in lockfiles. Re-deploying and going to sleep. Will re-check Sentry later in the day...

@webknjaz (Member) commented:

Urgh.. More updates needed for the deps compat. Will have to wait another half a day.

webknjaz added a commit that referenced this issue Aug 25, 2024
This is a follow-up for 97da1e9 that
didn't completely eliminate the exception reported in Sentry.

Ref #61
webknjaz added a commit that referenced this issue Aug 25, 2024
@webknjaz (Member) commented:

Alright… I think I've updated all the right pins now.

@felixfontein commented:

It still doesn't seem to work.

@webknjaz (Member) commented:

@felixfontein show me how you tried to retrigger processing..

@webknjaz (Member) commented:

I think I saw some incomplete tracebacks. Will take a look in the morning..

@felixfontein commented:

> @felixfontein show me how you tried to retrigger processing..

I removed and re-added labels for a merged PR (for example ansible-collections/community.general#8792), or merged a PR that had backport labels.

@webknjaz (Member) commented:

Oh.. I see. I messed up in 116222c because I applied a wrapper that returns a coroutine and didn't await it...
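
For illustration only (hypothetical names, not the actual patchback code), the pitfall is roughly:

import asyncio

async def handle(event):
    return f'handled {event}'

def wrap(callback):
    async def wrapper(event):
        return await callback(event)
    return wrapper

wrapped = wrap(handle)

async def main():
    broken = wrapped('push')       # BUG: just creates a coroutine object,
    print(broken)                  # nothing runs, and Python later warns
                                   # "coroutine ... was never awaited"
    fixed = await wrapped('push')  # fix: the wrapper's coroutine must be awaited
    print(fixed)                   # handled push

asyncio.run(main())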

@webknjaz (Member) commented:

Re-deployed and checked that it doesn't traceback on the aiohttp repo anymore.
