
Question on method ssm.HMM (GLM weight sign) #164

Open

Gentu-Ding opened this issue Sep 9, 2023 · 2 comments

Gentu-Ding commented Sep 9, 2023

Hi there!

Thank you very much for the ssm package! I have a question about the fitted weights returned by ssm.HMM when I apply it to a human reward-learning task. The task is a two-step task in which participants learn to choose between two options to collect rewards, and the GLM I'm using essentially asks how the previous outcome affects the current choice (i.e., whether people stick with the previously rewarded option). Overall, participants showed decent reward learning; that is, their behavior is consistent with a positive effect of outcome on sticking with the rewarded option (outcome: 0 = no reward, 1 = reward). However, the fitted weights returned by glmhmm.observations.params seem to put a negative loading on "outcome" (Figure 1), which is the opposite of the actual behavior (the loading on "outcome_transition" also seems opposite to the actual behavior). Yet when I use glmhmm.observations.calculate_logits to check the model's predicted behavior in the different states, the results all make sense (Figure 2).

I just wanted to check with you what in my usage of the package might be causing the apparent weight-sign flip in Figure 1. I'm attaching the code I used below. Thank you!

Figure 1: (screenshot of the fitted weights from glmhmm.observations.params)

Figure 2: (screenshot of the per-state model predictions from glmhmm.observations.calculate_logits)

Code 1: Fit a GLM to obtain initial weights for the GLM-HMM (screenshot)
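Roughly, that step looks like the sketch below (assuming the lindermanlab/ssm API; the array names choices / inputs, the iteration count, and the regressor list are placeholders, not the original code):

```python
import numpy as np
import ssm

# Placeholder data, not the original:
# choices: list of (T, 1) int arrays, one entry per session (0 or 1 per trial)
# inputs:  list of (T, M) float arrays of regressors
#          (e.g. outcome, outcome_transition, bias)
num_categories = 2                  # two choice options
obs_dim = 1                         # one categorical choice per trial
input_dim = inputs[0].shape[1]      # M regressors

# In ssm, a plain GLM can be fit as a 1-state GLM-HMM with
# input-driven (categorical) observations.
glm = ssm.HMM(1, obs_dim, input_dim,
              observations="input_driven_obs",
              observation_kwargs=dict(C=num_categories),
              transitions="standard")
glm.fit(choices, inputs=inputs, method="em",
        num_iters=200, tolerance=1e-4)

# Fitted GLM weights, shape (1, C-1, input_dim); these seed the GLM-HMM below.
glm_weights = glm.observations.params
```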

Code 2: Fit the GLM-HMM with the initial weights from the GLM in Code 1 (screenshot)
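And the second step, in the same hedged sketch form (num_states, the noise scale, and the iteration count are assumptions; the last lines show a calculate_logits check along the lines of the one behind Figure 2):

```python
from scipy.special import logsumexp

num_states = 3                      # assumed number of latent states

glm_hmm = ssm.HMM(num_states, obs_dim, input_dim,
                  observations="input_driven_obs",
                  observation_kwargs=dict(C=num_categories),
                  transitions="standard")

# Initialize every state's observation weights from the 1-state GLM fit,
# with a little noise so the states can separate during EM.
glm_hmm.observations.params = (
    np.repeat(glm_weights, num_states, axis=0)
    + 0.1 * np.random.randn(num_states, num_categories - 1, input_dim)
)

glm_hmm.fit(choices, inputs=inputs, method="em",
            num_iters=300, tolerance=1e-4)

# Per-state weights discussed above (Figure 1), shape (num_states, C-1, input_dim).
print(glm_hmm.observations.params)

# Per-state predictions as in Figure 2: logits over the choice categories for
# each trial of one session, turned into probabilities with a softmax.
logits = glm_hmm.observations.calculate_logits(inputs[0])
probs = np.exp(logits - logsumexp(logits, axis=-1, keepdims=True))
```

Because a softmax is invariant to a per-row additive constant, the probabilities above come out the same whether calculate_logits returns raw or log-normalized logits, so this is a safe way to compare the sign of the weights against the predicted behavior.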

Best regards,
Weilun

Gentu-Ding changed the issue title to "Question on method ssm.HMM (GLM weight sign)" on Sep 16, 2023
slinderman (Collaborator) commented

Tagging @zashwood in case she has any thoughts on the sign flipping in the GLM-HMM.

abisi commented Jul 19, 2024

Hi @Gentu-Ding, did you get to the bottom of this? I'm curious because I also get negative weights for a feature that I'd expect to be positively associated with the outcome.
