Bellman Equation is not correct #11
Hey man,
I used your train function in my project because of its optimization. It runs the `fit` function in one batch and accelerates training quite a bit, thanks for that.
The problem is that your Bellman equation is slightly wrong. The original Bellman equation states that the best policy is the one that leads to the next state yielding the highest possible return, so the update target should bootstrap from the best achievable next value.
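For reference, this is the standard Q-learning form of the Bellman optimality target, in textbook notation (my notation, not taken from the code in question):

```math
Q(s_t, a_t) \leftarrow r_t + \gamma \max_{a'} Q(s_{t+1}, a')
```

Here the max ranges over the actions available in the next state s_{t+1}.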
Check out this blog post or its sources: Deeplizard
Basically, what you need to do is this: instead of adding `reward` and `next_qs[i]`, you want to add `reward` and `max(next_qs)`.
I copied and pasted this from my code, so the variable names are different, but I think you get the point.
https://github.com/nuno-faria/tetris-ai/blob/4d01877100870e2a6a1ef84dc955354e534589ae/dqn_agent.py#L132C64-L132C64
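A minimal sketch of the proposed change, assuming a per-sample target loop like the one in the linked `train` function; the helper name `build_targets` and the batch layout are hypothetical, since the original snippet was not preserved here:

```python
def build_targets(batch, next_qs, discount):
    """Build DQN regression targets for a replay batch.

    batch:    list of (reward, done) pairs (hypothetical layout)
    next_qs:  predicted value of each sample's next state
    discount: discount factor gamma
    """
    targets = []
    for i, (reward, done) in enumerate(batch):
        if done:
            # Terminal transition: no future return to bootstrap from
            targets.append(reward)
        else:
            # Original line (per the permalink above):
            #   new_q = reward + discount * next_qs[i]
            # Change proposed in this issue:
            targets.append(reward + discount * max(next_qs))
    return targets
```

Note that the replies below dispute this change: in that loop, `next_qs` holds one prediction per batch sample, so `max(next_qs)` is a maximum over the batch rather than over the actions available from a given state.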
Again, thanks for this cool optimization!
Keep up the good work.
Comments

Thanks for the tip. I tested it, but the agent was not able to achieve higher scores. I don't think that change is correct, since each state will end up having the same target value: `max(next_qs)` is a single maximum taken over the whole batch, so every non-terminal sample bootstraps from the same number.

I also did not achieve significantly different results, so the impact of the change seems to be minor in practice. The Bellman equation states that, for any state-action pair at time t, the expected return is the immediate reward plus the discounted maximum expected return achievable from any possible next state-action pair. That is why the max appears in the update target.
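A toy illustration of the first reply's objection, with made-up numbers: under the proposed change, every non-terminal sample in the batch receives the same bootstrap term.

```python
next_qs  = [0.5, 2.0, 1.0]   # made-up per-sample next-state values
rewards  = [1.0, 1.0, 1.0]
discount = 0.9

# Per-sample bootstrapping (the repo's original formula)
per_sample = [r + discount * q for r, q in zip(rewards, next_qs)]

# Batch-max bootstrapping (the change proposed in this issue)
batch_max = [r + discount * max(next_qs) for r in rewards]

print(per_sample)  # [1.45, 2.8, 1.9] -- distinct targets
print(batch_max)   # [2.8, 2.8, 2.8]  -- identical targets
```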