feature: use load_model() function #231
Conversation
IMHO, it's natural to pass the model first and then the weights. E.g.

model = env.net()
model = load_model(model_path, model)
# or
model = load_model(env.net(), model_path)
I was worried that unnecessary
Ah, I got it!!
So, if the definition of the neural network is separated from the environment module,
That's right. This torch lazy import will be written by whoever needs it, and I think your idea is cool.
@@ -277,10 +277,9 @@ def network_match_acception(n, env_args, num_agents, port):
     return agents_list


-def get_model(env, model_path):
+def load_model(model_path, model):
👍
LGTM
load_model() is more natural than get_model(). Especially for ONNX models, the model can be loaded with just the model_path. Therefore, it makes more sense to have the model_path as the first argument, as in build_agent(), but I'm not sure.