Feature request
I would like Avante to be able to prompt against the OpenAI reasoning models.
Motivation
These reasoning models are allegedly more capable at coding.
Other
There are a few differences in the API for these models (a sketch of a conforming request body follows this list):
- `temperature` must be set to 1 if you're using either the o1-preview or o1-mini model; other values are rejected.
- `max_tokens` is not accepted. `max_completion_tokens` must be used in the config instead.
- These models do not support a `system` role/message, so I think the `{ role = "system", ... }` entry needs to be removed from `parse_message`.
- Streaming is not supported, so I believe `stream = true` needs to be removed from the request body built in `parse_curl_args`.
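For reference, here is a minimal sketch of the request body these models should accept, given the constraints above. The field values are illustrative, not taken from the plugin:

```lua
-- Sketch only: the shape of a chat completions body for o1-preview/o1-mini,
-- per the constraints listed above.
local body = {
  model = "o1-preview",
  messages = {
    -- no { role = "system", ... } entry; if the system prompt is still
    -- needed, it would have to be folded into the user message instead
    { role = "user", content = "Refactor this function ..." },
  },
  temperature = 1,              -- any other value is rejected
  max_completion_tokens = 4096, -- replaces max_tokens
  -- no stream = true; the response arrives as a single JSON object
}
```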
Even with these changes, I'm not getting a successful end-to-end flow. I've never contributed to the avante.nvim codebase, and I'm not sure what else to try at the moment to get things working.
So far, my full diff looks like this:
```diff
diff --git a/lua/avante/config.lua b/lua/avante/config.lua
index c1689d7..82d3477 100644
--- a/lua/avante/config.lua
+++ b/lua/avante/config.lua
@@ -30,8 +30,8 @@ You are an excellent programming expert.
     endpoint = "https://api.openai.com/v1",
     model = "gpt-4o",
     timeout = 30000, -- Timeout in milliseconds
-    temperature = 0,
-    max_tokens = 4096,
+    temperature = 1,
+    max_completion_tokens = 4096,
     ["local"] = false,
   },
   ---@type AvanteSupportedProvider
diff --git a/lua/avante/providers/openai.lua b/lua/avante/providers/openai.lua
index 52e62b1..888d466 100644
--- a/lua/avante/providers/openai.lua
+++ b/lua/avante/providers/openai.lua
@@ -51,7 +51,6 @@ M.parse_message = function(opts)
   end

   return {
-    { role = "system", content = opts.system_prompt },
     { role = "user", content = user_content },
   }
 end
@@ -91,7 +90,6 @@ M.parse_curl_args = function(provider, code_opts)
     body = vim.tbl_deep_extend("force", {
       model = base.model,
       messages = M.parse_message(code_opts),
-      stream = true,
     }, body_opts),
   }
 end
```
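As an aside, the config half of this might not need a source patch at all. Assuming `setup()` still deep-merges provider tables over the defaults in `config.lua`, and that unconsumed keys flow into the request body via `body_opts` (which the `vim.tbl_deep_extend` call above suggests), overrides could be supplied like this. This is a sketch under those assumptions, not something I've verified:

```lua
-- Sketch: override the openai provider from setup() instead of editing
-- lua/avante/config.lua. max_completion_tokens would only reach the API
-- if parse_curl_args forwards unknown config keys through body_opts.
require("avante").setup({
  provider = "openai",
  openai = {
    endpoint = "https://api.openai.com/v1",
    model = "o1-mini",
    timeout = 30000,
    temperature = 1,
    max_completion_tokens = 4096,
  },
})
```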
I've gotten this far by using the openai provider, watching it fail, and reading the error message it returned. This time it's not returning anything: I see `Generating response ...` and it never changes.
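My guess (unverified) is that removing `stream = true` isn't enough on its own: the provider's response handler presumably still parses SSE chunks, and a non-streaming request returns one complete JSON document, so the chunk callback never fires and the UI waits forever. Handling it would need something along these lines. The function name is hypothetical; the field names follow the documented OpenAI response shape:

```lua
-- Hypothetical sketch: parse a non-streaming chat completions response.
local function on_complete(response_body)
  local ok, json = pcall(vim.json.decode, response_body)
  if not ok or not (json.choices and json.choices[1]) then
    return nil, "unexpected response: " .. tostring(response_body)
  end
  -- the whole answer arrives at once instead of as incremental deltas
  return json.choices[1].message.content
end
```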