Ollama and #buffers #85
Hey 👋 @olimorris and thanks for this nice plugin! I'm trying it out with ollama and I can chat just fine with:

```lua
return {
  {
    "olimorris/codecompanion.nvim",
    dependencies = {
      "nvim-lua/plenary.nvim",
      "nvim-treesitter/nvim-treesitter",
      "nvim-telescope/telescope.nvim",
      { "stevearc/dressing.nvim" },
    },
    config = function()
      -- Override the default model on the ollama adapter.
      local ollama_fn = function()
        return require("codecompanion.adapters").use("ollama", {
          schema = { model = { default = "llama3.1" } },
        })
      end
      require("codecompanion").setup({
        adapters = {
          ollama = ollama_fn,
        },
        -- Use ollama for all strategies.
        strategies = {
          chat = { adapter = "ollama" },
          inline = { adapter = "ollama" },
          agent = { adapter = "ollama" },
        },
      })
    end,
  },
}
```

However, I can't quite get `#buffers` to work.
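As a quick sanity check that the model override is at least being applied (a minimal sketch, assuming `adapters.use` returns the extended adapter table, matching how it's called in the config above; this isn't a documented API contract), you can run this with `:lua` inside Neovim:

```lua
-- Sanity-check sketch (assumption: `use` returns the extended adapter
-- table, the same call shape as in the config above).
local adapter = require("codecompanion.adapters").use("ollama", {
  schema = { model = { default = "llama3.1" } },
})
-- Should print "llama3.1" if the schema override took effect.
print(adapter.schema.model.default)
```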
Hey @fredrikaverpil. Thanks for your kind words.

There may be a few nuances with `#buffers` that I should have made users aware of. It will only share buffers that have the same filetype as the buffer you originated the chat from. My logic was that this stops things like large markdown files being sent over to the LLM when I only care about fellow Lua or Python files, for example. It also uses `nvim_buf_is_valid` to check that a buffer exists and has not been unloaded or deleted; just because a buffer is listed doesn't mean it's loaded.

Are these constraints stopping it from working for you?
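To make those constraints concrete, here's a minimal sketch of the filter being described (not the plugin's actual implementation; `origin_bufnr` is a hypothetical name for the buffer the chat was started from):

```lua
-- Sketch of the #buffers constraints described above, not the plugin's
-- actual code: a buffer is only shared when it is valid, loaded, and
-- has the same filetype as the buffer the chat originated from.
local function shareable_buffers(origin_bufnr)
  local origin_ft = vim.bo[origin_bufnr].filetype
  local shareable = {}
  for _, bufnr in ipairs(vim.api.nvim_list_bufs()) do
    if
      vim.api.nvim_buf_is_valid(bufnr) -- exists, not deleted
      and vim.api.nvim_buf_is_loaded(bufnr) -- listed buffers may be unloaded
      and vim.bo[bufnr].filetype == origin_ft -- same filetype only
    then
      table.insert(shareable, bufnr)
    end
  end
  return shareable
end
```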