A lightweight, offline, privacy-respecting AI coding assistant integrated directly into your terminal Vim. Powered by Ollama and local models like `mistral`, `llama3`, and `codellama`.

No internet. No telemetry. 100% local.
- Trigger the assistant with `\a` while selecting code in visual mode
- Preview AI-generated code with `vimdiff`
- Accept or reject changes interactively
- Automatically replaces selected code if approved
- Displays a detailed explanation in a vertical split (formatted as comments)
- Smart comment formatting based on filetype (`//`, `#`, etc.); see the sketch after this list
- Works entirely offline; no OpenAI or cloud APIs required
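The comment formatting can be as simple as a filetype-to-prefix lookup. Below is a minimal sketch of that idea; the mapping and function name are hypothetical, not the plugin's actual implementation in `assistant.py`:

```python
# Illustrative only: a possible filetype -> comment-prefix mapping.
COMMENT_PREFIXES = {
    "python": "# ",
    "sh": "# ",
    "javascript": "// ",
    "c": "// ",
    "go": "// ",
    "vim": '" ',
}

def as_comments(text: str, filetype: str) -> str:
    """Prefix each line of an explanation with the filetype's comment marker."""
    prefix = COMMENT_PREFIXES.get(filetype, "# ")  # default to '#'
    return "\n".join(prefix + line for line in text.splitlines())
```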
To install, clone the repository and install the Python dependencies:

```bash
git clone https://github.com/agace/vim-ai-assistant.git
cd vim-ai-assistant
pip install -r requirements.txt
```
Make sure Ollama is installed and running. Then install a model (e.g. `mistral`):

```bash
ollama pull mistral
ollama serve
```
You can replace `mistral` with any compatible model, such as `llama3` or `codellama`. Just make sure the `MODEL` variable in `assistant.py` matches the model you pulled. Ollama currently supports a variety of LLMs; the full list is available here: https://ollama.ai/library
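For context, everything the script needs from Ollama is its local REST endpoint. The following is a hedged sketch of what such a call can look like, using Ollama's documented `/api/generate` interface; the function name is hypothetical and the real `assistant.py` may be structured differently:

```python
import requests  # assumed to be among the pinned dependencies

MODEL = "mistral"  # must match the model you pulled with `ollama pull`

def ask_model(prompt: str) -> str:
    """Send a prompt to the local Ollama server and return its full reply."""
    resp = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's default local port
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]
```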
Next, append the provided snippet to your `.vimrc`:

```bash
cat .vimrc-snippet >> ~/.vimrc
```

Then reload Vim:

```vim
:source ~/.vimrc
```
To use the assistant:

- Open a file in Vim
- Select a block of code in visual mode
- Press `\a`
- Enter a custom instruction (e.g. `"simplify this"`, `"optimize performance"`, `"fix and explain"`); see the prompt sketch after this list
- You’ll see a side-by-side diff preview (via `vimdiff`)
- Press `y` to accept the changes
- Press `n` to cancel and leave the code untouched
- After confirming, an explanation opens in a vertical split, formatted as comments
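What happens after you enter the instruction is essentially string assembly: the system message, your instruction, and the selected code are combined into a single prompt. A minimal sketch, assuming a layout like the one below (the section labels are illustrative; the plugin's actual prompt format may differ):

```python
from pathlib import Path

def build_prompt(instruction: str, code: str) -> str:
    """Combine the system message, the user's instruction, and the selection."""
    system = Path("prompt.txt").read_text()  # the system message (see below)
    return f"{system}\n\nInstruction: {instruction}\n\nCode:\n{code}\n"
```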
The plugin can be customized in a few ways:

- Change the AI model: Edit the `MODEL` variable in `assistant.py` to use a different local model (e.g., `llama3`, `codellama`).
- Customize the keybinding: The default keybinding is `\a` in visual mode. You can change it in your `.vimrc`:

  ```vim
  vnoremap <leader>a :<C-U>call AIHelper()<CR>
  ```

- Customize the prompt: To change how the assistant behaves, edit `prompt.txt`. This is the system message sent to the AI; it controls formatting, tone, and expectations.
- Change the temp file location (optional): You can edit `assistant.py` or `AIHelper()` in `.vimrc-snippet` to change where input/output files are written; the sketch after this list shows the general handoff.
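Vim and the Python script communicate through plain files: `AIHelper()` writes the selection to an input file, `assistant.py` writes the model's answer to an output file, and Vim reads it back for the diff. A sketch of that handoff, using hypothetical paths (the repo's defaults may differ):

```python
import tempfile
from pathlib import Path

import requests

# Hypothetical locations; change these (and the matching paths in
# .vimrc-snippet) to move the handoff files elsewhere.
TMP_DIR = Path(tempfile.gettempdir())
INPUT_FILE = TMP_DIR / "vim_ai_input.txt"    # written by AIHelper() in Vim
OUTPUT_FILE = TMP_DIR / "vim_ai_output.txt"  # read back by Vim for the diff

def run_once(model: str = "mistral") -> None:
    """Read the selection Vim wrote, query Ollama, and write the answer back."""
    prompt = INPUT_FILE.read_text()
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    OUTPUT_FILE.write_text(resp.json()["response"])
```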
If something goes wrong:

- Nothing happens after pressing `\a`: Ensure you’re in visual mode and have selected code, then press `\a`.
- `command not found: ollama`: Ollama must be installed and running. See: https://ollama.com
- Model not found: Make sure you’ve run `ollama pull mistral` (or your desired model), and that `assistant.py` uses the correct name; the check below shows one way to verify.
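To confirm which models your local server actually has, you can query Ollama's documented `GET /api/tags` endpoint (error handling kept minimal here):

```python
import requests

def installed_models() -> list[str]:
    """List the model names known to the local Ollama server."""
    resp = requests.get("http://localhost:11434/api/tags", timeout=10)
    resp.raise_for_status()
    return [m["name"] for m in resp.json().get("models", [])]

if __name__ == "__main__":
    print(installed_models())  # the MODEL value in assistant.py should appear here
```

Running `ollama list` on the command line shows the same information.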