sokowirowka.dev
my daily tech stack
A short, opinionated tour of the tools I work with every day: Claude Code, Neovim,
and a local-AI stack, plus a few notes on getting more out of Claude Code.
If you take one thing from this page: run `/insights` first.
claude code
- Start with `/insights`. It reports your token spend, tool usage, and session length across the last month, a useful baseline before changing skills, hooks, or your `CLAUDE.md`.
- Skills I use most. `superpowers:brainstorming` for turning "build X" into a written spec before code is written. `solana-dev` as a reference playbook for Solana wallet and RPC patterns.
- Local docs index before starting work. I use `lv-local`, my own MCP server (going public once I'm happy with it), to embed the relevant docs and books, then call `search_code` from inside Claude Code.
- Three agents for non-trivial work. Executor writes the code, auditor reviews it against the spec, tester runs and extends the test suite. Each agent gets a narrower context.
- Audit, cross-check, fix, re-grade. Run the audit, have Claude cross-check each finding against the source, fix in order CRITICAL → HIGH → MED → LOW with one commit per group, re-run tests after each tier, then re-grade the audit doc.
- Eval-driven loops for LLM code. For voice agents, parsers, classifiers: write the eval harness first, then iterate against pass-rate.
- Smoke-test the real boundary. `cargo check` and unit tests don't cover integration; run the actual HTTP / MCP / runtime path before marking a task done.
- Lesser-known commands worth knowing. `/rc` (`/remote-control`) lets external scripts or scheduled agents push prompts into a running session. `/compact` manually compacts context before auto-truncation. `/doctor` when auth, MCP, or hooks feel off. `/cost` mid-session for token spend.
- Hooks I keep around. `UserPromptSubmit` for project-specific reminders. `PostToolUse` matched on `Edit|Write` to run `cargo check` and `cargo clippy` after edits, so compile and lint errors surface before the next prompt. A `Stop` hook to run the test suite at end of session.
- Loop. plan → execute → verify (`cargo check && cargo clippy && cargo test`) → commit.
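For reference, the hooks above live in Claude Code's `settings.json`. Roughly this shape, written from memory of the hooks schema, so treat it as a sketch and check the current docs (and `/doctor`) rather than pasting blindly:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "cargo check && cargo clippy" }
        ]
      }
    ],
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "cargo test" }
        ]
      }
    ]
  }
}
```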
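The eval-driven loop above can be sketched as a tiny harness. Everything here is hypothetical (the case set, the `classify` stub); the point is the shape: fixed cases, a function under test, and a pass rate to iterate against.

```python
# Minimal eval-harness sketch for "write the eval first, then iterate".
# `classify` is a stand-in for whatever LLM-backed function is under test.

def pass_rate(cases, fn):
    """Run fn over (input, expected) pairs; return the fraction that pass."""
    passed = sum(1 for inp, expected in cases if fn(inp) == expected)
    return passed / len(cases)

# Hypothetical eval set for an intent classifier.
CASES = [
    ("what's my balance", "balance"),
    ("send 2 SOL to bob", "transfer"),
    ("cancel that", "cancel"),
]

def classify(text):
    # Placeholder: swap in the real model call, then iterate
    # on prompt/model until pass_rate is acceptable.
    if "send" in text:
        return "transfer"
    if "balance" in text:
        return "balance"
    return "cancel"

if __name__ == "__main__":
    print(f"pass rate: {pass_rate(CASES, classify):.0%}")
```

The harness stays fixed while the prompt or model changes underneath it, which is what makes the loop measurable.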
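The verify step of the loop is just a fail-fast chain. A minimal sketch, with the runner injected so the chain logic itself is testable; the default runner shells out to cargo for real use:

```python
import subprocess

# The verify chain from the loop: check, then lint, then test.
CHAIN = [["cargo", "check"], ["cargo", "clippy"], ["cargo", "test"]]

def verify(commands=CHAIN, run=None):
    """Run commands in order; stop at the first failure. True if all pass."""
    run = run or (lambda cmd: subprocess.run(cmd).returncode)
    for cmd in commands:
        if run(cmd) != 0:
            return False
    return True
```

Only if `verify()` comes back clean does the commit happen.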
neovim
- How I landed here. VS Code → Cursor → JetBrains → Zed → Neovim. They all worked; Neovim is the one I find most satisfying day to day.
- From scratch with `lazy.nvim`. No distro. `init.lua` plus a `lua/plugins/*` tree, one file per concern. Easier to reason about than LazyVim/AstroNvim once you know what you want.
- Plugins I actually use. `telescope` for fuzzy-find, `nvim-treesitter` for syntax, `mason` + `nvim-lspconfig` for LSP wrangling, `nvim-cmp` for completion, `hop.nvim` for motion, `trouble.nvim` for diagnostics.
- Rust gets its own setup. `rustaceanvim` + `crates.nvim`, rust-analyzer with sane defaults and inline crate version hints in `Cargo.toml`.
- AI lives inside the editor too. `codecompanion.nvim` wired to a local Qwen 3.6 27B over an OpenAI-compatible adapter. Inline prompts, chat buffer, no cloud round-trip.
- Sits next to Claude Code, doesn't replace it. Terminal split for `claude`, nvim for the actual editing. Two tools, one job each.
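One file per concern looks like this in practice: each file under `lua/plugins/` returns a lazy.nvim plugin spec. A minimal sketch (the keymaps are illustrative, not my actual bindings):

```lua
-- lua/plugins/telescope.lua: one plugin, one file
return {
  "nvim-telescope/telescope.nvim",
  dependencies = { "nvim-lua/plenary.nvim" },
  keys = {
    { "<leader>ff", "<cmd>Telescope find_files<cr>", desc = "Find files" },
    { "<leader>fg", "<cmd>Telescope live_grep<cr>",  desc = "Live grep" },
  },
}
```

lazy.nvim picks up every spec in the directory, so adding or removing a tool is adding or deleting one file.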
local ai
- Qwen 3.6 27B on `llama-server`. One process, OpenAI-compatible endpoint at `127.0.0.1:8081`, 131k context. Used as the default model for everything that doesn't need frontier-tier reasoning.
- `lv-local` for RAG over my own docs. My MCP server indexes Markdown, PDFs, EPUBs, and source trees into a local embeddings store, then exposes `search_code` over MCP. I point it at course notes, books like Clean Architecture, and project repos.
- Same model in Neovim via codecompanion. `codecompanion.nvim` talks to the local `llama-server` over an OpenAI-compatible adapter, so chat and inline prompts hit the same process I'm already running: no duplicated weights, no second runtime.
- Why local. No per-token cost, no data leaves the machine, and it's fast enough for short prompts and inline edits.
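Anything that speaks the OpenAI chat API can reuse that one process. A minimal sketch against the local endpoint; the port is from this setup, and the `model` field is a placeholder since `llama-server` serves whatever model it was started with:

```python
import json
import urllib.request

URL = "http://127.0.0.1:8081/v1/chat/completions"

def chat_payload(prompt, model="qwen", temperature=0.2):
    """Build a standard OpenAI-style chat completion request body."""
    return {
        "model": model,  # placeholder; the server runs a single model anyway
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def ask(prompt):
    """POST the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        URL,
        data=json.dumps(chat_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Calling `ask(...)` assumes `llama-server` is already listening on 8081; `codecompanion.nvim` is doing essentially this same request under the hood.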
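Since `lv-local` isn't public, here is only the core retrieval idea it builds on, not its actual code: embed chunks, rank by cosine similarity, return the top hits. The bag-of-words `embed` is a toy stand-in for a real embedding model:

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words term counts (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, chunks, k=3):
    """Rank indexed chunks against the query, best match first."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

# Hypothetical indexed chunks standing in for docs, books, and repos.
docs = [
    "solana rpc endpoints and wallet signing patterns",
    "clean architecture: dependencies point inward",
    "nvim lsp configuration with mason",
]
print(search("how do I sign with a solana wallet", docs, k=1))
```

A real index swaps in model embeddings and a vector store, and wraps `search` as the `search_code` MCP tool; the ranking loop is the same.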