I created Gokin as a companion to Claude Code. When my Claude Code limits ran out, I needed a tool that could:
- Write projects from scratch — Gokin handles the heavy lifting of initial development
- Save money — GLM-4 costs ~$3/month vs Claude Code’s ~$100/month
- Stay secure — I don’t trust Chinese AI company CLIs with my code, so I built my own
```
Gokin (GLM-4 / Gemini Flash 3)      →      Claude Code (Claude Opus 4.5)

Write code from scratch                    Polish and refine the code
Bulk file operations                       Complex architectural decisions
Repetitive tasks                           Code review and optimization
```
| Tool | Cost | Best For |
|---|---|---|
| Gokin + Ollama | Free (local) | Privacy-focused, offline development |
| Gokin + GLM-4 | ~$3/month | Initial development, bulk operations |
| Gokin + DeepSeek | ~$1/month | Coding tasks, great value |
| Gokin + Gemini Flash 3 | Free tier available | Fast iterations, prototyping |
| Claude Code | ~$100/month | Final polish, complex reasoning |
Note: Chinese models are currently behind frontier models like Claude, but they’re improving rapidly. For best performance, use Gokin with Gemini Flash 3 — it’s fast, capable, and has a generous free tier.
- File Operations — Read, create, edit, copy, move, delete files and directories (including PDF, images, Jupyter notebooks)
- Shell Execution — Run commands with timeout, background execution, sandbox mode
- Search — Glob patterns, regex grep (with regex replacement in edit), semantic search with embeddings
- Google Gemini — Gemini 3 Pro/Flash, free tier available
- DeepSeek — Excellent coding model, very affordable (~$1/month)
- GLM-4 — Cost-effective Chinese model (~$3/month)
- Ollama — Local LLMs (Llama, Qwen, DeepSeek, CodeLlama), free & private
- Multi-Agent System — Specialized agents (Explore, Bash, Plan, General) with adaptive delegation
- Tree Planner — Advanced planning with Beam Search, MCTS, A* algorithms
- Context Predictor — Predicts needed files based on access patterns
- Semantic Search — Find code by meaning, not just keywords
- Git Integration — Status, add, commit, pull request, blame, diff, log
- Task Management — Todo list, background tasks
- Memory System — Remember information between sessions
- Sessions — Save and restore conversation state
- Undo/Redo — Revert file changes (including copy, move, delete operations)
- MCP Support — Connect to external MCP servers for additional tools
- Custom Agent Types — Register your own specialized agents
- Permission System — Control which operations require approval
- Hooks — Automate actions (pre/post tool, on error, on start/exit)
- Themes — Light and dark mode
- GOKIN.md — Project-specific instructions
# Clone the repository
git clone https://github.com/ginkida/gokin.git
cd gokin
# Build
go build -o gokin ./cmd/gokin
# Install via Go (recommended for macOS/Linux)
# This installs the binary to ~/go/bin
go install ./cmd/gokin
# Make sure ~/go/bin is in your PATH:
# echo 'export PATH=$PATH:$(go env GOPATH)/bin' >> ~/.zshrc
# source ~/.zshrc
# Install to system PATH (optional)
sudo mv gokin /usr/local/bin/
- Go 1.23+
- One of:
  - Google account with Gemini subscription (OAuth login)
  - Google Gemini API key (free tier available)
  - DeepSeek API key
  - GLM-4 API key
  - Ollama installed locally (no API key needed)
Option A: OAuth (recommended for Gemini subscribers)
gokin
> /oauth-login
# Browser opens for Google authentication
Option B: API Key
# Get your free Gemini API key at: https://aistudio.google.com/apikey
# Via environment variable
export GEMINI_API_KEY="your-api-key"
# Or via command in the app
gokin
> /login gemini your-api-key
# In project directory
cd /path/to/your/project
gokin
> Hello! Tell me about this project's structure
> Find all files with .go extension
> Create a function to validate email
| Provider | Models | Cost | Best For |
|---|---|---|---|
| Gemini | gemini-3-flash-preview, gemini-3-pro-preview | Free tier + paid | Fast iterations, prototyping |
| DeepSeek | deepseek-chat, deepseek-reasoner | ~$1/month | Coding tasks, reasoning |
| GLM | glm-4.7 | ~$3/month | Budget-friendly development |
| Ollama | Any model from `ollama list` | Free (local) | Privacy, offline, custom models |
| Preset | Provider | Model | Use Case |
|---|---|---|---|
| `fast` | Gemini | gemini-3-flash-preview | Quick responses |
| `creative` | Gemini | gemini-3-pro-preview | Complex tasks |
| `coding` | GLM | glm-4.7 | Budget coding |
# Via environment
export GOKIN_BACKEND="gemini" # or "deepseek", "glm", or "ollama"
# Via config.yaml
model:
  provider: "gemini"
  name: "gemini-3-flash-preview"
  preset: "fast"  # or use preset instead

# For Ollama (local)
model:
  provider: "ollama"
  name: "llama3.2"  # Use exact name from 'ollama list'
Ollama allows you to run LLMs locally without any API keys or internet connection.
# 1. Install Ollama (https://ollama.ai)
curl -fsSL https://ollama.ai/install.sh | sh
# 2. Start Ollama server
ollama serve
# 3. Pull a model (see https://ollama.ai/library for available models)
ollama pull llama3.2 # Meta's Llama 3.2
ollama pull qwen2.5-coder # Alibaba's coding model
ollama pull deepseek-coder-v2 # DeepSeek coding model
ollama pull codellama # Meta's Code Llama
ollama pull mistral # Mistral 7B
# 4. List installed models
ollama list
# 5. Run Gokin with Ollama
gokin --model llama3.2
# or set in config.yaml
Use the exact model name from `ollama list`:
$ ollama list
NAME SIZE
llama3.2:latest 2.0 GB
qwen2.5-coder:7b 4.7 GB
deepseek-coder-v2:16b 8.9 GB
Then use it in Gokin:
gokin --model llama3.2
gokin --model qwen2.5-coder:7b
gokin --model deepseek-coder-v2:16b
Note: Tool calling support varies by model. Models like Llama 3.1+, Qwen 2.5+, and Mistral have good tool support.
For remote Ollama servers (e.g., on a GPU server):
# config.yaml
api:
  ollama_base_url: "http://gpu-server:11434"
  ollama_key: "optional-api-key"  # If server requires auth
Or via environment:
export OLLAMA_HOST="http://gpu-server:11434"
export OLLAMA_API_KEY="optional-api-key"
Use Ollama Cloud to run models without local GPU:
# 1. Sign in to Ollama Cloud
ollama signin
# 2. Set your API key
export OLLAMA_API_KEY="your_api_key"
# 3. Run Gokin with cloud endpoint
gokin --model llama3.2
Or configure in config.yaml:
api:
  ollama_base_url: "https://ollama.com"
  ollama_key: "your_api_key"
model:
  provider: ollama
  name: "llama3.2"
Note: Ollama Cloud offloads processing to cloud servers — no local GPU required.
All commands start with `/`:
| Command | Description |
|---|---|
| `/help [command]` | Show help |
| `/clear` | Clear conversation history |
| `/sessions` | List saved sessions |
| `/save [name]` | Save current session |
| `/resume` | Restore session |
| Command | Description |
|---|---|
| `/compact` | Force context compression |
| `/cost` | Show token usage and cost |
| Command | Description |
|---|---|
| `/semantic-stats` | Show index statistics |
| `/semantic-reindex` | Force reindex |
| `/semantic-cleanup` | Clean up old projects |
| Command | Description |
|---|---|
| `/undo` | Undo last file change |
| Command | Description |
|---|---|
| `/commit [-m message]` | Create commit |
| `/pr [--title title]` | Create pull request |
| Command | Description |
|---|---|
| `/config` | Show current configuration |
| `/doctor` | Check environment |
| `/init` | Create GOKIN.md for project |
| `/model` | Change AI model |
| `/theme` | Switch UI theme |
| `/permissions` | Manage tool permissions |
| `/sandbox` | Toggle sandbox mode |
| `/update` | Check for and install updates |
| `/register-agent-type` | Register custom agent type |
| Command | Description |
|---|---|
| `/oauth-login` | Login via Google account (uses Gemini subscription) |
| `/login gemini` | Set Gemini API key |
| `/login deepseek` | Set DeepSeek API key |
| `/login glm` | Set GLM API key |
| `/login ollama` | Set Ollama API key (for remote servers) |
| `/logout` | Remove saved API key |
Note: OAuth login uses your Gemini subscription (not API credits). Ollama running locally doesn’t require an API key.
| Command | Description |
|---|---|
| `/browse` | Interactive file browser |
| `/copy` | Copy to clipboard |
| `/paste` | Paste from clipboard |
| `/stats` | Project statistics |
The AI has access to 50+ tools:
| Tool | Description |
|---|---|
| `read` | Read files (text, images, PDF, Jupyter notebooks) |
| `write` | Create and overwrite files |
| `edit` | Find and replace text in files (supports regex mode) |
| `copy` | Copy files and directories |
| `move` | Move or rename files and directories |
| `delete` | Delete files and directories (with safety checks) |
| `mkdir` | Create directories (supports recursive creation) |
| `diff` | Compare files |
| `batch` | Bulk operations (replace, rename, delete) on multiple files |
| Tool | Description |
|---|---|
| `glob` | Search files by pattern (with .gitignore support) |
| `grep` | Search content with regex |
| `list_dir` | Directory contents |
| `tree` | Tree structure |
| `semantic_search` | Find code by meaning using embeddings |
| `code_graph` | Analyze code dependencies and imports |
| Tool | Description |
|---|---|
| `bash` | Execute shell commands (timeout, background, sandbox) |
| `ssh` | Execute commands on remote servers |
| `kill_shell` | Stop background tasks |
| `env` | Access environment variables |
| Tool | Description |
|---|---|
| `git_status` | Repository status (modified, staged, untracked files) |
| `git_add` | Stage files for commit (supports patterns, `--all`, `--update`) |
| `git_commit` | Create commits (with message, `--all`, `--amend` options) |
| `git_log` | Commit history |
| `git_blame` | Line-by-line authorship |
| `git_diff` | Diff between branches/commits |
| Tool | Description |
|---|---|
| `web_fetch` | Fetch URL content |
| `web_search` | Search the internet |
| Tool | Description |
|---|---|
| `enter_plan_mode` | Start planning mode |
| `update_plan_progress` | Update plan step status |
| `get_plan_status` | Get current plan status |
| `exit_plan_mode` | Exit plan mode |
| Tool | Description |
|---|---|
| `todo` | Create and manage task list |
| `task` | Background task management |
| `task_output` | Get background task results |
| Tool | Description |
|---|---|
| `memory` | Persistent storage (remember/recall/forget) |
| `ask_user` | Ask user questions with options |
| Tool | Description |
|---|---|
| `refactor` | Pattern-based code refactoring |
| `pattern_search` | Search code patterns |
Configuration is stored in ~/.config/gokin/config.yaml:
api:
  gemini_key: ""       # Gemini API key (or via GEMINI_API_KEY)
  deepseek_key: ""     # DeepSeek API key (or via DEEPSEEK_API_KEY)
  glm_key: ""          # GLM API key (or via GLM_API_KEY)
  ollama_key: ""       # Ollama API key (optional, for remote servers)
  ollama_base_url: ""  # Ollama server URL (default: http://localhost:11434)
  backend: "gemini"    # gemini, deepseek, glm, or ollama

model:
  name: "gemini-3-flash-preview"  # Model name
  provider: "gemini"              # gemini, deepseek, glm, or ollama
  temperature: 1.0                # Temperature (0.0 - 2.0)
  max_output_tokens: 8192         # Max tokens in response
  custom_base_url: ""             # Custom API endpoint (for GLM)

tools:
  timeout: 2m          # Tool execution timeout
  bash:
    sandbox: true      # Sandbox for commands
    blocked_commands:  # Blocked commands
      - "rm -rf /"
      - "mkfs"

ui:
  stream_output: true       # Streaming output
  markdown_rendering: true  # Markdown rendering
  show_tool_calls: true     # Show tool calls
  show_token_usage: true    # Show token usage

context:
  max_input_tokens: 0           # Input token limit (0 = default)
  warning_threshold: 0.8        # Warning threshold (80%)
  summarization_ratio: 0.5      # Compress to 50%
  tool_result_max_chars: 10000  # Max result characters
  enable_auto_summary: true     # Auto-summarization

permission:
  enabled: true          # Permission system
  default_policy: "ask"  # allow, ask, deny
  rules:                 # Tool rules
    read: "allow"
    write: "ask"
    bash: "ask"

plan:
  enabled: true           # Planning mode
  require_approval: true  # Require approval

hooks:
  enabled: false  # Hooks system
  hooks: []       # Hook list

memory:
  enabled: true      # Memory system
  max_entries: 1000  # Max entries
  auto_inject: true  # Auto-inject into prompt

# Semantic search
semantic:
  enabled: true           # Enable semantic search
  index_on_start: true    # Auto-index on start
  chunk_size: 500         # Characters per chunk
  chunk_overlap: 50       # Overlap between chunks
  max_file_size: 1048576  # Max file size (1MB)
  cache_dir: "~/.config/gokin/semantic_cache"  # Index cache
  cache_ttl: 168h         # Cache TTL (7 days)
  auto_cleanup: true      # Auto-cleanup old projects (>30 days)
  index_patterns:         # Indexed files
    - "*.go"
    - "*.md"
    - "*.yaml"
    - "*.yml"
    - "*.json"
    - "*.ts"
    - "*.tsx"
    - "*.js"
    - "*.py"
  exclude_patterns:       # Excluded files
    - "vendor/"
    - "node_modules/"
    - ".git/"
    - "*.min.js"
    - "*.min.css"

logging:
  level: "info"  # debug, info, warn, error
| Variable | Description |
|---|---|
| `GEMINI_API_KEY` | Gemini API key |
| `GOKIN_GEMINI_KEY` | Gemini API key (alternative) |
| `DEEPSEEK_API_KEY` | DeepSeek API key |
| `GOKIN_DEEPSEEK_KEY` | DeepSeek API key (alternative) |
| `GLM_API_KEY` | GLM API key |
| `GOKIN_GLM_KEY` | GLM API key (alternative) |
| `OLLAMA_API_KEY` | Ollama API key (for remote servers) |
| `GOKIN_OLLAMA_KEY` | Ollama API key (alternative) |
| `OLLAMA_HOST` | Ollama server URL (default: http://localhost:11434) |
| `GOKIN_MODEL` | Model name (overrides config) |
| `GOKIN_BACKEND` | Backend: gemini, deepseek, glm, or ollama |
Recommended: Use environment variables instead of config.yaml.
# Add to ~/.bashrc or ~/.zshrc
export GEMINI_API_KEY="your-api-key"
# Or for DeepSeek
export DEEPSEEK_API_KEY="your-api-key"
# Or for GLM
export GLM_API_KEY="your-api-key"
Gokin supports MCP — a protocol for connecting AI assistants to external tools and data sources. This allows you to extend Gokin with tools from MCP servers.
Add MCP servers to ~/.config/gokin/config.yaml:
mcp:
  enabled: true
  servers:
    # GitHub integration
    - name: github
      transport: stdio
      command: npx
      args: ["-y", "@modelcontextprotocol/server-github"]
      env:
        GITHUB_PERSONAL_ACCESS_TOKEN: "${GITHUB_TOKEN}"
      auto_connect: true
      timeout: 30s

    # Filesystem access
    - name: filesystem
      transport: stdio
      command: npx
      args: ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
      auto_connect: true

    # Brave Search
    - name: brave-search
      transport: stdio
      command: npx
      args: ["-y", "@modelcontextprotocol/server-brave-search"]
      env:
        BRAVE_API_KEY: "${BRAVE_API_KEY}"
      auto_connect: true
| Option | Description |
|---|---|
| `name` | Unique server identifier |
| `transport` | Transport type: stdio or http |
| `command` | Command to start the server (for stdio) |
| `args` | Command arguments |
| `env` | Environment variables (supports `${VAR}` expansion) |
| `url` | Server URL (for http transport) |
| `auto_connect` | Connect automatically on startup |
| `timeout` | Request timeout |
| `tool_prefix` | Prefix for tool names (default: server name) |
- Startup: Gokin connects to configured MCP servers
- Tool Discovery: Server tools are registered as Gokin tools
- Execution: When AI uses an MCP tool, Gokin forwards the call to the server
- Response: Results are returned to AI
- Environment Isolation: MCP servers run with sanitized environment (no API keys leaked)
- Secret Expansion: Use `${VAR}` syntax to inject secrets from the environment
- Permission System: MCP tools go through Gokin’s permission system
| Server | Package | Description |
|---|---|---|
| GitHub | `@modelcontextprotocol/server-github` | GitHub API integration |
| Filesystem | `@modelcontextprotocol/server-filesystem` | File system access |
| Brave Search | `@modelcontextprotocol/server-brave-search` | Web search |
| Puppeteer | `@modelcontextprotocol/server-puppeteer` | Browser automation |
| Slack | `@modelcontextprotocol/server-slack` | Slack integration |
Find more servers at: https://github.com/modelcontextprotocol/servers
One of the main reasons I built Gokin was security. I don’t trust Chinese AI company CLIs with access to my codebase. With Gokin, you control everything locally.
Gokin automatically masks sensitive information in logs and AI outputs:
# What you type or what appears in files:
export GEMINI_API_KEY="AIzaSyD1234567890abcdefghijk"
password: "super_secret_password_123"
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
# What Gokin shows to AI and logs:
export GEMINI_API_KEY="[REDACTED]"
password: "[REDACTED]"
Authorization: Bearer [REDACTED]
| Type | Pattern Example |
|---|---|
| API Keys | api_key: sk-1234..., GEMINI_API_KEY=AIza... |
| AWS Credentials | AKIA..., aws_secret_key=... |
| GitHub Tokens | ghp_..., gho_..., ghu_... |
| Stripe Keys | sk_live_..., sk_test_... |
| JWT Tokens | eyJhbG... |
| Database URLs | postgres://user:password@host |
| Private Keys | -----BEGIN RSA PRIVATE KEY----- |
| Slack/Discord | Webhook URLs and bot tokens |
| Bearer Tokens | Authorization: Bearer ... |
When showing API keys in status or logs, Gokin masks the middle:
// Input: "sk-1234567890abcdef"
// Output: "sk-1****cdef"
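A Go sketch of that middle-masking rule; the exact kept widths (four characters on each side) are an assumption inferred from the example above:

```go
package main

import "fmt"

// maskKey hides the middle of a credential, keeping only the first
// and last four characters. Keys too short to mask safely are
// redacted entirely. Widths are inferred from the example above.
func maskKey(key string) string {
	if len(key) <= 8 {
		return "[REDACTED]"
	}
	return key[:4] + "****" + key[len(key)-4:]
}

func main() {
	fmt.Println(maskKey("sk-1234567890abcdef")) // sk-1****cdef
}
```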
- Bash commands run with sanitized environment
- API keys are excluded from subprocesses
- Dangerous commands are blocked by default
- Config directory uses `0700` permissions (owner-only access)
- Config file uses `0600` permissions (owner read/write only)
- Atomic file writes prevent corruption
export GOKIN_MODEL="gemini-3-flash-preview"
export GOKIN_BACKEND="gemini" # or "deepseek", "glm", "ollama"
Create a GOKIN.md file in the project root for context:
Example content:
# Project Instructions for Gokin
## Project Overview
This is a Go web application using Gin framework.
## Structure
- `cmd/` - entry points
- `internal/` - internal packages
- `api/` - HTTP handlers
## Code Standards
- Use gofmt
- Comments in English
- Note: tests are not used in this project
go build -o app ./cmd/app
Default settings:
| Policy | Tools |
|---|---|
| `allow` | read, glob, grep, tree, diff, env, list_dir, todo, web_fetch, web_search |
| `ask` | write, edit, bash |
When permission is requested, available options:
- Allow — allow once
- Allow for session — allow until session ends
- Deny — deny once
- Deny for session — deny until session ends
AI can remember information between sessions:
> Remember that this project uses PostgreSQL 15
> What database do we use?
Memory is stored in ~/.local/share/gokin/memory/.
Gokin supports semantic code search using embeddings. This allows finding code that is conceptually similar to the query, even if exact words don’t match.
- Indexing: Project is indexed on first launch
- Chunking: Files are split into parts (chunks)
- Embeddings: Each chunk gets a vector representation
- Caching: Index is saved to `~/.config/gokin/semantic_cache/`
- Search: Most similar chunks are found for queries
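The search step amounts to a nearest-neighbor lookup over chunk embeddings. Here is a toy Go sketch using cosine similarity; vector sizes and storage details are assumptions for illustration:

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// chunk pairs a piece of source text with its embedding vector.
type chunk struct {
	Text string
	Vec  []float64
}

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// topK returns the k chunks most similar to the query embedding.
func topK(query []float64, chunks []chunk, k int) []chunk {
	sorted := append([]chunk(nil), chunks...)
	sort.Slice(sorted, func(i, j int) bool {
		return cosine(query, sorted[i].Vec) > cosine(query, sorted[j].Vec)
	})
	if k > len(sorted) {
		k = len(sorted)
	}
	return sorted[:k]
}

func main() {
	chunks := []chunk{
		{"func validateJWT(...)", []float64{0.9, 0.1}},
		{"func renderHTML(...)", []float64{0.1, 0.9}},
	}
	best := topK([]float64{1, 0}, chunks, 1)
	fmt.Println(best[0].Text) // func validateJWT(...)
}
```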
Each project is stored separately:
~/.config/gokin/semantic_cache/
├── a1b2c3d4e5f6g7h8/ # Project ID (SHA256 of path)
│ ├── embeddings.gob # Embeddings cache
│ ├── index.json # Index metadata
│ └── metadata.json # Project info
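Deriving the per-project directory name as described (SHA-256 of the project path) could look like the sketch below; the 16-character truncation is an assumption based on the example ID above:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// projectID hashes the project path so each project gets a stable,
// filesystem-safe cache directory name. The truncation to 16 hex
// characters is an assumption inferred from the example layout.
func projectID(path string) string {
	sum := sha256.Sum256([]byte(path))
	return hex.EncodeToString(sum[:])[:16]
}

func main() {
	fmt.Println(projectID("/home/me/projects/gokin"))
}
```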
> Find functions for JWT token validation
> Where is user authorization implemented?
> Show all code related to payments
| Command | Description |
|---|---|
| `/semantic-stats` | Index statistics (files, chunks, size) |
| `/semantic-reindex` | Force reindexing |
| `/semantic-cleanup` | Clean up old projects |
semantic_search — semantic search
{
  "query": "how are API errors handled",
  "top_k": 10  // number of results
}
semantic_cleanup — cache management
{
  "action": "list",          // "list" (show all projects), "clean" (remove >30 days), or "remove"
  "project_id": "a1b2c3d4",  // used with "remove"
  "older_than_days": 30      // used with "clean"
}
In config.yaml:
semantic:
  enabled: true         # Enable feature
  index_on_start: true  # Index on start
  chunk_size: 500       # Chunk size (characters)
  cache_ttl: 168h       # Cache TTL (7 days)
  auto_cleanup: true    # Auto-cleanup old projects
  index_patterns:       # What to index
    - "*.go"
    - "*.md"
  exclude_patterns:     # What to exclude
    - "vendor/"
    - "node_modules/"
Concept search:
> Where does error logging happen?
> Find code for sending email notifications
> Show all functions for database operations
Combined search:
> Find tests for authenticateUser function
> Show all Gin middleware
Automation via shell commands:
hooks:
  enabled: true
  hooks:
    - name: "Log writes"
      type: "post_tool"
      tool_name: "write"
      command: "echo 'File written: ${WORK_DIR}' >> /tmp/gokin.log"
      enabled: true
    - name: "Format on save"
      type: "post_tool"
      tool_name: "write"
      command: "gofmt -w ${WORK_DIR}/*.go 2>/dev/null || true"
      enabled: true
Hook types:
- `pre_tool` — before execution
- `post_tool` — after successful execution
- `on_error` — on error
- `on_start` — on session start
- `on_exit` — on exit
AI can create plans and request approval:
- AI analyzes the task
- Creates plan with steps
- Shows plan to user
- Waits for approval
- Executes step by step with reports
For complex tasks, Gokin uses advanced planning algorithms:
| Algorithm | Description |
|---|---|
| Beam Search | Explores multiple paths, keeps best candidates (default) |
| MCTS | Monte Carlo Tree Search for exploration/exploitation |
| A* | Heuristic-based optimal path finding |
Configure in config.yaml:
plan:
  algorithm: "beam"  # or "mcts", "astar"
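To illustrate the idea behind beam search (keep only the best `width` partial plans at each depth), here is a toy Go sketch; Gokin's actual scoring and expansion are of course far richer:

```go
package main

import (
	"fmt"
	"sort"
)

// plan is a scored sequence of steps.
type plan struct {
	Steps []string
	Score float64
}

// beamSearch expands every plan in the beam, then keeps only the
// `width` highest-scoring candidates, repeating for `depth` rounds.
func beamSearch(start plan, expand func(plan) []plan, width, depth int) plan {
	beam := []plan{start}
	for d := 0; d < depth; d++ {
		var next []plan
		for _, p := range beam {
			next = append(next, expand(p)...)
		}
		if len(next) == 0 {
			break
		}
		sort.Slice(next, func(i, j int) bool { return next[i].Score > next[j].Score })
		if len(next) > width {
			next = next[:width]
		}
		beam = next
	}
	return beam[0]
}

func main() {
	// Toy expansion: step "a" adds 2 points, step "b" adds 1.
	expand := func(p plan) []plan {
		return []plan{
			{Steps: append(append([]string(nil), p.Steps...), "a"), Score: p.Score + 2},
			{Steps: append(append([]string(nil), p.Steps...), "b"), Score: p.Score + 1},
		}
	}
	best := beamSearch(plan{}, expand, 3, 4)
	fmt.Println(best.Steps, best.Score) // [a a a a] 8
}
```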
Gokin uses specialized agents for different tasks:
| Agent | Purpose |
|---|---|
| ExploreAgent | Codebase exploration and structure analysis |
| BashAgent | Command execution specialist |
| PlanAgent | Task planning and decomposition |
| GeneralAgent | General-purpose tasks |
Agents can:
- Coordinate with each other via messenger
- Share memory between sessions
- Delegate subtasks to specialized agents
- Self-reflect and correct errors
- Learn from delegation success/failure (adaptive metrics)
Register your own specialized agents:
/register-agent-type coder "Coding specialist" --tools read,write,edit,bash --prompt "You are a coding expert"
Or in config.yaml:
agents:
  custom_types:
    - name: "coder"
      description: "Coding specialist"
      tools: ["read", "write", "edit", "bash"]
      system_prompt: "You are a coding expert focused on clean code."
Long commands can run in background:
> Run make build in background
> Check task status
| Path | Contents |
|---|---|
| `~/.config/gokin/config.yaml` | Configuration |
| `~/.local/share/gokin/sessions/` | Saved sessions |
| `~/.local/share/gokin/memory/` | Memory data |
| Key | Action |
|---|---|
| `Enter` | Send message |
| `Ctrl+C` | Interrupt operation / Exit |
| `Ctrl+P` | Open command palette |
| `Ctrl+G` | Toggle select mode (freezes viewport, enables native text selection) |
| `Option+C` | Copy last AI response to clipboard (macOS, requires “Option sends Esc+” in terminal) |
| `↑` / `↓` | Input history |
| `Tab` | Autocomplete |
Text selection: `Ctrl+G` switches to select mode; the viewport freezes so you can drag to select text and copy with `Cmd+C`. Press `Ctrl+G` again to return to scroll mode. For a quick copy of the last AI response, use `Option+C`.
> Explain what the ProcessOrder function does in order.go
> Find all places where this function is used
> Are there potential performance issues?
> Rename getUserData function to fetchUserProfile in all files
> Extract repeated error handling code into a separate function
> Create a REST API endpoint to get user list
> Add input validation
> Write unit tests for this endpoint
> Show changes since last commit
> /commit -m "feat: add user validation"
> /pr --title "Add user validation feature"
> The app crashes on startup, here's the error: [error]
> Check logs and find the cause
> Fix the problem
gokin/
├── cmd/gokin/ # Entry point
├── internal/
│ ├── app/ # Application orchestrator
│ ├── agent/ # Multi-agent system
│ │ ├── agent.go # Base agent
│ │ ├── tree_planner.go # Tree planning (Beam, MCTS, A*)
│ │ ├── coordinator.go # Agent coordination
│ │ ├── reflection.go # Self-correction
│ │ └── shared_memory.go # Inter-agent memory
│ ├── client/ # AI providers
│ │ ├── gemini.go # Google Gemini
│ │ ├── anthropic.go # DeepSeek & GLM-4 (Anthropic-compatible)
│ │ └── ollama.go # Ollama (local LLMs)
│ ├── mcp/ # MCP (Model Context Protocol)
│ │ ├── client.go # MCP client
│ │ ├── transport.go # Stdio/HTTP transports
│ │ ├── manager.go # Multi-server management
│ │ └── tool.go # MCP tool wrapper
│ ├── tools/ # 50+ AI tools
│ │ ├── read.go, write.go, edit.go
│ │ ├── copy.go, move.go, delete.go, mkdir.go
│ │ ├── bash.go, grep.go, glob.go
│ │ ├── git_status.go, git_add.go, git_commit.go
│ │ ├── git_log.go, git_blame.go, git_diff.go
│ │ ├── semantic_*.go # Semantic search
│ │ ├── plan_mode.go # Planning tools
│ │ └── ...
│ ├── commands/ # Slash commands
│ ├── context/ # Context management & compression
│ ├── security/ # Secret redaction, path validation
│ ├── permission/ # Permission system
│ ├── hooks/ # Automation hooks
│ ├── memory/ # Persistent memory
│ ├── semantic/ # Embeddings & search
│ ├── ui/ # TUI (Bubble Tea)
│ │ ├── tui.go # Main model
│ │ ├── themes.go # Light/dark themes
│ │ └── ...
│ └── config/ # Configuration
├── go.mod
└── README.md
/auth-status
/logout
/login --oauth --client-id=YOUR_ID
or
Check ~/.config/gokin/config.yaml:
permission:
  enabled: true
  default_policy: "ask"
MIT License

