## Introduction
AI pair programming transforms your terminal into an intelligent coding partner. Instead of consulting documentation separately or switching between IDE and browser, you get real-time code suggestions, debugging help, and architectural guidance directly in your workflow.
**Key statistics (commonly cited figures):**

- AI pair programming reduces coding time by 40-55%
- Developers complete 25% more tasks with AI assistance
- Average suggestion acceptance rate: 75-85%
- Local LLMs provide privacy for sensitive code
## Why AI Pair Programming in the Terminal?

### The Traditional Workflow Problem

```
❌ Traditional: Context-Switching Hell

1. Write code
2. Get stuck on an error
3. Google the error
4. Browse Stack Overflow
5. Read documentation
6. Apply fix
7. Repeat

Time lost: 15-30 minutes per issue
```
### The AI Pair Programming Workflow

```
✅ AI-Powered: Continuous Flow

1. Write code
2. Ask AI: "why am I getting this error?"
3. Get instant answer with fix
4. Apply and continue

Time lost: <1 minute per issue
```

### Benefits
| Aspect | Without AI | With AI Terminal |
|---|---|---|
| Error debugging | 15-30 min | 30 sec |
| Documentation lookup | 5-10 min | Instant |
| Code review | Manual | Instant |
| Learning new code | Slow | Accelerated |
| Privacy | N/A | Local options |
## Tool Overview

### Comparison Matrix
| Tool | Model | Privacy | Cost | Best For |
|---|---|---|---|---|
| Claude Code | Claude 3.5/4 | API-based | $15-20/mo | General coding |
| GitHub Copilot CLI | GPT-4o | API-based | $10/mo | GitHub ecosystem |
| Ollama + CLI | Local models | 100% local | Free | Privacy/sensitive code |
| Aider | Multiple | Local/API | Free | Refactoring |
| Codestral | Local | 100% local | Free | Fast local inference |
## Claude Code (Recommended)

Claude Code is Anthropic’s CLI tool for AI-assisted coding, and arguably the most capable terminal coding assistant available.
### Installation

```bash
# Install via npm (requires Node.js 18+)
npm install -g @anthropic-ai/claude-code

# Verify
claude --version
```
### Authentication

```bash
# First run: launch claude and follow the /login prompt,
# or authenticate with an API key instead:
export ANTHROPIC_API_KEY=sk-ant-...
# Get one at: https://console.anthropic.com/settings/keys
```
### Basic Usage

```bash
# Start an interactive session in the current project
claude

# Continue the most recent conversation
claude --continue

# One-shot, non-interactive query (print mode)
claude -p "summarize what src/main.py does"

# Pipe command output in for analysis
python test.py 2>&1 | claude -p "explain these test failures"
```
### Claude Code Commands

Slash commands are typed inside an interactive session, not passed as shell arguments:

```
/help      # Show all commands
/review    # Review code changes
/init      # Generate a CLAUDE.md project guide
/clear     # Clear the conversation history
/compact   # Compact the conversation to free up context
```

You can also define custom slash commands (for example, a project-specific /test, /refactor, or /audit) as Markdown files under .claude/commands/.
### Example Session

$ claude

You: I'm getting a KeyError when parsing JSON in Python

Claude: I'd be happy to help debug that. Could you share the relevant code snippet and the exact error message?

You: Here's the code:

```python
import json

data = json.loads(response.text)
print(data['items'][0]['name'])
```

And the error is: "KeyError: 'items'"

Claude: The error occurs because `data` doesn’t contain the key 'items'. This usually happens when:

- The API response structure is different than expected
- The API returned an error response
- The JSON structure is nested differently

Let me suggest a more robust approach:

```python
import json

data = json.loads(response.text)

# Debug: print the actual structure
print(json.dumps(data, indent=2))

# Safe access with .get()
items = data.get('items', [])
if items:
    print(items[0].get('name'))
else:
    print("No items found in response")
    print(f"Response status: {response.status_code}")
```

Want me to explain what’s happening or help debug further?
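The defensive pattern from the session generalizes into a small helper. This is an illustrative sketch (the `safe_get` name is ours, not from any library): it walks a nested structure one key at a time and returns a default instead of raising.

```python
import json

def safe_get(data, *keys, default=None):
    """Walk nested dicts/lists, returning default if any step is missing."""
    current = data
    for key in keys:
        if isinstance(current, dict):
            current = current.get(key)
        elif isinstance(current, list) and isinstance(key, int) and key < len(current):
            current = current[key]
        else:
            return default
        if current is None:
            return default
    return current

payload = json.loads('{"results": {"items": [{"name": "widget"}]}}')
print(safe_get(payload, "results", "items", 0, "name"))           # widget
print(safe_get(payload, "items", 0, "name", default="missing"))   # missing
```

A chain of `safe_get` calls never raises `KeyError`, `IndexError`, or `TypeError`, which makes it useful when exploring an unfamiliar API response.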
---
## GitHub Copilot CLI

### Installation

```bash
# Install the official GitHub CLI extension
gh extension install github/gh-copilot

# Authenticate
gh auth login
```

(The older standalone npm package, @githubnext/github-copilot-cli, is deprecated.)
### Usage

```bash
# Explain a command or error
gh copilot explain "cannot find module 'express'"
gh copilot explain "grep -E '^[\w\.-]+@[\w\.-]+\.\w+$' emails.txt"

# Get a command suggestion for a task
gh copilot suggest "run only the unit tests for auth.js"
```

Copilot CLI is command-oriented; for open-ended code generation (e.g. "create a Python function to calculate Fibonacci"), use Claude Code or Ollama instead.
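The email regex from the example above can be sanity-checked in a few lines. Note that this simplified pattern is fine for demos but real email validation is considerably more involved:

```python
import re

# The pattern from the gh copilot example
EMAIL_RE = re.compile(r"^[\w\.-]+@[\w\.-]+\.\w+$")

for candidate in ["dev@example.com", "first.last@sub.domain.io",
                  "not-an-email", "@missing.local"]:
    print(f"{candidate!r} -> {bool(EMAIL_RE.match(candidate))}")
```

Running quick checks like this before trusting an AI's explanation of a regex is a good habit: it catches both your misunderstanding and the model's.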
### Interactive Mode

```bash
# Run suggest without arguments for an interactive flow
gh copilot suggest

# It will:
# 1. Ask what kind of command you need (shell, gh, or git)
# 2. Prompt you to describe the task
# 3. Let you revise, copy, or execute the suggestion
```
## Ollama + Terminal Integration

Ollama runs large language models locally, keeping your code 100% private.
### Installation

```bash
# macOS
brew install ollama

# Linux
curl -fsSL https://ollama.com/install.sh | sh

# Verify
ollama --version
```
### Download Models

```bash
# Get a coding-focused model
ollama pull codestral       # Fast, good for code
ollama pull deepseek-coder
ollama pull llama3.2        # General purpose
ollama pull qwen2.5-coder   # Excellent for code

# List installed models
ollama list
```
### Terminal Integration with Ollama

```bash
# Direct interaction
ollama run codestral "write a Python decorator for logging"

# Pipe a file in as context
cat myfile.py | ollama run codestral "explain this code"

# Parameters like temperature are set inside the REPL
# (ollama run has no --temperature flag):
#   /set parameter temperature 0.3
```
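Ollama also exposes a local REST API (by default on `http://localhost:11434`), which makes it easy to script from any language. The sketch below builds a request for the documented `/api/generate` endpoint; the actual network call is left commented out so the snippet runs without a server:

```python
import json
import urllib.request

def build_generate_request(model, prompt, temperature=0.3):
    """Build an HTTP request for Ollama's /api/generate endpoint."""
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": False,                      # one JSON response, not a stream
        "options": {"temperature": temperature},
    }
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("codestral", "explain this code: print(1+1)")
print(json.loads(req.data)["model"])  # codestral

# With a local Ollama running, send it like this:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

This is how most editor plugins and shell wrappers talk to Ollama under the hood; the CLI `ollama run` is just a convenience front-end.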
### Best Ollama Setup for Coding

```bash
# Use codestral for code tasks
# (careful: this shadows the VS Code `code` launcher)
alias code='ollama run codestral'

# Use llama3.2 for general questions
alias ask='ollama run llama3.2'

# Quick one-liners
code "write a Rust struct with Serialize"
```
## Aider: AI Refactoring Assistant

Aider is designed specifically for AI-assisted code editing and refactoring inside a git repository.
### Installation

```bash
# Recommended: isolated install
pip install aider-install
aider-install

# Or install the package directly (note: the PyPI package is aider-chat, not aider)
pip install aider-chat
```
### Usage

```bash
# Start in a git repo (API key via flag or environment)
aider --openai-api-key sk-xxx

# Or use local models
aider --model ollama/codestral

# Edit multiple files
aider src/main.py src/utils.py

# Add a file as read-only context (aider won't edit it)
aider --read src/utils.py src/main.py
```
### Aider Commands

```
# /add <file> - Add a file to the chat
/add src/auth.py

# /drop <file> - Remove a file from the chat
/drop src/auth.py

# /diff - Show the changes made since the last message
/diff

# /commit - Commit changes to git
/commit

# /undo - Undo the last aider commit
/undo

# /web - Fetch web content into the chat
/web https://docs.example.com
```
### Example Aider Session

$ aider src/main.py
Added src/main.py to the chat. Type /help for commands.

You: Refactor this function to use async/await

Aider: I'll refactor the function. Here's the change:

```python
# Before (sync)
def fetch_data(url):
    response = requests.get(url)
    return response.json()

# After (async)
import asyncio
import aiohttp

async def fetch_data(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.json()

# Usage
async def main():
    data = await fetch_data('https://api.example.com/data')
    print(data)

asyncio.run(main())
```

Aider applies the edit and commits it automatically; use /diff to review the change and /undo to revert it.
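To see why the async version pays off, here is a self-contained sketch that simulates three slow I/O calls with `asyncio.sleep` instead of real HTTP (so it needs no network or aiohttp): run concurrently, they take about as long as the slowest one, not the sum.

```python
import asyncio
import time

async def fetch_data(url, delay=0.2):
    # Stand-in for a network request that takes `delay` seconds
    await asyncio.sleep(delay)
    return {"url": url, "ok": True}

async def main():
    urls = [f"https://api.example.com/item/{i}" for i in range(3)]
    start = time.perf_counter()
    # Run all three "requests" concurrently
    results = await asyncio.gather(*(fetch_data(u) for u in urls))
    elapsed = time.perf_counter() - start
    print(f"fetched {len(results)} urls in {elapsed:.2f}s")  # ~0.2s, not ~0.6s
    return elapsed

asyncio.run(main())
```

Sequential calls would cost roughly 0.6 s here; `asyncio.gather` overlaps the waits, which is exactly the win the refactored `aiohttp` version gets for real HTTP requests.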
---
## Building Your AI Terminal Workflow
### Recommended Setup
```bash
# 1. Install Claude Code (primary)
npm install -g @anthropic-ai/claude-code

# 2. Install Ollama (for privacy)
brew install ollama

# 3. Install Aider (for refactoring)
pip install aider-chat

# 4. Set up shell aliases
cat >> ~/.zshrc << 'EOF'
# AI Coding Assistants
alias code='ollama run codestral'   # shadows the VS Code launcher
alias ask='ollama run llama3.2'

# Quick AI commands
ai() {
  echo "$@" | ollama run llama3.2
}
codehelp() {
  echo "$@" | ollama run codestral
}
EOF
source ~/.zshrc
```
### Workflow Integration

```bash
# Daily workflow:

# 1. Start your day by reviewing what you worked on last
claude "what files did I last work on?"

# 2. While coding, get instant help
claude "explain what this function does"
claude "write a test for this"
claude "find the bug in this code"

# 3. Code review before committing: run /review inside a claude session

# 4. For privacy-sensitive code, use Ollama
ollama run codestral "refactor this to use less memory"
```
## Use Cases

### Debugging

```bash
# Paste the error (ideally the full traceback) and get an instant fix
claude "Fix this TypeError: 'NoneType' object has no attribute 'get'"

# Or use Aider
aider buggy_file.py
> Fix the TypeError on line 42
```
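The `'NoneType' object has no attribute 'get'` error from the example usually means an earlier lookup already returned `None`. A minimal sketch of the failure and the usual fix:

```python
config = {"database": None}  # e.g. a config section that exists but is empty

# Buggy: config.get("database") returns None, then .get("host") raises
try:
    host = config.get("database").get("host")
except AttributeError as exc:
    print(f"AttributeError: {exc}")

# Fix: default the intermediate lookup to an empty dict
host = (config.get("database") or {}).get("host", "localhost")
print(host)  # localhost
```

Understanding this pattern means you can verify the AI's proposed fix instead of accepting it blindly.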
### Writing Tests

```bash
# Generate tests for a file
claude "write pytest tests for auth.py"

# Or use Aider
aider src/utils.py
> Add unit tests for the parse_config function
```
### Learning a New Codebase

```bash
# Analyze a new project
claude "analyze this codebase structure"

# Explain specific files
claude "explain how the database connection works"

# Understand patterns
claude "what design patterns are used here?"
```
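When asking an AI about an unfamiliar codebase, a quick structural summary makes useful context to paste into the prompt. This is a minimal sketch (file-extension counts only; the `summarize_tree` helper is ours):

```python
from collections import Counter
from pathlib import Path

def summarize_tree(root):
    """Count files by extension under root, skipping hidden directories."""
    counts = Counter()
    for path in Path(root).rglob("*"):
        if path.is_file() and not any(part.startswith(".") for part in path.parts):
            counts[path.suffix or "(no ext)"] += 1
    return counts

# Print the ten most common file types in the current project
for ext, n in summarize_tree(".").most_common(10):
    print(f"{ext:12} {n}")
```

Feeding the model "this repo is 120 `.py` files, 30 `.sql`, 5 `.yaml`" up front tends to produce far more grounded answers than "analyze this codebase" alone.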
### Refactoring

```bash
# Use Aider for refactoring
aider src/legacy/
> Convert this to use async/await

# Or Claude Code
claude "refactor this to follow SOLID principles"
```
### Security Auditing

```bash
# Check for vulnerabilities
claude "audit this code for security issues"

# Or use Aider
aider src/auth.py
> Check for SQL injection vulnerabilities
```
## Privacy Considerations

### When to Use Local Models

Use Ollama when:

- Working with proprietary code
- Handling sensitive data (PII, credentials)
- Working in regulated industries
- Developing offline
### When to Use API-based Tools

Use Claude Code or Copilot when:

- You need the most capable models
- You're working with public code
- Internet access is available
- Cost is not a concern
### Sensitive Data Handling

```bash
# BAD: sending sensitive code to an external API
claude "fix this authentication code"   # contains API keys!

# GOOD: use local Ollama instead
ollama run codestral "fix this authentication code"

# Or sanitize first: strip secrets, environment variables,
# and credentials, then use the external AI
```
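The "sanitize first" step can be partially automated. Below is a minimal sketch; the regexes are illustrative and deliberately simple, not an exhaustive secret scanner (tools like gitleaks do this properly):

```python
import re

# Illustrative patterns: API-key-shaped strings, key=value secrets, AWS key IDs
REDACTIONS = [
    (re.compile(r"sk-[A-Za-z0-9_-]{8,}"), "<REDACTED_API_KEY>"),
    (re.compile(r"(?i)(password|secret|token)\s*=\s*\S+"), r"\1=<REDACTED>"),
    (re.compile(r"AKIA[0-9A-Z]{16}"), "<REDACTED_AWS_KEY>"),
]

def sanitize(code):
    """Replace things that look like secrets before sending code to an external AI."""
    for pattern, replacement in REDACTIONS:
        code = pattern.sub(replacement, code)
    return code

snippet = 'API_TOKEN = "sk-abc123def456ghi"\npassword = hunter2\n'
print(sanitize(snippet))
```

Run the filter over any snippet bound for an external API; if the output still looks sensitive, switch to a local model instead.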
## Best Practices

### Do’s
- Review AI suggestions - Don’t blindly accept; understand what the code does
- Use appropriate tools - Claude for reasoning, Ollama for privacy
- Iterate with context - Provide relevant files and error messages
- Test AI-generated code - Always verify the output works
- Keep learning - Use AI to learn new concepts, not just copy
### Don’ts
- Don’t share credentials - Never paste API keys or secrets
- Don’t ignore errors - Verify the AI’s suggestions work
- Don’t skip understanding - Don’t accept code you don’t understand
- Don’t use for everything - Simple tasks don’t need AI
- Don’t forget to commit - Version control your AI-assisted changes
## Advanced Tips

### Custom System Prompts
```bash
# Claude Code: per-project instructions live in CLAUDE.md
# (create one with /init); you can also append to the system
# prompt for a single non-interactive run:
claude -p --append-system-prompt "You are a Python expert. Always prefer asyncio." "review src/main.py"

# Ollama: create a custom Modelfile
cat > Modelfile << 'EOF'
FROM codestral
SYSTEM You are a Rust expert. Prefer safe, idiomatic Rust.
PARAMETER temperature 0.5
EOF

ollama create rust-expert -f Modelfile
ollama run rust-expert "write a fast parser"
```
### Keyboard Shortcuts (Interactive Mode)

```
# In Claude Code interactive mode
Ctrl+C - Cancel the current input or operation
Ctrl+D - End the session
Esc    - Interrupt Claude mid-response
Tab    - Accept a suggestion (in supported editors)
```
### Integration with Editors

```vim
" Vim/Neovim: run Claude Code in a terminal split alongside an LSP setup
" Add to .vimrc:
Plug 'prabirshrestha/vim-lsp'
Plug 'mattn/vim-lsp-settings'
```

In VS Code, the Continue extension brings a similar AI chat experience directly into the editor.
## Related Articles
- Tool Use APIs for Agentic AI Development
- Local-First AI: Running LLMs on Your Machine with Ollama and Open WebUI
- Building AI Agents: Autonomous Systems and Tool Integration
- Best AI Tools in 2025