Files
pentestagent/.env.example
giveen fab6d5dc0f ollama: honor OLLAMA_API_BASE and support remote Ollama host; add .env.example entry
- Map OLLAMA_BASE_URL into the env vars LiteLLM reads, before importing litellm, so remote Ollama hosts are honored
- Use environment-driven debug for litellm logging
- Add OLLAMA_API_BASE example to .env.example
2026-01-08 17:40:40 -07:00
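The mapping the commit describes can be sketched as follows. This is an illustrative helper, not the repository's actual code: the function name `configure_ollama_host` is invented, and the only behavior assumed beyond the commit message is that an explicitly set OLLAMA_API_BASE takes precedence over OLLAMA_BASE_URL.

```python
import os

def configure_ollama_host() -> None:
    """Copy OLLAMA_BASE_URL into OLLAMA_API_BASE, the variable LiteLLM reads.

    Must run before `import litellm` so the library sees the host at import
    time; an explicit OLLAMA_API_BASE wins. (Hypothetical helper sketching
    the commit's approach, not the actual implementation.)
    """
    base = os.environ.get("OLLAMA_API_BASE") or os.environ.get("OLLAMA_BASE_URL")
    if base:
        # Normalize away a trailing slash so URL joining stays predictable.
        os.environ["OLLAMA_API_BASE"] = base.rstrip("/")

# `import litellm` would follow here, only after the env var is in place.
```

Doing the mapping before the import matters because some libraries read provider env vars at import or first-call time; setting them afterwards can silently leave the default localhost endpoint in effect.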


# PentestAgent Configuration
# API Keys (set at least one for the chat model)
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
GEMINI_API_KEY=
# For web search functionality (optional)
TAVILY_API_KEY=
# Chat Model (any LiteLLM-supported model)
# OpenAI: gpt-5, gpt-4.1, gpt-4.1-mini
# Anthropic: claude-sonnet-4-20250514, claude-opus-4-20250514
# Google: gemini models require gemini/ prefix (e.g., gemini/gemini-2.5-flash)
# Other providers: azure/, bedrock/, groq/, ollama/, together_ai/ (see litellm docs)
PENTESTAGENT_MODEL=gpt-5
# Ollama local/remote API base
# Example: http://127.0.0.1:11434 or http://192.168.0.165:11434
# Set this when using Ollama as the provider so LiteLLM/clients point to the correct host
# OLLAMA_API_BASE=http://127.0.0.1:11434
# Embeddings (for RAG knowledge base)
# Options: openai, local (default: openai if OPENAI_API_KEY set, else local)
# PENTESTAGENT_EMBEDDINGS=local
# Settings
PENTESTAGENT_DEBUG=false
# Agent max iterations (regular agent + crew workers, default: 30)
# PENTESTAGENT_AGENT_MAX_ITERATIONS=30
# Orchestrator max iterations (crew mode coordinator, default: 50)
# PENTESTAGENT_ORCHESTRATOR_MAX_ITERATIONS=50
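The embeddings default commented above ("openai if OPENAI_API_KEY set, else local") could be resolved like this. The helper name and signature are illustrative assumptions, not taken from the repository:

```python
def pick_embeddings_backend(env: dict) -> str:
    """Resolve the embeddings backend per the .env.example comment:
    an explicit PENTESTAGENT_EMBEDDINGS wins; otherwise fall back to
    'openai' when OPENAI_API_KEY is set, else 'local'.
    (Hypothetical helper, not the repo's actual function.)
    """
    explicit = env.get("PENTESTAGENT_EMBEDDINGS", "").strip()
    if explicit:
        return explicit
    return "openai" if env.get("OPENAI_API_KEY") else "local"
```

Passing the environment in as a plain dict (rather than reading `os.environ` directly) keeps the fallback rule trivially testable.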