feat: model registry and capabilities for multi-provider support (#2158)

* feat: Implement model registry and capabilities for multi-provider support

- Added ModelRegistry to manage available models and their capabilities.
- Introduced ModelProvider enum for different LLM providers.
- Created ModelCapabilities dataclass to define model features (a sketch of how these pieces might fit together follows this list).
- Implemented methods to load models based on API keys and settings.
- Added utility functions for model management in model_utils.py.
- Updated settings.py to include provider-specific API keys.
- Refactored the LLM classes (Anthropic, OpenAI, Google, etc.) to use the new model registry.
- Enhanced utility functions to handle token limits and model validation.
- Improved code structure and logging for better maintainability.
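
The registry, provider enum, and capabilities dataclass named above might fit together roughly as follows. This is a minimal sketch: only the three class names come from the commit message; the fields and the register/get/available_models helpers are illustrative assumptions, not the actual implementation.

```python
# Minimal sketch; only the names ModelProvider, ModelCapabilities, and
# ModelRegistry come from the commit -- fields and methods are assumptions.
from dataclasses import dataclass
from enum import Enum


class ModelProvider(Enum):
    ANTHROPIC = "anthropic"
    OPENAI = "openai"
    GOOGLE = "google"


@dataclass(frozen=True)
class ModelCapabilities:
    max_input_tokens: int           # token limit used for request validation
    supports_tools: bool = False    # whether the model can make tool calls
    supports_streaming: bool = True


class ModelRegistry:
    """Maps a model_id to its provider and capabilities."""

    def __init__(self) -> None:
        self._models: dict[str, tuple[ModelProvider, ModelCapabilities]] = {}

    def register(self, model_id: str, provider: ModelProvider,
                 caps: ModelCapabilities) -> None:
        self._models[model_id] = (provider, caps)

    def get(self, model_id: str) -> tuple[ModelProvider, ModelCapabilities]:
        # Raises KeyError for unknown identifiers, which doubles as validation.
        return self._models[model_id]

    def available_models(self, configured: set[ModelProvider]) -> list[str]:
        # Expose only models whose provider has an API key configured.
        return [m for m, (p, _) in self._models.items() if p in configured]
```

A registry of this shape lets the refactored LLM classes look up token limits and provider routing in one place instead of hard-coding per-model constants.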

* feat: Add model selection feature with API integration and UI component
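
Presumably the API side of this feature exposes the registry contents to the UI, filtered by which provider keys are configured. A hedged sketch building on the registry sketch above; the settings attribute names and the response shape are assumptions:

```python
# Hypothetical helper; the real endpoint, settings fields, and payload may differ.
def list_available_models(settings, registry: ModelRegistry) -> list[dict]:
    configured: set[ModelProvider] = set()
    if getattr(settings, "OPENAI_API_KEY", None):
        configured.add(ModelProvider.OPENAI)
    if getattr(settings, "ANTHROPIC_API_KEY", None):
        configured.add(ModelProvider.ANTHROPIC)
    if getattr(settings, "GOOGLE_API_KEY", None):
        configured.add(ModelProvider.GOOGLE)
    return [
        {"model_id": m, "provider": registry.get(m)[0].value}
        for m in registry.available_models(configured)
    ]
```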

* feat: Add model selection and default model functionality in agent management
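
For agent management, the default-model behaviour likely amounts to a fallback when an agent has no model pinned. A small sketch under that assumption; the agent schema and fallback rule are hypothetical:

```python
# Hypothetical helper; agent fields and the fallback rule are assumptions.
def resolve_agent_model(agent: dict, default_model_id: str,
                        known_models: set[str]) -> str:
    """Return the agent's pinned model_id, or the default if unset or unknown."""
    model_id = agent.get("model_id") or default_model_id
    return model_id if model_id in known_models else default_model_id


# Example: an agent without a pinned model falls back to the default.
assert resolve_agent_model({"name": "support-bot"}, "gpt-4", {"gpt-4"}) == "gpt-4"
```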

* test: Update assertions and formatting in stream processing tests

* refactor(llm): Standardize model identifier to model_id

* fix tests

---------

Co-authored-by: Alex <a@tushynski.me>
Author: Siddhant Rai
Date: 2025-11-14 16:43:19 +05:30
Committed by: GitHub
Commit: 3f7de867cc (parent: fbf7cf874b)
54 changed files with 1388 additions and 226 deletions


@@ -108,7 +108,7 @@ class TestConversationServiceSave:
     sources=[],
     tool_calls=[],
     llm=mock_llm,
-    gpt_model="gpt-4",
+    model_id="gpt-4",
     decoded_token={},  # No 'sub' key
 )
@@ -136,7 +136,7 @@ class TestConversationServiceSave:
     sources=sources,
     tool_calls=[],
     llm=mock_llm,
-    gpt_model="gpt-4",
+    model_id="gpt-4",
     decoded_token={"sub": "user_123"},
 )
@@ -167,7 +167,7 @@ class TestConversationServiceSave:
     sources=[],
     tool_calls=[],
     llm=mock_llm,
-    gpt_model="gpt-4",
+    model_id="gpt-4",
     decoded_token={"sub": "user_123"},
 )
@@ -208,7 +208,7 @@ class TestConversationServiceSave:
     sources=[],
     tool_calls=[],
     llm=mock_llm,
-    gpt_model="gpt-4",
+    model_id="gpt-4",
     decoded_token={"sub": "user_123"},
 )
@@ -237,6 +237,6 @@ class TestConversationServiceSave:
     sources=[],
     tool_calls=[],
     llm=mock_llm,
-    gpt_model="gpt-4",
+    model_id="gpt-4",
     decoded_token={"sub": "hacker_456"},
 )