- Improved error handling and logging in Google response processing.
- Simplified streaming content extraction and error detection in the Google provider.
- Enhanced content extraction logic in the OpenAI provider to handle edge cases.
- Streamlined tool conversion functions for both the Google and OpenAI providers (see the conversion sketch after this list).
- Removed redundant comments and improved code readability across multiple files.
- Updated context window retrieval and message truncation logic for better performance (a truncation sketch follows this list).
- Ensured consistent handling of tool calls and arguments in OpenAI responses.
- Added `process.py` for managing MCP server subprocesses asynchronously (see the subprocess sketch below).
- Introduced `protocol.py` for handling JSON-RPC communication over streams (see the framing sketch below).
- Created `llm_client.py` to support chat completion requests to various LLM providers, integrating with MCP tools (see the tool-loop sketch below).
- Defined model configurations in `llm_models.py` for the different LLM providers (see the configuration sketch below).
- Removed the synchronous `mcp_manager.py` in favor of a more modular approach.
- Established a provider framework in the `providers` directory with a base class and provider-specific implementations (see the interface sketch below).
- Implemented `OpenAIProvider` for interacting with OpenAI's API, including streaming support and tool-call handling (see the streaming sketch below).
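The tool conversion mentioned above reduces to reshaping an MCP tool definition into each API's schema. Below is a minimal sketch of the OpenAI direction, assuming the standard MCP tool fields (`name`, `description`, `inputSchema`); `mcp_tool_to_openai` is a hypothetical helper, not the repo's actual function:

```python
from typing import Any


def mcp_tool_to_openai(tool: dict[str, Any]) -> dict[str, Any]:
    """Convert one MCP tool definition into an OpenAI `tools` entry."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            # MCP publishes a JSON Schema under `inputSchema`; OpenAI
            # expects that same schema under `parameters`.
            "parameters": tool.get(
                "inputSchema", {"type": "object", "properties": {}}
            ),
        },
    }
```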
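Message truncation can be as simple as dropping the oldest non-system turns until a token estimate fits the context window. A sketch under that assumption; `truncate_messages` and the caller-supplied `count_tokens` estimator are illustrative names, and the repo's logic may differ:

```python
from typing import Any, Callable


def truncate_messages(
    messages: list[dict[str, Any]],
    max_tokens: int,
    count_tokens: Callable[[list[dict[str, Any]]], int],
) -> list[dict[str, Any]]:
    """Drop the oldest non-system messages until the estimate fits."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    while rest and count_tokens(system + rest) > max_tokens:
        rest.pop(0)  # discard the oldest turn first
    return system + rest
```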
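For `process.py`, one plausible shape is a thin async wrapper over `asyncio.create_subprocess_exec` with piped stdio, since MCP servers typically communicate over stdin/stdout. The class below is a sketch, not the module's actual interface:

```python
import asyncio


class MCPServerProcess:
    """Async wrapper around an MCP server launched over stdio."""

    def __init__(self, command: str, *args: str) -> None:
        self._argv = (command, *args)
        self._proc: asyncio.subprocess.Process | None = None

    async def start(self) -> None:
        # stdio transport: the server reads requests on stdin and
        # writes responses on stdout.
        self._proc = await asyncio.create_subprocess_exec(
            *self._argv,
            stdin=asyncio.subprocess.PIPE,
            stdout=asyncio.subprocess.PIPE,
        )

    async def stop(self) -> None:
        if self._proc and self._proc.returncode is None:
            self._proc.terminate()
            await self._proc.wait()
```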
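For `protocol.py`, the sketch below assumes newline-delimited JSON framing (as in MCP's stdio transport) and the JSON-RPC 2.0 envelope; a real client would also match response ids and handle server-initiated notifications:

```python
import asyncio
import itertools
import json


class JsonRpcStream:
    """JSON-RPC 2.0 over newline-delimited JSON on asyncio streams."""

    def __init__(
        self, reader: asyncio.StreamReader, writer: asyncio.StreamWriter
    ) -> None:
        self._reader, self._writer = reader, writer
        self._ids = itertools.count(1)

    async def request(self, method: str, params: dict | None = None) -> dict:
        req = {
            "jsonrpc": "2.0",
            "id": next(self._ids),
            "method": method,
            "params": params or {},
        }
        self._writer.write(json.dumps(req).encode() + b"\n")
        await self._writer.drain()
        # One response per line in this simplified framing.
        resp = json.loads(await self._reader.readline())
        if "error" in resp:
            raise RuntimeError(resp["error"].get("message", "JSON-RPC error"))
        return resp["result"]
```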
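`llm_client.py` presumably wires the two sides together: list the server's tools, hand them to a provider, and execute any tool calls the model makes. A sketch with hypothetical `provider` and `rpc` objects; the MCP method names `tools/list` and `tools/call` are the standard ones:

```python
import json
from typing import Any


async def chat_once(provider: Any, rpc: Any, messages: list[dict]) -> str:
    """One completion round that executes any MCP tool calls."""
    listing = await rpc.request("tools/list")  # standard MCP method
    reply = await provider.complete(messages, listing["tools"])
    for call in reply.get("tool_calls", []):
        result = await rpc.request(
            "tools/call",
            {"name": call["name"], "arguments": call["arguments"]},
        )
        # Feed the tool result back so the model can use it next turn.
        messages.append(
            {"role": "tool", "name": call["name"], "content": json.dumps(result)}
        )
    return reply.get("content", "")
```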
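A model registry along these lines would match the description of `llm_models.py`; the entries and limits here are illustrative only:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelConfig:
    provider: str        # "openai", "google", ...
    model: str           # provider-specific model id
    context_window: int  # maximum tokens the model accepts


# Illustrative entries; the real ids and limits live in `llm_models.py`.
MODELS = {
    "gpt-4o": ModelConfig("openai", "gpt-4o", 128_000),
    "gemini-1.5-pro": ModelConfig("google", "gemini-1.5-pro", 2_000_000),
}
```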
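The base class in `providers` likely pins down a small shared surface for the concrete implementations; a guess at its shape:

```python
from abc import ABC, abstractmethod
from typing import Any, AsyncIterator


class BaseProvider(ABC):
    """Shared interface each concrete provider implements."""

    @abstractmethod
    async def complete(
        self, messages: list[dict[str, Any]], tools: list[dict[str, Any]]
    ) -> dict[str, Any]:
        """Return one finished assistant message."""

    @abstractmethod
    def stream(
        self, messages: list[dict[str, Any]], tools: list[dict[str, Any]]
    ) -> AsyncIterator[str]:
        """Yield content deltas as they arrive."""
```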
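Streamed tool calls from OpenAI arrive as JSON fragments that must be reassembled by index. The sketch below uses the real OpenAI Python SDK surface (`AsyncOpenAI`, `chat.completions.create(stream=True)`), but the surrounding structure is illustrative rather than the repo's `OpenAIProvider`:

```python
from typing import Any, AsyncIterator

from openai import AsyncOpenAI


async def stream_completion(
    messages: list[dict[str, Any]], tools: list[dict[str, Any]]
) -> AsyncIterator[str]:
    """Stream text while reassembling fragmented tool-call arguments."""
    client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment
    calls: dict[int, dict[str, str]] = {}
    stream = await client.chat.completions.create(
        model="gpt-4o", messages=messages, tools=tools, stream=True
    )
    async for chunk in stream:
        if not chunk.choices:  # e.g. a trailing usage-only chunk
            continue
        delta = chunk.choices[0].delta
        if delta.content:
            yield delta.content
        # Tool-call arguments arrive as JSON fragments keyed by index;
        # concatenate them until the stream finishes.
        for tc in delta.tool_calls or []:
            entry = calls.setdefault(tc.index, {"name": "", "arguments": ""})
            if tc.function and tc.function.name:
                entry["name"] = tc.function.name
            if tc.function and tc.function.arguments:
                entry["arguments"] += tc.function.arguments
```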