Hey all, I've been working on a project called Oblix for the past few months and could use some feedback from fellow devs.
What is it? Oblix is a Python SDK that handles orchestration between local LLMs (via Ollama) and cloud providers (OpenAI/Claude). It automatically routes each prompt to the appropriate model (usage sketch below) based on:
Current system resources (CPU/memory/GPU utilization)
Network connectivity status
User-defined preferences
Model capabilities
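To make that concrete, here's a rough sketch of the shape of the API. Treat the exact names (OblixClient, hook_model, ModelType, execute) as illustrative, not final:

```python
# Rough usage sketch -- class/method names are illustrative, not final.
import asyncio

from oblix import ModelType, OblixClient  # assumed import path


async def main():
    client = OblixClient()

    # Register a local model (served by Ollama) and a cloud fallback.
    await client.hook_model(ModelType.OLLAMA, "llama3")
    await client.hook_model(ModelType.OPENAI, "gpt-4o-mini", api_key="YOUR_OPENAI_KEY")

    # The SDK decides at call time whether this runs locally or in the
    # cloud, based on system resources, connectivity, preferences, and
    # model capabilities.
    response = await client.execute("Summarize this changelog in two lines.")
    print(response)


asyncio.run(main())
```

The point is that the routing policy lives in the SDK, so your application code looks the same whether the prompt ends up on the local model or in the cloud.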
Why I built it: I was tired of my applications breaking whenever my internet dropped or Ollama maxed out my system resources. I also found myself constantly rewriting the same boilerplate to handle fallbacks between different model providers.
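For a sense of the boilerplate I mean, this is roughly the pattern I kept copy-pasting between projects (sketched here with the ollama, openai, and psutil packages; the thresholds and model names are arbitrary):

```python
# The hand-rolled routing/fallback pattern Oblix is meant to replace.
# Thresholds and model names are arbitrary examples.
import psutil
import ollama
from openai import OpenAI

cloud = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]

    # Only try the local model if the machine isn't already saturated.
    if psutil.cpu_percent(interval=0.1) < 80 and psutil.virtual_memory().percent < 85:
        try:
            resp = ollama.chat(model="llama3", messages=messages)
            return resp["message"]["content"]
        except Exception:
            pass  # Ollama down, model missing, etc. -- fall through to cloud

    # Cloud fallback -- which itself fails when the network is down,
    # which is exactly the failure mode that kept biting me.
    resp = cloud.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return resp.choices[0].message.content


print(ask("Why is the sky blue?"))
```

Multiply that by every provider combination and every project, and you can see why I wanted it handled in one place.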
How is this different from LangChain?
I use Ollama, so I can try this out. Is this specific to any platform, like Mac/Windows?
Currently it's built for macOS, with Windows and Linux on the roadmap. Looking forward to your feedback.