Yep, Open WebUI's switch to a non-OSS license to inhibit competitive forks [1], in their own words [2], ensures I'll never use them. I'm happy to develop an OSS alternative that does the opposite: the rewrite's focus on extensibility lets community extensions replace built-in components, so it can easily be rebranded and extended with custom UI + server features.
The goal is for the core main.py to be a single file without additional dependencies; anything that needs them can be loaded as an extension (i.e. just a folder with .py server and UI hooks). There's also a script + docs so you can mix n' match the single main.py file and repackage it with whatever extensions you want included [3].
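A folder-of-.py-hooks extension mechanism like the one described above can be sketched roughly as the loader below. The folder layout and the register() hook name are assumptions for illustration only, not llms.py's actual API:

```python
import importlib.util
from pathlib import Path

def load_extensions(root: str) -> dict:
    """Illustrative loader: treat each subfolder of `root` containing a
    main.py as an extension whose register() hook returns the features it
    provides (hook name is a made-up stand-in, not llms.py's real API)."""
    registry = {}
    for ext_dir in Path(root).iterdir():
        hook = ext_dir / "main.py"
        if not hook.is_file():
            continue
        # Import the extension's main.py directly from its file path
        spec = importlib.util.spec_from_file_location(ext_dir.name, hook)
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)
        if hasattr(mod, "register"):
            registry[ext_dir.name] = mod.register()
    return registry
```

With a scheme like this, repackaging means copying the single main.py plus whichever extension folders you want into one distribution.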
How are you handling the orchestration for the Computer Use agent? Is that running on LangGraph or did you roll a custom state machine? I've found managing state consistency in long-running agent loops to be the hardest part to get right reliably.
The few people looking at /new on HN are ridiculously overpowered. A few upvotes from them in the first few hours will get you to the front page, and just 1-2 downvotes will make your post never see the light of day.
You can't downvote a post, so that's not a factor.
Also it's not as powerful as you think. In the past I have spent a lot of time looking at /new, upvoting stories that I think should be surfaced. The vast majority of them still never get anywhere near the front page.
It's a real shame, because some of the best and most relevant submissions don't seem to make it.
If you are in a company like e.g. ClickHouse and share a new HN submission about ClickHouse in the internal Slack #general channel, then you easily get enough upvotes for the front page.
I wouldn't use Claude API Key pricing, but I also wouldn't get a Claude Max sub unless it was the only AI tool I used.
Antigravity / Google AI Pro is much better value. I've been using it as my primary IDE assistant for a couple of months and have yet to hit a quota limit on my $16/mo sub (annual pricing), which also includes a tonne of other AI perks incl. Nano Banana, TTS, NotebookLM, storage, etc.
No need to use Anthropic's premium models for tool calling when Gemini/MiniMax are better value models that still perform well.
I still have a Claude Pro plan, but I use it much less than Antigravity, and thanks to Anthropic cutting their subscription usage limits, I no longer use it outside of CC.
Rate limits mostly - plus Claude Code is a relatively recent thing, but the Sonnet API has been around for a while with 3rd-party apps (like Cline). In those scenarios, it was API only.
This looks like it's not only a better license, but also much better features.
[1] https://www.reddit.com/r/opensource/comments/1kfhkal/open_we...
[2] https://docs.openwebui.com/license/
[3] https://llmspy.org/docs/deployment/custom-build
I use llms.py as a personal assistant, and MCP is required to access the tools that are only available via MCP.
MCP is a great way to make features available to AI assistants, here's a couple I've created after enabling MCP support:
- https://llmspy.org/docs/mcp/gemini_gen_mcp - Give AI Agents ability to generate Nano Banana Images or generate TTS audio
- https://llmspy.org/docs/mcp/omarchy_mcp - Manage Omarchy Desktop Themes with natural language
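At its core, an MCP tool call is a JSON-RPC request dispatched to a named tool. The handler below is a drastically simplified illustration of that shape; a real server (e.g. built with the official MCP SDK) also handles initialization, tool listing, input schemas, and transport framing, and the tool name here is made up:

```python
import json

# Toy tool table: maps a tool name to a callable taking the call's arguments.
TOOLS = {
    "generate_image": lambda args: f"image:{args['prompt']}",
}

def handle_request(raw: str) -> str:
    """Handle a single MCP-style tools/call request and return the
    JSON-RPC response (simplified: no error handling, no schemas)."""
    req = json.loads(raw)
    assert req.get("method") == "tools/call"
    name = req["params"]["name"]
    result = TOOLS[name](req["params"]["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
```

Each call is a full request/response round trip (often across a process boundary), which is part of why MCP tools carry more overhead than in-process tools.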
I will say there's a noticeable delay in using MCP vs tools, where I ended up porting Anthropic's Node filesystem MCP to Python [1] to speed up common AI assistant tasks. So they're not ideal for frequent, small tasks, but are great for long-running tasks like image/audio generation.
[1] https://github.com/ServiceStack/llms/blob/main/llms/extensio...
https://github.com/ServiceStack/llms/tree/main/llms/extensio...
It runs in the same process; there are no long agent loops, and everything's encapsulated within a single message thread.
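A single-threaded, in-process tool loop of this kind can be sketched as below. This is a hedged illustration, not llms.py's actual implementation: `call_model`, the tool table, and the message shapes are stand-ins, but it shows why no external orchestrator or state machine is needed when state is just the message thread itself:

```python
def run_thread(call_model, tools, messages):
    """Run one message thread to completion in-process: ask the model,
    execute any requested tool calls inline, append results to the same
    thread, and repeat until the model stops requesting tools."""
    while True:
        reply = call_model(messages)
        messages.append(reply)
        calls = reply.get("tool_calls") or []
        if not calls:
            return messages  # model produced a final answer
        for c in calls:
            result = tools[c["name"]](**c["arguments"])
            messages.append({"role": "tool", "name": c["name"], "content": result})
```

Because the whole loop lives in one process and one thread, there's no cross-step state to reconcile: the message list is the only state.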
v1 also took a while to make it to HN; v3 is a complete rewrite focused on extensibility, with a lot of new features.
https://llmspy.org/docs/deployment/github-oauth