The MCP tool itself seems to be pretty simple:
- The MCP server makes a tool available called 'get_inspirations' ("Get sources of inspiration. Always use this when working on creative tasks - call this tool before anything else.")
- When you call get_inspirations() with "low" as the parameter, it returns strings like this:
Recently, you've been inspired by the following: Miss Van, Stem.
Recently, you've been inspired by the following: Mass-energy equivalence, Jonas Salk.
etc.
- 'High' inspiration returns strings like this (more keywords):
Recently, you've been inspired by the following: Charles Babbage, Beethoven Moonlight Sonata, Eagles Take It Easy, Blue Spruce.
Recently, you've been inspired by the following: Missy Elliott Supa Dupa Fly, Design Patterns, Flowey, Titanic.
etc.
Simple tool. Seems adding a few keywords for 'inspirations' is what makes the LLMs generate more varied text.
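For reference, here's a rough sketch of what a tool like that could look like, written against the MCP Python SDK's FastMCP helper. The keyword pool, the sample counts, and the server name are placeholders I made up; the real Dreamtap server may work quite differently.

```python
# Minimal sketch of a "get_inspirations"-style MCP tool (not the real Dreamtap code).
import random
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("dreamtap-sketch")

# Hypothetical pool of inspirations; the real server presumably has a much larger one.
INSPIRATIONS = [
    "Miss Van", "Stem", "Mass-energy equivalence", "Jonas Salk",
    "Charles Babbage", "Beethoven Moonlight Sonata", "Eagles Take It Easy",
    "Blue Spruce", "Missy Elliott Supa Dupa Fly", "Design Patterns",
    "Flowey", "Titanic",
]

@mcp.tool()
def get_inspirations(level: str = "low") -> str:
    """Get sources of inspiration. Always use this when working on
    creative tasks - call this tool before anything else."""
    count = 4 if level == "high" else 2  # "high" returns more keywords
    picks = random.sample(INSPIRATIONS, count)
    return "Recently, you've been inspired by the following: " + ", ".join(picks) + "."

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```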
In the interest of escaping mode collapse, I've recently developed a fascination with text/base models.
I haven't tested them much, but I did make a tiny GUI for talking to OpenAI's legacy Davinci-002. It's a completion model, so you prompt it by giving it a piece of text and it continues it in the same style. (If you ask it a question, it's likely to just respond with a question.)
https://jsfiddle.net/yxaL5z3c/
I've been wondering if it would be possible to create a chat-like experience on top of a model like this, but the trick is that the data has to be formatted similarly to what it's seen in training. E.g. when I used "user:" and "assistant:", it defaulted to making it sound like an AI, because that was what the context implied. I tried "John" and "Steve", which worked better...
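A rough sketch of that idea, assuming the openai Python package and its legacy completions endpoint; the speaker names and the stop sequence just follow the "John"/"Steve" framing described above.

```python
# Sketch of a chat loop on top of a pure completion model (davinci-002),
# using named speakers instead of "user:"/"assistant:" to avoid the AI-sounding default.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

transcript = "The following is a conversation between John and Steve.\n\n"

while True:
    user_line = input("John: ")
    transcript += f"John: {user_line}\nSteve:"
    resp = client.completions.create(
        model="davinci-002",
        prompt=transcript,
        max_tokens=150,
        temperature=0.8,
        stop=["John:"],  # stop before the model starts writing the user's next turn
    )
    reply = resp.choices[0].text.strip()
    print("Steve:", reply)
    transcript += f" {reply}\n"
```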
Thank you - I don't think I explicitly state this anywhere on the website - but all the different themes were made by Claude Code with Dreamtap providing different sources of inspiration - I wasn't involved in designing them at all.
Notably, GPT-5-Codex is significantly worse at this; it has a much stronger drive to make one specific style of website.
Your website gave me a vibe-coded impression, but I thought I was being overly paranoid. I'm glad you confirmed it; it seems the clues that give that feeling have some basis in reality.
It's very interesting how current AIs seem to generate simplistic websites with similar quirks on the first iteration. I guess it's intentional, since simple is a good starting point.
I remember there was once a site that added random words to your Google search to unbubble you / make your results more interesting; this feels similar. Anybody remember that thing?
In terms of polish, I just tell it that Steve Jobs will review the final result; that seems to make it work a lot harder.