Author is deluded. LLMs aren't the quiet, smart assistant; they are regurgitators of the "truths" mapped into their models. The outputs are always subtly wrong and, at best, an average of the existing outputs out there.
You will never make something truly "wow" with an LLM alone. You will spend more time debugging the output than you would have spent building it from scratch. And at best, you'll end up with mediocre results, by definition.
Author is in serious need of a holiday, rest, and a mental reset.