I've been vibe circuit building since the 1970s, but that's not what this is about, is it? ;-)
Years ago, at Pumping Station One in Chicago, I watched someone struggle to drive multiple LEDs from an Arduino in his project. He wondered why the LEDs got dimmer when more than one was lit.
I looked at the original schematic, and at what he had built, and noticed a difference. The original design had a resistor on each LED, but he had decided that was redundant and refactored it down to a single resistor. The circuit still worked, but the current limiting that one resistor provided was now shared across every active LED, leading to the progressive dimming as more LEDs were lit.
It turned out his background was in software, where the assumptions about what is important are very different. Cutting out redundant code is an important skill there.
I saw it as a cognitive impedance mismatch being played out in real life.
I assume the same is true for an LLM/AI attempting the same leap.
I had this same experience back when I first learned to program a PIC microcontroller. You really shouldn't be driving LEDs directly off IO pins anyway. I think the digital nature of IO pins also lends itself to not thinking about the underlying circuitry and coming at it from a software lens.
It depends. Many modern microcontrollers are perfectly fine driving LEDs directly off IO pins if the pin specs say they are rated for sufficient current (like 20mA). However, older ones like the ESP8266 can only do something like 2mA, and the 8051 even less. Or you run into a total power budget issue if you are driving too many pins. Also, some IO pins are perfectly fine at sinking current to ground but aren't suited for sourcing current, in which case the LED would be connected directly to an external higher voltage rail and the IO pin would simply switch it to ground or not.
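Roughly the back-of-the-envelope involved, if anyone wants to sanity-check their own setup (the numbers below are assumptions for illustration, not from any particular datasheet):

```python
# Back-of-envelope check for driving LEDs directly from MCU pins.
# All values here are illustrative assumptions - check your own datasheets.

V_PIN = 3.3       # GPIO high-level output voltage (V), assumed
V_F = 2.0         # LED forward voltage (V), typical-ish for a red LED
I_TARGET = 0.005  # desired LED current (A); 5 mA is plenty for an indicator
PIN_MAX = 0.020   # per-pin current limit from the datasheet (A), assumed
PORT_MAX = 0.100  # total current budget across all GPIOs (A), assumed

def series_resistor(v_pin, v_f, i):
    """One resistor per LED: R = (Vpin - Vf) / I."""
    return (v_pin - v_f) / i

r = series_resistor(V_PIN, V_F, I_TARGET)
print(f"Use roughly {r:.0f} ohms per LED (round up to a standard value).")

# Why the single shared resistor dims: with N LEDs behind one resistor, the
# total current stays roughly the same, so each LED only gets about I/N.
for n in (1, 2, 4, 8):
    print(f"{n} LEDs sharing one resistor -> ~{I_TARGET / n * 1000:.1f} mA each")

# Per-pin and whole-chip sanity checks when each LED has its own pin.
n_leds = 8
assert I_TARGET <= PIN_MAX, "exceeds per-pin rating"
assert n_leds * I_TARGET <= PORT_MAX, "exceeds total GPIO power budget"
```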
I actually had this exact scenario come up last week except that in my version it warned me to make sure that I added a resistor to each LED to keep brightness normalized.
I didn't happen to need this particular advice, but it stuck in my head as something that would potentially save someone learning a lot of pain.
Been working on this exact problem for a while now. The core issue isn't that LLMs are bad at circuits, it's that we're asking them to do novel design when they should be doing selection and integration.
My project (https://phaestus.app/blog) takes a different approach: pre-validated circuit blocks on a fixed 12.7mm grid with standardized bus structures. The LLM picks which blocks you need and where they go, but the actual circuit design was done by humans and tested. No hallucinated resistor values, no creative interpretations of datasheets.
It's the same insight that made software dependencies work. You don't ask ChatGPT to write you a JSON parser from scratch, you ask it which library to use. Hardware should work the same way.
Still WIP and the block library needs expanding, but the constraint-based approach means outputs are manufacturable by construction rather than "probably fine, let's see what catches fire."
> The core issue isn't that LLMs are bad at circuits, it's that we're asking them to do novel design when they should be doing selection and integration.
I don't want to detract from what you're building, but I'm puzzled by this sentence. It very much sounds like the problem is that they're bad at circuits and that you're working around this problem by making them choose from a catalog.
Try that for code. "The problem isn't that LLMs are bad at coding, it's that we're asking them to write new programs when they should be doing selection and integration".
I only had undergrad EE training so maybe I’m out of touch with what’s done in industry. But, I think most human engineers mostly don’t design novel circuits either. Chips come with specification sheets that have some reference implementations. So obviously somebody at the company designs that reference implementation, but I think most users stick pretty close to it…
Why get philosophical and not just deal with the reality that 95% of the industry are dealing with languages that just compile down to C? We are already at some of the highest levels of abstraction thus far in the history of the trade.
That’s exactly how it has been working for me in code. I have a bunch of different components and patterns that the LLMs mix and match. Has been working wonderfully over the past few months.
Everything is derivative of something else. “Novel” is a distinction for works which are minimally derived, but everything created is a remix of something else. Novelty is an illusion.
Sorry, I could have been clearer: LLMs are great at architecting high-level design decisions, but terrible at the nitty gritty without better tooling (with the right tooling, such as https://flux.ai, they are capable!).
I even had Gemini hallucinate a QFN version of the TPS2596 last night, it was so confident that the *RGER variant existed. In an automated pipeline, this would break things, but giving it a list of parts to use, it becomes a lot more useful!
I think Altium tried to do something similar: a bunch of common blocks you could just plop onto the PCB, (auto)route a few tracks, and be done. It failed because there was always something some client wanted to do, move, change or optimize for a production run.
Module-based design is cool for getting the prototype going, but once you get into production you want to optimize everything, so it falls apart quickly when you need to move the parts (not blocks, parts) to fit the smallest possible space and cut components that could be shared (do 8 blocks on one board, each with its own decoupling caps, really need the entire set of them? Probably not). Fine for prototyping/hobby stuff/one-offs, but it falls apart quickly in production.
Still, having a working prototype quickly that can then be optimized in a more traditional way can be very valuable.
> It's the same insight that made software dependencies work. You don't ask ChatGPT to write you a JSON parser from scratch, you ask it which library to use. Hardware should work the same way.
Hardware optimising gets you far more money faster than software, because the cost of software not being optimal falls mostly on the consumer (burning more CPU than it would if it were optimized), while for hardware each chip you cut is more money left in your pocket, and there are actual size constraints that can be pretty hard-edged, versus software's "well, the user will have to download an extra MB".
This sounds interesting for quick prototypes, but tbh it doesn't map onto how most iterative layout processes actually work. At least in my experience. $0.02
However, I wanted to say that for a lot of common parts I find the Adafruit open source schematics to be at least as useful as the application layout suggestions in many datasheets. When it comes to regulators etc it's nice to see how they did it, because like your project, you really can approach it like a block.
If I understand what you are doing this sounds like a great idea.
For example a part like the ADS7953 ADC comes with layout recommendations, including the design of the ground plane underneath the chip and the placement of the decoupling caps. A more extreme example would be an ESP32 and all of its supporting parts, including the keepout area on the PCB for WiFi transmission.
I really want to assemble circuits out of higher level primitives like that, drag and drop a chip and all of its supporting parts, including their layout and power connections.
So this isn't exactly putting pre-wired-up blocks together; my intent behind phaestus is essentially to get you from an idea to a tangible prototype product as fast as possible.
I'm targeting sub-5 minutes from first prompt to manufacturing exports: STL files for the enclosure, Gerbers for PCB manufacture, a bin file for the firmware, a BOM for PCBA.
E.g. if you want something that doesn't exist, but you don't have the time or the skills, or it's just not worth it. One silly example I had was a colour e-ink selfie fridge magnet. As far as I know, that doesn't exist; I could make it, but I can't be arsed. (So I could surprise my partner with a selfie, a picture of our dog, or anything, just a little treat for her for putting up with me.)
With this, it'll pull in an ESP32-S3 Sense XIAO board, an e-ink module, a battery connector and a USB-C charge connector, glue it all together, and there we go.
Should work if you wanted a rudimentary zigbee mesh communicator, pulls in a C6, a touchscreen, battery, probably a physical button or two. Once that block library starts filling up, it'll become more and more capable.
I built circuitsnips to be the 'thingiverse' for electronics schematics.
Unfortunately it's been a bit neglected, since so much of my free time has gone into phaestus. I did have great intentions to KiCad up some official reference designs, so I could get rid of the GitHub-scraped bootstrap data (that was the sticking point both ethically and for the quality of the schematics), but there are only so many hours in a day.
This is conceptually interesting to me because I see this as almost a more generic TI Webench. I'm curious why your focus is on the fixed-size "grid" blocks (presumably for placement directly on the PCB layout) instead of doing the same but for the schematic. That way I'd still have the flexibility of laying out the board how I want to meet e.g. mechanical constraints, instead of working around a 12.7mm grid.
I saw routing as just as big a headache as the schematic, so formalizing the layout to a grid means layout becomes a compilation problem, not a design problem.
My intent for phaestus isn't to design PCBs, it's to design entire products, and also to be friendly to non-technical users who don't know what a PCB is, let alone do layout themselves.
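A toy sketch of the "layout as compilation" idea, heavily simplified (the block names, sizes and the greedy packer below are made up purely for illustration):

```python
# Toy illustration: once every block has a footprint in fixed 12.7mm grid
# cells, "layout" reduces to packing the selected blocks onto the grid.
from dataclasses import dataclass

PITCH_MM = 12.7  # fixed grid pitch

@dataclass
class Block:
    name: str
    cols: int  # footprint width in grid cells
    rows: int  # footprint height in grid cells

def place(blocks, grid_cols):
    """Greedy left-to-right, top-to-bottom packing onto the grid."""
    placements, col, row, row_h = [], 0, 0, 0
    for b in blocks:
        if col + b.cols > grid_cols:          # wrap to the next row
            col, row, row_h = 0, row + row_h, 0
        placements.append((b.name, col * PITCH_MM, row * PITCH_MM))
        col += b.cols
        row_h = max(row_h, b.rows)
    return placements

# Hypothetical block selection an LLM might have made for a small sensor node.
selection = [Block("mcu_esp32c6", 2, 2), Block("ldo_3v3", 1, 1),
             Block("usb_c_power", 1, 2), Block("i2c_temp_sensor", 1, 1)]
for name, x, y in place(selection, grid_cols=4):
    print(f"{name}: x={x} mm, y={y} mm")
```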
Thanks - as a concept it has potential. I've leveraged some of my previous projects: www.circuitsnips.com for inspiration for the subcircuit blocks, TOKN for more accurate parsing of schematics, and to a lesser extent even my datasheet MCP server and kicad-netlist tool; more info at https://www.mikeayles.com/
For the time being, I'm steering away from feature creep, even though I really, really want to add more! For the sorts of products I'd like this to make for now, simple I2C, SPI and GPIO-driven peripherals are the limit. I only have 2 more weeks, and then I want to have a working, battery-powered device on my desk. PCB, enclosure, firmware, everything.
Similarly, I haven't got a framework for anything mechatronic in the MCAD pipeline, so no moving parts (besides clickable buttons). Fixed devices like screens and connectors are fine, though.
Is there a way to stay up to date with what you are doing?
It very much aligns with how I've approached hardware since I was 15, when I had a massive stack of functional blocks of electronics circuitry that I would combine in all kinds of ways. I've lost the 3x5's, but I still work that way: build a simple block, test it, build another block, test that, hook the one to the other, etc.
There's a limited sign-up on the site currently, which goes to an approval page. I don't think I'm quite ready for it to be fully open yet, as I'm paying for all the inference, but I should be starting to populate the gallery soon with generated projects.
NP, I don't do GitHub (because MS), but I'll bookmark your pages and check back periodically. Please do post to HN whenever you reach an interesting milestone and feel free to reach out.
I'm curious why you don't target an HDL, which seems like it should match very well to LLM capabilities, and rely on existing layout solvers for the last physical layout step?
I know Ben is having some fun, perhaps making a valid point, with the burning component on the breadboard. I think it does underscore a difference between software vibing and hardware vibing—crash vs. fire.
But in fact vibe-breadboarding has drawn me deeper into the electronics hobby. I have learned more about op-amps and analog computing in the past two months in large part thanks to Gemini and ChatGPT pointing the way.
I know now about BAT54S Schottky diodes and how they can protect ADC inputs. I have found better ADC chips than the ones that come pre-soldered on most ESP32 dev boards (and have breadboarded them up with success). These were often problems I didn't know I should solve. (Problems that, for example, YouTube tutorials will disregard because they're demonstrating a constrained environment and are trying to keep it simple for beginners, I suppose.)
To be sure, I research what the LLMs propose, but I now have the language and a better picture in my mind to know what to search for (how do I protect ADC inputs from over- or under-voltages?). (Hilariously too, I often end up on the EE Stack Exchange, where there is often anything but a concise answer.)
5V USB power, through-hole op-amp chips… I'm not too worried about burning my house down.
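For anyone curious, the kind of rough arithmetic involved with a series-resistor-plus-clamp-diode input (all values below are assumptions for illustration, not a recommendation for any particular part):

```python
# Rough numbers for a series resistor + clamp diode protecting an ADC input.
# Part values and fault voltage are assumptions for illustration only.

V_FAULT  = 12.0   # worst-case voltage accidentally applied to the input (V)
V_DD     = 3.3    # supply rail the clamp diode steers fault current into (V)
V_F      = 0.3    # approximate forward drop of a Schottky clamp (V)
R_SERIES = 1000   # series resistor between the source and the ADC pin (ohms)

# Current pushed into the rail through the clamp during the fault:
i_clamp = (V_FAULT - (V_DD + V_F)) / R_SERIES
print(f"Clamp current during fault: {i_clamp * 1000:.1f} mA")

# The series R also forms a low-pass with the ADC's sampling capacitance,
# so there is a trade-off: a bigger R means less fault current but slower
# settling (and more error) at high sample rates.
C_SAMPLE = 10e-12  # assumed sample-and-hold capacitance (F)
tau = R_SERIES * C_SAMPLE
print(f"RC settling time constant: {tau * 1e9:.1f} ns")
```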
Both Gemini and ChatGPT have a pretty comically wrong knowledge of op-amps. They usually recommend outdated chips and are confused about circuit topologies. I was looking at this last week and it hasn't changed. I asked them to suggest and evaluate microphone circuits and they were just bad. I would really, really recommend reading some human-written text if you're learning about that.
I can't think of any reason why you'd want to use Schottky diodes to protect op-amp inputs. They have high leakage currents and poor surge capabilities. Most op-amps have internal protection diodes, and if you need some extra ESD or overvoltage protection, a Schottky diode probably isn't the way.
I'm not taking an anti-LLM view here. I think they are useful in some fields and are getting better. But in this particular instance, there's a breadth of excellent learning resources and the one you've chosen isn't good.
Thanks, I have read a lot of human-written text (and actual books from the day) in addition to the LLM feedback. Again though, had I ignored LLMs altogether I would have barely progressed in the past two months. I think. They seem to act as an "idea board" of sorts that sends me out looking for others to validate (or not) what they're spouting.
"Schottky diodes to protect op-amp inputs…" Not op-amp inputs, ADC inputs (which may well come from an op-amp output though—I am playing with analog computing after all).
Those ADCs probably also have protection diodes; take a look at the datasheet, because you may be better off with other options, and you can very likely use diodes with a larger band gap if you really need bulky protection.
I just wanted to vote with my feet and say that my experience echoes yours closely.
Modern coding agents have a remarkable grasp of circuit design and the net result is that they keep pushing me to learn more, faster.
I do find that I often have to specify that I only want parts that are "active on Digikey" because otherwise it will recommend obsolete parts. However, I consider this just like reviewing code generated by an LLM. You don't get a pass on reading datasheets or verifying statements.
I recently had GPT 5.2 spit out a progression of circuits that can amplify a dynamic mic signal to line level, simple to complex, with the intention of finally learning how good amplifiers work. Adding transformers and gain stages with the different OPA family parts and hearing the hum disappear and noise floor drop is the best kind of education.
A tip: BAT54SW specifically is the best part for protecting your pins.
> I have found better ADC chips than the ones that come pre-soldered on most ESP32 dev boards (and have breadboarded them up with success).
Depending on your setup: beware of your ground and realize that breadboards are an extremely bad fit for this sort of application. It's hard enough to get maximum performance out of a good DAC on a custom designed PCB, on a breadboard it can be a nightmare.
The breadboard has validated the communication between the ESP32 and ADC chip (over I2C).
It's enough that I've now moved to KiCad layout and will wait for the boards to come back to see if the actual ADC data I am getting is more or less linear, noiseless…
Ah ok! Thanks for that bit of clarification, it makes a world of difference. Yes, you can use the breadboard for that, but - based on my own experience - if you want to actually use an ADC on a breadboard you're going to be in for a world of hurt as soon as you exceed some very low threshold sampling rate, and you're going to be fighting all kinds of weird bias effects. The parasitic capacitance of those breadboards is just terrible.
And an antenna, and a coil. Breadboards are great for slow digital stuff up to a few hundred kHz; above that you are going to have the occasional interesting challenge / hair-pulling session.
Irrespective, "letting the magic smoke out" has been a part of the electronic hobbyist's vernacular long before vibe-breadboarding. (Been there many times.)
> vibe-breadboarding has drawn me deeper into the electronics hobby.
Exactly. I'm a life-long software guy, but I've dabbled in electronics at various times. But typically I'd hit walls that I just didn't know how to get past, and it wasn't easy to find solutions. If I'd had an LLM to help, I'm pretty sure I'd have become much more deeply involved in electronics.
If you have solid domain knowledge, LLMs are a force multiplier for electronic design. You just have to have a spider sense for “this is going off the rails”.
Other than that, it does useful circuit review, part selection (or suggestions for alternative parts you didn't know existed), and is usually usefully skeptical. It's also great at quick back-of-the-napkin "can I just use an SMT ceramic here?" type calculations, especially handy for roughing out timings and that kind of thing.
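For example, the sort of napkin math it handles for roughing out timings (illustrative numbers only, not from any real design):

```python
# Rough RC timing check, e.g. for a reset delay or debounce filter.
# Values here are purely illustrative.
import math

R = 10_000    # ohms
C = 100e-9    # 100 nF ceramic
tau = R * C
print(f"tau = {tau * 1e3:.1f} ms")                      # 1.0 ms
print(f"time to ~95% (3*tau) = {3 * tau * 1e3:.1f} ms") # 3.0 ms

# Time for the cap to charge from 0 V to a given threshold:
V_SUPPLY, V_THRESH = 3.3, 2.0
t = -tau * math.log(1 - V_THRESH / V_SUPPLY)
print(f"time to reach {V_THRESH} V: {t * 1e3:.2f} ms")
```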
This. LLMs are still not good enough to trust for vibe circuit building - a circuit re-design costs a lot more time and money than a code change - but they can get you over so many hurdles by getting you to the right references in forums and datasheets quickly. I suspect it won't be too long before they can make schematics and PCBs that are 90% there, but currently they're much more useful in the firmware design.
I am now more of a hobbyist than a professional, and LLMs allow me to get results quicker. For example, over Christmas I replaced my Pioneer stereo with a new custom motherboard, re-using the class AB analog parts and all the switches and the VFD display. The LLM helped me do it a lot quicker and gave me a couple of novel options; write-up here => https://rodyne.com/?p=3380
I'm imagining that if you had a couple of parts picked, an LLM could follow the reference designs quite well and pick relevant parts and lay them out.
Maybe not in the best way, but from a post here a week or so ago, somebody has written an Altium MCP, from which I assume a bunch of the timing and capacitor checks could be run.
Maybe nothing particularly high-tech, but enough to let mechanical engineers put together test boards without needing to get too far into the electrical discipline.
If you know what you're doing with electronics design, I've found that leveraging an LLM to help come up with ideas, lay out block diagrams, and find parts can be super useful. Integrating Digi-Key or Mouser API support for finding parts pricing and inventory is also super handy. Using the distributor APIs also allows natural-language search, which isn't possible (or isn't easy) through the distributor websites, as the LLM can quickly download the datasheet and read it as part of its search to verify whether a part should be considered given your requirements.
I think the problem with LLMs is they fail to adhere to implicit or explicit constraints. The parts-finding functionality of DigiKey is pretty darn good, but it would be awesome to find out if there are cheaper circuits that are functionally equivalent to a particular block, again under the aforementioned constraints. Multi-component simplification would be awesome as long as it's safe. AI, so far, is not yet a substitute for review by an expert human.
Not quite the same thing, but recently I wanted to make adapter clips for connecting PowerBlocks to a barbell, making them suitable as weights for deadlifting/benching. I have Fusion 360 experience so I designed something to 3D print as usual, but the issue was that PLA, even at 100% infill, is pretty unsafe when holding 90lb blocks.
The logical next step is to use metal, but that's outside of my hobby tools. I found that JLCPCB offered sheet metal fabrication but I had no experience with sheet metal designs. I went to ChatGPT and was actually really impressed by how well it was able to guide me from design to final model file. I received the adapters last week and was really impressed by how nice they turned out.
All of that to say, AI-assisted design is actually lowering the bar of entry for a whole lot of problems and I am quite happy about it.
I've modified my Zigbee bathroom LED light. I replaced the daughterboard with an ESP32, integrated a humidity sensor and a presence sensor, another relay and a power circuit, and I now have a bathroom light that lights up automatically when someone is in the room and turns on the extractor only if the humidity is above a certain level.
I've done all this by taking photos of the circuits and asking Gemini how to do it.
I'm working on this. It works pretty well. The main issue I'm working out right now (which has proven very difficult) is the auto-placing and auto-routing on a multi-layer pcb.
I've been using Claude Code to ssh into a Raspberry Pi in a subprocess, so I can chat with the AI from my more powerful machine, and let it write and run the code on the Pi. It's also good for writing scripts and uploading them to an ESP-32 with the Arduino CLI.
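A minimal sketch of the ESP32 side of that loop, assuming arduino-cli is installed and the board enumerates on /dev/ttyUSB0 (the sketch directory, FQBN and port below are placeholders, adjust for your setup):

```python
# Compile and flash an Arduino sketch onto an ESP32 with arduino-cli,
# the kind of step a coding agent's tool layer can run for you.
import subprocess

SKETCH_DIR = "blink"            # directory containing blink.ino (placeholder)
FQBN = "esp32:esp32:esp32"      # board identifier for a generic ESP32 dev board
PORT = "/dev/ttyUSB0"           # serial port the board shows up on (placeholder)

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)  # raise if compile/upload fails

run(["arduino-cli", "compile", "--fqbn", FQBN, SKETCH_DIR])
run(["arduino-cli", "upload", "-p", PORT, "--fqbn", FQBN, SKETCH_DIR])
```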
As an AI skeptic, I’ve been brought around to using Claude Code to understand a codebase, like when I need to quickly find where something happens through a tower of abstractions. Crucially, this relies on Claude actually searching my codebase using grep. It’s effectively automated guess and check.
I wonder if a SPICE skill would make LLMs safer and more useful in this area. I’m a complete EE newbie, and I am slowly working through The Art of Electronics to learn more. Being able to feed the LLM a circuit diagram—or better yet, a photo of a real circuit!—and have it guess at what it does and then simulate the results to check its work could be a great boon to hands-on learning.
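A minimal sketch of what such a skill might look like, assuming ngspice is installed: the model proposes a netlist, a tool runs it in batch mode, and the printed results go back to the model so it can check its own claims. The RC low-pass netlist here is just a stand-in example.

```python
# Run a SPICE netlist through ngspice in batch mode and return its output.
# Assumes ngspice is on PATH; the netlist below is a stand-in example.
import subprocess, tempfile, os

NETLIST = """\
* RC low-pass sanity check
V1 in 0 DC 0 AC 1
R1 in out 1k
C1 out 0 100n
.ac dec 10 10 100k
.print ac vm(out)
.end
"""

def run_ngspice(netlist: str) -> str:
    with tempfile.NamedTemporaryFile("w", suffix=".cir", delete=False) as f:
        f.write(netlist)
        path = f.name
    try:
        result = subprocess.run(["ngspice", "-b", path],
                                capture_output=True, text=True, check=True)
        return result.stdout
    finally:
        os.remove(path)

# Feed this text back to the LLM so it can compare against what it predicted.
print(run_ngspice(NETLIST))
```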
I haven't had much success yet with this. My ratings follow.
Reading and interpreting datasheets: A- (this has gotten a LOT better in the last year)
Give netlist to LLM and ask it to check for errors: C (hit or miss, but useful because catching ANY errors helps)
Give Image to LLM and ask it to check for errors: C (hit or miss)
Design of circuit from description: D- (hallucinates parts, suggests parts for the wrong purpose, suggests obsolete parts, cannot make diagrams. Not an F because its textual descriptions have gotten better: when describing what nodes connect to each other, it's not always wrong now. You will have to re-check EVERYTHING though, so its usefulness is doubtful)
...only know what an inductor is from watching a video on the youtubes where they were talking about using them on the suspensions of F1 cars and explained their relationship to electronic circuits; forget what their actual name is.
Semi related: what are you guys' workflows for PCB design? I need to build an AFE + BLE MCU for a BCI, and having no EE background, my workflow is KiCad -> buy components -> breadboard testing -> done?? -> order fully manufactured PCB?
The trick is, LLMs are only as good as they are at writing code because of all the publicly available source code, tutorials, blog posts and Q&A it can slurp up.
I don't trust an LLM to write software for me without human verification, but it's not like it's that hard to verify what it writes if you understand how to write code yourself. I expect even when an LLM can layout a high voltage circuit or design a bridge that most organizations who carry liability would still be sure to audit the design with a set or two of intelligent and trained human eyes.
I have been working on a tool that aids in circuit tuning: model circuit equations as python functions, the solution space is discrete component values, auto solve for a target spec, build the circuits, record measurements, fit error, repeat until the experiment matches predictions. It adjusts nearly every parameter between tests and converges surprisingly fast. (25% to 2% error in 3 tests for an active band pass filter)
The MVP was hand-coded, leaned heavily on sympy and linear fits, and worked for simple circuits. The current PoC only falls back to sympy to invert equations, switches to GPR when convergence stalls, and uses a robust differential evolution from scipy for the combinatorial search. The MVP works, but now I have a mountain of slop to clean up and some statistics homework to understand the limitations of these algorithms. It's nice to validate ideas so quickly, though.
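Not the parent's actual code, but a minimal sketch of the discrete-value search idea with scipy: snap the continuous candidate to the E24 series inside the objective and let differential evolution hunt for a target spec (the RC corner-frequency target and bounds here are made up for illustration):

```python
# Search discrete component values with scipy's differential evolution by
# snapping each continuous candidate to the E24 series inside the objective.
import numpy as np
from scipy.optimize import differential_evolution

E24 = np.array([1.0, 1.1, 1.2, 1.3, 1.5, 1.6, 1.8, 2.0, 2.2, 2.4, 2.7, 3.0,
                3.3, 3.6, 3.9, 4.3, 4.7, 5.1, 5.6, 6.2, 6.8, 7.5, 8.2, 9.1])

def snap_e24(x):
    """Snap a positive value to the nearest E24 value in its decade."""
    decade = 10 ** np.floor(np.log10(x))
    mant = x / decade
    return E24[np.argmin(np.abs(E24 - mant))] * decade

F_TARGET = 1_000.0  # desired RC corner frequency (Hz), made up for the example

def objective(params):
    r = snap_e24(params[0])  # ohms
    c = snap_e24(params[1])  # farads
    f = 1.0 / (2 * np.pi * r * c)
    return abs(f - F_TARGET) / F_TARGET  # relative error from the target spec

bounds = [(1e3, 100e3), (1e-9, 1e-6)]  # R in ohms, C in farads
result = differential_evolution(objective, bounds, seed=1, tol=1e-6)
r, c = snap_e24(result.x[0]), snap_e24(result.x[1])
print(f"R = {r:.0f} ohm, C = {c * 1e9:.1f} nF, "
      f"f = {1 / (2 * np.pi * r * c):.1f} Hz")
```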
Not trying to be a smart ass here, I’ve been keeping an eye out for years.
The proof of the Erdős problem the other day was called novel by Terence Tao. That seems novel to me.
However, what I suspect you're after is more aligned with https://www.circuitsnips.com
I may be able to set up an RSS feed for the blog if that interests you? edit: https://phaestus.app/feed.xml
So far the language models aren’t great at HDL but I assume it’s just a training priority thing and not some characteristic of HDLs.
WARNING: nerd snipe material.
Gemini was suggesting the circuit design and of course I'd do the final work myself, but I find vibe-circuit-building to be quite valuable.
It would catch any case where the stove is drawing power, irrespective of possible failure modes of the stove itself.
This seems ~identical to the situation where we can use a compiler or parser to return syntax errors to the agent in a feedback loop.
I don't know exactly what the tool calling surface would look like, but I feel like this could work.
[1] https://github.com/dvhx/pedalgen
> ...
> Ah - that makes sense, that's why it's on fire
oh how very relatable, I've had similar moments.
I knew about SEDs (smoke emitting diodes) and LERs (light emitting resistors), but what do you call the inductor version?
"Who did your electricals?"
"My nephew Thomas!"
"Oh, so when did his house burn down?"
"Last ye.... wait how do you know his house burnt down?"
Previous discussion: https://news.ycombinator.com/item?id=44542880
I know nothing...
https://github.com/mixelpixx/KiCAD-MCP-Server
The system is on fire