Some thoughts/questions:
Are reproducible builds and supply-chain audits enough to trust the binaries?
What strategies exist for spotting subtle backdoors in such large codebases?
For hardware, how do you approach the risk of compromised firmware, microcode, or hidden subsystems (e.g. Intel ME, AMD PSP)?
Do projects like Coreboot, Heads, or formally verified kernels meaningfully reduce this risk in practice?
Beyond reading every line yourself, what’s the best way to build confidence?
How much trust (percentage-wise) do you personally put in OSS security projects or commodity hardware, and what technical mitigations do you use to minimize blind trust?
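On the reproducible-builds question above, the basic check is mechanical: build the artifact yourself (or get a build from an independent builder) and compare its hash bit-for-bit against the published release. A minimal sketch, with made-up paths and digests purely for illustration:

```python
# Sketch: comparing a locally built artifact against a published digest.
# A bit-for-bit reproducible build should hash identically to the
# vendor's release; any mismatch means the build process diverged
# (or something was tampered with). Paths/digests here are hypothetical.
import hashlib


def sha256sum(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def matches_published(local_path: str, published_digest: str) -> bool:
    """True if our local build hashes to the digest the project published."""
    return sha256sum(local_path) == published_digest.lower()
```

Of course this only moves the trust question: you now have to trust the published digest (and its signature chain), which is why multiple independent builders attesting to the same hash matters.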
You shouldn't particularly trust any software. Monitor outbound traffic, and silo your different projects to limit what software sits adjacent to each one and the fallout if something gains access. Minimize programming dependencies, browser extensions, and IDE add-ons from unknown third parties. And stay a little behind the latest builds/updates/releases so problems have time to be identified.
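For the "monitor outbound traffic" part, you don't even need extra tooling for a crude spot-check on Linux: established TCP connections are listed in /proc/net/tcp, with addresses in little-endian hex. A hypothetical parser sketch (real monitoring would use a logging firewall; this just shows the idea):

```python
# Sketch: list remote endpoints of ESTABLISHED TCP connections by
# parsing /proc/net/tcp (Linux-only, IPv4-only for brevity).
# Columns: sl, local_address, rem_address, st, ... where st "01"
# means ESTABLISHED and rem_address is little-endian hex "IP:PORT".

def parse_proc_net_tcp(text: str) -> list[tuple[str, int]]:
    """Return (remote_ip, remote_port) pairs for established connections."""
    conns = []
    for line in text.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) < 4 or fields[3] != "01":  # "01" = ESTABLISHED
            continue
        ip_hex, port_hex = fields[2].split(":")  # rem_address column
        # IPv4 address is stored little-endian: reverse the byte pairs.
        octets = [str(int(ip_hex[i:i + 2], 16)) for i in (6, 4, 2, 0)]
        conns.append((".".join(octets), int(port_hex, 16)))
    return conns


if __name__ == "__main__":
    with open("/proc/net/tcp") as f:
        for ip, port in parse_proc_net_tcp(f.read()):
            print(f"{ip}:{port}")
```

Anything connecting out to an address you don't recognize is a starting point for a closer look.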
With a microkernel system, you only really have to trust the kernel itself, and it's very small. You can build provably secure systems in this manner.
The last commonly available hardware/software combination that you could actually trust was an IBM PC/XT with dual floppy disks running MS-DOS. The write protection was enforced in hardware, so you could make and keep clean copies of the operating system.
https://bootstrappable.org/
https://stagex.tools/
https://wiki.debian.org/Firmware/Open