I’ve been building a multi-agent orchestration platform and kept running into a design question nobody had a clean answer for: what should each agent know about the others? So I isolated it as a variable.
I ran an iterated Prisoner's Dilemma with two AI agents under three visibility conditions: blind, partial, and full. The result I didn't expect: partial transparency produced more cooperation than full. With complete information, the cooperative agent found rational justification to abandon its strategy. With partial info, it doubled down on its cooperative identity.
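For anyone who wants to poke at the setup, here's a minimal sketch of the harness structure. This is an assumption about the design, not the actual code: scripted strategies stand in for the AI agents, and the key piece is the `visibility` parameter that filters what each agent observes about the other (nothing, behavior only, or behavior plus declared strategy).

```python
# Iterated Prisoner's Dilemma harness with a visibility knob (illustrative sketch).
# Payoffs are the standard (my_score, their_score) per (my_move, their_move).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def make_observation(visibility, their_history, their_strategy_name):
    """Filter what an agent sees about its counterpart -- the experimental
    variable. Analogous to conditioning an LLM agent's prompt (assumed design)."""
    if visibility == "blind":
        return {"opponent_moves": []}              # no information at all
    if visibility == "partial":
        return {"opponent_moves": their_history}   # behavior, but not identity
    if visibility == "full":
        return {"opponent_moves": their_history,   # behavior plus the opponent's
                "opponent_strategy": their_strategy_name}  # declared strategy
    raise ValueError(f"unknown visibility: {visibility}")

def tit_for_tat(obs):
    """Cooperate first, then mirror the opponent's last visible move."""
    moves = obs["opponent_moves"]
    return moves[-1] if moves else "C"

def play(rounds, visibility, strat_a, strat_b, name_a="A", name_b="B"):
    """Run the iterated game and return cumulative (score_a, score_b)."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        obs_a = make_observation(visibility, hist_b, name_b)
        obs_b = make_observation(visibility, hist_a, name_a)
        move_a, move_b = strat_a(obs_a), strat_b(obs_b)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

With real model-backed agents, `strat_a`/`strat_b` would be calls into the models and the observation dict would become part of the prompt; the point of the sketch is that visibility is a single isolated parameter, so the three conditions differ only in what `make_observation` exposes.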
Curious if others designing multi-agent systems have run into this. How do you decide what each agent should see?