26 comments

  • sarreph 3 hours ago
    The bakery example is interesting, because it's presented as "both sides have been working on this thing and think they should get 50%"... and then the _solution_ is "A path back to 50% for Daniel" -- who gets an objectively worse deal than his disputant.

    It's definitely an interesting application of LLMs; the output text reads very GPT-ey to me, with its punctuated, concise phrasing.

  • aroido-bigcat 10 hours ago
    Feels like the tricky part here isn’t computing a “fair” outcome, but defining what fairness even means in the first place.

    Once you formalize preferences into something comparable, you’re already making a lot of assumptions about how people value outcomes.

    • sanity 4 hours ago
      Thank you for the feedback. The goal of the Nash bargaining solution is to find the agreement that maximizes the product of each party's gain over their walk-away alternative, based on their stated preferences.
  • ttul 12 hours ago
    Fabulous idea. LLM-assisted mediation is brilliant because it has the potential to bring the benefits of mediation to the masses. The addressable market is all of humanity. Even if all you did was focus this app on co-parenting arguments, you could help millions of people every day.
  • hawest 9 hours ago
    Super interesting, thank you for sharing!

    I have published some research on using LLMs for mediation here: https://arxiv.org/abs/2307.16732 and https://arxiv.org/abs/2410.07053

    These papers describe the LLMediator, a platform that uses LLMs to:

    a) ensure a discussion maintains a positive tone by flagging and offering reformulated versions of messages that may derail the conversation

    b) suggest intervention messages that the mediator can use to intervene in the discussion and guide the parties toward a positive outcome.

    Overall, LLMs seem to be very good at these tasks, and even compared favourably to human-written interventions. Very excited about the potential of LLMs to lower the barrier to mediation and help resolve disputes in a positive and collaborative manner.

    • sanity 2 hours ago
      Thank you for sharing these.

      This feels complementary to my approach. Your papers seem focused on tone, interventions, and guiding the conversation. My approach is more about trying to infer each party’s preferences and then search for agreements that both would accept.

      I think LLMs are strong at both layers, but they’re quite different problems. One is helping people communicate better, the other is trying to actually compute outcomes given what each side cares about.

    • harvey9 5 hours ago
      Too many chatbots maintain a relentlessly 'positive tone' anyway, and sometimes a negative situation calls for honestly negative tones.
      • lookACamel 1 minute ago
        > sometimes a negative situation calls for honestly negative tones.

        It's not exactly hard for humans in dispute to conjure up negative tones.

      • hawest 3 hours ago
        Fully agree. In the LLMediator, the function is used to nudge people towards a more constructive tone by suggesting alternative formulations, but in the end the user is, of course, in control of what they want to say and how.
  • lookACamel 9 hours ago
    Great idea, though I am skeptical it will be adopted in contentious situations without some sort of stick. In amorphous situations where there is high trust but an aversion to talking things out, I could see this kind of tool being used. But in contentious or low-trust situations (strangers), I suspect most people do not want fairness; they want to be ahead. A fair agreement will, paradoxically, disappoint everyone, since every party feels the lack of clear advantage.
    • sanity 2 hours ago
      I think this is mostly right, but it depends a bit on how you frame "fairness".

      The system isn’t trying to impose a notion of fairness from the outside. It’s trying to find agreements that both parties prefer over their BATNA (i.e. what they get if they walk away). If there’s a way for one side to come out clearly ahead given the other side’s preferences, it should find that. If not, it finds the best mutual improvement available.

      On the "no stick" point, I agree this probably isn’t useful in fully adversarial situations where one side expects to win outright. Where I think it helps is when both sides suspect there’s a deal but can’t quite find it, or don’t want to go through a long negotiation process to get there.
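
      For concreteness, here is a tiny sketch of that search with made-up numbers (this is not the site's actual code, just the idea): each candidate agreement is scored in each party's utility units, deals worse than either BATNA are discarded, and the Nash product of gains picks among the rest.

```python
def nash_best(candidates, batna_a, batna_b):
    """From candidate agreements given as (utility_A, utility_B) pairs,
    return the one maximizing the Nash product of gains over each party's
    BATNA, considering only deals both parties prefer to walking away."""
    feasible = [(ua, ub) for ua, ub in candidates
                if ua > batna_a and ub > batna_b]
    if not feasible:
        return None  # no deal beats both parties' walk-away options
    return max(feasible, key=lambda c: (c[0] - batna_a) * (c[1] - batna_b))

# Three candidate splits, scored in illustrative utility units:
deals = [(6.0, 3.0), (5.0, 5.0), (3.0, 6.0)]
print(nash_best(deals, batna_a=2.0, batna_b=2.0))  # -> (5.0, 5.0)
```

      Note that if one side's preferences make a lopsided deal the biggest mutual gain, this picks the lopsided deal; "fairness" only enters through the BATNA floors and the product of gains.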

  • vintermann 10 hours ago
    This doesn't seem to have any notion of power? Coming up with a fair agreement between people who have equal power over the thing they care equally about isn't that hard.

    But when one side is indifferent to something the other side cares deeply about, yet has veto power to spoil it, a Nash agreement isn't going to be "fair" in the usual sense of the word.

    • sgsjchs 6 hours ago
      You have it backwards.

      This formal game-theoretic notion of fairness acknowledges that power disparity exists and that having less power than your counterparty allows them to inflict greater disutility on you without you being able to inflict disutility on them in turn to discourage this.

      On the other hand, fairness "in the usual sense" pretends power disparity doesn't exist and that, say, an armed robber is not allowed to take your stuff when you have nothing to defend yourself with. Which in reality only works as long as there is a powerful third party (the state) that will inflict disutility on the robber for it.

    • maxaw 9 hours ago
      In reality people never have equal power over anything (what would that look like, physically?), so something like Nash bargaining is an attempt to get closer to a notion of fairness given this inequality.
      • vintermann 9 hours ago
        I don't think the difficulty of equal power is a good excuse to pretend power doesn't exist.

        One way we solve it in the real world is that the negotiators also have power - including, possibly, the power to force the party most OK with the status quo to come to the negotiating table, and reject exploitative proposals.

        That isn't foolproof either, of course. But it beats rhetoric trying to convince the weaker party to submit.

        • maxaw 5 hours ago
          I didn’t say it doesn’t exist, rather that it’s already taken into account. I’m also not sure what you are proposing: if mediation is required, and someone has more power than someone else, why would they voluntarily engage with a mediator who will reduce that power? And if they are forced to use this mediator (e.g. by the state), then they never had the power in the first place.
  • dennismcwong 3 hours ago
    Interesting idea for sure. I am just thinking, intuitively couldn't I 'game' the mediator by overstating my preference and requirements to achieve a more favorable outcome?
    • sanity 3 hours ago
      Thank you. Yes, you could inflate your BATNA, but then you risk the other side rejecting the agreement when a mutually beneficial agreement would have been possible had you been honest.

      This kind of property in a negotiation system, where honesty is rewarded and dishonesty can backfire, is called “incentive compatibility.” I’m not claiming my approach is formally incentive compatible, but it is directionally so.
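
      A toy illustration of that trade-off, with made-up numbers (not the site's model): moderate BATNA inflation can tilt the chosen deal your way, but overreaching past anything the other side will accept leaves you with no agreement at all.

```python
def nash_best(candidates, batna_a, batna_b):
    """Candidate (utility_A, utility_B) maximizing the Nash product of
    gains, or None if no deal beats both parties' walk-away options."""
    feasible = [(ua, ub) for ua, ub in candidates
                if ua > batna_a and ub > batna_b]
    if not feasible:
        return None
    return max(feasible, key=lambda c: (c[0] - batna_a) * (c[1] - batna_b))

deals = [(5.0, 5.0), (7.0, 3.5)]
print(nash_best(deals, batna_a=3.0, batna_b=3.0))  # honest:    (5.0, 5.0)
print(nash_best(deals, batna_a=6.0, batna_b=3.0))  # inflated:  (7.0, 3.5)
print(nash_best(deals, batna_a=7.5, batna_b=3.0))  # overreach: None, no deal
```

      So inflation can pay off if you guess the frontier exactly, but past it you get nothing; that asymmetric downside is the "directionally incentive compatible" part.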

  • parkerside 5 hours ago
    I like the idea and signed up, but the first thing I see is a prompt to purchase credits. I don't have a use case to try this now, so I won't be using the service again. However, I couldn't find an account dashboard to delete my account or even sign out?
    • sanity 4 hours ago
      Hey, thank you for the feedback. If you click on the profile icon in the top right there is a "Sign Out" option. We don't have a delete-account option yet, but I will prioritize it.
  • throwanem 31 minutes ago
    You built Freenet? What about that experience encouraged you to continue building things?
  • webrot 7 hours ago
    I think this is very useful. I wonder if you have people who have actually used it in difficult situations? Maybe family separations or challenging situations like that, where I see a lot of potential but also resistance.

    That said, I think the challenging part for users is clearly setting the utility function. I agree LLMs can help there, but I have a few concerns about that.

    • sanity 2 hours ago
      Thank you! It's early days yet, but I've had interest from people going through a divorce with child separation questions. However, I wanted to ensure it worked well on less serious problems before risking it on something so consequential.
  • maxaw 9 hours ago
    This is so cool. Even small disputes like roommate arrangements can feel very emotionally impactful at the time and it would be wonderful to have a tool for these moments
  • dhruv3006 10 hours ago
    John Nash's ideas are still relevant today - highlights how great he was - I liked how you used a genetic algorithm here!
    • sanity 4 hours ago
      John Nash was indeed a great man, thank you!
  • mfrye0 11 hours ago
    I would love something like this to use with my HOA. About to start mediation and the estimate for the mediator alone is ~$20k.
    • sanity 4 hours ago
      Thank you! You should definitely get a lawyer to review any agreement before signing if there is meaningful money at stake.
      • mfrye0 1 hour ago
        Yes. Have a lawyer and there is indeed meaningful money at stake. I'm more wishing there was a simpler way to go about it though, as it's likely going to cost 6 figures when it's all said and done.
    • wferrell 11 hours ago
      You might try Decisionlayer.ai

      We built a way to make contracts enforceable and resolve disputes without the high cost of litigation. Specifically, by adding our arbitration clause to your contracts, or using our "case by consent", you can get AI-driven, court-enforceable arbitration decisions in 7 days for a $500 flat fee - no lawyers required. This compares to the $30k or $40k you would otherwise spend on a lawyer plus JAMS/AAA arbitration fees. For your HOA, I suspect case by consent would be the best approach: two parties come to the website, both agree to use DecisionLayer to resolve the dispute, and then present the issue and each side's argument.

      We have a free case simulator on our site. Check it out at https://www.decisionlayer.ai/simulate

      • arowthway 7 hours ago
        I'd rather arbitrate by coin toss.
  • danieldifficult 11 hours ago
    Brilliant! Love seeing this space start to wake up.

    Last year I built https://andshake.app to prevent the need for conflict resolution… by getting things clear up front.

    I agree that AI has much to offer in low-stakes agreements to help people move forward in cooperation.

    • aspect0545 9 hours ago
      Looks interesting. But where’s the privacy policy or at least information what happens with all the sensitive stuff you enter there. Because let’s be honest, a lot of the stuff that is awkward to talk about is somewhat private.
  • zachvandorp 12 hours ago
    It's an interesting idea. I've seen a few of these, but not with ol' John's spin on it.

    Do you really want the first link, "How it Works", to just be a "#" anchor to the front page? It makes the site feel broken if someone clicks it. Also, your blog post about Nash bargaining is almost more of a "How it Works" page than the How it Works page is.

    I feel like your landing page very quickly told me what your website does, which is great. If Nash bargaining is the "wedge" that separates you from the pack, I'd try to explain how it differentiates this from the others as quickly as possible. I know that's easier said than done. Good luck!

    • sanity 4 hours ago
      Thank you!

      You're right about the "How it works" page - I will remove it.

      • sanity 1 hour ago
        Actually I changed my mind, I'll just link from How it Works to the blog article for the moment.
  • mukundesh 12 hours ago
    How about the Iran/US conflict? Or the Israel/Palestine conflict?

    Is anyone working on this? Seems like a big win for AI if it can be done.

    • sanity 4 hours ago
      Believe it or not I did a lot of testing with geopolitics early on but didn't want to put it on the website so people wouldn't think I'm a megalomaniac ;)

      I regenerated the Israel/Palestine agreement using my latest code although the input positions were as they were this time last year when hostages were still being held.

      Interested to hear what you think: https://gist.github.com/sanity/3851e33e085ed444525edcc7b7ba2...

    • harvey9 5 hours ago
      Seems like a very different class of problem. Many more parties and variables than the 'roommate problem'.
    • watwut 10 hours ago
      Pakistan is working on the Iran/US conflict.
  • Zababa 7 hours ago
    Very interesting! For limitations, I'd add stated vs. revealed preferences. Currently the system assumes that what people say is what they actually prefer, but that's not always the case. If that's already addressed in your tool, I think it would be nice to mention it!
    • sanity 3 hours ago
      Thank you. The purpose of having the LLM interview the user is to surface those unstated preferences, by exploring aspects of the agreement that the user might not raise themselves.
  • setnone 10 hours ago
    definitely a great use of LLMs
  • watwut 10 hours ago
    Basically, the negotiating game will break down into demanding the absolute maximum and pretending you care a lot more than you do. The more demanding person gets more; the less demanding person is taken for a ride.
    • eigenket 9 hours ago
      I don't know anything about this specific LLM thing, but if it correctly uses the Nash bargaining optimiser then that won't happen.

      The thing you point out is exactly why Nash demanded invariance under (positive) affine transformations in his solution. The units are completely arbitrary: if I rank everything as having importance 1 million, that's exactly the same as ranking everything as having importance 1.

      The solution is only sensitive to relative differences in the utility function, not the actual values of the function. If you want to weight something very strongly in the Nash version of the game, you also have to weight other things correspondingly weakly.
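
      A quick check with illustrative numbers (not from the product): the Nash-product argmax is unchanged when one party restates all their utilities through a positive affine map, so stating importances in inflated units gains nothing.

```python
def nash_pick(candidates, d_a, d_b):
    """Index of the candidate (utility_A, utility_B) maximizing the
    Nash product (uA - dA) * (uB - dB) over disagreement point (dA, dB)."""
    return max(range(len(candidates)),
               key=lambda i: (candidates[i][0] - d_a) * (candidates[i][1] - d_b))

cands = [(4.0, 1.0), (3.0, 3.0), (1.0, 4.0)]
pick = nash_pick(cands, 0.0, 0.0)  # picks index 1, the balanced deal

# Party A restates every utility as u -> 1_000_000 * u + 7 (their
# disagreement point transforms the same way); the pick is unchanged.
scaled = [(1_000_000 * a + 7, b) for a, b in cands]
assert nash_pick(scaled, 1_000_000 * 0.0 + 7, 0.0) == pick
```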

      • sanity 3 hours ago
        You are correct that Nash bargaining should address this, because only the relative utilities matter, not the absolute values.

        There is still the potential for parties to get better deals by overstating their BATNAs, but then they risk the other party rejecting the agreement when a mutually beneficial agreement would have been possible had they been honest - so it's not in their interest to mislead the system.

    • DeathArrow 9 hours ago
      Then the tool should be named Trump.ai, not Mediator.ai. :)
  • arjunthazhath 12 hours ago
    I am unable to log in
    • sanity 4 hours ago
      Hi, what happens when you try?
  • mock-possum 11 hours ago
    EDIT - in all fairness I find the blog entry much more persuasive: https://mediator.ai/blog/ai-negotiation-nash-bargaining/

    That said, given the fictional example:

    Honestly I’m on Daniel’s side - they agreed on a 50/50 split, and they’ve both been working their asses off to make the business work. It’s an arrangement that clearly both of them have been actively participating in, not trying to push back against, for a year and a half.

    And the supposed insight this product offers is to… split the difference? Between Maya’s power play for 70/30, and Daniel’s insistence on the original 50/50? 60/40 is the brilliant proposal?

    How could they stand to work together afterwards, knowing she thinks she deserves 70% of the profit, but was willing to ‘settle’ for 60%? Why would you want to keep working with someone who screwed you over that way? Their partnership is toast. All the mediation really does is… I don’t know, what? How is this good for Daniel? This ain’t any kind of reconciliation, surely.

    Is the argument that it’d be easier for her to get a new baker, than it is for him to get a new business manager?

    • AnthonyR 11 hours ago
      Yeah, I also don't quite understand the example on the homepage... they agreed to 50/50, then she wanted 70/30, so now they settle on 60/40? That doesn't seem like a "fair" mediation; it's kind of weird. (Obviously I'm oversimplifying the situation a bit, but I'm not sure real-world conflicts are this simple in practice.)
      • sanity 3 hours ago
        You raise a good point. The issue is presentation - leading with the 60/40 reads like midpoint arbitration, whereas the interesting part is Daniel's path back to 50/50, the management salary, the mutual waiver on the first 18 months (which is what settles his rent contribution), and the shotgun buy-sell.

        I've made some changes that should help with this.

      • alex43578 10 hours ago
        They wanted 50/50, but from the vignette Daniel didn’t continue to do 50% of the work.
        • mock-possum 10 hours ago
          Sure, he just continued to take sole responsibility for the production of the product, quality and quantity, while also holding down an additional job, which paid the rent.

          These characters have both been putting the work in.

          I’d be looking for a serpent at his partner’s ear, planting poisonous suggestions that she deserves more of the company they started equally. If this were real.

          • lookACamel 9 hours ago
            > While also holding down an additional job

            That's the problem: the story is saying he stopped focusing full-time on the business in order to make his own ends meet. It looks like the main innovation of the mediator-generated deal is that it attempts to reconcile by drafting a way back to 50/50 if he recommits. The starting 60/40 split is not that important.

            • gavinray 5 hours ago
              He paid her rent
            • throwanem 7 hours ago
              Her ends, too. They share an apartment, in the story.

              This is certainly an example of what I would expect from a product designed to optimize a prenup. You know, they say money ruins people, but sometimes you just have to acknowledge there was nothing really ever there decent to begin with.

              • lookACamel 4 hours ago
                Yeah after re-reading the scenario it is pretty weird. The AI doesn't have enough data. There should be concrete numbers for the rent. Why wouldn't Daniel tell the LLM exactly how much it was?
                • throwanem 3 hours ago
                  Well, I don't know, I'm sure. Totally unrelated, I hear a common piece of advice for the aspiring con artist is to avoid overcomplicating the legend.