8 comments

  • youoy 78 days ago
    I feel that here the authors think about mathematics as just the language and not what it talks about. We know at least since Gödel that mathematics is more than just the formal system. I even remember going to a talk by Gromov where he got "angry" about a question from the audience along the lines of "what if we suppose this extra thing and change this other thing in that theorem", and him saying that it is stupid to think up new theorems just for the sake of it, that the language of mathematics is like metaphors speaking about something else.

    In my experience learning math, their claim that "The central hypothesis is that a desirable body of theorems better summarizes the set of all provable statements, for example by having a small description length" is not true. Short != better; better is what gets me faster to the correct intuitive idea about the mathematical statement. For example, several times I have understood the formal definitions and proofs of a theory, but it was not until I formed the correct intuitions (maybe months later) that I truly understood the theory. And it is not until I have the correct intuitions that I can successfully apply the theory to create meaningful new theorems.

    Anyways, I understand that one has to start from somewhere and the point of view of the article is more tractable and explicit.

  • sgt101 78 days ago
    I love that "system 1" and "system 2" are tossed out as fundamental concepts of mind when they're basically labels for some semi-random ideas based on bullshit studies in a pop sci fluff book. Is there any actual science that ties these ideas to mathematical reasoning?

    Then we have:

    >> By analogy, we can now state the main hypothesis proposed in this paper: a crucial component of the usefulness of a new proven theorem t (in the context of previous theorems T(S)) is how efficiently T(S) ∪ {t} compresses the set of all provable mathematical statements M. That is, T(S) ∪ {t} is a good compression of M if many provable statements, beyond those in S, can be derived from T(S) ∪ {t}, say using at most k derivation steps.

    which says "Occam's Razor, that's good isn't it?" in more words.
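    To make the quoted hypothesis concrete, here is a toy sketch (my own illustration, not the paper's implementation) that scores a candidate theorem t by how many extra statements become derivable within k steps once t is added. Statements are modeled as integers and the derivation rule is hypothetical.

```python
def reachable_within_k(axioms, derive, k):
    """All statements derivable from `axioms` in at most k derivation steps (BFS)."""
    seen = set(axioms)
    frontier = set(axioms)
    for _ in range(k):
        nxt = set()
        for a in frontier:
            for b in derive(a):
                if b not in seen:
                    seen.add(b)
                    nxt.add(b)
        frontier = nxt
    return seen

def usefulness(theorems, t, derive, k):
    """The paper's intuition: how many extra statements t unlocks within k steps."""
    before = reachable_within_k(theorems, derive, k)
    after = reachable_within_k(theorems | {t}, derive, k)
    return len(after) - len(before)

# Hypothetical derivation rule: from statement n you can derive n + 2.
derive = lambda n: [n + 2]
print(usefulness({0}, 1, derive, k=3))  # -> 4 (the theorem 1 itself, plus 3, 5, 7)
```

    Under this reading the hypothesis is a bit more than bare Occam's Razor: it rewards a theorem for its reach under a bounded derivation budget, not just for being short.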

    • CooCooCaCha 78 days ago
      > I love that "system 1" and "system 2" are tossed out as fundamental concepts of mind when they're basically labels for some semi-random ideas based on bullshit studies in a pop sci fluff book. Is there any actual science that ties these ideas to mathematical reasoning?

      I think this fails to hit the mark on what people actually care about with regards to system 1 vs system 2. It’s really just, can we build models that are able to vary how long they think about a problem? Current AI models are very limited in this aspect and I think most would agree that being able to think about a problem for a period of time is a useful feature that we humans use all the time.

    • drdeca 78 days ago
      I don’t think that’s really what Occam’s razor says.

      If one’s goal is to use ML stuff to try to produce new useful theorems, it seems useful towards that goal to come up with a numeric heuristic for how useful a new theorem is. And, stating such a heuristic explicitly seems reasonable.

      Just because the heuristic that they state, which they expect to be a crucial component of what makes a theorem useful to prove in a given context, isn’t particularly surprising, doesn’t make it not worth saying.

      To elaborate on "isn't particularly surprising": if you asked many other people to come up with heuristics for the usefulness of a new potential theorem, I imagine many of the answers you would get (counting multiplicity if multiple people give identical answers) would be fairly similar.

      Even if a hypothesis is an obvious-to-consider hypothesis, if one wants to talk about it, it is worth stating what it is first.

    • kendalf89 78 days ago
      System 1 and 2 are just another way of describing dual process theory, which existed long before Daniel Kahneman wrote Thinking, Fast and Slow, and which is still the prevailing theory of mind in its category today. There were maybe one or two studies mentioned in that book that turned out not to be very replicable, but other than that, the overall consensus among leading experts in the field is positive, from everything I've seen, which is impressive considering how old the book is now.

      I have no idea if dual process theory is actually useful for teaching computers how to math, but it seems unfair to just dismiss it as pop science bunk.

      https://en.m.wikipedia.org/wiki/Dual_process_theory

      • roenxi 78 days ago
        > ...which is impressive considering how old the book is now.

        The human mind hasn't changed all that much in the last ... countless millennia. If anything, it'd be quite a concerning data point if we hadn't nailed down the introductory-level points about how to be thoughtful.

      • majormajor 78 days ago
        The Wiki article you link gives an example from the 1600s of "the passions" vs "reason", so the idea certainly seems quite old. And when framed that way in particular, it has echoes of "habitus" from sociology - https://en.wikipedia.org/wiki/Habitus_(sociology) - which I will paraphrase badly as "intuition [the passions] as shaped by social structure".

        But all that seems largely descriptive rather than usefully predictive and more a representation of what we don't know than what we do.

  • marojejian 78 days ago
    I found this to be an interesting read, though it's just a high-level plan, vs. breaking new ground. I like how the ideas are synthesized.

    A fun detail is to be found in a footnote: "A mathematician is a person who can find analogies between theorems; a better mathematician is one who can see analogies between proofs and the best mathematician can notice analogies between theories. One can imagine that the ultimate mathematician is one who can see analogies between analogies.” (attributed to Stefan Banach in Randrianantoanina and Randrianantoanina [2007]).

    And reading this, it seems to me this implies the "ultimate" mathematician is a poet.

    P.S. I initially assumed "Randrianantoanina and Randrianantoanina" was the title of some nerdy speculative fiction. It turns out this is a regular paper reference. The authors are Malagasy, no doubt. They have such cool names there.

    • ben_w 78 days ago
      > P.S. I initially assumed "Randrianantoanina and Randrianantoanina" was the title of some nerdy speculative fiction. It turns out this is a regular paper reference. The authors are Malagasy, no doubt. They have such cool names there.

      Given what else Banach is famous for, I assumed a Banach–Tarski joke.

    • card_zero 78 days ago
      > the "ultimate" mathematician is a poet.

      Approximately, but not actually, a member of the set of poets.

    • svantana 78 days ago
      > it seems to me this implies the "ultimate" mathematician is a poet.

      And, conversely, that the best mathematical theorems read like beautiful poetry.

  • thomasahle 78 days ago
    A key aspect of AlphaProof was how to find good training problems: they managed to scrape 1 million "International Mathematical Olympiad (IMO) style" problems.

    Here Bengio seems to just suggest using estimated entropy `E_P(t|T(S))[− log P(t | T(S))]` of the theorem to find interesting/surprising theorems to try and prove.

    This reminds me of the early work in estimating LLM uncertainty/hallucinations using perplexity. It didn't work very well.
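    For concreteness, here is a minimal sketch of that surprisal heuristic: score a candidate theorem by its negative log-likelihood under a model of P(t | T(S)). The unigram "model" below is my own toy stand-in, not Bengio's proposal or anything from AlphaProof.

```python
import math

def surprisal(statement, logprob):
    """Negative log-likelihood of a candidate theorem under a model P(t | T(S)).
    High surprisal = 'interesting/surprising' under the paper's heuristic."""
    return -logprob(statement)

def make_unigram_logprob(corpus):
    """Toy stand-in for P(t | T(S)): a unigram model over tokens of known theorems."""
    counts, total = {}, 0
    for stmt in corpus:
        for tok in stmt.split():
            counts[tok] = counts.get(tok, 0) + 1
            total += 1
    vocab = len(counts) + 1
    def logprob(stmt):
        # Add-one smoothing so unseen tokens get small but nonzero probability.
        return sum(math.log((counts.get(tok, 0) + 1) / (total + vocab))
                   for tok in stmt.split())
    return logprob

known = ["a + b = b + a", "a * b = b * a"]
lp = make_unigram_logprob(known)
# A statement reusing familiar tokens scores lower surprisal than a novel one.
assert surprisal("a + a = a + a", lp) < surprisal("x ^ y = y ^ x", lp)
```

    The weakness the perplexity work ran into shows up even here: the heuristic rewards unfamiliar tokens, which is not the same thing as rewarding mathematically interesting statements.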

    Actually generating good "exercises" for the system to "practice" on, and theorems to try to prove, still seems like the biggest roadblock for an AI mathematician. It connects to the philosophical question of why we do math in the first place.

    • nickpsecurity 78 days ago
      This might be where expert-crafted training data comes in. I previously proposed training them like children with K-12 resources, games, stories, and exercises. Maybe add math tutoring with formal proofs at different levels.

      Then, let them generate many types of problems themselves, have experts correct them, and further train on that.

      • thomasahle 77 days ago
        Expert-crafted training data is great, but it won't take you above human-expert level. If you are truly trying to make an "AI mathematician", it has to be able to propose its own problems to solve.

        • nickpsecurity 77 days ago
          I think being able to do that is itself a skill we might be able to teach with the right data.

          Also, the training data I mentioned could be produced partly by programs that generate such examples. The templates for them could be made from human exercises. There’s already tools that generate test data from grammars or algebraic rules. Maybe encode which math problems can be mixed together using those.
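          A minimal sketch of that template idea (the templates and names here are hypothetical, just to illustrate the mechanism): each exercise template has slots filled by random values, the way grammar-based test-data generators expand a rule.

```python
import random

# Hypothetical exercise templates: each comes with a solver so the generated
# exercise is automatically labeled with its answer.
TEMPLATES = [
    ("Solve for x: {a}x + {b} = {c}", lambda a, b, c: (c - b) / a),
    ("Compute {a} * ({b} + {c})",     lambda a, b, c: a * (b + c)),
]

def generate_exercise(rng):
    """Pick a template, fill its slots with random values, return (text, answer)."""
    text, solver = rng.choice(TEMPLATES)
    a, b, c = (rng.randint(1, 9) for _ in range(3))
    return text.format(a=a, b=b, c=c), solver(a, b, c)

rng = random.Random(0)
question, answer = generate_exercise(rng)
print(question, "->", answer)
```

          Mixing problem types, as suggested above, would then amount to composing templates, with experts curating which compositions are meaningful.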

  • marojejian 78 days ago
    Abstract. The current state-of-the-art in artificial intelligence is impressive, especially in terms of mastery of language, but not so much in terms of mathematical reasoning. What could be missing? Can we learn something useful about that gap from how the brains of mathematicians go about their craft? This essay builds on the idea that current deep learning mostly succeeds at system 1 abilities – which correspond to our intuition and habitual behaviors – but still lacks something important regarding system 2 abilities – which include reasoning and robust uncertainty estimation. It takes an information-theoretical posture to ask questions about what constitutes an interesting mathematical statement, which could guide future work in crafting an AI mathematician. The focus is not on proving a given theorem but on discovering new and interesting conjectures. The central hypothesis is that a desirable body of theorems better summarizes the set of all provable statements, for example by having a small description length while at the same time being close (in terms of number of derivation steps) to many provable statements.
  • ford 78 days ago
    Given the pace of AI lately, it might be worth mentioning that this is from March of this year.
  • jlopes2 77 days ago
    This feels a little too high-level for me; the leap from what LLMs do today to a provable theorem set is rough.

    The community is having large debates on whether an LLM can reason outside of its training. That feels ignored here.

  • n00b101 78 days ago
    ### *Formalization and Implementation*: While the paper lays out a theoretical framework, its practical implementation may face significant challenges. For instance, generating meaningful mathematical conjectures is far more abstract and constrained than tasks like generating text or images. The space of potential theorems is vast, and training an AI system to navigate this space intelligently would require further breakthroughs in both theory and computational techniques.

    ### *Compression as a Measure of Theorem Usefulness*: The notion that a good theorem compresses provable statements is intriguing but may need more exploration in terms of practical utility. While compression aligns with Occam's Razor and Bayesian learning principles, it's not always clear whether the most "compressed" theorems are the most valuable, especially when considering the depth and complexity of many foundational theorems in mathematics.

    ### *Human-AI Collaboration*: The paper lightly touches on how this AI mathematician might work alongside humans, but the real power of such a system might lie in human-AI collaboration. A mathematician AI capable of generating insightful conjectures and proofs could dramatically accelerate research, but the interaction between AI and human intuition would be key.

    ### *Computational and Theoretical Limits*: There are also potential computational limits to the approach. The "compression" and "conjecture-making" frameworks proposed may be too complex to compute at scale, especially when considering the vast space of possible theorems and proofs. Developing approximation methods or heuristics that are effective in real-world applications will likely be necessary.

    Here's how we can unpack this paper:

    ### *System 1 vs. System 2 Thinking*:

    - *System 1* refers to intuitive, fast, and automatic thinking, such as recognizing patterns or generating fluent responses based on past experience. AI systems like GPT-4 excel in this area, as they are trained to predict and generate plausible content based on large datasets (e.g., text completion, language generation).
    - *System 2* refers to deliberate, logical, and slow thinking, often involving reasoning, planning, and making sense of abstract ideas—such as solving a mathematical proof, engaging in formal logic, or synthesizing novel insights.

    The claim that AI lacks System 2 abilities suggests that while AI can mimic certain behaviors associated with intelligence, it struggles with tasks that require structured, step-by-step reasoning and deep conceptual understanding.

    ### "Not so much in terms of mathematical reasoning"

    The claim is *partially true*, but it must be put into context:

       - **Progress in AI**: AI has made **tremendous advances** in recent years, and while it may still lack sophisticated mathematical reasoning, there is significant progress in related areas like automated theorem proving (e.g., systems like Lean or Coq). Specialized systems can solve well-defined, formal mathematical problems—though these systems are not general-purpose AI and operate under specific constraints.
    
       - **Scope of Current Models**: General-purpose models like GPT-4 weren't specifically designed for deep mathematical reasoning. Their training focuses on predicting likely sequences of tokens, not on formal logic or theorem proving. However, with enough specialized training or modularity, they could improve in these domains. We’ve already seen AI systems make progress in proving mathematical theorems with reinforcement learning and imitation learning techniques.
    
       - **Frontiers of AI**: As AI continues to develop, future systems might incorporate elements of both System 1 and System 2 thinking by combining pattern recognition with symbolic reasoning and logical processing (e.g., systems that integrate neural networks with formal logic solvers or reasoning engines).
    
    
    ### Conclusion: AI excels in tasks involving intuitive, pattern-based thinking but struggles with deliberate, goal-oriented reasoning required for deep mathematical work. However, as research evolves—especially in hybrid models that combine deep learning with symbolic reasoning and formal logic—these limitations may become less pronounced.

    The future of AI may very well involve systems that are capable of the same level of mathematical reasoning (or better) as "human experts."