This is a reference to "Can programming be liberated from the von Neumann style?", Backus's 1977 Turing Award lecture.
It argues for functional programming, making the point that the imperative style is more common for efficiency reasons, as the programming model is close to the computer architecture.
It aims to be a general thought framework, inviting the reader to take a step back from some notions that have been (hastily?) accepted in the programming world.
The same analogy applies to Prolog (and logic programming languages in general), which has been strongly influenced by the resolution algorithm. In practice that means that if you write a non-trivial program and performance isn't right, you'll need to understand the execution model and adapt to it, mainly with the pruning operator (the cut, !). So while the promise is to "declare" values and not think about the implementation details, you're railroaded into thinking in a very specific way.
I personally found it frustrating that good solutions end up essentially unworkable because of this, in comparison with either the imperative or the functional paradigm, which are significantly more flexible.
As a result, Prolog-style programming feels limited to the small problems for which it is readily a good fit, to be integrated into a general program using a general-purpose language.
I may be wrong on this, but of the 50 people that learned Prolog around the same time as me, none kept up with it. Meanwhile, other niche languages like OCaml, Haskell and Scheme have had good success.
Rethinking the language foundation could remove these barriers to give the language a broader user base.
It might be a reference to that 1977 paper in name, but unlike that Backus paper, which uses math to make its point, this reads like a shallow ad for the Curry language. The central (and only) point is merely an example of rewriting a Prolog predicate for appending lists into Curry, without even a claim of generality. The rewritten Curry functions, however, trivially fix determinacy and assign variables to input and output roles, when the entire point of logic variables in Prolog is that they work either way: they can be bound to a value upon entering a predicate call, or get bound nondeterministically to as many values as needed until the call terminates successfully (and to even more on backtracking over failed subsequent goals). The paper also glosses over the concept of unification here. I sure hope the referenced detail papers come with more substance.
The paper title doesn't even make sense? So he wants to "liberate" Logic Programming from predicates? Predicate Logic is First-Order Logic... i.e. what YeGoblynQueenne says in another comment.
From a mere practical PoV, who is the author even addressing? Prolog, from the get-go, was introduced with NLP, planning problems, and similar large combinatorial search spaces in mind, and is used for the convenience it brings to these applications. That FP could theoretically be more broadly used is completely beside the point; Prolog's focus is its strength.
The way you write imperative programs in Prolog by exploiting the search order, using cuts, etc. seems clever when you see it in school and do a few assignments for a comparative programming languages class (the only 3 credit CS course I took) but it is painfully awkward if you have to do very much of it.
It isn't. I do most of my programming in Prolog, I write oodles of it daily, and it's not a problem. You learn to think that way easily.
The argument is basically that Prolog is not 100% declarative and that if we jump through a few hoops, and translate it all to functional notation, we can make it "more declarative". But let's instead compare the incomplete declarativeness of Prolog to a fully-imperative, zero-declarative language like Python or C#. We'll find I believe that most programmers are perfectly fine programming completely non-declaratively and don't have any trouble writing very complex programs in it, and that "OMG my language is not purely declarative" is the least of their problems. I hear some old, wizened nerds even manage to program in C where you actually can drop to the hardware level and push bits around registers entirely by hand O.o
I moved to academia, after six years of working in the industry mainly with C# and SQL. It was a deliberate attempt to find a way to work with Prolog. I guess that's a bit immature of me but I fell in love with Prolog in the second year of my CS degree and I couldn't get over it so here I am.
You might want to look at the Icon programming language. I fell in love with it long ago, but I did not go the academic route to use it more, I just accepted that it wouldn't be a part of my work life. Later -much later- I found jq, with which I had more success in industry. Both are very much like logic programming languages in that they have pervasive (DFS) backtracking.
I hope you enjoy it! I sure did. If you enjoy it, you might also enjoy jq (https://github.com/jqlang/jq), which is also a sort of logic programming language (in that it has pervasive generators and backtracking). Icon has an Algol family syntax, while jq is more... Haskell-ish? in a way if you squint hard? Whereas Prolog is a pile of rules. These differences are very interesting. Verse is another language in this vein, and it seems very interesting as well.
I agree that not being fully declarative is okay. But lots of pure functions are written in languages like Python and C#. “Fully-imperative, zero-declarative” seems like a bit of an exaggeration?
(I’m generally of the opinion that both laziness and backtracking are bad defaults for general-purpose programming, but they’re handy when you need them.)
See sibling comment to dragonwriter- maybe I'm exaggerating a bit, but I think it's only a bit. It's been a while since I've programmed a lot in anything but Prolog so maybe I should be a little more circumspect.
Serious question: how do you deal with typos in functors? And are your techniques specific to the implementation of Prolog you use?
Recently I had a maddening experience chasing this down:
Because the typo was in a functor (not a predicate or a singleton variable) there was no IDE or language support; Prolog assumed that was what I wanted and reported the wrong answer.
of course the snippet I showed was part of a larger example.
In other languages it would've taken me 5 minutes to bisect the program or debug and find the error but it took me 3-4 hours.
I ended up needing to write a lot of error correcting code, basically a half-assed type system, and that code ended up being more substantial than the actual program logic.
Is this common? Am I "doing it wrong"?
Right now this seems to have all the downsides of programming exclusively with "magic strings", and I haven't been able to find any cure for it or even seen this problem discussed elsewhere.
*Edit:*
I even rewrote it for SICStus and downloaded their IDE and taught myself Eclipse just to use their IDE plugin, and found that setting breakpoints didn't help the problem, because naturally, since the functor is in the predicate signature, the predicate is never stepped into in the first place!
I could linearize the arguments and include them in the body but this destroys the indexing and can put me into "defaulty representation" territory.
so so so so so so so so so much this. I have tried to make prolog a part of various systems on and off for two decades and the ergonomics and basic practical shit like this is why it never works.
the answer is always: do a lot of manual stuff that the language should do for you. I can, but I can't get a team to.
For the record I never have problems like that and I'm sure I'm not special. Well, not in that way. This all comes under the heading of "learn what works". You have to do that with any language.
Edit: as a slightly less flippant answer (sorry) Prolog doesn't "do a lot of manual stuff that the language should do for you" because the Prolog community doesn't think the language should do those things for you and I agree. Take for instance inheritance and composition, like in object orientation. There's no reason Prolog should do that for you. If it did, it would shoehorn you into a certain programming paradigm that can feel like a straitjacket when you don't need it, but you absolutely need to use it because that's what the "language does for you". Prolog instead gives you the tools to do all the things you need, when you need them. It's more work, for sure, but it's also more freedom. I spent most of my career in the industry working with very rigidly OOP languages and that's one reason why I escaped into academia, where I can program in Prolog (see what I did there?) all day without having to declare a class property if I don't want to. And if I really wanted to, I could go all the way and write something like Logtalk:
Another example of something that Prolog doesn't do for you, and that you don't always need, but can always do yourself if you need it, is typing; like I point out in the sibling comment. Why should Prolog do that for you? Are we going to have the whole argument about strongly typed vs. weakly typed languages all over again? I think not. If you want types in Prolog, you can roll your own. Now try to write, say, C# code without types, when you really don't need the bother.
Why is that? Ultimately someone has to work on the tooling for any project. If you're working in a group in particular, you can have different people working on different parts of the project. If you need a type system, you can have one programmer write it and maintain it, or be the product owner whatever the terminology is these days, then everyone else can use the type system and not have to worry about it any more.
To be clear, there are only some tools, or let's say abstractions, that you need to implement on your own. The good thing about Prolog is that it has a very mature ecosystem of libraries and packages that you can pick off the shelf and work with immediately. Although that is mostly true for SWI-Prolog, where most of those libraries are found. You normally have to write some translation layer to port them over to a different Prolog- and that is totally a PITA. But there's no reason not to use SWI-Prolog, which is free as in speech and as in beer.
For example, SWI-Prolog has http libraries that have everything you need to set up a web app, using Prolog itself as the database and the back-end language both- it's like a LAMP stack except the stack is LP: Prolog running on Linux:
I reckon the only reason this is not widely used in the industry is because web devs, like everyone else, don't have the time and energy to learn a whole new language, especially one so scary as Prolog. Quoting from the link above:
Our final problem is the lack of masses of Prolog programmers one can easily hire.
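To give a flavour of those http libraries, here is a minimal "hello world" sketch based on the standard library(http) pattern (the handler path and port are my own choices, not taken from the linked tutorial):

    :- use_module(library(http/thread_httpd)).
    :- use_module(library(http/http_dispatch)).

    % Register a handler for /hello and start a server on the given port.
    :- http_handler(root(hello), say_hello, []).

    server(Port) :-
        http_server(http_dispatch, [port(Port)]).

    say_hello(_Request) :-
        format('Content-type: text/plain~n~n'),
        format('Hello from Prolog!~n').

    % ?- server(8080).   then visit http://localhost:8080/hello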
>> Right now this seems to have all the downsides of programming exclusively with "magic strings", and I haven't been able to find any cure for it or even seen this problem discussed elsewhere.
The cure is to not try to program with "magic strings". You don't need to, and if you really want to, then you should try to understand what exactly it is that you're doing, and do it right.
Specifically, what you call "magic strings" are what we call "functions" in First Order Logic, and that Prolog calls compound terms, like carring_rock(robot(R)) which is in fact two compound terms, nested. Prolog lets you nest compound terms infinitely and if you really want to hurt yourself, one great way to do it is to nest terms everywhere.
The alternative is to understand that when you use compound terms as arguments to predicates (or other terms) you are really _typing_ those arguments. Above, "carring_rock(_)" is the type of the second argument of result/3, and "robot(_)" is the type of the single argument of carring_rock/1. The catch is, of course, that Prolog is not a typed language and it doesn't care if you want to hurt yourself. So if you need to have types like that, then you should write some code to explicitly handle types and do some type checking. For instance, right out the top of my head:
    result_type(T):-
        T =.. [carring_rock,R]   % decompose the term to check its functor
        ,robot_type(R)
        ,!.
    result_type(T):-
        throw('Unknown result type':T).

    robot_type(T):-
        T =.. [robot,_R]         % ... further typing of _R would go here
        ,!.
    robot_type(T):-
        throw('Unknown robot type':T).
Note that this is not the right way to throw errors but I can't be bothered to do it properly right now.
A simpler thing I'd advise, but that's going into style territory, is to keep variable names as short as possible. One reason it's hard to spot the mistake in the code above is that the source code is all cluttered with long variable names and the nesting of compound terms makes it even more cluttered. An IDE with good syntax highlighting can also help. Perversely, I find that there are many people who code in Prolog either without syntax highlighting at all or in IDEs that are not aware of common results of typos, like singleton variables. The SWI-Prolog IDE is good for this and Sweep for Emacs has a good reputation also (although I haven't tried it):
Edit: it just occurred to me that the project I'm working on currently, in my post-doc, involves quite a bit of typing using compound terms as arguments, like you do above. I've opted for a program synthesis approach where the predicates I need with typing are automatically generated from a specification where I define the types of the predicates' arguments. Doing the same thing by hand is probably my number two recommendation of how not to code in Prolog. Number one is "the dynamic database is evil", but does anyone listen to me? Never.
Edit 2: Covington et al's Coding Guidelines for Prolog make the same point:
5.13 Develop your own ad hoc run-time type and mode checking system.

Many problems during development (especially if the program is large and/or there are several developers involved) are caused by passing incorrect arguments. Even if the documentation is there to explain, for each predicate, which arguments are expected on entry and on successful exit, they can be, and all too often they are, overlooked or ignored. Moreover, when a "wrong" argument is passed, erratic behavior can manifest itself far from where the mistake was made (and of course, following Murphy's laws, at the most inconvenient time).

In order to significantly mitigate such problems, do take the time to write your own predicates for checking the legality of arguments on entry to and on exit from your procedures. In the production version, the goals you added for these checks can be compiled away using goal_expansion/2.
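For instance, a minimal sketch of that goal_expansion/2 trick (my own example, assuming SWI-Prolog; must_be_robot/1 stands in for one of the hand-rolled checkers described above, it is not a real library predicate):

    % With the flag set to true, calls to must_be_robot/1 in code compiled
    % after this hook is loaded are rewritten to 'true', so the checks cost
    % nothing in the production build.
    :- create_prolog_flag(production, true, [type(boolean)]).

    user:goal_expansion(must_be_robot(_), true) :-
        current_prolog_flag(production, true).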
>> I could linearize the arguments and include them in the body but this destroys the indexing and can put me into "defaulty representation" territory.
Sorry, I don't know what either of those are: "linearize the arguments" and "defaulty representation".
Prolog only has first-argument indexing normally, although some implementations let you change that and use your own indexing scheme. Is that what you did?
Thanks for clarifying. I didn't know those terms; they're probably Markus Triska's and SICStus' inventions. Nothing wrong with that. I couldn't find anything about linearized arguments in SICStus' pages though, they probably changed their docs since you've seen it.
The "defaulty representation" Markus Triska discusses is indeed something to be avoided, but if compound terms start to proliferate the chance of typos causing problems like the one you had increases. The solution is ad-hoc typing as recommended by Covington et al.
The balance I think, between a "defaulty representation" and an explosion of "magic strings" is to make sure there are only a handful of predicates that expect their arguments to be typed as compound terms and that those predicates are at the base of a hierarchy of calls that pass those arguments around. Those predicates are responsible for reasoning about those typed arguments and they raise errors if something is off. That way you know quickly when there's a mistake, and where it is.
Another thing to keep in mind is that it's easy to test your Prolog predicates in isolation and check that they do what you think they do. This is the best way to catch errors that have to do with argument structure. What I mean is that you can run each predicate on its own, without having to start from the top of your program. You go to your repl and make some queries to the predicate you want to test with different arguments and see how it behaves. That will help catch errors. Unit tests can also help, if you have the patience to write them.
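For instance, with the ad-hoc result_type/1 sketch from earlier (the terms here are made up just to show the workflow at the repl):

    ?- result_type(carring_rock(robot(r1))).
    true.

    ?- result_type(rock(r1)).
    % raises 'Unknown result type':rock(r1)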
To be honest, I'm not going to defend Prolog for making this kind of thing easy to hurt yourself with. I love Prolog but it's not friendly to newcomers, exactly because of things like that, which you only learn with experience. Even now, after using it for ... 14 years or so, it still finds ways to hurt me. You gotta be careful with it.
Apologies, it was on the tips and tricks page. I'll repost here:
Linearize Arguments

Ensures that each argument position is a variable that does not occur elsewhere in the clause head. As an example,

    foo(a, p(X1), X) :-
        body(a, X2, X2).

would become:

    foo(A, A1, X) :-
        A = a,
        A1 = p(X1),
        body(a, X2, X2).
Ah, thanks. There's nothing wrong with that except that it makes code more verbose. An alternative is flattening, where you remove the compound arguments from the heads of predicates by defining them as new predicates.
Let me see if I can apply flattening to the example above:
With flattening only compound terms are replaced, not constants so "a" remains unchanged. It's similar to an ad-hoc type system again, but the cool thing is that it can be done automatically. There's an algorithm for it known to the Inductive Logic Programming (ILP) community. Let me see if I can find that paper... Yep:
Flattening and Saturation: Two Representation Changes for Generalization, Céline Rouveirol, 1994.
The example is a bit unusual though because it only checks that A1 = p(X1) and doesn't do anything with X1, it even leaves it dangling as a singleton; same with X and X2. That's very unlikely for real-world code where you want to use unification as a mechanism to pass the state of the computation around your program. With flattening you want to have an extra argument in the new predicate that "returns" a value that you want to process further. I think maybe they meant to write it like this:
    foo(a, p(X1), X) :-
        body(a, X1, X).

and, linearized:

    foo(A, A1, X) :-
        A = a,
        A1 = p(X1),
        body(a, X1, X).
In which case the flattened version would be like:
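A sketch, following the flattening recipe described above (the auxiliary predicate name p_term/2 is my own invention; the constant a stays put because only compound terms are flattened):

    foo(a, P, X) :-
        p_term(P, X1),
        body(a, X1, X).

    p_term(p(X1), X1).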
So, I guess, we see that this is a known gotcha and the logic programming community has different ways to work around it. In the ILP community there's a different motivation (to make a language finite, and therefore decidable, by removing "functions").
>> I'm really enjoying the convington et al read, cool to see that O'Keefe is an author too!
Yeah. I haven't heard much from O'Keefe lately and I miss his advice. He was a regular contributor to the SWI-Prolog mailing list when I was doing my MSc in 2014. I think there was a bit of a problem with the SWI-Prolog community moving to Google and he hasn't returned since, even though the community is now on Discourse.
> Develop your own ad hoc run-time type and mode checking system.
"Any sufficiently complicated List or Prolog program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Haskell type inference and checking system."
I suppose you could hack a bug-ridden implementation of Prolog unification in Prolog but why?
Hindley-Milner type inference is unification over types and unification is built-in to Prolog. Functional programmers ignore this because their textbooks never refer to the original description of unification by Robinson. Wait who? Unification was probably invented by Damas, Hindley or Milner right? Or maybe Haskell Curry? Hey maybe it was John McCarthy?
Let me thank you also for sharing those resources and acknowledging this is a tricky problem for newcomers. I do love Prolog and I used to think I was a very careful coder with significant attention to detail, but Prolog is exposing what a clumsy buffoon I am if I don't have significant IDE support or "crash on typo" support.
I am particularly grateful because most of the books, as you know, were written a long time ago and tend to focus more on the beauty of the language and gloss over the software engineering of composing large, reliable programs. It is very rare to meet someone with significant experience in Prolog who can still remember what it was like to struggle with issues like that. Most of the criticism I read about Prolog is very superficial and distracts from the more pressing best practices, so I'm extremely grateful that you shared this paper and I am really surprised I've never encountered it before.
I am actually really shocked to hear that the advice is to roll your own type system, and it's really interesting. I wonder if that is also the case for other expressive languages such as Forth/PostScript, etc. Most of the languages I work with more often (Python/JS/TypeScript/Clojure/C# ... but PARTICULARLY Python) tend to view runtime assertions and type-checking as an *anti-pattern*.
So I am really pleased and surprised to hear that, in a language as expressive and eloquent as Prolog, at least some folks find writing that sort of code acceptable.
I realize this comment is getting long but the technique you mentioned of using goal_expansion/2 to compile away the assertions --
the dang thing about Prolog is the more you look into it, the more powerful you realize it is, but I'll be damned if it's not impossible to find these things out without someone telling you about them. Most languages seem to be possible to self-teach, but if I were to self-teach Prolog I never would've learned about the concept of monotonicity and I'd be using cut operators willy-nilly etc, lucky that I stumbled on Markus's blog/videos and lucky that I bumped into you!
Just like Python dislikes defensive runtime type checking, another interesting thing is that HEAVY use of metaprogramming tends to be viewed disfavorably in lisp, at least in Clojure! It's typically "don't write macros unless you have no other choice, because no one wants to read/learn your damn macros". I assumed it would be the same way in Prolog, but perhaps I'm wrong?
Anyway thanks again for the really patient response and for sharing your experience.
>> the dang thing about Prolog is the more you look into it, the more powerful you realize it is, but I'll be damned if it's not impossible to find these things out without someone telling you about them. Most languages seem to be possible to self-teach, but if I were to self-teach Prolog I never would've learned about the concept of monotonicity and I'd be using cut operators willy-nilly etc, lucky that I stumbled on Markus's blog/videos and lucky that I bumped into you!
I hear you. When I first learned Prolog, in the second year of my CS degree course, the first thing I did was to go to my university's library and gather up all the textbooks I could find about it. I literally came home bent over from the sheer weight of books. It still took me a very long time to learn the ins and outs of it. It doesn't help that Prolog has a small and probably shrinking community and many of the people with long backgrounds and deep knowledge of its practice are starting to retire from public fora; like O'Keefe.
There are only a few resources on the internet that can help you understand Prolog. Markus Triska's as you found out is one of the best (although I have a very different view of many things than he does, there's no doubt he is an expert in the language). When I was learning I got a lot of mileage out of a tutorial on the Amzi Prolog website:
The best thing you can do of course is to keep working with it as much as you can. Things eventually make sense and you learn what works and what doesn't with experience. That's not always easy to do of course. As I say in another comment I had to leave (my lucrative career in) the industry and enter academia (which pays peanuts) to find a place where I could really work as much as I wanted with Prolog.
>> I am actually really shocked to hear that the advice is to roll your own type system, and it's really interesting. I wonder if that is also the case for other expressive languages such as Forth/PostScript, etc. Most of the languages I work with more often (Python/JS/TypeScript/Clojure/C# ... but PARTICULARLY Python) tend to view runtime assertions and type-checking as an anti-pattern.
I don't want to sound even more like a zealot than I already do, but the thing is you can do all that in Prolog more easily than in other languages. In general the way I program in Prolog is that I create a language to describe my problem, and then solve my problem in that language. I don't mean that I necessarily design a new programming language and write a parser for it, although you can do that very easily in Prolog with Definite Clause Grammars (you get a decent parser and generator that way though not optimal). I mean that I make up a language to talk about the domain, and use it to describe my problems and their solutions. You do that too, in your solution/3 predicate above, where as far as I can tell you describe resources and actions of a robot etc. Again Prolog lets you do that easily: you can define the abstractions you need. That goes all the way up to the abstractions used in other programming paradigms, like functions or objects etc, and types.
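Since DCGs came up: here's what a tiny made-up domain grammar looks like in standard DCG notation (all names are invented for the example, this is not from the project above):

    command(move(Dir)) --> [move], direction(Dir).
    direction(north)   --> [north].
    direction(south)   --> [south].

    % ?- phrase(command(C), [move, north]).
    % C = move(north).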
>> I am particularly grateful because most of the books as you know were written a long time ago and tend to focus more on the beauty of the language and tend to gloss over the software engineering or composing large reliable problems.
Yeah, I tend to ignore those things. I love Prolog but I recognise it's not to everyone's taste. It's quite obvious. It's nice to hear you love it too :)
> But let's instead compare the incomplete declarativeness of Prolog to a fully-imperative, zero-declarative language like Python or C#.
There's no such thing as a "fully-imperative, zero-declarative language" -- at least not one as high-level as C# or Python -- because declarative/imperative are programming styles, which languages can make more or less natural but which can be used with all (well, higher-level than assembly) languages.
I think there are declarative elements in various high-level languages, e.g. Linq queries in C#, so I guess it is an exaggeration to say "zero-declarative", but in general the level of declarativeness is tiny compared to Prolog.
It sure is if you read enough "Functional Pearls" to think all you need for logic programming is some backtracking. Oh, and the cut. Because you can't control backtracking without the cut. Not if you don't understand what the backtracking is for in the first place! Mwahahaha.
Man, lately, I feel like this stuff has been following me around. I'd really like to deep-dive into logic programming and related paradigms. Just recently came across Answer Set Programming[0] (via Potassco's clingo[1]), and it has made me realize just how ignorant I am of the design space that's being explored here.
More personally, I recently spent enough time with first Scheme and then APL that the paradigms clicked for me, and the effect on my entire outlook on work was dramatic. For whatever reason, I feel like breaking down my ingrained technical paradigms has allowed me to integrate and strengthen my soft skills.
Plus, mind-expanding experiences are just plain fun. Looking for more of that juice!
I strongly recommend checking out the Souffle programming language. It's a dialect of Datalog that can output bulk CSV data that can be easily imported into other databases (like DuckDB or Excel etc). It creates an extremely intuitive framework for logic programming, i.e. you can visualize logic programming as each relation being a giant table of elements, the "or" operation being akin to SQL `union all`, the "and" operation being akin to SQL `join`, the "not" operation being akin to `outer join ... where joined is null`, etc.
I use ASP at work! I used it as the core of a powerful code generator: I modeled the type system I wanted to implement, some base operations and derivation rules, and had it synthesize implementations for every possible operator between every possible pair of types. I run clasp and it dumps out thousands of lines of C# implementing a simple symbolic matrix linear algebra library. It's one of the most beautiful things I've made, imo.
Picture yourself as the goal of a problem
With cut-driven search and definite clause
Somebody calls you, you answer quite "no"-ly
A girl with Kowalskified eyes...
There already is a pretty major effort around the prolog community to build everything as much as possible around pure, monotonic prolog, and to provide a means to support multiple search strategies depending on the best fit for the problem. CLP libraries are also pretty common and the go-to for representing algebraic expressions relationally and declaratively.
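For a taste of the CLP style mentioned above, a minimal example with library(clpfd) as shipped with SWI-Prolog (the particular constraints are made up): the constraints describe the solutions and label/1 searches for them.

    :- use_module(library(clpfd)).

    % ?- [X, Y] ins 0..10, X + Y #= 10, X #> Y, label([X, Y]).
    % X = 6, Y = 4 ;
    % X = 7, Y = 3 ;
    % ...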
I wouldn't say that the logic or relational way of describing effects is a bad thing either. By design it allows for multiple return values (foo/1, foo/2, ...) you can build higher level predicates that return multiple resources, which is pretty common for many programs. It makes concatenative (compositional) style programming really straightforward, especially for more complex interweaving, which also ends up being quite common. Many prolog implementations also support shift/reset, so that you can easily build things like conditions and restarts, algebraic effects, and/or debugging facilities on top. Prolog is also homoiconic in a unique way compared to lisp, and it's quite nice because the pattern matching is so powerful. Prolog really is one of the best languages I ever learned, I wish it was more popular. I think prolog implementations need a better C FFI interop and a nicer library ecosystem. Trealla has a good C FFI.
I think logic programming is the future, and a lot of these problems with prolog are fixable. If it's not going to be prolog, it'll probably be something like kanren and datalog within a lisp like scheme or clojure(script).
Out of my depth here, but I do have 5 Prolog books I can't seem to let go of. I used to dabble. A lot. Out of all my numerous prog-lang infatuations, only 3 have stuck: Erlang, Prolog and Picolisp. At most, 2 degrees of separation between those 3. No one mentioned Picolisp yet, so I have to, because it has a Prolog embedded in it. This seems like an appropriate place to do it.
there is a really interesting space between queries and Prolog, which includes mundane junk like encoding and rendering and data formatting changes, that benefits in evaluation from having less power but maintains the lovely expressiveness and analysis that we get from 'real' logic programming.
while I was trying to track down the license for it[1], I found https://git.ps.informatik.uni-kiel.de/curry/curry2go (also BSD-3) which says "A compiler and run-time system to compile and run Curry programs as Go programs"
There's an insightful critique of the paper on Reddit: https://www.reddit.com/r/ProgrammingLanguages/comments/1g1su...
...agree that it's weird the paper doesn't mention constraint logic programming, but it's perhaps pointing at it implicitly by saying "Replacing backtracking by complete search strategies"
Abstract. Logic programming has a long history. The representative of logic programming in practice, the language Prolog, has been introduced more than 50 years ago. The main features of Prolog are still present today: a Prolog program is a set of predicate definitions executed by resolution steps with a backtracking search strategy. The use of backtracking was justified by efficiency reasons when Prolog was invented. However, its incompleteness destroys the elegant connection of logic programming and the underlying Horn clause logic and causes difficulties to teach logic programming. Moreover, the restriction to predicates hinders an adequate modeling of real world problems, which are often functions from input to output data, and leads to unnecessarily inefficient executions. In this paper we show a way to overcome these problems. By transforming predicates and goals into functions and nested expressions, one can evaluate them with a demand-driven strategy which might reduce the number of computation steps and avoid infinite search spaces. Replacing backtracking by complete search strategies with new implementation techniques closes the gap between the theory and practice of logic programming. In this way, we can keep the ideas of logic programming in future programming systems.
I didn't read past the abstract, but it sounds like they are just transforming logic-based programs into function-based programs. But: if I wanted functional programming, I wouldn't be writing in Prolog.
What would be interesting, would be to replace depth-first search while remaining in the world of predicates and Horn clauses.
Naïvely, writing a single logic relation of n parameters is about equivalent to writing 2^n functions (just decide for each parameter whether you give it or not as input). So there clearly is value there.
I say naïvely because on one hand you might not need all versions of the function, but on the other one you can also provide partial values, so it’s not either input or output.
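The standard append/3 relation is the usual illustration of this: one definition, several "modes", no extra code (plain Prolog, nothing assumed beyond the standard library):

    ?- append([1,2], [3], Xs).      % both inputs given
    Xs = [1, 2, 3].

    ?- append(Xs, [3], [1,2,3]).    % run "backwards"
    Xs = [1, 2] ;
    false.

    ?- append(Xs, Ys, [1,2,3]).     % all splits, on backtracking
    Xs = [], Ys = [1, 2, 3] ;
    Xs = [1], Ys = [2, 3] ;
    ...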
Depth-first backtracking is trivial to implement, and it's trivial to blend into the language. Breadth-first backtracking is much harder to implement, and I'm not sure that it's trivial to build into a language (though this may be a lack of imagination).
Icon had both, but depth-first backtracking was the default and trivial to use, while breadth-first backtracking required using "co-expressions" (co-routines), though at least Icon had a trivial syntax for causing procedure argument expressions to be made into co-expressions. But Icon's approach does not make breadth-first backtracking be a first-class language feature like depth-first backtracking, and this is where my imagination gets stuck. To be fair, I've not truly thought much about this problem.
Hrm. I guess the converse applies if nodes can have infinite children. That said, even if your tree is infinitely wide and deep, we're only dealing with countable children, right? Thus a complete traversal has to exist, right?
For example, each node has a unique path to the root, so write <n1, n2, ..., nk> where each ni is the sibling ordinal of the node at depth i in that path, i.e. it's the ni-th sibling of the n(i-1)st node. Raising the ith prime to the power ni and taking the product gives each node a unique integer label. Traverse nodes in label order and voilà?
However, that all assumes we know the tree beforehand, which doesn't make sense for generic call trees. Do we just smash headfirst into Rice on this when trying to traverse in complete generality?
No, breadth first search is still complete given an infinite branching factor (i.e. a node with infinite children). "Completeness" is not about finishing in finite time, it also applies to completing in infinite time.
Breadth first search would visit every node breadth first, so given infinite time, the solution would eventually be visited.
Meanwhile, say a branch had a cycle in it, even given infinite time, a naive depth first search would be trapped there, and the solution would never be found.
Suppose you have a node with two children A and B, each of which has infinitely many children. If you performed an ordinary BFS, you could get trapped in A's children forever, before ever reaching any of B's children.
Or, suppose that a node has infinitely many children, but the first child has its own child. A BFS would get stuck going through all the first-level children and never reach the second-level child.
A BFS-like approach could work for completeness, but you'd have to put lower-level children on the same footing as newly-discovered higher-level children. E.g., by breaking up each list of children into additional nodes so that it has branching factor 2 (and possibly infinite depth).
Countable infinity does not work like that: two countable infinities are not more than one countable infinity. I think it falls into the "not even wrong" category of statements.
Yes, if you put two (or three, or countably many) countable sets together, you obtain a set that is also countable. The problem is, we want to explicitly describe a bijection between the combined set and the natural numbers, so that each element is visited at some time. Constructing such a bijection between the natural numbers and a countably-infinite tree is perfectly possible, but it's less trivial than just DFS or BFS.
If we're throwing around Wikipedia articles, I'd suggest a look at https://en.wikipedia.org/wiki/Order_type. Even if your set is countable, it's possible to iterate through its elements so that some are never reached, not after any length of time.
For instance, suppose I say, "I'm going to search through all positive odd numbers in order, then I'm going to search through all positive even numbers in order." (This has order type ω⋅2.) Then I'll never ever reach the number 2, since I'll be counting through odd numbers forever.
That's why it's important to order the elements in your search strategy so that each one is reached in a finite time. (This corresponds to having order type ω, the order type of the natural numbers.)
> "Completeness" is not about finishing in finite time, it also applies to completing in infinite time.
Can you point to a book or article where the definition of completeness allows infinite time? Every time I have encountered it, it is defined as finding a solution if there is one in finite time.
> No breadth first search is still complete given an infinite branching factor (i.e. a node with infinite children).
In my understanding, DFS is complete for finite depth tree and BFS is complete for finite branching trees, but neither is complete for infinitely branching infinitely deep trees.
You would need an algorithm that iteratively deepens while exploring more children to be complete for the infinite x infinite trees. This is possible, but it is a little tricky to explain.
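One way to picture that dovetailing (a sketch, not tied to any particular tree representation): enumerate (depth, child-index) pairs by their sum, so every pair comes up after finitely many steps. In Prolog terms, relying on SWI-Prolog's between/3 accepting inf as an upper bound:

    depth_child(D, C) :-
        between(0, inf, S),      % S = D + C, tried in increasing order
        between(0, S, D),
        C is S - D.

    % ?- depth_child(D, C).
    % D = 0, C = 0 ;  D = 0, C = 1 ;  D = 1, C = 0 ;  D = 0, C = 2 ;  ...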
For a proof that BFS is not complete if it must find any particular node in finite time: Imagine there is a tree starting with node A that has children B_n for all n and each B_n has a single child C_n. BFS searching for C_1 would have to explore all of B_n before it could find it so it would take infinite time before BFS would find C_1.
In practice, though, with BFS you'd run out of memory instead of never finding a solution.
Also, there shouldn't be many situations where you'd be able to produce infinite branches in a prolog program. Recursions must have a base case, just like in any other language.
This has to do with the ordering of search: searching a proof tree (an SLD tree, in SLD-Resolution) with DFS, as in Prolog, can get stuck when there are cycles in the tree. That's especially the case with left-recursion. The article gives an example of a left-recursive program that loops if you execute it with Prolog, but note that it doesn't loop if you change the order of the clauses.
The version of the program in the article, with the recursive clause listed first, loops (I mean it enters an infinite recursion). This version, with the clauses swapped, doesn't:
    last([E],E).
    last([_H|T],E) :- last(T,E).

    ?- last(Ls,3).
    Ls = [3] ;
    Ls = [_,3] ;
    Ls = [_,_,3] ;
    Ls = [_,_,_,3] ;
    Ls = [_,_,_,_,3] ;
    Ls = [_,_,_,_,_,3] .
    % And so on forever
To save you some squinting, that's the same program with the base case moved before the inductive case, so that execution "hits" the base case when it can terminate. That's half of what the article is kvetching about: that in Prolog you have to take into account the execution strategy of logic programs and can't just reason about the logical consequences of a program; you also have to think of the imperative meaning of the program's structure. It's an old complaint about Prolog, as old as Prolog itself.
I think what you mean is that he adds an argument that counts the number of times a goal is resolved, thus limiting the depth of resolution? That works, but you need to give a magic number as a resolution depth limit, and if the number is too small then your program fails to find a proof that it normally should be able to find. It's not a perfect solution.
Yes, well not so much a constant value. He added an unbound variable and it was enough to alter the search. Indeed it's still more of a trick, but it got me interested in whether there are other, more fundamental ideas beyond that.
That sounds like iterative deepening without a lower bound then. I guess that's possible. Maybe if you had a link to Markus' page I could have a look.
There are techniques to constrain the search space for _programs_ rather than proofs, that I know from Inductive Logic Programming, like Bottom Clause construction in Inverse Entailment, or the total ordering of the Herbrand Base in Meta-Interpretive Learning (MIL). It would be interesting to consider applying them to constrain the space of proofs in ordinary logic programming.
Refs for the above techniques are here but they're a bit difficult to read if you don't have a good background in ILP:
> "Maybe if you had a link to Markus' page I could have a look."
e.g. here: https://www.metalevel.at/tist/ solving the Water Jugs problem (search on the page for "We use iterative deepening to find a shortest solution") finding a list of moves emptying and filling jugs, and using `length(Ms, _)` to find shorter list of moves first.
or here: https://www.metalevel.at/prolog/puzzles under "Wolf and Goat" he writes "You can use Prolog's built-in search strategy to search for a sequence of admissible state transitions that let you reach the desired target state. Use iterative deepening to find a shortest solution. In Prolog, you can easily obtain iterative deepening via length/2, which creates lists of increasing length on backtracking."
Thanks! Yes, it's iterative deepening without a lower bound. The trick is that iterative deepening is used to order the space of proofs so that the shortest proof (path through an SLD-tree) is found first. I use that approach often. The cool thing with it is that it doesn't stop at the first solution and will keep generating new solutions, ordered by length of the proof, as long as there are any. Sometimes you really want to know all solutions.
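For concreteness, a minimal sketch of the length/2 trick (plan/2 here is a hypothetical predicate relating a start state to a list of moves; it stands in for whatever problem you're actually solving):

    % length(Ms, _) enumerates lists of length 0, 1, 2, ... on backtracking,
    % so the shortest Ms for which plan(Start, Ms) holds is found first.
    shortest_plan(Start, Ms) :-
        length(Ms, _),
        plan(Start, Ms).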
There is a bit of a problem, in that if there is no solution the lack of a lower bound will cause the search to go on forever, or until the search space is exhausted- and you don't want that. If you use a lower bound, on the other hand, you may be cutting the search just short of finding the solution. It's another trade-off.
Yeah, it's confusing. The article is referring to the incompleteness of Prolog implemented using Depth First Search. That's what the author means by "backtracking". I know this because I know "backtracking" is used in the logic programming community to stand for DFS, but if you don't know the jargon you'd be right to be confused. You can kind of, er, glean, that meaning in the article if you see how they refer to a "fixed" search strategy, and also notice that "backtracking" is not normally a search strategy since it can't search on its own. "Backtracking" is really "DFS with backtracking".
The article is pointing out that Prolog with backtracking DFS is incomplete with respect to the completeness of SLD-Resolution. To clarify, SLD-Resolution is complete for refutation, or with subsumption. Prolog is an implementation of SLD-Resolution using DFS with backtracking. DFS is incomplete in the sense that it gets stuck in infinite loops when an SLD tree (the structure searched by DFS in Prolog) has cycles, especially left-recursive cycles. The article gives an example of a program that loops forever when executed with DFS with backtracking, in ordinary Prolog.
SLD-Resolution's completeness does not violate the Church-Turing thesis, so it's semi-decidable: SLD-trees may have infinite branches. To be honest I don't know about the equivalence with Horn-SAT, but Resolution, restricted to definite clauses, i.e. SLD-Resolution, is complete (by refutation and subsumption, as I say above, and respecting some structural constraints to do with the sharing of variables in heads and bodies of clauses). We got several different proofs of its completeness so I think we can trust it's true.
Edit: where does this knowledge about Horn-Sat come from? Do you have references? Gimme gimme gimme.
Is this about the problem that Prolog tends to ping-pong infinitely between leaves? I wrote a fully declarative solitaire game solver and remember this being a big issue, forcing me to record past states of the backtracking and basically exclude them with another predicate. This is obviously slow. I thought, why not at least have a plugin to avoid trivial cases where backtracking gets stuck switching between A and B. Or a plugin for a stochastic traversal of the solution tree.
There are. Tabling (available in most mature implementations) helps when recalculation of the same states is a problem. Meanwhile, custom search strategy is always an option to implement directly in Prolog. You'll see this in many Advent of Code solutions in Prolog when it is applied to path finding puzzles, in which depth first search is rarely a workable solution.
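A minimal sketch of what tabling buys you (SWI-Prolog's table/1 directive; the toy graph is made up):

    :- table path/2.

    edge(a, b).
    edge(b, c).
    edge(c, a).                  % a cycle

    path(X, Y) :- edge(X, Y).
    path(X, Y) :- path(X, Z), edge(Z, Y).

    % ?- path(a, X).
    % With the table/1 directive this terminates after X = b ; X = c ; X = a.
    % Without it, the cycle keeps the query producing duplicate answers forever.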
Reminded me of the “rules and schemes” concept I was thinking of about 10 years ago that would separate “pure logical” rules from the stuff it takes to turn them into a program that runs by either forward chaining (production rules) or backwards chaining (like prolog). I got so far as writing a macro compiler that would let you write a base set of rules and rewrite them for different purposes such as transforming document A into document B or going in the opposite direction.
I like how they let you write functions to control the search order because boy that is essential.
I'm not an expert, and I've only just skimmed the paper, but it seems to me that this is a kind of partial evaluation for Prolog, eh? (Perhaps "partial resolution" is more technically correct?)
I think some Prolog systems do something like this already as an optimization, but I could be totally off on that.
[Note I'm sick and tired; literally. I may not be firing on all cylinders in the following.]
The article is fudging things with its use of "backtracking" as a stand-in for backtracking Depth First Search (DFS) - the latter is the search strategy that is "fixed" in Prolog. And it's not really fixed. Prolog programs can also be executed by "tabling", a.k.a. SLG-Resolution, which basically replaces DFS with Breadth-First Search modulo memoization. Tabling avoids an important source of "incompleteness" in Prolog, that of non-terminating left-recursions.
To clarify, that is what makes Prolog "incomplete": that executing Prolog programs by DFS makes Prolog loop infinitely when encountering some left-recursions. The article gives the example of a last/2 predicate:
    last([_H|T],E) :- last(T,E).
    last([E],E).
This does indeed loop. But this one doesn't:
    last([E],E).
    last([_H|T],E) :- last(T,E).

    ?- last(Ls,3).
    Ls = [3] ;
    Ls = [_,3] ;
    Ls = [_,_,3] ;
    Ls = [_,_,_,3] ;
    Ls = [_,_,_,_,3] ;
    Ls = [_,_,_,_,_,3] .
And that's what the article is pointing out with the allusion to "a higher, declarative programming style which frees the programmer from thinking about low-level control details". With such a "higher declarative programming style" the programmer does not have to think about program structure or execution strategy and can write whatever, however, and still get results!
The problem with that argument, which is as old as Prolog and possibly even older than that, is that it's an argument from taste, or, indeed, "style", to use the article's terminology. More specifically, for every non-terminating Prolog program that can be written, we can most likely write a Prolog program that does terminate, and that is (success-set) equivalent to the non-terminating program, by keeping in mind the structure of the search space for proofs traversed by ordinary Prolog with DFS, or tabling. And possibly with a few cuts here and there (oh, the humanity!). The article is simply arguing that it is "more declarative" to not have to do that. But, why is "more declarative" better? It's style all the way down.
As to the second axis of the argument, the horror of predicates that do not neatly map to "many" real world problems that are better represented as functions, asking whether we can liberate logic programming from predicates is like asking whether we can liberate the First Order Predicate Calculus from predicates, or, equivalently, make ice cream without the ice or the cream. Sure we can. The question is again: why? I don't see a clear justification for that. In particular, setting implementation details aside (as the article does on this point) SLD-Resolution is already sound and refutation-complete, or complete with subsumption. That is the best that can ever be achieved, and the article doesn't seem to claim anything else (or the ghosts of Alonzo Church and Alan Turing would be very, very sad). So this, too, seems to be a matter of taste: functions are more stylish than predicates. M'kay.
In fact, if I may blaspheme a little, this is what Church did with his Lambda calculus: he turned everything into typed functions to extend First Order Logic (FOL) into a higher order. And in the process completely destroyed the simple elegance of FOL. Why?
Incidentally, SQL's recursive queries completely solve the cycle problem by essentially keeping track of the results as a database, which then allows it to prune search results that have already been seen. That's what you get when using UNION in a recursive query, but don't use UNION ALL, as that turns off that pruning effect. This is elegant, but not online.
That's the problem with breadth-first search though: how to make it online?
Conversely, that's the advantage of DFS: it's online because the only state it needs is a search path, and the search path is O(log N), which is small enough to consider online.
My SQL is rusty after years of misuse, but what you describe is similar to tabled Prolog. "Tabling" or SLG-Resolution, basically uses memoization to avoid having to re-derive parts of a proof tree that have already been traversed (that's useful because in a proof tree there are often many identical sub-trees under different branches). It also switches the execution strategy from DFS to BFS and delays the execution of goals until they can be proved. That way it avoids infinite left-recursions.
But there's no way to always ensure termination unless one restricts expressivity. Right? Halting problem and all that. Recursion, like iteration, must always risk going infinite, else it is not complete.
The tradeoff between DFS and BFS is like you say and as the article above points out that's the reason DFS was chosen as the original execution strategy for Prolog. Tabling on the other hand can be very memory-hungry (in my applications I often run out of tabling space on my 64GB RAM laptop). In the end, with languages like Prolog, it's the old joke: "sound, complete, efficient- choose two".
Thanks for confirming this. With your reply in mind I think there is no reasonable way to make BFS the default for a logic language, and the programmer will just have to choose between BFS and DFS.
In other words, the answer to TFA's title question is, I think, "no" as to backtracking/DFS.
To be honest I'd like it to be possible to completely do away with graph search in Resolution-based theorem proving. Resolution itself assumes no search, it's only an inference rule that removes contradictions from a theory. The original paper on Resolution proposes various ways to implement it but they all come down to search of some sort. I guess that's all we know how to do in AI: search :/
    WITH RECURSIVE transitive_closure AS (
        SELECT parent, child FROM things
        UNION
        SELECT tc.parent, t.child
        FROM things t
        JOIN transitive_closure tc ON tc.child = t.parent
    )
    SELECT * FROM transitive_closure;
where `transitive_closure` is a table that gets all the results of the computation. The engine will run the second part of the query repeatedly until no new rows are added to the `transitive_closure` table. (If you change `UNION` to `UNION ALL` and there are circular paths then this will not terminate.)
Sounds a lot like whatever "tabling" is in Prolog.
Likely. It depends on how the transitive_closure results are computed. In tabling it's still by resolution so you can still get stuck in infinite loops, e.g. on infinite right-recursions. I think maybe that's more similar to UNION ALL?
I should probably read a bit about this again. I rarely used recursive queries in SQL when I worked with it, not least because a couple of times I did, I got into trouble because they went haywire :)
> In tabling it's still by resolution so you can still get stuck in infinite loops, e.g. on infinite right-recursions. I think maybe that's more similar to UNION ALL?
Must be. With UNION you can't end up with an infinite loop unless you have infinite data (or you're implementing a table-valued Ackermann function and so you have... not infinite results but for all practical intents, yeah).
I was quite hooked to Prolog in a previous life. Then the limiting factor was the tooling, for really practical use.
Could you tell a bit about your Prolog environment?
https://www.swi-prolog.org/PceEmacs.html
I suppose it is a bit spartan, but it has a ton of functionality that I find indispensable [1]. For example, when I place my cursor on a variable in the editor it highlights all the variables that unify with it in the same clause, and it will highlight singleton variables in a different colour so you can catch errors caused by typos easily. It is also aware of things like imported predicates, undefined or dynamic predicates, multi-file predicates etc, and will highlight them accordingly.

It has some (limited) auto-complete and code-expansion etc. and a status line that gives you very good information about the kind of term you have your cursor on: where it's defined if it's a predicate, its name and arity and how many clauses etc, basically much of the information you can get from querying the program database with various program inspection built-ins.

Some of my colleagues use VS Code to write Prolog instead and I keep shaking my head and grumbling about the errors they keep making that they wouldn't if they used the SWI-Prolog IDE instead. Kids, these days. In my youth, we wrote all our Prolog on Notepad! And compiled on DOS!
(nope. never)
SWI also has a graphical debugger, which however I never use:
https://www.swi-prolog.org/pldoc/doc/_SWI_/xpce/prolog/lib/g...
I know some people swear by it but I prefer the textual debugger. Again this one looks a little spartan :)
More goodies in SWI: cross-referencer:
https://www.swi-prolog.org/gxref.md
And profiler:
And lots and tons of facilities for debugging and error handling, unit testing, all sorts of libraries, a package manager, etc.
[1] You can customise the IDE colours too. There's a dark theme:
https://www.swi-prolog.org/pldoc/man?section=theme
There's some screenshots here:
https://swi-prolog.discourse.group/t/questions-about-ide-the...
I did an MSc in data science first, then started a PhD to study Inductive Logic Programming (ILP), which is basically machine learning × Prolog (although there's also ASP ILP these days). I got my PhD last summer and I'm now doing a post-doc on a robotics project with Meta-Interpretive Learning (MIL), a recent form of ILP. Here's my latest publication:
https://github.com/stassa/ijclr_2024_experiments
Which is still a bit proof-of-concept. We're still at the very early stages of practical applications of MIL and so there's a lot of foundation work to do. Bliss :)
Thanks for the pointer :)
Oh, and yesterday's HN thread about Rama is very interesting as well: https://news.ycombinator.com/item?id=41833629
thanks for your help
https://eshelyaron.com/sweep.html
When I grow up, I'll give it a try :)
(I’m generally of the opinion that both laziness and backtracking are bad defaults for general-purpose programming, but they’re handy when you need them.)
...
...
...
Because the typo was in a functor (not a predicate or a singleton variable) there was no IDE or language support: Prolog assumed that was what I wanted and reported the wrong answer. Of course the snippet I showed was part of a larger example. In other languages it would've taken me 5 minutes to bisect the program or debug and find the error, but it took me 3-4 hours. I ended up needing to write a lot of error-correcting code, basically a half-assed type system, and that code ended up being more substantial than the actual program logic. Is this common? Am I "doing it wrong"? Right now this seems to have all the downsides of programming exclusively with "magic strings", and I haven't been able to find any cure for it or even seen this problem discussed elsewhere.
*Edit:*
I even rewrote it for SICStus, downloaded their IDE and taught myself Eclipse just to use their IDE plugin, and found that setting breakpoints didn't help the problem, because naturally, since the functor is in the predicate signature, the predicate is never stepped into in the first place!
I could linearize the arguments and include them in the body but this destroys the indexing and can put me into "defaulty representation" territory.
the answer is always: do a lot of manual stuff that the language should do for you. I can, but I can't get a team to.
For the record I never have problems like that and I'm sure I'm not special. Well, not in that way. This all comes under the heading of "learn what works". You have to do that with any language.
Edit: as a slightly less flippant answer (sorry) Prolog doesn't "do a lot of manual stuff that the language should do for you" because the Prolog community doesn't think the language should do those things for you and I agree. Take for instance inheritance and composition, like in object orientation. There's no reason Prolog should do that for you. If it did, it would shoehorn you into a certain programming paradigm that can feel like a straitjacket when you don't need it, but you absolutely need to use it because that's what the "language does for you". Prolog instead gives you the tools to do all the things you need, when you need them. It's more work, for sure, but it's also more freedom. I spent most of my career in the industry working with very rigidly OOP languages and that's one reason why I escaped into academia, where I can program in Prolog (see what I did there?) all day without having to declare a class property if I don't want to. And if I really wanted to, I could go all the way and write something like Logtalk:
https://logtalk.org/
Another example of something that Prolog doesn't do for you, and that you don't always need, but can always do yourself if you need it, is typing; like I point out in the sibling comment. Why should Prolog do that for you? Are we going to have the whole argument about strongly typed vs. weakly typed languages all over again? I think not. If you want types in Prolog, you can roll your own. Now try to write, say, C# code without types, when you really don't need the bother.
To be clear, there are only some tools, or let's say abstractions, that you need to implement on your own. The good thing about Prolog is that it has a very mature ecosystem of libraries and packages that you can pick off the shelf and work with immediately. Although that is mostly true for SWI-Prolog, where most of those libraries are found. You normally have to write some translation layer to port them over to a different Prolog, and that is totally a PITA. But there's no reason not to use SWI-Prolog, which is free as in speech and as in beer.
For example, SWI-Prolog has http libraries that have everything you need to set up a web app, using Prolog itself as the database and the back-end language both- it's like a LAMP stack except the stack is LP: Prolog running on Linux:
https://us.swi-prolog.org/FAQ/PrologLAMP.md
I reckon the only reason this is not widely used in the industry is because web devs, like everyone else, don't have the time and energy to learn a whole new language, especially one so scary as Prolog. Quoting from the link above:
> Our final problem is the lack of masses of Prolog programmers one can easily hire.
The cure is to not try to program with "magic strings". You don't need to, and if you really want to, then you should try to understand what exactly it is that you're doing, and do it right.
Specifically, what you call "magic strings" are what we call "functions" in First Order Logic, and that Prolog calls compound terms, like carring_rock(robot(R)) which is in fact two compound terms, nested. Prolog lets you nest compound terms infinitely and if you really want to hurt yourself, one great way to do it is to nest terms everywhere.
The alternative is to understand that when you use compound terms as arguments to predicates (or other terms) you are really _typing_ those arguments. Above, "carring_rock(_)" is the type of the second argument of result/3, and "robot(_)" is the type of the single argument of carring_rock/1. The catch is, of course, that Prolog is not a typed language and it doesn't care if you want to hurt yourself. So if you need to have types like that, then you should write some code to explicitly handle types and do some type checking. For instance, right out the top of my head:
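A sketch along those lines (illustrative only; the must_be_* name is made up, and result/3 gets a dummy body here just so the example is self-contained):

    % must_be_carring_rock/1 succeeds silently if its argument has the
    % expected shape and throws otherwise. (Made-up helper name.)
    must_be_carring_rock(carring_rock(robot(_))) :- !.
    must_be_carring_rock(T) :- throw(type_error(carring_rock(robot(_)), T)).

    % Check the argument at the predicate's entry point; the body below
    % is only a placeholder standing in for the real logic of result/3.
    result(_Action, State, Robot) :-
        must_be_carring_rock(State),
        State = carring_rock(robot(Robot)).

With that in place, a call with a mistyped functor, e.g. result(move, caring_rock(robot(r1)), R), throws a type error at the point of the call instead of quietly failing.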
Note that this is not the right way to throw errors, but I can't go into that now.

A simpler thing I'd advise, but that's going into style territory, is to keep variable names as short as possible. One reason it's hard to spot the mistake in the code above is that the source code is all cluttered with long variable names, and the nesting of compound terms makes it even more cluttered. An IDE with good syntax highlighting can also help. Perversely, I find that there are many people who code in Prolog either without syntax highlighting at all or in IDEs that are not aware of common results of typos, like singleton variables. The SWI-Prolog IDE is good for this and Sweep for Emacs also has a good reputation (although I haven't tried it):
https://eshelyaron.com/sweep.html
Edit: it just occurred to me that the project I'm working on currently, in my post-doc, involves quite a bit of typing using compound terms as arguments, like you do above. I've opted for a program synthesis approach where the predicates I need with typing are automatically generated from a specification where I define the types of the predicates' arguments. Doing the same thing by hand is probably my number two recommendation of how not to code in Prolog. Number one is "the dynamic database is evil", but does anyone listen to me? Never.
Edit 2: Covington et al's Coding Guidelines for Prolog make the same point:
https://arxiv.org/abs/0911.2899

>> I could linearize the arguments and include them in the body but this destroys the indexing and can put me into "defaulty representation" territory.
Sorry, I don't know what either of those are: "linearize the arguments" and "defaulty representation".
Prolog only has first-argument indexing normally, although some implementations let you change that and use your own indexing scheme. Is that what you did?
On mobile and I want to dig into this some more tomorrow but let me start by addressing the last two points.
"Defaulty representation" is described here by Marcus Triska (towards the bottom of the page): https://www.metalevel.at/prolog/data
The terminology of linearizing the arguments I lifted from the SICStus IDE features page: https://sicstus.sics.se/spider/index.html
Thanks again; I'm still reviewing your thoughtful comments.
The "defaulty representation" Markus Triska discusses is indeed something to be avoided, but if compound terms start to proliferate the chance of typos causing problems like the one you had increases. The solution is ad-hoc typing as recommended by Covington et al.
The balance I think, between a "defaulty representation" and an explosion of "magic strings" is to make sure there are only a handful of predicates that expect their arguments to be typed as compound terms and that those predicates are at the base of a hierarchy of calls that pass those arguments around. Those predicates are responsible for reasoning about those typed arguments and they raise errors if something is off. That way you know quickly when there's a mistake, and where it is.
Another thing to keep in mind is that it's easy to test your Prolog predicates in isolation and check that they do what you think they do. This is the best way to catch errors that have to do with argument structure. What I mean is that you can run each predicate on its own, without having to start from the top of your program. You go to your repl and make some queries to the predicate you want to test with different arguments and see how it behaves. That will help catch errors. Unit tests can also help, if you have the patience to write them.
To be honest, I'm not going to defend Prolog for making this kind of thing easy to hurt yourself with. I love Prolog but it's not friendly to newcomers, exactly because of things like that, which you only learn with experience. Even now, after using it for ... 14 years or so, it still finds ways to hurt me. You gotta be careful with it.
I'm really enjoying the Covington et al. read, cool to see that O'Keefe is an author too!
Let me see if I can apply flattening to the example above:
With flattening only compound terms are replaced, not constants, so "a" remains unchanged. It's similar to an ad-hoc type system again, but the cool thing is that it can be done automatically. There's an algorithm for it known to the Inductive Logic Programming (ILP) community. Let me see if I can find that paper... Yep: Flattening and Saturation: Two Representation Changes for Generalization, Céline Rouveirol, 1994.
https://link.springer.com/content/pdf/10.1023/A:102267821728...
The example is a bit unusual though because it only checks that A1 = p(X1) and doesn't do anything with X1, it even leaves it dangling as a singleton; same with X and X2. That's very unlikely for real-world code where you want to use unification as a mechanism to pass the state of the computation around your program. With flattening you want to have an extra argument in the new predicate that "returns" a value that you want to process further. I think maybe they meant to write it like this:
In which case the flattened version would be like:

So, I guess, we see that this is a known gotcha and the logic programming community has different ways to work around it. In the ILP community there's a different motivation (to make a language finite, and therefore decidable, by removing "functions").

>> I'm really enjoying the Covington et al. read, cool to see that O'Keefe is an author too!
Yeah. I haven't heard much from O'Keefe lately and I miss his advice. He was a regular contributor to the SWI-Prolog mailing list when I was doing my MSc in 2014. I think there was a bit of a problem with the SWI-Prolog community moving to Google and he hasn't returned since, even though the community is now on Discourse.
Modelled after Greenspun's Tenth Rule [1].
[1] https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule
Hindley-Milner type inference is unification over types, and unification is built into Prolog. Functional programmers ignore this because their textbooks never refer to the original description of unification by Robinson. Wait, who? Unification was probably invented by Damas, Hindley or Milner, right? Or maybe Haskell Curry? Hey, maybe it was John McCarthy?
In any case, what was suggested is, frankly, a development of the (tailored up) type system.
My experience suggests that this development will not be without caveats, especially related to performance.
I am particularly grateful because most of the books, as you know, were written a long time ago and tend to focus more on the beauty of the language, and to gloss over the software engineering of composing large, reliable programs. It is very rare to meet someone with significant experience in Prolog who can still remember what it was like to struggle with issues like that. Most "criticism" I read about Prolog is very superficial and distracts from the more pressing best practices, so I'm extremely grateful that you shared this paper and I am really surprised I've never encountered it before.
I am actually really shocked to hear that the advice is to roll your own type system, and it's really interesting. I wonder if that is also the case for other expressive languages such as Forth/PostScript, etc. Most of the languages I work with more often (Python/JS/TypeScript/Clojure/C# ... but PARTICULARLY Python) tend to view runtime assertions and type-checking as an *anti-pattern*.
So I am really pleased and surprised to hear that, in a language as expressive and eloquent as Prolog, at least some folks find writing that sort of code acceptable.
I realize this comment is getting long but the technique you mentioned of using goal_expansion/2 to compile away the assertions --
the dang thing about Prolog is the more you look into it, the more powerful you realize it is, but I'll be damned if it's not impossible to find these things out without someone telling you about them. Most languages seem to be possible to self-teach, but if I were to self-teach Prolog I never would've learned about the concept of monotonicity and I'd be using cut operators willy-nilly etc, lucky that I stumbled on Markus's blog/videos and lucky that I bumped into you!
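(Since goal_expansion/2 came up: roughly what such an expansion looks like, as a minimal sketch assuming SWI-Prolog and library(debug)'s assertion/1, not the exact code discussed above.)

    :- use_module(library(debug)).   % provides assertion/1

    % When compiling in optimised mode (e.g. swipl -O), rewrite
    % assertion/1 goals to true at compile time, so the run-time
    % checks cost nothing in production.
    goal_expansion(assertion(_), true) :-
        current_prolog_flag(optimise, true).

A clause like this has to be loaded before the code it is meant to rewrite.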
Just like Python dislikes defensive runtime type checking, another interesting thing is that HEAVY use of metaprogramming tends to be viewed disfavorably in lisp, at least in Clojure! It's typically "don't write macros unless you have no other choice, because no one wants to read/learn your damn macros". I assumed it would be the same way in Prolog, but perhaps I'm wrong?
Anyway thanks again for the really patient response and for sharing your experience.
>> the dang thing about Prolog is the more you look into it, the more powerful you realize it is, but I'll be damned if it's not impossible to find these things out without someone telling you about them. Most languages seem to be possible to self-teach, but if I were to self-teach Prolog I never would've learned about the concept of monotonicity and I'd be using cut operators willy-nilly etc, lucky that I stumbled on Markus's blog/videos and lucky that I bumped into you!
I hear you. When I first learned Prolog, in the second year of my CS degree course, the first thing I did was to go to my university's library and gather up all the textbooks I could find about it. I literally came home bent over from the sheer weight of books. It still took me a very long time to learn the ins and outs of it. It doesn't help that Prolog has a small and probably shrinking community and many of the people with long backgrounds and deep knowledge of its practice are starting to retire from public fora; like O'Keefe.
There are only a few resources on the internet that can help you understand Prolog. Markus Triska's as you found out is one of the best (although I have a very different view of many things than he does, there's no doubt he is an expert in the language). When I was learning I got a lot of mileage out of a tutorial on the Amzi Prolog website:
https://amzi.com/AdventureInProlog/index.php
The best thing you can do of course is to keep working with it as much as you can. Things eventually make sense and you learn what works and what doesn't with experience. That's not always easy to do of course. As I say in another comment I had to leave (my lucrative career in) the industry and enter academia (which pays peanuts) to find a place where I could really work as much as I wanted with Prolog.
>> I am actually really shocked to hear that the advice is to roll your own type system, and it's really interesting. I wonder if that is also the case for other expressive languages such as Forth/PostScript, etc. Most of the languages I work with more often (Python/JS/TypeScript/Clojure/C# ... but PARTICULARLY Python) tend to view runtime assertions and type-checking as an anti-pattern.
I don't want to sound even more like a zealot than I already do, but the thing is you can do all that in Prolog more easily than in other languages. In general the way I program in Prolog is that I create a language to describe my problem, and then solve my problem in that language. I don't mean that I necessarily design a new programming language and write a parser for it, although you can do that very easily in Prolog with Definite Clause Grammars (you get a decent parser and generator that way though not optimal). I mean that I make up a language to talk about the domain, and use it to describe my problems and their solutions. You do that too, in your solution/3 predicate above, where as far as I can tell you describe resources and actions of a robot etc. Again Prolog lets you do that easily: you can define the abstractions you need. That goes all the way up to the abstractions used in other programming paradigms, like functions or objects etc, and types.
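(As a tiny illustration of the DCG point, a made-up grammar, not anything from the thread: the same rules both parse and generate with phrase/2.)

    % greeting//0 describes a two-token phrase (made-up nonterminals).
    greeting --> [hello], name.

    name --> [world].
    name --> [prolog].

    % Parsing:     ?- phrase(greeting, [hello, prolog]).   % succeeds
    % Generating:  ?- phrase(greeting, Ws).
    %              Ws = [hello, world] ;
    %              Ws = [hello, prolog].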
>> I am particularly grateful because most of the books, as you know, were written a long time ago and tend to focus more on the beauty of the language, and to gloss over the software engineering of composing large, reliable programs.
Yeah, I tend to ignore those things. I love Prolog but I recognise it's not to everyone's taste. It's quite obvious. It's nice to hear you love it too :)
There's no such thing as a "fully-imperative, zero-declarative language" -- at least not one as high level as C# or Python -- because declarative/imperative are programming styles, which languages can make more natural but which are used with all (well, higher-level than assembly) languages.
Did I misunderstand what you mean?
For example, it is possible to embed backtracking logic programming [1] into Haskell without much effort.
[1] https://hackage.haskell.org/package/logict
Oh sorry. Did I let my schadenfreude out again?
We had to do goddamn Paint on Prolog. Yup.
More personally, I recently spent enough time with first Scheme and then APL that the paradigms clicked for me, and the effect that had on my entire outlook on work was dramatic. For whatever reason, I feel like breaking down my ingrained technical paradigms has allowed me to integrate and strengthen my soft skills.
Plus, mind-expanding experiences are just plain fun. Looking for more of that juice!
[0]:https://en.wikipedia.org/wiki/Answer_set_programming
[1]:https://potassco.org/
I wouldn't say that the logic or relational way of describing effects is a bad thing either. By design it allows for multiple return values (foo/1, foo/2, ...), and you can build higher-level predicates that return multiple resources, which is pretty common for many programs. It makes concatenative (compositional) style programming really straightforward, especially for more complex interweaving, which also ends up being quite common. Many Prolog implementations also support shift/reset, so that you can easily build things like conditions and restarts, algebraic effects, and/or debugging facilities on top. Prolog is also homoiconic in a unique way compared to Lisp, and it's quite nice because the pattern matching is so powerful. Prolog really is one of the best languages I ever learned; I wish it was more popular. I think Prolog implementations need better C FFI interop and a nicer library ecosystem. Trealla has a good C FFI.
I think logic programming is the future, and a lot of these problems with prolog are fixable. If it's not going to be prolog, it'll probably be something like kanren and datalog within a lisp like scheme or clojure(script).
This is a great resource for getting a good feel of prolog: https://www.youtube.com/@ThePowerOfProlog/videos
https://www.swi-prolog.org/pldoc/doc_for?object=section(%27p...
https://software-lab.de/doc/ref.html#pilog
See:

- The fastest non-incremental embedded Datalog engine: https://github.com/s-arash/ascent
- The state-of-the-art non-embedded and non-incremental Datalog engine: https://github.com/knowsys/nemo
- A Python library that contains an embedded incremental Datalog engine: https://github.com/brurucy/pydbsp
- A Rust library that provides an embedded incremental Datalog engine over property graphs: https://github.com/brurucy/materialized-view
there's lots of exploring left to do
1: maybe this is it? it does include a lot of curry named submodules; anyway, BSD-3-Clause https://git.ps.informatik.uni-kiel.de/curry/pakcs/-/blob/v3....
What would be interesting, would be to replace depth-first search while remaining in the world of predicates and Horn clauses.
For that you want tabled Prolog, or in other words Prolog executed by SLG-Resolution. The paradigmatic implementation is XSB Prolog:
https://xsb.com/xsb-prolog/
SWI-Prolog also supports tabling but I think the XSB implementation is more mature.
I say naïvely because, on one hand, you might not need all versions of the function, but on the other, you can also provide partial values, so it's not either input or output.
Icon had both, but depth-first backtracking was the default and trivial to use, while breadth-first backtracking required using "co-expressions" (co-routines), though at least Icon had a trivial syntax for causing procedure argument expressions to be made into co-expressions. But Icon's approach does not make breadth-first backtracking be a first-class language feature like depth-first backtracking, and this is where my imagination gets stuck. To be fair, I've not truly thought much about this problem.
Breadth first search is complete even if the branches are infinitely deep. In the sense that, if there is a solution it will find it eventually.
For example, each node has a unique path to the root, so write <n1, n2, ..., nk> where each ni is the sibling ordinal of the node at depth i in that path, i.e. it's the ni-th child of the node reached by <n1, ..., n(i-1)>. Raising the i-th prime to the power ni and taking the product gives each node a unique integer label. Traverse nodes in label order and voilà?
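(A quick sketch of that labelling, my own illustration: the node at path [N1,...,Nk] gets the label P1^N1 * ... * Pk^Nk, where Pi is the i-th prime, so e.g. the path [2,1,3] gets 2^2 * 3^1 * 5^3 = 1500.)

    % label(+Path, -Label): Gödel-style label for a node given its path
    % of sibling ordinals. (Made-up helper; only the first ten primes are
    % listed, so this toy version handles paths of depth up to ten.)
    label(Path, Label) :-
        label_(Path, [2,3,5,7,11,13,17,19,23,29], 1, Label).

    label_([], _, Label, Label).
    label_([N|Ns], [P|Ps], Acc0, Label) :-
        Acc is Acc0 * P^N,
        label_(Ns, Ps, Acc, Label).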
However, that all assumes we know the tree beforehand, which doesn't make sense for generic call trees. Do we just smash headfirst into Rice on this when trying to traverse in complete generality?
Breadth first search would visit every node breadth first, so given infinite time, the solution would eventually be visited.
Meanwhile, say a branch had a cycle in it, even given infinite time, a naive depth first search would be trapped there, and the solution would never be found.
Or, suppose that a node has infinitely many children, but the first child has its own child. A BFS would get stuck going through all the first-level children and never reach the second-level child.
A BFS-like approach could work for completeness, but you'd have to put lower-level children on the same footing as newly-discovered higher-level children. E.g., by breaking up each list of children into additional nodes so that it has branching factor 2 (and possibly infinite depth).
The Wikipedia article is fairly useful: https://en.wikipedia.org/wiki/Countable_set
If we're throwing around Wikipedia articles, I'd suggest a look at https://en.wikipedia.org/wiki/Order_type. Even if your set is countable, it's possible to iterate through its elements so that some are never reached, not after any length of time.
For instance, suppose I say, "I'm going to search through all positive odd numbers in order, then I'm going to search through all positive even numbers in order." (This has order type ω⋅2.) Then I'll never ever reach the number 2, since I'll be counting through odd numbers forever.
That's why it's important to order the elements in your search strategy so that each one is reached in a finite time. (This corresponds to having order type ω, the order type of the natural numbers.)
Can you point to a book or article where the definition of completeness allows infinite time? Every time I have encountered it, it is defined as finding a solution if there is one in finite time.
> No breadth first search is still complete given an infinite branching factor (i.e. a node with infinite children).
In my understanding, DFS is complete for finite depth tree and BFS is complete for finite branching trees, but neither is complete for infinitely branching infinitely deep trees.
You would need an algorithm that iteratively deepens while exploring more children to be complete for the infinite x infinite trees. This is possible, but it is a little tricky to explain.
For a proof that BFS is not complete if it must find any particular node in finite time: Imagine there is a tree starting with node A that has children B_n for all n and each B_n has a single child C_n. BFS searching for C_1 would have to explore all of B_n before it could find it so it would take infinite time before BFS would find C_1.
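(One way to make the "iteratively deepen while exploring more children" idea concrete, as a hedged sketch rather than a textbook algorithm: for K = 0, 1, 2, ... run a depth-first search that follows only the first K children of any node and only to depth K. A node at depth D whose path uses sibling ordinals up to B is then found once K reaches max(D, B). child/3 below is a hypothetical relation giving the I-th child of a node.)

    % iddfs_wide(+Start, ?Goal): search a tree that may be both
    % infinitely deep and infinitely branching; any node reachable by a
    % finite path is found after finitely many steps, provided child/3
    % terminates for each bounded call. Like iterative deepening, it
    % revisits nodes on every iteration.
    iddfs_wide(Start, Goal) :-
        between(0, inf, K),
        bounded_dfs(Start, Goal, K, K).

    bounded_dfs(Goal, Goal, _, _).
    bounded_dfs(Node, Goal, Depth, Width) :-
        Depth > 0,
        between(1, Width, I),
        child(Node, I, Child),   % hypothetical: Child is the I-th child of Node
        Depth1 is Depth - 1,
        bounded_dfs(Child, Goal, Depth1, Width).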
Also, there shouldn't be many situations where you'd be able to produce infinite branches in a prolog program. Recursions must have a base case, just like in any other language.
This version of the program, taken from the article, loops (I mean it enters an infinite recursion):
This one doesn't: To save you some squinting, that's the same program with the base case moved before the inductive case, so that execution "hits" the base case when it can terminate. That's half of what the article is kvetching about: that in Prolog you have to take into account the execution strategy of logic programs and can't just reason about the logical consequences of a program; you also have to think of the imperative meaning of the program's structure. It's an old complaint about Prolog, as old as Prolog itself.

There are techniques to constrain the search space for _programs_ rather than proofs, that I know from Inductive Logic Programming, like Bottom Clause construction in Inverse Entailment, or the total ordering of the Herbrand Base in Meta-Interpretive Learning (MIL). It would be interesting to consider applying them to constrain the space of proofs in ordinary logic programming.
Refs for the above techniques are here but they're a bit difficult to read if you don't have a good background in ILP:
http://wp.doc.ic.ac.uk/arusso/wp-content/uploads/sites/47/20...
https://link.springer.com/content/pdf/10.1007/s10994-014-547...
e.g. here: https://www.metalevel.at/tist/ solving the Water Jugs problem (search on the page for "We use iterative deepening to find a shortest solution"), finding a list of moves emptying and filling jugs, and using `length(Ms, _)` to find shorter lists of moves first.
or here: https://www.metalevel.at/prolog/puzzles under "Wolf and Goat" he writes "You can use Prolog's built-in search strategy to search for a sequence of admissible state transitions that let you reach the desired target state. Use iterative deepening to find a shortest solution. In Prolog, you can easily obtain iterative deepening via length/2, which creates lists of increasing length on backtracking."
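(To make the length/2 trick concrete, a minimal sketch, not Triska's code: moves/1 is a hypothetical predicate that succeeds if a given list of moves solves the puzzle at hand.)

    % length(Ms, _) with Ms unbound creates lists of length 0, 1, 2, ...
    % on backtracking, so candidate solutions are tried shortest-first:
    % iterative deepening on the length of the move sequence.
    shortest_solution(Ms) :-
        length(Ms, _),
        moves(Ms).          % hypothetical puzzle-specific check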
There is a bit of a problem, in that if there is no solution the lack of an upper bound will cause the search to go on forever, or until the search space is exhausted, and you don't want that. If you use an upper bound, on the other hand, you may be cutting the search just short of finding the solution. It's another trade-off.
Backtracking is a complete search of the problem-space.
What is incomplete is the Horn-SAT problem space, which is a subset of SAT, that can be solved in polynomial time, and is what Prolog is based on.
A complete logic system would have to solve SAT, which is NP-complete.
At least that's what I understood they meant by that.
The article is pointing out that Prolog with backtracking DFS is incomplete with respect to the completeness of SLD-Resolution. To clarify, SLD-Resolution is complete for refutation, or with subsumption. Prolog is an implementation of SLD-Resolution using DFS with backtracking. DFS is incomplete in the sense that it gets stuck in infinite loops when an SLD tree (the structure searched by DFS in Prolog) has cycles, especially left-recursive cycles. The article gives an example of a program that loops forever when executed with DFS with backtracking, in ordinary Prolog.
SLD-Resolution's completeness does not violate the Church-Turing thesis, so it's semi-decidable: SLD-trees may have infinite branches. To be honest I don't know about the equivalence with Horn-SAT, but Resolution, restricted to definite clauses, i.e. SLD-Resolution, is complete (by refutation and subsumption, as I say above, and respecting some structural constraints to do with the sharing of variables in heads and bodies of clauses). We got several different proofs of its completeness so I think we can trust it's true.
Edit: where does this knowledge about Horn-Sat come from? Do you have references? Gimme gimme gimme.
I like how they let you write functions to control the search order because boy that is essential.
I think some Prolog systems do something like this already as an optimization, but I could be totally off on that.
The article is fudging things with its use of "backtracking" as a stand-in for backtracking Depth First Search (DFS); the latter is the search strategy that is "fixed" in Prolog. And it's not really fixed. Prolog programs can also be executed by "tabling", a.k.a. SLG-Resolution, which basically replaces DFS with Breadth-First Search modulo memoization. Tabling avoids an important source of "incompleteness" in Prolog, that of non-terminating left-recursions.
To clarify, that is what makes Prolog "incomplete": that executing Prolog programs by DFS makes Prolog loop infinitely when encountering some left-recursions. The article gives the example of a last/2 predicate:
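Roughly, the example goes like this (a sketch from memory rather than the article's exact code; the second definition is renamed last2/2 only so both can be loaded together, and in SWI-Prolog these shadow library(lists)'s last/2, which is fine for the demonstration):

    % Recursive clause first: a query like ?- last(Ls, a). with Ls
    % unbound recurses forever and never reaches the base case.
    last([_|Xs], X) :- last(Xs, X).
    last([X], X).

    % Base case first: the same query enumerates answers instead,
    % Ls = [a] ; Ls = [_,a] ; Ls = [_,_,a] ; ...
    last2([X], X).
    last2([_|Xs], X) :- last2(Xs, X).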
The first version does indeed loop; the second doesn't. And that's what the article is pointing out with the allusion to "a higher, declarative programming style which frees the programmer from thinking about low-level control details". With such a "higher declarative programming style" the programmer does not have to think about program structure or execution strategy and can write whatever, however, and still get results!

The problem with that argument, which is as old as Prolog and possibly even older than that, is that it's an argument from taste, or, indeed, "style", to use the article's terminology. More specifically, for every non-terminating Prolog program that can be written, we can most likely write a Prolog program that does terminate, and that is (success-set) equivalent to the non-terminating program, by keeping in mind the structure of the search space for proofs traversed by ordinary Prolog with DFS, or tabling. And possibly with a few cuts here and there (oh, the humanity!). The article is simply arguing that it is "more declarative" to not have to do that. But why is "more declarative" better? It's style all the way down.
As to the second axis of the argument, the horror of predicates that do not neatly map to "many" real world problems that are better represented as functions, asking whether we can liberate logic programming from predicates is like asking whether we can liberate the First Order Predicate Calculus from predicates, or, equivalently, make ice cream without the ice or the cream. Sure we can. The question is again: why? I don't see a clear justification for that. In particular, setting implementation details aside (as the article does on this point) SLD-Resolution is already sound and refutation-complete, or complete with subsumption. That is the best that can ever be achieved, and the article doesn't seem to claim anything else (or the ghosts of Alonzo Church and Alan Turing would be very, very sad). So this, too, seems to be a matter of taste: functions are more stylish than predicates. M'kay.
In fact, if I may blaspheme a little, this is what Church did with his Lambda calculus: he turned everything into typed functions to extend First Order Logic (FOL) into a higher order. And in the process completely destroyed the simple elegance of FOL. Why?
That's the problem with breadth-first search though: how to make it online?
Conversely, that's the advantage of DFS: it's online because the only state it needs is a search path, and the search path is O(log N), which is small enough to consider online.
But there's no way to always ensure termination unless one restricts expressivity, right? Halting problem and all that. Recursion, like iteration, must always risk going infinite, else the language is not Turing-complete.
The tradeoff between DFS and BFS is like you say, and as the article above points out that's the reason DFS was chosen as the original execution strategy for Prolog. Tabling on the other hand can be very memory-hungry (in my applications I often run out of tabling space on my 64GB RAM laptop). In the end, with languages like Prolog, it's the old joke: "sound, complete, efficient: choose two".